CN111709968B - Low-altitude target detection tracking method based on image processing - Google Patents

Low-altitude target detection tracking method based on image processing

Info

Publication number
CN111709968B
CN111709968B (application CN202010422092.8A)
Authority
CN
China
Prior art keywords
target
tracking
image
edge
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010422092.8A
Other languages
Chinese (zh)
Other versions
CN111709968A (en)
Inventor
陈杰生
孟慧军
史朝辉
张敬卓
胡蓉
邵忠俊
佟惠军
秦岭
甘仕宁
王坚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Air Force Engineering University of PLA
Original Assignee
Air Force Engineering University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Air Force Engineering University of PLA filed Critical Air Force Engineering University of PLA
Priority to CN202010422092.8A
Publication of CN111709968A
Application granted
Publication of CN111709968B
Legal status: Active

Classifications

    • G06T 7/20: Image analysis; Analysis of motion
    • G06T 7/13: Image analysis; Segmentation; Edge detection
    • G06T 7/136: Image analysis; Segmentation; Edge detection involving thresholding
    • G06T 7/70: Image analysis; Determining position or orientation of objects or cameras
    • G06T 2207/10016: Image acquisition modality: Video; Image sequence
    • G06T 2207/20024: Special algorithmic details: Filtering details
    • G06T 2207/20081: Special algorithmic details: Training; Learning
    (All codes fall under G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation.)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a low-altitude target detection and tracking method based on image processing, comprising the following steps: moving target detection; moving target tracking; motion track filtering; and accurate target positioning and distance estimation. The method processes the images obtained by the camera to correct the azimuth and elevation information of the sensor's optical sighting line; obtains the target distance by converting the ratio of the target's image size to its physical size; and obtains a stable, continuous target track through a Kalman filtering algorithm. On the basis of images acquired by the terminal camera, the invention realizes real-time tracking, measurement and positioning of aerial targets, mainly solving the problems that existing visible-light and infrared monitoring systems cannot automatically record aerial target coordinates or automatically track targets to generate target tracks, and making up the deficiencies of low-altitude target reconnaissance systems.

Description

Low-altitude target detection tracking method based on image processing
Technical Field
The invention belongs to the field of target detection, relates to target tracking technology, and more particularly relates to a low-altitude target detection and tracking method based on image processing.
Background
In recent years, small and miniature civil unmanned aerial vehicles have developed rapidly, their numbers growing geometrically while their management lags seriously behind. Unauthorized and disorderly flights by unmanned aerial vehicles are increasingly serious, posing new threats to the flight safety of civil and military aviation; the risk of unmanned aerial vehicles being used by hostile elements and terrorists for disruption and terrorist attacks grows by the day; and low and ultra-low altitudes lie in the blind zone of military radar, forming an important airspace for air attack and defense in future wars. Effectively detecting, identifying and monitoring low-altitude unmanned aerial vehicles and other aerial targets is of great significance for safeguarding military and civil aviation flight safety and the security of important national targets. At present, apart from high-end products such as low-altitude radars and photoelectric composite detection equipment, ordinary optical and infrared telescopes, being simple to use, moderately priced and flexible to operate, have long served as ideal and indispensable conventional aerial sighting instruments for ground air-defense systems and government civil-defense departments, and are widely used in military and civilian markets. Improving the reconnaissance and early-warning capability against traditional low-altitude aircraft by informatization methods therefore has strong practical significance.
In 2018, the inventor proposed a low-altitude target detection method based on an intelligent terminal (patent application No. 201811655503.7), which ingeniously and cheaply integrates the core functions of high-end detection equipment into a common consumer device (such as a smartphone), innovatively producing a software tool that retains the important functions of high-end products while remaining cheap to popularize and simple to operate, thereby making up the respective shortcomings of high-end and low-end products. The method is an informatized upgrade of the traditional means of reconnaissance and early warning against low-altitude aircraft. It mainly addresses the low efficiency, poor stability and low precision of traditional optical detection of low-altitude targets: by converting and collecting intelligent-terminal sensor data and fusing content including manual observation information and radar-network target vanishing-point data, it outputs high-precision, continuous low-altitude aircraft target tracks, provides reconnaissance and early-warning information support for low-altitude areas, and helps build a civil-military low-altitude early-warning and monitoring system with reasonable high-medium-low allocation in which automatic and manual means complement each other.
Compared with traditional detection means, detecting low-altitude targets with an intelligent terminal greatly improves working efficiency and equipment effectiveness, but it still has shortcomings: azimuth and elevation angles measured by manual aiming carry large errors, manual estimation of target distance is inaccurate, and multiple targets cannot be measured simultaneously. When an operator manually aligns the device's sighting line with the target, hand shake, operating habits and similar factors introduce a non-negligible deviation into the measured target angles; distance estimated by naked-eye observation is subjective, and the far/middle/near range categories are difficult to fuse at the backend, severely limiting measurement precision; moreover, two or more targets cannot be observed simultaneously through the device's sighting line, so detection efficiency is low. These disadvantages severely restrict the practical use of such products.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a low-altitude target detection and tracking method based on image processing, characterized by comprising the following steps:
(I) Moving target detection
This specifically comprises the following steps:
STEP 1. Detecting motion area in image by using interframe difference method
Comparing the gray values of corresponding pixel points in two successive frames of the image sequence by subtracting the two frames: if the gray difference is very small, no moving object is considered to pass through that point; if the gray value changes greatly, an object is considered to pass through. The change between the k-th frame image f_k(x, y) and the (k+1)-th frame image f_{k+1}(x, y) is represented by the binarized difference d_k(x, y) of formula (1), where x and y denote the horizontal and vertical coordinate values in the image gray matrix;
d_k(x,y) = \begin{cases} 1, & |f_{k+1}(x,y) - f_k(x,y)| > D \\ 0, & \text{otherwise} \end{cases}    (1)
in the formula, D represents the binarization threshold of the difference image; its value depends on factors such as lighting changes when the images are shot and is set according to the scene; d_k(x, y) = 0 indicates no change at (x, y) between the two frames, and 1 indicates a change;
STEP 2. Detect the target position using the Canny edge detection algorithm
This comprises the following steps:
Step I. Smooth the image with a Gaussian filter, filtering out high-frequency signals in the image and removing the influence of noise on edge identification;
Step II. Calculate the gradient magnitude and direction using finite differences of the first-order partial derivatives. Decompose each pixel's gradient into x and y components, computing the horizontal gradient G_x and vertical gradient G_y of every pixel in the image with the Sobel edge-difference operator; the horizontal and vertical Sobel kernels S_x and S_y are:
S_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \quad S_y = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix}    (2)
on this basis, the gradient magnitude G and gradient angle θ of the pixel are calculated as:
G = \sqrt{G_x^2 + G_y^2}, \quad \theta = \operatorname{atan2}(G_y, G_x)    (3)
in formula (3), the gradient angle θ ranges from −π to π radians;
Step III. Apply non-maximum suppression to the gradient magnitude. The procedure: compare the gradient strength of the current point with that of the points along the positive and negative gradient directions; if the current point's gradient strength is the maximum among points in the same direction, keep its value; otherwise suppress it, i.e. set it to 0;
Step IV. Double-threshold detection. Edge pixels are distinguished with a high threshold and a low threshold: a pixel whose gradient value exceeds the high threshold is considered a strong edge point; one whose gradient value is below the high threshold but above the low threshold is marked as a weak edge point; points below the low threshold are suppressed;
Step V. Hysteresis boundary tracking. Strong edge points can be considered true edges; weak edge points may be true edges or may be caused by noise or color changes. A hysteresis boundary-tracking algorithm searches all connected weak edges: if any point of a connected weak edge is connected to a strong edge point, that weak edge is kept, otherwise it is suppressed. Locating these edge points realizes detection of the moving target's edge;
(II) Moving target tracking
A continuously adaptive mean-shift tracking algorithm, Camshift, is adopted as the basic target tracking algorithm, and the detection result obtained by the frame difference method and the Canny edge detection algorithm is adopted as the initial input of target tracking, thereby realizing automatic detection and tracking of the target;
(III) Motion track filtering
Kalman filtering is adopted to estimate the motion state of the target, improving target tracking precision and reducing the probability of target loss;
in multi-target tracking, because the time interval between two adjacent frames is short, the motion state of each target generally changes little, so the target is assumed to move at constant velocity between the two frames; with the inter-frame interval set to T, the target motion state variable X is defined as:
X = (p_x, p_y, v_x, v_y)^T    (4)
wherein p_x and p_y are the target's coordinate positions in the horizontal and vertical directions, and v_x and v_y are its velocities in the horizontal and vertical directions; the observation Z = (p_x, p_y) corresponds to the observed target position;
defining a system state transition matrix F as:
F = \begin{pmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}    (5)
the observation matrix H is:
H = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}    (6)
in addition, p and v are white noise sequences independent of each other, and the covariance matrix Q of p is taken as
Figure BSA0000209070170000044
v has a covariance matrix R of
Figure BSA0000209070170000045
In engineering practice, under unattended operation, the moving target computed with the interframe difference method and the Canny edge detection algorithm is used as the input of the Camshift algorithm and also as the initial state vector of the Kalman filter; when there is human intervention, the manually selected target position is used as the input value of the target state;
(IV) Accurate target positioning and distance estimation
The target distance is estimated by calculating the field-of-view size and computing the target position.
In one embodiment of the present invention, the second step specifically comprises the following steps:
Step I. Initialize the size and position of the search window; in fully automatic detection and tracking, the detection results of the frame difference method and the Canny edge detection algorithm are used as the initial input parameters of target tracking, the detected moving-target position serving as the initial tracking parameter; in semi-automatic tracking, the target position and size confirmed by manual dragging on the intelligent terminal's screen serve as the initial input parameters;
Step II. Set the target calculation area; based on experimental comparison, a region slightly larger than the search window is set as the calculation area to increase the probability of capturing the target and the stability of tracking it;
Step III. Calculate the color histogram within the target area; convert the input image into HSV color space, the target area being the initially set search-window range, separate the hue H component, and compute the hue histogram of the area to obtain the color histogram of the target template; the hue H is calculated as follows:
let (r, g, b) be a pixel's coordinates on red, green and blue respectively, each a real number between 0 and 1; let max be the maximum of r, g and b and min the minimum of the three; the hue h of HSV space (range 0-360°) is then
h = \begin{cases} 0^\circ, & \max = \min \\ \left(60^\circ \times \frac{g-b}{\max-\min} + 360^\circ\right) \bmod 360^\circ, & \max = r \\ 60^\circ \times \frac{b-r}{\max-\min} + 120^\circ, & \max = g \\ 60^\circ \times \frac{r-g}{\max-\min} + 240^\circ, & \max = b \end{cases}    (9)
Step IV. Back-project the color histogram to compute the color probability distribution; for each pixel in the input image the target-model color histogram is queried: pixels within the target area yield the probability of belonging to the target pixel, while pixels within non-target areas get probability 0;
Step V. The MeanShift iteration process; this is the core of the Camshift algorithm and aims to find the position of the target center in the current frame; first select the size and initial position of the search window in the color probability distribution map, then calculate the centroid position of the search window, as follows:
let (i, j) be the coordinates of a pixel point in the search window and I(i, j) the corresponding value of that pixel in the back projection of the color histogram; the zeroth-order moment M_00 and the first-order moments M_10, M_01 of the search window are defined as:
M_{00} = \sum_{i=1}^{M} \sum_{j=1}^{N} I(i,j), \quad M_{10} = \sum_{i=1}^{M} \sum_{j=1}^{N} i \, I(i,j), \quad M_{01} = \sum_{i=1}^{M} \sum_{j=1}^{N} j \, I(i,j)    (10)
where M and N are the numbers of pixels along the length and width of the search window W, obtained from the user's manual selection or from the previous tracking result; the centroid position corresponding to the search window is (M_10/M_00, M_01/M_00);
Step VI. Adjust the center of the search window to the centroid; move the window center to the centroid, and if the displacement exceeds the set threshold, recompute the centroid of the adjusted window and perform a new round of window position and size adjustment, until the displacement between the window center and the centroid is below the threshold or the number of iterations reaches a set maximum, at which point the convergence condition is considered met and the target's latest center position is obtained; the search window's position and size are passed on as the target position for the next frame, where a new target search begins, i.e. steps II to VI are repeated.
In one embodiment of the present invention, the rectangular search window has width s = 2\sqrt{M_{00}/256} (formula (11)) and length 1.2s.
In another embodiment of the present invention, the fourth step specifically includes the steps of:
STEP 1. Field-of-view size calculation
The horizontal field angle A and the vertical field angle B of the camera's field of view are calculated as:
A = 2·arctan(w/(2f)),  B = 2·arctan(h/(2f))    (12)
in the formula, w is the field width, h is the field height, and f is the focal length of the lens;
STEP 2. Target position estimation
O is the center position of the picture; the azimuth and elevation measured there by the gravity sensor and the magnetic sensor are (α_0, β_0); the pixel coordinates of the target P in the picture are (x, y); the image resolution is (W, H); and the projection length of the corresponding target image in the horizontal direction is l;
following the working principle of a vernier caliper, the sensor measurement (α_0, β_0) is taken as the 'main scale' reading and the image-detected target centroid coordinates as the 'vernier scale' reading; the azimuth α and elevation β are then calculated as
α = α_0 + (x − W/2)·A/W    (13)
β = β_0 + (H/2 − y)·B/H    (14)
STEP 3. Target distance estimation
If the geometric length of a given aircraft is L, the distance d between the observation point and the target is calculated as
d = L·W / (2·l·tan(A/2))    (15)
The geometric size of the target is determined as follows: (1) when the operator clicks the target manually to initiate tracking, the operator also selects the target type, and the basic size of that typical target is taken as the target size; (2) when the device detects and tracks the target automatically in the unattended state, the target type is identified with a deep learning algorithm, which then determines the target size.
The invention applies image-processing algorithms to the images acquired by the terminal camera to realize real-time tracking, measurement and positioning of targets, making up the deficiencies of the existing purely manual aiming and measuring means and further perfecting the intelligent-terminal-based low-altitude target detection method. The invention mainly solves the problems of large errors in the intelligent terminal's purely manual target-aiming measurements and small target reconnaissance capacity (the number of simultaneously observed targets). Through real-time image processing and model conversion of the images acquired by the intelligent terminal, more accurate azimuth and elevation information for multiple targets in the field of view is obtained, together with the target distance or distance trend under specific conditions; target fusion can then be carried out, further improving the accuracy of target track estimation and the system's reconnaissance and early-warning information support capability for the low-altitude airspace.
Drawings
FIG. 1 shows a flow chart of a continuous adaptive mean shift tracking algorithm (Camshift);
FIG. 2 shows a schematic view of target azimuth, elevation calculation;
fig. 3 shows an effect diagram of the invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
The invention provides a low-altitude target detection and tracking method based on image processing. First, an interframe-difference thresholding method detects and identifies the moving target quickly and accurately, with a manual click function added to meet detection needs against complex backgrounds. Second, on the basis of the moving target's extracted color features, a continuously adaptive mean-shift tracking algorithm (Camshift) locates the position and size of the moving target in the image sequence. Third, a Kalman filtering algorithm predicts the position and size of the target, addressing problems such as large same-color interference in the background and partial occlusion of the target, and effectively avoiding jumps and abrupt frame changes in the detected target. Finally, accurate azimuth, elevation and distance values of the target are obtained by conversion, on the basis of the computed field-of-view width and the camera focal length.
(I) Moving target detection
Moving object detection in video sequences is the basis of target tracking and target recognition. The method combines region detection and edge detection to segment and locate the moving target accurately. First, the difference image of the target is obtained with the interframe difference method and given Gaussian filtering and binarization; then the edge contour information of the target is obtained with the multi-stage Canny edge detection algorithm, and the moving object region is extracted, calibrating the target. During tracking, the extracted moving target initializes the size of the Camshift tracking algorithm's initial search window, realizing automatic tracking.
STEP 1. Detecting motion area in image by using interframe difference method
The interframe difference method is the simplest background-estimation method: it takes one video frame as the background model of the current frame and extracts the motion region by pixel-wise differencing and thresholding between two adjacent frames. The algorithm is simple in principle and strongly real-time, and is often used as a preprocessing stage for tracking.
The gray values of corresponding pixel points in two successive images of the sequence are compared by subtracting the two frames: if the gray difference is very small, no moving object is considered to pass through that point; if the gray value changes greatly, an object is considered to pass through. The change between the k-th frame image f_k(x, y) and the (k+1)-th frame image f_{k+1}(x, y) is represented by the binarized difference d_k(x, y) of formula (1), where x and y denote the horizontal and vertical coordinate values in the image gray matrix.
d_k(x,y) = \begin{cases} 1, & |f_{k+1}(x,y) - f_k(x,y)| > D \\ 0, & \text{otherwise} \end{cases}    (1)
In the formula, D represents the binarization threshold of the difference image; its value depends on factors such as lighting changes when the images are shot and can be set according to the scene; d_k(x, y) = 0 means no change at (x, y) between the two frames, and 1 means a change.
The frame difference method is simple to implement, fast, strongly adaptive to dynamic environments, and not very sensitive to changes in lighting. It tends to produce holes inside the moving body, especially when the target moves fast, which affects accurate extraction of the target region; experimental results nevertheless verify that it meets the functional requirements of the invention.
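As a concrete illustration of STEP 1, the following is a minimal sketch of inter-frame differencing with OpenCV in Python; the video path and the threshold D = 25 are illustrative assumptions, not values from the patent:

```python
import cv2

D = 25  # binarization threshold of formula (1); scene-dependent per the description

cap = cv2.VideoCapture("low_altitude.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                    # |f_{k+1}(x,y) - f_k(x,y)|
    _, d_k = cv2.threshold(diff, D, 1, cv2.THRESH_BINARY)  # d_k(x,y) of formula (1)
    prev_gray = gray
```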
STEP 2. The Canny edge detection algorithm is utilized to detect the target position
The Canny edge detection operator is an edge detection operator based on an optimization approach (Canny J. A Computational Approach to Edge Detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986 (6): 679-698); it has the characteristic of high-precision localization and is a standard edge detection algorithm. Canny proposed three strict detection criteria for evaluating the quality of edge detection: a high signal-to-noise-ratio criterion, a high positioning-accuracy criterion, and a single-edge-response criterion. The algorithm is described as follows:
and I, smoothing the image by using a Gaussian filter. The main effect is to remove the influence of noise on edge recognition, because the noise is also concentrated on high frequency signals and can be easily recognized as a false edge. And removing noise by applying Gaussian blur, and reducing the identification of false edges. However, since the image edge information is also a high-frequency signal, the selection of the radius of the gaussian blur is important, an excessively large radius easily makes some weak edges undetectable, and is generally set according to experience, and the radius of the gaussian blur is 5. The specific calculation method is well known to those skilled in the art and will not be described again.
Step II. Calculate the gradient magnitude and direction using finite differences of the first-order partial derivatives. Decompose each pixel's gradient into x and y components, computing the horizontal gradient G_x and vertical gradient G_y of every pixel in the image with the Sobel edge-difference operator; the horizontal and vertical Sobel kernels S_x and S_y are:
S_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \quad S_y = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix}    (2)
On this basis, the gradient magnitude G and gradient angle θ of the pixel are calculated as:
G = \sqrt{G_x^2 + G_y^2}, \quad \theta = \operatorname{atan2}(G_y, G_x)    (3)
In formula (3), the gradient angle θ ranges from −π to π radians.
Step III. Apply non-maximum suppression to the gradient magnitude. Non-maximum suppression is an edge-thinning method: the gradient edge computed in step II is usually not one pixel wide but several, and non-maximum suppression helps preserve the local maximum gradient while suppressing all other gradient values. The procedure: compare the gradient strength of the current point with that of the points along the positive and negative gradient directions. If the current point's gradient strength is the maximum among points in the same direction, keep the value; otherwise suppress it, i.e. set it to 0.
Step IV. Detect with a double-threshold algorithm. Edge pixels are distinguished by a high threshold and a low threshold. A pixel whose gradient value exceeds the high threshold is considered a strong edge point; one whose gradient value is below the high threshold but above the low threshold is marked as a weak edge point; points below the low threshold are suppressed.
Step V. Track the hysteresis boundary. Strong edge points can be considered true edges; weak edge points may be true edges or may be caused by noise or color changes. A hysteresis boundary-tracking algorithm searches all connected weak edges: if any point of a connected weak edge is connected to a strong edge point, that weak edge is kept, otherwise it is suppressed. Locating these edge points completes detection of the moving target's edge. The specific procedures are well known to those skilled in the art and are not described further.
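A minimal sketch of STEP 2, assuming OpenCV's cv2.Canny (which internally performs the Sobel gradients, non-maximum suppression, double thresholding and hysteresis tracking of steps II-V); the input file name, blur kernel and the two thresholds are illustrative assumptions:

```python
import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame
blurred = cv2.GaussianBlur(gray, (5, 5), 0)            # step I: Gaussian smoothing, radius 5
edges = cv2.Canny(blurred, 50, 150)                    # steps II-V: low/high thresholds 50/150
# The bounding box of the largest edge contour can seed the tracking window:
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
```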
(II) Moving target tracking
The basic idea of Camshift is to use the color information of moving objects in the video image as the feature: a Mean Shift operation (FUKUNAGA K, HOSTETLER L. The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition [J]. IEEE Transactions on Information Theory, 1975, 21(1): 32-40) is performed on each frame of the input image, and the target center and search-window size obtained in the previous frame serve as the initial values of the mean-shift iteration on the next frame; iterating in this way tracks the target. Because the search window is set to the current position and size of the moving target before each search, and the moving target is usually near that area, the search time is shortened; in addition, the target's color changes little during motion, so the algorithm has good robustness. The invention therefore adopts this algorithm as the basic target-tracking algorithm and uses the detection result of the frame difference method and the Canny edge detection algorithm (the moving-target detection method introduced above) as the initial input of target tracking, thereby realizing automatic detection and tracking of the target. The specific implementation is as follows.
Step I. Initialize the size and position of the search window. The invention provides a fully automatic detection mode and a semi-automatic manual-intervention mode at initialization: in fully automatic detection and tracking, the detection results of the frame difference method and the Canny edge detection algorithm are used as the initial input parameters of target tracking, with the moving-target detection position as the initial tracking parameter (target detection was introduced above); in semi-automatic tracking, the target position and size confirmed by manual dragging on the intelligent terminal's screen serve as the initial input parameters.
Step II. Set the target calculation area. Experimental comparison shows that setting a region slightly larger than the search window as the calculation area increases the probability of capturing the target and the stability of tracking it.
Step III. Calculate the color histogram within the target area. Because RGB color space is sensitive to changes in lighting conditions, the CamShift algorithm usually works in HSV color space to reduce the influence of this factor on tracking. The input image is converted into HSV (Hue, Saturation, Value) color space, the hue H component is separated, and the hue histogram of the area (the initially set search-window range) is computed, giving the color histogram of the target template. The hue H is calculated as follows:
Let (r, g, b) be a pixel's coordinates on red, green and blue respectively, each a real number between 0 and 1; let max be the maximum of r, g and b and min the minimum of the three. The hue h of HSV space (range 0-360°) is then
h = \begin{cases} 0^\circ, & \max = \min \\ \left(60^\circ \times \frac{g-b}{\max-\min} + 360^\circ\right) \bmod 360^\circ, & \max = r \\ 60^\circ \times \frac{b-r}{\max-\min} + 120^\circ, & \max = g \\ 60^\circ \times \frac{r-g}{\max-\min} + 240^\circ, & \max = b \end{cases}    (9)
Step IV. Back-project the color histogram to compute the color probability distribution. The histogram back projection is the color probability density distribution map of the input image given the known target color histogram, and contains the coherent information of the target in the current frame. For each pixel in the input image the target-model color histogram is queried; for pixels within the target region the probability of belonging to the target pixel is obtained, while for pixels within non-target regions the probability is 0. (The specific calculation procedure is described in the literature on fast and robust CamShift tracking.)
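Steps III-IV can be sketched with OpenCV's histogram and back-projection routines as follows; the frame source and initial window are illustrative assumptions (note that OpenCV stores hue as 0-179 rather than 0-360°):

```python
import cv2

frame = cv2.imread("frame.png")               # hypothetical current frame
x, y, w, h = 100, 100, 40, 30                 # assumed initial search window
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # step III: convert to HSV
roi = hsv[y:y + h, x:x + w]
# Hue histogram of the target template (H channel only)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
# Step IV: back projection, a per-pixel probability of belonging to the target
backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
```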
Step V. The MeanShift iterative procedure, i.e. the portion within the dashed rectangle in FIG. 1, is the core of the CamShift algorithm; its purpose is to find the position of the target center in the current frame. The size and initial position of the search window are first selected in the color probability distribution map, and then the centroid position of the search window is calculated. The calculation method is as follows:
Let (i, j) be the coordinates of a pixel point in the search window W and I(i, j) the corresponding value of that pixel in the back projection of the color histogram; the zeroth-order moment M_00 and the first-order moments M_10, M_01 of the search window are defined as:
M_{00} = \sum_{i=1}^{M} \sum_{j=1}^{N} I(i,j), \quad M_{10} = \sum_{i=1}^{M} \sum_{j=1}^{N} i \, I(i,j), \quad M_{01} = \sum_{i=1}^{M} \sum_{j=1}^{N} j \, I(i,j)    (10)
In the formula, M and N are the numbers of pixels along the length and width of the search window W, obtained from the user's manual selection or from the previous tracking result. The centroid position corresponding to the search window is (M_10/M_00, M_01/M_00); the rectangular search window has width s = 2\sqrt{M_{00}/256} (formula (11)) and length 1.2s.
Step VI. Adjust the center of the search window to the centroid. Move the window center to the centroid; if the displacement exceeds the set threshold, recompute the centroid of the adjusted window and perform a new round of window position and size adjustment. When the displacement between the window center and the centroid is below the threshold, or the number of iterations reaches a set maximum, the convergence condition is considered met and the target's latest center position is obtained. The search window's position and size are passed on as the target position for the next frame, where a new target search begins, i.e. steps II to VI are repeated.
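The moment computation of formulas (10)-(11) can be sketched directly in NumPy; backproj and the window coordinates are assumed to come from the previous sketch, and in practice the whole iteration is wrapped by OpenCV's cv2.CamShift:

```python
import numpy as np
import cv2

def camshift_step(backproj, x, y, w, h):
    W = backproj[y:y + h, x:x + w].astype(np.float64)
    jj, ii = np.mgrid[0:W.shape[0], 0:W.shape[1]]  # row (vertical) and column (horizontal) indices
    M00 = W.sum()                                  # zeroth-order moment
    if M00 == 0:
        return x + w / 2, y + h / 2, w, h          # no target mass: keep the window
    M10 = (ii * W).sum()                           # first-order moment along the horizontal axis
    M01 = (jj * W).sum()                           # first-order moment along the vertical axis
    cx, cy = M10 / M00, M01 / M00                  # centroid (M10/M00, M01/M00) inside the window
    s = 2.0 * np.sqrt(M00 / 256.0)                 # window width, formula (11)
    return x + cx, y + cy, s, 1.2 * s              # new center; width s, length 1.2s

# Equivalent packaged call:
# ret, window = cv2.CamShift(backproj, (x, y, w, h),
#                            (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1))
```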
(III) Motion track filtering
The Camshift algorithm automatically adjusts the window size to fit the size of the tracked target in the image and is computationally efficient, but it suffers from interference by similar colors and from target loss against large same-colored backgrounds. In multi-target tracking especially, when moving targets cross and overlap, mutual interference can momentarily destroy the targets' color features, making the accurate centroid position in the image difficult to obtain and possibly losing the target. The invention adopts Kalman filtering to estimate the motion state of the target, improving tracking precision and reducing the probability of target loss (He Jun, Yin Hua, et al. An improved target tracking algorithm based on the combination of Camshift and Kalman. Computer Measurement & Control [J]. 2017 (25): 209-212).
The Kalman filtering algorithm tracks the values of one or more variables from two bases. One is prediction from the system's equation of motion: for example, knowing the tracked target's velocity, its position at the next moment can be predicted; the prediction certainly contains error and can only serve as one basis for tracking. The other basis is that the variable's value can be measured; the measurement of course also contains error and can likewise only serve as a basis, the two bases carrying different weights. Kalman filtering performs a series of iterations on these two bases to track the target. For the theory, refer to the modern control literature (Simon D. Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches [M]. John Wiley & Sons, 2006); the related parameter settings in the algorithm are analyzed below.
In multi-target tracking, because the time interval between two adjacent frames is short (for example, 30 milliseconds), the motion state of each target generally changes little, and the target is assumed to move at constant velocity between the two frames. With the inter-frame interval set to T, the target motion state variable X is defined as
X = (p_x, p_y, v_x, v_y)^T    (4)
wherein p_x and p_y are the target's coordinate positions in the horizontal and vertical directions, and v_x and v_y are its velocities in the horizontal and vertical directions; the observation Z = (p_x, p_y) corresponds to the observed target position.
Defining a system state transition matrix F as:
F = \begin{pmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}    (5)
the observation matrix H is:
H = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}    (6)
p and v are mutually independent white-noise sequences; the covariance matrix Q of p is given by formula (7) (a 4×4 matrix) and the covariance matrix R of v by formula (8) (a 2×2 matrix), both shown as formula images in the original.
In engineering practice, under unattended operation, the moving target computed with the interframe difference method and the Canny edge detection algorithm (the detection method introduced above; its result also supplies the initialization parameters of the Kalman filtering) is used as the input of the Camshift algorithm and also as the initial state vector of the Kalman filter; when there is human intervention, the manually selected target position is used as the input value of the target state.
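A minimal sketch of the constant-velocity Kalman filter of formulas (4)-(8), using OpenCV's cv2.KalmanFilter; the interval T and the diagonal noise magnitudes q and r are illustrative assumptions, since the patent's Q and R entries are given only as formula images:

```python
import numpy as np
import cv2

T, q, r = 0.03, 1e-2, 1.0                   # assumed frame interval and noise levels
kf = cv2.KalmanFilter(4, 2)                 # state (p_x, p_y, v_x, v_y), observation (p_x, p_y)
kf.transitionMatrix = np.array([[1, 0, T, 0],
                                [0, 1, 0, T],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)   # F, formula (5)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)  # H, formula (6)
kf.processNoiseCov = q * np.eye(4, dtype=np.float32)         # Q, assumed diagonal
kf.measurementNoiseCov = r * np.eye(2, dtype=np.float32)     # R, assumed diagonal

pred = kf.predict()                                          # predicted state for this frame
kf.correct(np.array([[120.0], [80.0]], np.float32))          # hypothetical CamShift centroid
```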
(IV) Accurate target positioning and distance estimation
STEP 1. Calculating the size of the field of view
The horizontal field angle A and the vertical field angle B of the camera's field of view are calculated as:
A = 2·arctan(w/(2f)),  B = 2·arctan(h/(2f))    (12)
In the formula, w is the field width, h is the field height, and f is the focal length of the lens. In a specific embodiment of the invention, the parameters are calculated from the camera parameters of a particular intelligent terminal.
It is known that:
(1) Image sensor: 1/2.7″ 2-megapixel progressive-scan CMOS; its imaging size is w × h = 5.27 mm × 3.96 mm;
(2) Focal length: 3.3mm-12mm;
(3) Viewing angle range: 96°-35° (16:9); 79.3°-27.2° (4:3)
When the focal length f is 3.3 mm, it can be calculated that
Horizontal field angle: A = 2·arctan(5.27/(2 × 3.3)) = 77.2°
Vertical field angle: B = 2·arctan(3.96/(2 × 3.3)) = 61.9°
STEP 2. Calculating the target position
FIG. 2 is a schematic view of real-time target tracking. In FIG. 2, O is the center position of the picture; the azimuth and elevation measured there by the gravity sensor and the magnetic sensor are (α_0, β_0); the pixel coordinates of the target P in the picture are (x, y); the image resolution is (W, H); and the projection length of the corresponding target image in the horizontal direction is l.
Following the working principle of a vernier caliper, the sensor measurement (α_0, β_0) is taken as the 'main scale' reading and the image-detected target centroid coordinates as the 'vernier scale' reading; the azimuth α and elevation β are then calculated as
α = α_0 + (x − W/2)·A/W    (13)
β = β_0 + (H/2 − y)·B/H    (14)
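A sketch of the position estimate under the linear pixel-to-angle mapping assumed in formulas (13)-(14) as reconstructed above; the sign convention for elevation (pixel y growing downward) is an assumption:

```python
def target_angles(alpha0, beta0, x, y, W, H, A, B):
    # Coarse sensor reading at screen center O plus the pixel offset of target P
    alpha = alpha0 + (x - W / 2) * A / W    # azimuth, formula (13)
    beta = beta0 + (H / 2 - y) * B / H      # elevation, formula (14)
    return alpha, beta
```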
STEP 3. Estimating the target distance
If the geometric length of a given aircraft is L, the distance d between the observation point and the target is calculated as
d = L·W / (2·l·tan(A/2))    (15)
In actual engineering, the geometric size of the target cannot be obtained accurately by remote observation alone. It is determined as follows: (1) when the operator clicks the target manually to initiate tracking, the operator also selects the target type, and the basic size of that typical target is taken as the target size; (2) when the device detects and tracks the target automatically in the unattended state, the target type can be identified with a deep learning algorithm, which then determines the target size. FIG. 3 illustrates the effect achieved by the present invention.
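A sketch of the distance estimate of formula (15) as reconstructed above, for a target of physical length L spanning l pixels of a W-pixel-wide image under horizontal field angle A; the drone size and pixel counts below are illustrative assumptions:

```python
import math

def target_distance(L, l, W, A_deg):
    # Pinhole model: l pixels on a W-pixel sensor subtend l*w_sensor/W at focal length f
    return L * W / (2 * l * math.tan(math.radians(A_deg) / 2))

d = target_distance(0.35, 12, 1920, 77.2)   # a 0.35 m drone spanning 12 px: d is roughly 35 m
```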
Advantages of the Invention
First, following the working principle of a vernier caliper, the azimuth and elevation of the optical sighting line measured by the sensors are used as the 'coarse' values of the target's azimuth and elevation; the 'correction value' of the target position angle is obtained by detecting and converting the target image after processing the terminal camera picture, and a more accurate target azimuth and elevation are calculated;
second, the target contour is obtained by edge detection on the terminal camera picture, and the target distance is obtained by proportional conversion between the size of the target image and the physical size of the target;
and third, the target track obtained by Camshift tracking is processed with a Kalman filtering algorithm to obtain a relatively stable and continuous target track, solving the problem of the target position 'jumping' in the terminal camera picture.
The method improves the intelligent-terminal-based means of low-altitude target reconnaissance. Using image processing, it greatly improves the precision with which the terminal measures target azimuth and elevation, solves the terminal's inability to measure target distance automatically, increases the number of targets a single terminal can track simultaneously, makes up the deficiencies of the intelligent-terminal-based low-altitude target reconnaissance system, helps generate more accurate and continuous target tracks, and greatly improves the efficiency and effectiveness of intelligent-terminal-based low-altitude target reconnaissance.
The scheme adopts an innovative approach and an efficient calculation method to automate intelligent-terminal-based low-altitude target detection and tracking, meeting higher practical requirements through software optimization; compared with existing products it has obvious technical and market advantages in cost-effectiveness, popularization value, and application prospects.
Application expansion
The invention is suitable not only for reconnaissance and monitoring of aerial flying targets but also for identifying and positioning moving targets such as vehicles and pedestrians on the ground, and, after the background sea-wave clutter is filtered out, even for detecting and tracking waterborne targets such as naval vessels and ships sailing at sea; it has broad application prospects in military information reconnaissance and in the civil security and monitoring industry. In particular, the security industry has deployed large numbers of surveillance cameras and infrared detectors, whose drawbacks, such as high storage cost, poor machine readability, and difficulty of real-time sharing, are gradually becoming apparent.

Claims (4)

1. A low-altitude target detection and tracking method based on image processing is characterized by comprising the following steps:
(I) Moving target detection
This comprises the following steps:
STEP 1. Detecting motion area in image by using interframe difference method
Comparing the gray values of corresponding pixel points in two successive frames of the image sequence by subtracting the two frames: if the gray difference is very small, no moving object is considered to pass through that point; if the gray value changes greatly, an object is considered to pass through. The change between the k-th frame image f_k(x, y) and the (k+1)-th frame image f_{k+1}(x, y) is represented by the binarized difference d_k(x, y) of formula (1), where x and y respectively denote the horizontal and vertical coordinate values in the image gray matrix;
d_k(x,y) = \begin{cases} 1, & |f_{k+1}(x,y) - f_k(x,y)| > D \\ 0, & \text{otherwise} \end{cases}    (1)
in the formula, D represents the binarization threshold of the difference image; its value depends on factors such as lighting changes when the images are shot and is set according to the scene; d_k(x, y) = 0 indicates no change at (x, y) between the two frames, and 1 indicates a change;
STEP 2. Detect the target position using the Canny edge detection algorithm
This comprises the following steps:
Step I. Smooth the image with a Gaussian filter, filtering out high-frequency signals in the image and removing the influence of noise on edge identification;
Step II. Calculate the gradient magnitude and direction using finite differences of the first-order partial derivatives. Decompose each pixel's gradient into x and y components, computing the horizontal gradient G_x and vertical gradient G_y of every pixel in the image with the Sobel edge-difference operator; the horizontal and vertical Sobel kernels S_x and S_y are:
S_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \quad S_y = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix}    (2)
on this basis, the gradient magnitude G and gradient angle θ of the pixel are calculated as:
G = \sqrt{G_x^2 + G_y^2}, \quad \theta = \operatorname{atan2}(G_y, G_x)    (3)
in formula (3), the gradient angle θ ranges from −π to π radians;
Step III. Apply non-maximum suppression to the gradient magnitude. The procedure: compare the gradient strength of the current point with that of the points along the positive and negative gradient directions; if the current point's gradient strength is the maximum among points in the same direction, keep its value; otherwise suppress it, i.e. set it to 0;
Step IV. Double-threshold detection. Edge pixels are distinguished with a high threshold and a low threshold: a pixel whose gradient value exceeds the high threshold is considered a strong edge point; one whose gradient value is below the high threshold but above the low threshold is marked as a weak edge point; points below the low threshold are suppressed;
Step V. Hysteresis boundary tracking. Strong edge points can be considered true edges; weak edge points may be true edges or may be caused by noise or color changes. A hysteresis boundary-tracking algorithm searches all connected weak edges: if any point of a connected weak edge is connected to a strong edge point, that weak edge is kept, otherwise it is suppressed. Locating these edge points thereby realizes detection of the moving target's edge;
(II) Moving target tracking
A continuously adaptive mean-shift tracking algorithm, Camshift, is adopted as the basic target tracking algorithm, and the detection result obtained by the frame difference method and the Canny edge detection algorithm is adopted as the initial input of target tracking, thereby realizing automatic detection and tracking of the target;
(III) Motion track filtering
Kalman filtering is adopted to estimate the motion state of the target, improving target tracking precision and reducing the probability of target loss;
in multi-target tracking, because the time interval between two adjacent frames is short, the motion state of each target generally changes little, so the target is assumed to move at constant velocity between the two frames; with the inter-frame interval set to T, the target motion state variable X is defined as:
X = (p_x, p_y, v_x, v_y)^T    (4)
wherein p_x and p_y are the target's coordinate positions in the horizontal and vertical directions, and v_x and v_y are its velocities in the horizontal and vertical directions; the observation Z = (p_x, p_y) corresponds to the observed target position;
defining a system state transition matrix F as:
F = \begin{pmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}    (5)
the observation matrix H is:
H = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}    (6)
in addition, p and v are mutually independent white-noise sequences; the covariance matrix Q of p is given by formula (7) (a 4×4 matrix) and the covariance matrix R of v by formula (8) (a 2×2 matrix), both shown as formula images in the original;
In engineering practice, under unattended operation, the moving target computed with the interframe difference method and the Canny edge detection algorithm is used as the input of the Camshift algorithm and also as the initial state vector of the Kalman filter; when there is human intervention, the manually selected target position is used as the input value of the target state;
(IV) Accurate target positioning and distance estimation
The target distance is estimated by calculating the field-of-view size and computing the target position.
2. The low-altitude target detection and tracking method according to claim 1, wherein the second step specifically comprises the steps of:
Step I. Initialize the size and position of the search window; in fully automatic detection and tracking, the detection results of the frame difference method and the Canny edge detection algorithm are used as the initial input parameters of target tracking, the detected moving-target position serving as the initial tracking parameter; in semi-automatic tracking, the target position and size confirmed by manual dragging on the intelligent terminal's screen serve as the initial input parameters;
Step II. Set the target calculation area; based on experimental comparison, a region slightly larger than the search window is set as the calculation area to increase the probability of capturing the target and the stability of tracking it;
Step III. Calculate the color histogram within the target area; convert the input image into HSV color space, the target area being the initially set search-window range, separate the hue H component, and compute the hue histogram of the area to obtain the color histogram of the target template; the hue H is calculated as follows:
let (r, g, b) be a pixel's coordinates on red, green and blue respectively, each a real number between 0 and 1; let max be the maximum of r, g and b and min the minimum of the three; the hue h of HSV space (range 0-360°) is then
h = \begin{cases} 0^\circ, & \max = \min \\ \left(60^\circ \times \frac{g-b}{\max-\min} + 360^\circ\right) \bmod 360^\circ, & \max = r \\ 60^\circ \times \frac{b-r}{\max-\min} + 120^\circ, & \max = g \\ 60^\circ \times \frac{r-g}{\max-\min} + 240^\circ, & \max = b \end{cases}    (9)
Step IV. Back-project the color histogram to compute the color probability distribution; for each pixel in the input image the target-model color histogram is queried: pixels within the target area yield the probability of belonging to the target pixel, while pixels within non-target areas get probability 0;
Step V. The MeanShift iteration process; this is the core of the Camshift algorithm and aims to find the position of the target center in the current frame; first select the size and initial position of the search window in the color probability distribution map, then calculate the centroid position of the search window, as follows:
let (i, j) be the coordinates of a pixel point in the search window and I(i, j) the corresponding value of that pixel in the back projection of the color histogram; the zeroth-order moment M_00 and the first-order moments M_10, M_01 of the search window are defined as:
M_{00} = \sum_{i=1}^{M} \sum_{j=1}^{N} I(i,j), \quad M_{10} = \sum_{i=1}^{M} \sum_{j=1}^{N} i \, I(i,j), \quad M_{01} = \sum_{i=1}^{M} \sum_{j=1}^{N} j \, I(i,j)    (10)
where M and N are the numbers of pixels along the length and width of the search window W, obtained from the user's manual selection or from the previous tracking result; the centroid position corresponding to the search window is (M_10/M_00, M_01/M_00);
Step VI. Adjust the center of the search window to the centroid; move the window center to the centroid, and if the displacement exceeds the set threshold, recompute the centroid of the adjusted window and perform a new round of window position and size adjustment, until the displacement between the window center and the centroid is below the threshold or the number of iterations reaches a set maximum, at which point the convergence condition is considered met and the target's latest center position is obtained; the search window's position and size are passed on as the target position for the next frame, where a new target search begins, i.e. steps II to VI are repeated.
3. The low-altitude target detection and tracking method according to claim 2, wherein the rectangular search window has width s = 2\sqrt{M_{00}/256} (formula (11)) and length 1.2s.
4. The low-altitude target detection and tracking method according to claim 1, wherein the fourth step specifically includes the steps of:
STEP 1. Field-of-view size calculation
The horizontal field angle A and the vertical field angle B of the camera's field of view are calculated as:
A = 2·arctan(w/(2f)),  B = 2·arctan(h/(2f))    (12)
in the formula, w is the field width, h is the field height, and f is the focal length of the lens;
STEP 2. Target position estimation
O is the center position of the picture; the azimuth and elevation measured there by the gravity sensor and the magnetic sensor are (α_0, β_0); the pixel coordinates of the target P in the picture are (x, y); the image resolution is (W, H); and the projection length of the corresponding target image in the horizontal direction is l;
following the working principle of a vernier caliper, the sensor measurement (α_0, β_0) is taken as the 'main scale' reading and the image-detected target centroid coordinates as the 'vernier scale' reading; the azimuth α and elevation β are then calculated as
α = α_0 + (x − W/2)·A/W    (13)
β = β_0 + (H/2 − y)·B/H    (14)
STEP 3. Target distance estimation
If the geometric length of a given aircraft is L, the distance d between the observation point and the target is calculated as
d = L·W / (2·l·tan(A/2))    (15)
The geometric size of the target is determined as follows: (1) when the operator clicks the target manually to initiate tracking, the operator also selects the target type, and the basic size of that typical target is taken as the target size; (2) when the device detects and tracks the target automatically in the unattended state, the target type is identified with a deep learning algorithm, which then determines the target size.
CN202010422092.8A 2020-05-08 2020-05-08 Low-altitude target detection tracking method based on image processing Active CN111709968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010422092.8A CN111709968B (en) 2020-05-08 2020-05-08 Low-altitude target detection tracking method based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010422092.8A CN111709968B (en) 2020-05-08 2020-05-08 Low-altitude target detection tracking method based on image processing

Publications (2)

Publication Number Publication Date
CN111709968A CN111709968A (en) 2020-09-25
CN111709968B true CN111709968B (en) 2022-10-11

Family

ID=72537706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010422092.8A Active CN111709968B (en) 2020-05-08 2020-05-08 Low-altitude target detection tracking method based on image processing

Country Status (1)

Country Link
CN (1) CN111709968B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669280B (en) * 2020-12-28 2023-08-08 莆田市山海测绘技术有限公司 Unmanned aerial vehicle inclination aerial photography right-angle image control point target detection method based on LSD algorithm
CN113286077A (en) * 2021-04-19 2021-08-20 瑞泰影像科技(深圳)有限公司 Full-automatic camera tracking and identifying technology
CN113723432B (en) * 2021-10-27 2022-02-22 深圳火眼智能有限公司 Intelligent identification and positioning tracking method and system based on deep learning
CN116385472B (en) * 2023-06-07 2023-08-08 深圳市锦红兴科技有限公司 Hardware stamping part deburring effect evaluation method
CN117132423B (en) * 2023-08-22 2024-04-12 深圳云创友翼科技有限公司 Park management system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109102523A (en) * 2018-07-13 2018-12-28 南京理工大学 A kind of moving object detection and tracking
CN109816692A (en) * 2019-01-11 2019-05-28 南京理工大学 A kind of motion target tracking method based on Camshift algorithm

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109102523A (en) * 2018-07-13 2018-12-28 南京理工大学 A kind of moving object detection and tracking
CN109816692A (en) * 2019-01-11 2019-05-28 南京理工大学 A kind of motion target tracking method based on Camshift algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Video gesture tracking algorithm based on the combination of CamShift and Kalman filtering; Luo Yuan et al.; Application Research of Computers; 2009-03-15 (No. 03); full text *

Also Published As

Publication number Publication date
CN111709968A (en) 2020-09-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant