CN111415370A - Embedded infrared complex scene target real-time tracking method and system - Google Patents
- Publication number
- CN111415370A CN111415370A CN202010283683.1A CN202010283683A CN111415370A CN 111415370 A CN111415370 A CN 111415370A CN 202010283683 A CN202010283683 A CN 202010283683A CN 111415370 A CN111415370 A CN 111415370A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T7/00—Image analysis
  - G06T7/20—Analysis of motion; G06T7/215—Motion-based segmentation
  - G06T7/10—Segmentation; Edge detection; G06T7/12—Edge-based segmentation
  - G06T7/10—Segmentation; Edge detection; G06T7/136—Segmentation; Edge detection involving thresholding
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10048—Infrared image
Abstract
The invention discloses an embedded real-time tracking method for infrared targets in complex scenes, comprising the following steps: after the rough position of the target is received from the external communication unit, the target is identified in the initial frame image counted from that moment and its precise coordinates are found; the background complexity of the current frame image is then judged; if the image background is complex, subsequent frames track the target with a kernel correlation filtering tracking algorithm; if the image background is simple, subsequent frames track the target with a centroid tracking algorithm based on gray contrast. On this basis, an embedded real-time tracking system for infrared targets in complex scenes is also provided. The method and system solve the problems of the prior art, namely difficult real-time processing, easily lost targets, and templates that must be stored in advance, and are suitable for infrared search and tracking systems such as seekers, infrared alarms and photoelectric pods.
Description
Technical Field
The invention relates to the technical field of target tracking, in particular to a real-time target tracking method and a real-time target tracking system based on an embedded infrared image.
Background
With the development of national defense and military technology, increasingly high requirements are placed on the tracking performance and real-time performance of infrared search and tracking systems such as seekers, infrared alarms and photoelectric pods. Such systems often use an infrared imaging sensor to detect long-distance targets, so the target occupies few pixels in the infrared video image and carries little texture information. Because the background changes as the target moves, target detection and tracking are often disturbed by background objects and noise.
The target must be accurately located in the image; its coordinates are then fed back to a servo mechanism in the tracking system, which steers the infrared camera so that the optical axis of the infrared sensor always points at the moving target, achieving guidance and monitoring. This requires the target tracker to process the infrared video in real time, i.e. to find the target position accurately in every frame of the video stream and to keep the target-position information sent to the servo mechanism continuously up to date.
Improvements in processor performance and the rapid development of integrated-circuit processes have accelerated the application of image target tracking algorithms in guided weapon systems. Embedded systems can meet weapon-system requirements on environmental indexes such as temperature and humidity, and offer small size, easy integration and strong processing capability. In particular, embedded circuits with an FPGA and a DSP as core processors are well suited to image tracking systems. Research and development of embedded real-time target tracking technology is therefore of great significance for upgrading infrared search and tracking systems and for developing new products.
Prior patents on image target tracking mainly optimize the tracking algorithm itself: image data are collected in advance and the algorithm is usually implemented in simulation software on a general-purpose computer or an industrial PC. How the algorithm would be implemented in an actual infrared search and tracking system, and whether its real-time performance meets system requirements, are not considered; only simulated results are reported. There is little research on the real-time performance of infrared target tracking algorithms on embedded platforms, or on their tracking performance against complex backgrounds. Patent application No. 201210595161.0, entitled "automatic target identification and tracking method and system under complex scene", discloses an embedded target identification and tracking method and system, but before identification and tracking it must store a large number of templates of the target at different positions for subsequent matching, which limits its practicality. In an infrared search and tracking system, the infrared camera must adjust its angle as the target moves so that the optical axis of the camera points at the target at all times. The input video frame rate of the system is 50 Hz, so the single-frame running time of the algorithm on the embedded platform must be kept within 20 ms for the servo mechanism to adjust the camera angle in time; the tracker must also follow the target stably against a dynamically changing background. These impose high requirements on the real-time and tracking performance of the target tracking system.
Disclosure of Invention
The invention provides an embedded real-time tracking method and system for infrared targets in complex scenes. It aims to overcome the defects of the prior art for infrared targets, namely difficult real-time processing, easily lost targets, and the need to store a target template in advance. By fusing centroid tracking based on gray contrast with a kernel correlation filtering algorithm, it tracks the target in real time, reduces the single-frame running time of the algorithm and improves processing real-time performance, and is suitable for infrared search and tracking systems such as seekers, infrared alarms and photoelectric pods.
In order to achieve the aim, the invention provides an embedded infrared complex scene target real-time tracking method, which comprises the following steps:
step S1, receiving the rough coordinate of the target in the initial frame image;
step S2, identifying the target in the initial frame image according to the received tracking command and the rough coordinate of the target to be tracked, and obtaining the accurate position of the target; the method comprises the following steps:
step S21, reading a local area image with a preset size and taking the rough coordinate position of the target to be tracked as the center from the collected current frame video image, carrying out high-pass filtering processing, and obtaining a target enhanced binary image after threshold segmentation;
step S22, performing connected domain analysis on the target enhanced binary image to obtain the shape and position of the suspected target;
step S23, obtaining the candidate targets and their number by comparing the shape features of the suspected targets with preset thresholds;
step S24, taking the candidate target with the largest area as the recognition target and outputting the accurate position coordinates of the recognition target;
step S4, when the number of the candidate targets is larger than the target number threshold, the image background is considered to be complex, tracking is carried out through a kernel correlation filtering algorithm according to the received video image and the position coordinates of the tracking target obtained last time, and the position coordinates of the tracking target at this time are obtained;
step S5, when the number of the candidate targets is smaller than the target number threshold, the image background is considered to be simple and the centroid tracking based on the gray contrast is used for obtaining the position coordinates of the tracking target;
and step S6, repeating the above steps for each subsequent frame until target tracking of the last frame of the video image is completed.
In order to achieve the above object, the present invention further provides an embedded infrared complex scene based target real-time tracking system, which includes a DDR memory, a Flash memory, and a processor, wherein the DDR memory stores image data, the Flash memory stores an embedded infrared complex scene based target real-time tracking program, the processor includes a processing circuit that uses an FPGA and a multi-core DSP as core processing units, and the steps of the above method are executed when the processor runs the embedded infrared complex scene based target real-time tracking program.
The embedded real-time tracking method and system for infrared targets in complex scenes combine two trackers: centroid tracking based on gray contrast runs fast and locates the center of the target accurately, and suits scenes with relatively simple backgrounds; kernel correlation filtering is robust and suits targets against relatively complex backgrounds. Compared with the prior art, the scheme tracks a designated dynamic infrared target stably, needs no pre-stored target template, and meets the demand of an infrared search and tracking system for real-time feedback of the target position by the tracker. The algorithm is implemented in C; the compilation environment is CCS 5.5 or above, and the method runs on an embedded platform with an FPGA and a multi-core DSP as core devices.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a flowchart of an embedded infrared complex scene-based target real-time tracking method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a real-time tracking system for an embedded infrared-based complex scene target according to an embodiment of the present invention;
FIG. 3 is an initial frame of infrared images acquired;
fig. 4 is the extracted local area image of size 128 × 156;
FIG. 5 is a high pass filtered image of 128 x 128 size;
FIG. 6 is an enhanced binary image containing a target;
FIG. 7 is a flow chart of a kernel correlation filtering algorithm;
fig. 8 is a tracking result diagram of an embedded infrared complex scene-based target real-time tracking system.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that all the directional indicators (such as up, down, left, right, front, and rear … …) in the embodiment of the present invention are only used to explain the relative position relationship between the components, the movement situation, etc. in a specific posture (as shown in the drawing), and if the specific posture is changed, the directional indicator is changed accordingly.
In addition, the descriptions related to "first", "second", etc. in the present invention are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "connected," "secured," and the like are to be construed broadly, and for example, "secured" may be a fixed connection, a removable connection, or an integral part; the connection can be mechanical connection, electrical connection, physical connection or wireless communication connection; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In addition, the technical solutions in the embodiments of the present invention may be combined with each other, but it must be based on the realization of those skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should not be considered to exist, and is not within the protection scope of the present invention.
Example one
As shown in fig. 2, an embodiment of the present invention provides an embedded infrared complex scene target real-time tracking system, which includes an infrared camera 1, a Camera Link decoder 2, an FPGA 3, a RapidIO interface 4, an EMIF interface 5, a DSP 6, a DDR memory 7, a Flash memory 8, a serial port controller 9, an external communication unit 10, a servo mechanism 11, and a display unit 12.
The FPGA 3 is a Xilinx XC7K325T and the DSP 6 is a TI TMS320C6657. The FPGA 3 collects and transmits image data, receives instructions from the external communication unit, and forwards the tracking result to the display unit and the servo mechanism; the DSP 6 runs the image tracking algorithm to obtain the tracking result. The FPGA 3 and the DSP 6 are connected through the RapidIO interface 4 and the EMIF interface 5. The FPGA 3 receives the 640 × 512 video image sent from the infrared camera 1 through the Camera Link decoder 2 and passes it to the DSP 6 over the RapidIO interface 4 for subsequent processing. The external communication unit 10 sends the tracking command to the FPGA 3 through the serial port controller 9, and the FPGA 3 forwards the tracking instruction to the DSP 6 over the EMIF interface 5. On receiving the tracking command, the DSP 6 executes the tracking algorithm and returns the tracking result to the FPGA 3 over the EMIF interface 5, and the FPGA 3 sends the processing result to the servo mechanism 11 and the display unit 12.
Example two
As shown in fig. 1, an embodiment of the present invention provides a real-time tracking method for an embedded infrared-based complex scene target, and the following describes the above method with reference to a tracking system:
step S1, receiving the rough coordinate of the target in the initial frame image;
the FPGA3 receives a tracking command sent by the external communication unit 10 and a rough position of an object to be tracked through the serial port controller 9, and the coordinates are。
Step S2, identifying the target and finding the accurate position of the target;
specifically, the FPGA3 sends the tracking command and the coarse position of the target to be tracked to the DSP6 through the EMIF interface 5, and the DSP6 identifies the target from the initial frame image starting from the reception of the tracking command and the coarse coordinates of the target to be tracked, and obtains the accurate position of the identified target. Step S2 includes:
step S21, reading a local area image with a preset size and taking the rough coordinate position of the target to be tracked as the center from the collected current frame video image, carrying out high-pass filtering processing, and obtaining a target enhanced binary image after threshold segmentation; the method specifically comprises the following steps:
step S211, reading, from the current frame video image, a local area image of preset size centered on the coarse coordinate position of the target;
The infrared video image captured by the infrared camera 1 is decoded by the Camera Link decoder 2 and transferred by the FPGA 3 to the DSP 6 over the RapidIO interface 4. The DSP 6 collects the infrared video frame by frame at the set sampling frequency of 50 Hz. From the collected initial frame image shown in fig. 3, a local area image centered on the coarse target coordinates is read: a square region of side M, extended horizontally by L pixels beyond its left and right boundaries, giving a size of M × (M + 2L) pixels (height × width). In this embodiment the coarse position coordinate of the target in the current frame video image is (132, 240), M is 128 pixels and L is 14 pixels, so the local area image has size 128 × 156 pixels, as shown in fig. 4.
Step S212, high-pass filtering the local area image. Every pixel of the local area image, except the L columns of pixels adjacent to each of the left and right boundaries, is processed by the high-pass filter, and the absolute value of the result is taken, giving a high-pass filtered image of size 128 × 128 pixels, namely the preprocessed image shown in fig. 5. Here x and y are the abscissa and ordinate of a pixel, p is the index of a pixel within the L boundary columns on each side (from which the abscissa and ordinate of the pixel can be recovered), and f is the gray value of the pixel.
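The exact filter expression is not reproduced in the text above, so the following C sketch uses one common horizontal high-pass for small-target enhancement as a stand-in: each pixel has the mean of the two pixels L columns to its left and right subtracted, and the absolute value is kept. This is an assumption, not the patent's kernel; note that with a 156-wide input and L = 14 it does yield the 128-wide output described.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical horizontal high-pass (NOT the patent's exact kernel):
 * out(x,y) = | f(x,y) - (f(x-L,y) + f(x+L,y)) / 2 |,
 * evaluated for all pixels except the L boundary columns on each side,
 * so a W x H input yields a (W - 2L) x H output. */
void highpass_abs(const uint8_t *in, int w, int h, int L, uint8_t *out)
{
    int ow = w - 2 * L;                       /* output width */
    for (int y = 0; y < h; y++) {
        for (int x = L; x < w - L; x++) {
            int bg = (in[y * w + (x - L)] + in[y * w + (x + L)]) / 2;
            int d  = abs((int)in[y * w + x] - bg);
            out[y * ow + (x - L)] = (uint8_t)(d > 255 ? 255 : d);
        }
    }
}
```

With w = 156, h = 128 and L = 14 the output buffer is the 128 × 128 preprocessed image fed to the threshold segmentation step.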
step S213-215, for the pre-processed imagePerforming threshold segmentation to obtain a binary image containing the target, and then performing primary expansion processing on the binary image containing the target to obtain an expanded binary image as shown in fig. 6.
Step S213, for the preprocessed imageAveragingSum mean square errorAnd obtaining a threshold value(ii) a Wherein k is a preset constant, here taken to be 1.2;
step S214, utilizing thresholdFor the pre-processed imagePerforming binarization when preprocessing the imageMiddle pixel pointGrey scale valueGreater than a threshold valueTime-dependent pixel pointThe gray value is assigned to 255, otherwise, the gray value is assigned to 0, and a binary image with the size of 128 x 128 (the side length is 128 pixels) is obtained;
Step S215, for the binary imageAnd performing primary expansion processing to obtain a complete target enhancement binary image.
And step S22, performing connected domain analysis on the expanded binary image to obtain the shape and position of the suspected target. Step S22 includes:
step S221, treating each connected domain, i.e. each group of mutually connected pixels with gray value 255 in the target-enhanced binary image, as a suspected target, and recording, for every suspected target, the row numbers it occupies and the starting column number within each row;
step S222, deriving the length, width, area and centroid coordinates of all suspected targets from the recorded row numbers and per-row starting column numbers of each connected domain.
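As a hedged illustration of S221 and S222 (the patent describes a run-based marking; this sketch uses an equivalent flood-fill labeling instead, and the type and function names are assumptions):

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    int area;
    int min_x, max_x, min_y, max_y;   /* bounding box -> length/width */
    float cx, cy;                     /* centroid coordinates */
} Blob;

/* Label 4-connected components of 255-pixels in a W x H binary image
 * and fill per-blob statistics. Returns the blob count (at most
 * max_blobs). `img` is modified: visited pixels are cleared. */
int label_blobs(uint8_t *img, int w, int h, Blob *blobs, int max_blobs)
{
    int count = 0;
    int *stack = malloc(sizeof(int) * (size_t)(w * h));
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            if (img[y * w + x] != 255 || count >= max_blobs) continue;
            Blob b = { 0, x, x, y, y, 0.0f, 0.0f };
            long sx = 0, sy = 0;
            int top = 0;
            stack[top++] = y * w + x;
            img[y * w + x] = 0;
            while (top > 0) {               /* iterative flood fill */
                int i = stack[--top], px = i % w, py = i / w;
                b.area++; sx += px; sy += py;
                if (px < b.min_x) b.min_x = px;
                if (px > b.max_x) b.max_x = px;
                if (py < b.min_y) b.min_y = py;
                if (py > b.max_y) b.max_y = py;
                int nbr[4] = { i - 1, i + 1, i - w, i + w };
                int ok[4]  = { px > 0, px < w - 1, py > 0, py < h - 1 };
                for (int n = 0; n < 4; n++)
                    if (ok[n] && img[nbr[n]] == 255) {
                        img[nbr[n]] = 0;
                        stack[top++] = nbr[n];
                    }
            }
            b.cx = (float)sx / b.area;
            b.cy = (float)sy / b.area;
            blobs[count++] = b;
        }
    }
    free(stack);
    return count;
}
```

The bounding box gives each suspected target's length and width, and (cx, cy) is the centroid used later for centroid tracking.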
Step S23, obtaining candidate target coordinates and the number thereof according to the result of comparing the feature of the outline of the suspected target with a preset threshold value. Step S23 includes:
The aspect ratio and area of every suspected target are compared with the preset aspect-ratio threshold and area threshold: suspected targets whose area and aspect ratio satisfy the threshold conditions are kept as candidate targets and the rest are removed, and the number of candidate targets is recorded. In this embodiment the threshold is taken as 10.
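Step S23 can be sketched as a simple filter over the measured regions (the struct layout and the convention that the aspect ratio is long side over short side are assumptions; the patent leaves the exact inequality directions to its thresholds):

```c
typedef struct { int area, min_x, max_x, min_y, max_y; } Region;

/* Keep suspected targets whose area exceeds area_min and whose
 * aspect ratio (long side / short side) stays below ratio_max.
 * Survivors are compacted to the front; returns the candidate count. */
int filter_candidates(Region *r, int n, int area_min, float ratio_max)
{
    int kept = 0;
    for (int i = 0; i < n; i++) {
        int w = r[i].max_x - r[i].min_x + 1;
        int h = r[i].max_y - r[i].min_y + 1;
        float ratio = (w > h) ? (float)w / h : (float)h / w;
        if (r[i].area > area_min && ratio < ratio_max)
            r[kept++] = r[i];
    }
    return kept;
}
```

The returned count is the candidate-target number compared against the background-complexity threshold in step S3.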
Step S24, taking the candidate target with the largest area as the recognized target and outputting its precise position coordinates: all candidate targets are sorted by area, the candidate with the largest area is taken as the identified target, and its precise coordinates, here (105, 240), are obtained.
In step S3, the complexity of the background of the current frame image is judged: if the number of candidate targets in the current frame's local area image exceeds a preset threshold, here taken as 3, the background of the current frame image is considered complex; otherwise it is considered simple.
In step S4, if the background of the current frame image is complex, the target is tracked by the subsequent frame using the kernel correlation filtering algorithm. Here, satisfying the condition of image background complexity, the kernel correlation filtering algorithm is executed, as shown in fig. 7, including:
in step S41, an image is received.
In step S42, it is determined whether or not the frame is an initial frame.
In step S43, in the initial frame image, a local area image of a predetermined size centered on the target precise coordinates is extracted.
Step S44, computing the HOG (Histogram of Oriented Gradients) feature of the local area image, taking it as the initial target HOG feature template x, and converting it to the frequency domain by Fourier transform to obtain x̂.
Step S45, computing the initial tracking filter model from the extracted initial target HOG feature template: α̂ = ŷ / (k̂ˣˣ + λ), where λ is a preset constant taken as 0.0001, ŷ is the Gaussian-shaped expected regression label in the frequency domain, and k̂ˣˣ is the kernel autocorrelation of x̂.
In step S46, when the next frame of image arrives, a local area image having the same predetermined size is extracted from the current frame of image with the position of the target in the previous frame of image as the center.
Step S47, computing the HOG feature of this detection sample and converting it to the frequency domain by Fourier transform to obtain ẑ.
Step S48, computing the kernel cross-correlation k̂ˣᶻ between the current frame's detection-sample HOG feature ẑ and the target HOG feature template x̂.
Step S49, multiplying the kernel cross-correlation k̂ˣᶻ element-wise with the tracking filter model α̂ to obtain the target confidence response map f̂ = k̂ˣᶻ ⊙ α̂, then taking the inverse Fourier transform of f̂ to obtain the time-domain response map f.
Step S410, finding the maximum value of the target confidence response map f; the coordinates of this maximum are the coordinates of the target in the image.
Step S411, extracting a new target HOG feature template x̂′ centered on the target coordinates just obtained, computing its kernel autocorrelation k̂ˣ′ˣ′, and from it a new tracking filter model α̂′ = ŷ / (k̂ˣ′ˣ′ + λ).
Step S412, interpolating the new target HOG feature template x̂′ with the template of the previous moment according to x̂ₜ = (1 − β)·x̂ₜ₋₁ + β·x̂′ to obtain the updated HOG feature template of the current frame target, and likewise interpolating the new tracking filter model with the model of the previous moment, α̂ₜ = (1 − β)·α̂ₜ₋₁ + β·α̂′, to obtain the updated tracking filter model of the current frame; β is a preset constant taken as 0.012. This completes target tracking for one frame; when each subsequent image frame arrives, steps S42 to S412 are executed in sequence.
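The linear interpolation used to refresh both the template and the filter model can be sketched as one element-wise routine applied to the stored arrays (the function name is illustrative; in practice the buffers hold the real and imaginary parts of the frequency-domain data):

```c
/* S412: old = (1 - beta) * old + beta * new, element-wise.
 * Applied to both the HOG feature template and the filter model;
 * beta = 0.012 in the embodiment. */
void kcf_update(float *old_buf, const float *new_buf, int n, float beta)
{
    for (int i = 0; i < n; i++)
        old_buf[i] = (1.0f - beta) * old_buf[i] + beta * new_buf[i];
}
```

A small beta makes the model adapt slowly, which resists drift when the target is briefly occluded but follows gradual appearance changes.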
Step S5, if the image background is simple, subsequent frames track the target using the centroid tracking algorithm based on gray contrast. Specifically, the next frame of image is read in and centroid tracking based on gray contrast is performed, comprising:
step S51, if the number of candidate targets is less than or equal to the target number threshold, then, when the subsequent frame image arrives, a local image of the preset size centered on the target coordinates of the previous tracking result is read, and steps S21 to S24 are executed to complete the next tracking.
Step S52, if no tracking target is found, a local image of the preset size centered on the last known target coordinates is read from each following frame and steps S21 to S24 are executed, taking the candidate target with the largest area as the final target; if no target is found for N consecutive frames, where N is a set threshold, the target is considered lost.
The DSP 6 returns the tracking result to the FPGA 3 through the EMIF interface 5, the FPGA 3 sends it to the servo mechanism 11 and the display unit 12 through the serial port controller 9, and the servo mechanism 11 uses it to drive the infrared camera 1 to rotate and follow the target. Fig. 8 shows a tracking image captured by the system. It should be noted that because KCF is time-consuming when run on the DSP, real-time processing is difficult, and KCF must be optimized so that the running time for a single frame on the DSP stays below 20 ms. The KCF algorithm needs a two-dimensional Fourier transform of the image's HOG feature and a two-dimensional inverse Fourier transform of the target confidence response map. Implementing these with the Fast Fourier Transform (FFT) functions provided by the TI dsplib library is faster than computing the two-dimensional transforms directly. Specifically, the one-dimensional FFT function DSPF_sp_fftSPxSP() of the TI dsplib library is applied to each row of the two-dimensional HOG feature data; the result is transposed with DSPF_dp_mat_trans() from the same library; the FFT is applied to each row of the transposed data, i.e. to each column of the original; the result of the second FFT pass is transposed again; and the final data are the two-dimensional Fourier transform. The two-dimensional inverse Fourier transform is computed in the same way.
The kernel correlation filtering algorithm is further optimized as follows: the -O3 optimization level of the CCS 5.5 compiler is enabled; multiplication and division by powers of two in the code are replaced with shift operations; trigonometric function values are obtained by table lookup; and the Inverse Fast Fourier Transform (IFFT) is used for the inverse Fourier transform of the target confidence response map.
The following optimization measure is applied to all program code: the -O3 optimization level provided by the CCS 5.5 compiler is enabled. Through this double optimization of algorithm and code, real-time performance of the DSP processing is ensured.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. An embedded infrared complex scene target real-time tracking method is characterized by comprising the following steps:
step S1, receiving the rough coordinate of the target in the initial frame image;
step S2, identifying the target in the initial frame image according to the received tracking command and the rough coordinate of the target to be tracked, and obtaining the accurate position of the target; this comprises the following steps:
step S21, reading a local area image with a preset size and taking the rough coordinate position of the target to be tracked as the center from the collected current frame video image, carrying out high-pass filtering processing, and obtaining a target enhanced binary image after threshold segmentation;
step S22, performing connected domain analysis on the target enhanced binary image to obtain the shape and position of the suspected target;
step S23, obtaining the candidate targets and their number by comparing the shape features of the suspected targets with the corresponding thresholds;
step S24, taking the candidate target with the largest area as the recognition target and outputting the accurate position coordinates of the recognition target;
step S4, when the number of the candidate targets is larger than the target number threshold, the image background is considered complex, and tracking is carried out through a kernel correlation filtering algorithm according to the received video image and the position coordinates of the tracking target obtained last time, obtaining the position coordinates of the tracking target this time;
step S5, when the number of the candidate targets is smaller than the target number threshold, the image background is considered simple, and centroid tracking based on gray contrast is used to obtain the position coordinates of the tracking target;
and step S6, repeating the steps S1, S2, S4 and S5 until target tracking of the last frame of video image is completed.
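The branching between steps S4 and S5 amounts to a simple dispatch on the candidate count; a minimal sketch (function and parameter names hypothetical):

```python
def track_one_frame(num_candidates, target_count_threshold, kcf_track, centroid_track):
    """Choose the tracker for this frame: KCF when the background is judged
    complex (many candidates), centroid tracking when it is simple."""
    if num_candidates > target_count_threshold:
        return kcf_track()       # complex background: kernel correlation filtering
    return centroid_track()      # simple background: gray-contrast centroid tracking
```

Each frame therefore pays only the cost of the tracker its background complexity requires, which supports the sub-20 ms budget on the DSP.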
2. The embedded real-time tracking method for infrared complex scene targets as claimed in claim 1, wherein the step S21 includes:
step S211, reading in, from the current frame video image, a local area image f of size M×(M+2L) centered on the target coordinates (x0, y0); x0 and y0 respectively represent the horizontal coordinate and the vertical coordinate of the target in the current frame video image; M is the longitudinal width of the local area image and M+2L is its transverse width; L is the length extending laterally on both sides of the left and right boundaries of the M×M square local region, and the units of M and L are pixels;
step S212, performing the following processing on each pixel of the local area image f except the L columns of pixels at each of the left and right transverse boundaries:

g(x, y) = f(x, y) − (1/(2L)) Σ_{p=1}^{L} [f(x−p, y) + f(x+p, y)]

to obtain a preprocessed image g of size M×M; where x and y are the abscissa and the ordinate of the pixel, p is the index number of a pixel within the L columns of pixels on each of the left and right sides (the abscissa and the ordinate of the pixel point can be obtained from the index number), and f is the gray value of the pixel point;
step S213, computing the mean μ and the mean square error σ of the preprocessed image g, and obtaining the threshold T = μ + kσ; where k is a preset constant;
step S214, binarizing the preprocessed image g using the threshold T: when the gray value g(x, y) of a pixel point (x, y) in the preprocessed image g is greater than T, the pixel point is assigned the gray value 255, otherwise it is assigned 0, obtaining a binary image of size M×M;
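Steps S212-S214 can be sketched in NumPy under the reading that the high-pass filter subtracts the mean of the L lateral neighbours on each side of a pixel (the exact filter is reconstructed from the claim text, so treat it as an assumption):

```python
import numpy as np

def enhance_and_binarize(f, L, k):
    """High-pass filter an M x (M+2L) patch laterally, then threshold at mean + k*std."""
    M = f.shape[0]
    g = np.empty((M, M), dtype=np.float64)
    for y in range(M):
        for x in range(M):
            xc = x + L                               # column index in the padded patch
            lateral = np.concatenate((f[y, xc - L:xc], f[y, xc + 1:xc + L + 1]))
            g[y, x] = f[y, xc] - lateral.mean()      # pixel minus local lateral mean
    T = g.mean() + k * g.std()                       # threshold T = mu + k*sigma
    return np.where(g > T, 255, 0).astype(np.uint8)  # target-enhanced binary image
```

A bright point target survives the subtraction while slowly varying background is suppressed, which is what makes the subsequent fixed-form thresholding viable on infrared imagery.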
3. The embedded real-time tracking method for infrared complex scene targets as claimed in claim 2, wherein the step S22 includes the following steps:
step S221, pixel points with the gray value of 255 connected together in the target enhanced binary image are used as suspected targets, and the row number and the initial column number of each row where all the suspected targets are located are marked;
in step S222, the length, width, area, and centroid coordinates of all suspected targets are obtained by analyzing the number of rows and the number of initial columns in each row of the connected domain.
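Steps S221-S222 can be sketched with a simple BFS labeling that collects, for each connected group of 255-valued pixels, its bounding-box length, width, area and centroid (an illustrative implementation; the claim's run-length marking scheme is one efficient way to compute the same statistics):

```python
from collections import deque

def connected_components(binary):
    """Label 8-connected regions of 255-valued pixels; return per-region shape stats."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    stats = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] != 255 or seen[sy][sx]:
                continue
            q = deque([(sy, sx)])
            seen[sy][sx] = True
            pixels = []
            while q:
                y, x = q.popleft()
                pixels.append((y, x))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                                and binary[ny][nx] == 255:
                            seen[ny][nx] = True
                            q.append((ny, nx))
            ys = [p[0] for p in pixels]
            xs = [p[1] for p in pixels]
            stats.append({
                "length": max(xs) - min(xs) + 1,   # horizontal extent of the region
                "width": max(ys) - min(ys) + 1,    # vertical extent of the region
                "area": len(pixels),
                "centroid": (sum(ys) / len(pixels), sum(xs) / len(pixels)),
            })
    return stats
```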
4. The embedded real-time tracking method for infrared complex scene targets as claimed in claim 3, wherein the step S23 includes the following steps:
removing, by comparing the length-width ratio and the area of each suspected target with the length-width-ratio threshold and the area threshold, the suspected targets whose length-width ratio exceeds the length-width-ratio threshold and the suspected targets whose area exceeds the area threshold; the remaining suspected targets are the candidate targets, and their number is recorded as N.
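The screening in this claim can be sketched as a filter over the per-region statistics (dictionary keys and threshold names hypothetical):

```python
def select_candidates(stats, aspect_ratio_max, area_max):
    """Keep suspected targets whose aspect ratio and area do not exceed the
    thresholds; return the candidate list and its count N."""
    candidates = []
    for s in stats:
        aspect = max(s["length"], s["width"]) / min(s["length"], s["width"])
        if aspect <= aspect_ratio_max and s["area"] <= area_max:
            candidates.append(s)
    return candidates, len(candidates)
```

The count N is exactly the quantity later compared against the target number threshold to decide between KCF and centroid tracking.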
5. The embedded real-time tracking method for infrared complex scene targets as claimed in claim 1, wherein the step of tracking by kernel correlation filtering algorithm in step S4 includes:
step S43, when the current frame video image is the initial frame image, extracting a local area image of a predetermined size centered on the target precise coordinates therefrom;
step S44, extracting the HOG feature of the local area image and converting it into the frequency domain by Fourier transform to obtain the initial HOG feature template X of the target;
Step S45, calculating an initial tracking filter model A = Y/(Kxx + λ) using the extracted initial HOG feature template X; where λ is a constant set to 0.0001, Y is the Gaussian-type expected regression label, and Kxx is the kernel autocorrelation of X;
Step S46, when the current frame video image is a non-initial frame image, extracting a local area image of the same predetermined size from the current frame image with the position of the target in the previous frame image as the center;
step S47, extracting the HOG feature of the local area image, and converting the extracted HOG feature into the frequency domain by Fourier transform to obtain the HOG feature Z of the detection sample;
Step S48, calculating the kernel cross-correlation Kxz between the HOG feature Z of the detection sample of the current frame and the HOG feature template X of the target;
Step S49, performing a point-multiplication operation on the kernel cross-correlation Kxz and the tracking filter model A to obtain the target confidence response map F = A ⊙ Kxz; and applying the inverse Fourier transform to the target confidence response map F to obtain the response map f in the time domain;
Step S410, obtaining the maximum value of the time-domain target confidence response map f; the coordinates corresponding to the maximum value are the coordinates of the target in the image;
Step S411, acquiring a new target HOG feature template X' centered on the obtained coordinates of the target in the image; computing the kernel autocorrelation Kx'x' of X' to obtain a new tracking filter model A' = Y/(Kx'x' + λ);
Step S412, calculating the updated HOG feature template of the current frame target as Xt = (1 − η)·Xt−1 + η·X', using the new target HOG feature template X' obtained above and the target HOG feature template Xt−1 of the previous moment; and calculating the updated tracking filter model of the current frame as At = (1 − η)·At−1 + η·A', using the new tracking filter model A' obtained above and the tracking filter model At−1 of the previous moment; where η is a set constant taken as 0.012; this completes target tracking of one frame;
and the steps S43 to S412 are executed in sequence until target tracking of the last frame of image is completed.
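Steps S43-S412 follow the standard KCF formulation; a single-channel NumPy sketch (using raw gray values in place of HOG features and a Gaussian kernel, both simplifying assumptions) shows how training, detection and the frequency-domain point multiplication fit together:

```python
import numpy as np

def gaussian_label(h, w, sigma):
    """Expected regression label Y: Gaussian peaked at (0, 0) with circular wrap."""
    dy = np.minimum(np.arange(h), h - np.arange(h))
    dx = np.minimum(np.arange(w), w - np.arange(w))
    Y, X = np.meshgrid(dy, dx, indexing="ij")
    return np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))

def kernel_correlation(af, bf, sigma):
    """Gaussian kernel correlation of two patches given in the frequency domain."""
    n = af.size
    aa = np.sum(np.abs(af) ** 2) / n                 # ||a||^2 via Parseval
    bb = np.sum(np.abs(bf) ** 2) / n
    cross = np.real(np.fft.ifft2(af * np.conj(bf)))  # circular cross-correlation
    k = np.exp(-np.maximum(aa + bb - 2 * cross, 0) / (sigma ** 2 * n))
    return np.fft.fft2(k)

lam, ksigma, eta = 1e-4, 0.5, 0.012
h, w = 32, 32
yy, xx = np.mgrid[0:h, 0:w]
patch = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 8.0)  # training patch: blob at center

# Training (S44-S45): template X and filter A = Y / (Kxx + lam) in the frequency domain
Xf = np.fft.fft2(patch)
Yf = np.fft.fft2(gaussian_label(h, w, 2.0))
Af = Yf / (kernel_correlation(Xf, Xf, ksigma) + lam)

# Detection (S47-S410): sample with the target circularly shifted by (3, 2)
Zf = np.fft.fft2(np.roll(patch, (3, 2), axis=(0, 1)))
resp = np.real(np.fft.ifft2(Af * kernel_correlation(Zf, Xf, ksigma)))
dy, dx = np.unravel_index(np.argmax(resp), resp.shape)    # response peak = displacement

# Model update (S411-S412) with learning rate eta = 0.012
# (here the detection sample stands in for the re-extracted template)
Xf = (1 - eta) * Xf + eta * Zf
```

The response peak lands at the displacement of the target, which is the property the claim exploits in step S410.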
6. The embedded real-time tracking method for infrared complex scene targets as claimed in claim 5, wherein the Fourier transform step in the steps S44 and S45 comprises:
and performing one-dimensional FFT (fast Fourier transform) on each row of the received HOG (histogram of oriented gradient) features of the two-dimensional image, then performing row-column transposition, performing one-dimensional FFT on each column of the two-dimensional data subjected to row-column transposition, and then performing row-column transposition to finally obtain the data subjected to two-dimensional Fourier transform.
7. The embedded real-time tracking method for infrared complex scene targets as claimed in claim 6, wherein in the steps S44, S45:
the FFT and the row-column transposition of the data are implemented using the one-dimensional FFT function DSPF_sp_fftSPxSP() and the matrix transposition function DSPF_dp_mat_trans(), respectively, both provided by the TI dsplib function library.
8. The embedded real-time tracking method for infrared complex scene targets as claimed in claim 7, wherein the step of inverse fourier transform of step S49 includes:
and performing the inverse Fourier transform on the target confidence response map by using the IFFT.
9. The embedded real-time infrared complex scene target tracking method as claimed in claim 1, wherein the step of centroid tracking based on gray contrast in step S5 includes:
step S51, if the number N of the candidate targets is less than or equal to the target number threshold, after a subsequent frame image arrives, a local image of the predetermined size centered on the last tracked target coordinates is read from the image, and the steps S21-S24 are executed to complete the next tracking;
step S52, if no tracking target is found, continuing, after the next frame of image arrives, to read a local image of the predetermined size centered on the last tracked target coordinates, executing the steps S21-S24, and taking the candidate target with the largest area as the final target; if the target is not found for a set threshold number of consecutive times, the target is considered lost.
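The gray-contrast centroid step can be sketched as an intensity-weighted centroid over the thresholded patch, together with a counter that declares the target lost after a set number of consecutive misses (names and structure hypothetical):

```python
import numpy as np

def gray_centroid(patch, threshold):
    """Intensity-weighted centroid of the pixels above the gray-contrast threshold."""
    mask = patch > threshold
    if not mask.any():
        return None                      # no candidate target in this patch
    ys, xs = np.nonzero(mask)
    wts = patch[ys, xs].astype(np.float64)
    return (float((ys * wts).sum() / wts.sum()),
            float((xs * wts).sum() / wts.sum()))

class LostCounter:
    """Declare the target lost after `limit` consecutive frames without a target."""
    def __init__(self, limit):
        self.limit, self.misses = limit, 0
    def update(self, found):
        self.misses = 0 if found else self.misses + 1
        return self.misses >= self.limit  # True once the target is considered lost
```

A successful detection resets the miss counter, so only an unbroken run of failures triggers the lost-target state described in step S52.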
10. An embedded infrared complex scene target real-time tracking system, comprising a DDR memory, a Flash memory and a processor, wherein the DDR memory stores image data, the Flash memory stores an embedded infrared complex scene target real-time tracking program, the processor comprises a processing circuit taking an FPGA and a multi-core DSP as core processing units, and when the processor runs the embedded infrared complex scene target real-time tracking program, the steps of the method of any one of claims 1 to 9 are executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010283683.1A CN111415370A (en) | 2020-04-13 | 2020-04-13 | Embedded infrared complex scene target real-time tracking method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111415370A true CN111415370A (en) | 2020-07-14 |
Family
ID=71491949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010283683.1A Pending CN111415370A (en) | 2020-04-13 | 2020-04-13 | Embedded infrared complex scene target real-time tracking method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111415370A (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103065131A (en) * | 2012-12-28 | 2013-04-24 | 中国航天时代电子公司 | Method and system of automatic target recognition tracking under complex scene |
US8810640B2 (en) * | 2011-05-16 | 2014-08-19 | Ut-Battelle, Llc | Intrinsic feature-based pose measurement for imaging motion compensation |
CN105184820A (en) * | 2015-09-15 | 2015-12-23 | 杭州中威电子股份有限公司 | Background modeling and motion object detection method and apparatus with image gradient and gray scale integration |
CN105654454A (en) * | 2014-11-10 | 2016-06-08 | 中国船舶重工集团公司第七二三研究所 | Fast and stable contrast tracking method |
CN106570486A (en) * | 2016-11-09 | 2017-04-19 | 华南理工大学 | Kernel correlation filtering target tracking method based on feature fusion and Bayesian classification |
CN107438398A (en) * | 2015-01-06 | 2017-12-05 | 大卫·伯顿 | Portable wearable monitoring system |
CN108648213A (en) * | 2018-03-16 | 2018-10-12 | 西安电子科技大学 | A kind of implementation method of KCF track algorithms on TMS320C6657 |
CN109299735A (en) * | 2018-09-14 | 2019-02-01 | 上海交通大学 | Anti-shelter target tracking based on correlation filtering |
CN109584266A (en) * | 2018-11-15 | 2019-04-05 | 腾讯科技(深圳)有限公司 | A kind of object detection method and device |
CN109614936A (en) * | 2018-12-12 | 2019-04-12 | 哈尔滨工业大学 | The layered recognition method of remote sensing images Aircraft Targets |
CN109859250A (en) * | 2018-11-20 | 2019-06-07 | 北京悦图遥感科技发展有限公司 | A kind of outer video multi-target detection of aviation red and tracking and device |
CN109993052A (en) * | 2018-12-26 | 2019-07-09 | 上海航天控制技术研究所 | The method for tracking target and system of dimension self-adaption under a kind of complex scene |
CN110796684A (en) * | 2019-10-24 | 2020-02-14 | 浙江大华技术股份有限公司 | Target tracking method and related device |
CN110796687A (en) * | 2019-10-30 | 2020-02-14 | 电子科技大学 | Sky background infrared imaging multi-target tracking method |
CN110929560A (en) * | 2019-10-11 | 2020-03-27 | 杭州电子科技大学 | Video semi-automatic target labeling method integrating target detection and tracking |
Non-Patent Citations (3)
Title |
---|
GIOELE CIAPARRONE et al.: "Deep Learning in Video Multi-Object Tracking: A Survey", Neurocomputing *
LI Junli et al.: "A Survey of Video Target Tracking Technology", Journal of Yanshan University *
XIE Hao: "Research on Moving Target Detection and Tracking Algorithms in Video", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112613524A (en) * | 2020-10-30 | 2021-04-06 | 西安方元明科技股份有限公司 | Searching and tracking rotary table image processing system |
CN112526506A (en) * | 2020-11-17 | 2021-03-19 | 中国科学院长春光学精密机械与物理研究所 | Target searching and tracking method and target tracking device |
CN112526506B (en) * | 2020-11-17 | 2024-03-01 | 中国科学院长春光学精密机械与物理研究所 | Target searching and tracking method and target tracking device |
CN115311470A (en) * | 2022-09-28 | 2022-11-08 | 北京万龙精益科技有限公司 | Infrared small target real-time detection and tracking method of adaptive block matching filtering |
CN115311470B (en) * | 2022-09-28 | 2023-01-24 | 北京万龙精益科技有限公司 | Infrared small target real-time detection and tracking method of adaptive block matching filtering, system and device thereof and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107481264A (en) | A kind of video target tracking method of adaptive scale | |
CN111415370A (en) | Embedded infrared complex scene target real-time tracking method and system | |
CN108537822B (en) | Moving target tracking method based on weighted confidence estimation | |
CN112734809A (en) | Online multi-pedestrian tracking method and device based on Deep-Sort tracking framework | |
CN112489088A (en) | Twin network visual tracking method based on memory unit | |
CN111640138A (en) | Target tracking method, device, equipment and storage medium | |
CN111539987B (en) | Occlusion detection system and method based on discrimination model | |
Rapuru et al. | Correlation-based tracker-level fusion for robust visual tracking | |
Rosales et al. | Faster r-cnn based fish detector for smart aquaculture system | |
CN108986139B (en) | Feature integration method with significance map for target tracking | |
Feng | Mask RCNN-based single shot multibox detector for gesture recognition in physical education | |
CN108876776B (en) | Classification model generation method, fundus image classification method and device | |
CN112949453B (en) | Training method of smoke and fire detection model, smoke and fire detection method and equipment | |
CN112861808B (en) | Dynamic gesture recognition method, device, computer equipment and readable storage medium | |
CN117392187A (en) | SAR image change detection method and equipment based on spatial attention model | |
CN112070181A (en) | Image stream-based cooperative detection method and device and storage medium | |
CN116433722A (en) | Target tracking method, electronic device, storage medium, and program product | |
CN112614158B (en) | Sampling frame self-adaptive multi-feature fusion online target tracking method | |
CN114743257A (en) | Method for detecting and identifying image target behaviors | |
CN113033356A (en) | Scale-adaptive long-term correlation target tracking method | |
CN110660079A (en) | Single target tracking method based on space-time context | |
CN115061574B (en) | Human-computer interaction system based on visual core algorithm | |
KR102637343B1 (en) | Method and apparatus for tracking object | |
JP2019200527A (en) | Information processing device, information processing method, and program | |
Zhu et al. | A moving infrared small target detection method based on optical flow-guided neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200714 |