Real-time detection and tracking system and method for moving target (CN111583307A)

Info

Publication number
CN111583307A
CN111583307A
Authority
CN
China
Prior art keywords
image
frame
detection
tracking
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010392558.4A
Other languages
Chinese (zh)
Inventor
赵伟龙
张燕
陈�峰
焉保卿
杨玉宽
张国栋
朱春健
赵明建
胡红磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Vt Electron Technology Co ltd
Original Assignee
Shandong Vt Electron Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Vt Electron Technology Co ltd
Priority to CN202010392558.4A
Publication of CN111583307A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]

Abstract

The invention discloses a real-time detection and tracking system and method for a moving target. The system comprises a detection device connected to a pan-tilt platform on which a camera is mounted; the camera stores acquired images in a memory, and the memory is connected to the detection device. The detection device processes the images collected by the camera both to complete automatic focusing of the camera and to perform real-time detection and tracking of the moving target. Combining the YOLOv3 deep-learning algorithm with the SORT multi-object tracking algorithm to detect and track unmanned aerial vehicles gives high detection precision and good robustness, and helps avoid problems such as occlusion and tracking loss. By adopting an MPSoC hardware platform, the multi-core heterogeneous ARM + FPGA architecture allows the deep-learning algorithm to run in real time. The monitoring equipment is compact and requires little manual intervention or control.

Description

Real-time detection and tracking system and method for moving target
Technical Field
The disclosure relates to the technical field of moving target detection and tracking, and in particular to a real-time detection and tracking system and method for a moving target.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
The recognition, detection and tracking of moving targets is a hot topic in computer vision, with wide application in human-computer interaction, video surveillance, visual navigation, robotics, military guidance and other fields. In recent years the consumer drone market has grown rapidly: the price of powerful consumer-grade unmanned aerial vehicles keeps falling while their ease of operation keeps improving, so drones are quickly shifting from sophisticated military equipment to the mass market and becoming toys in the hands of ordinary people. However, the continuous emergence of new drones with ever more advanced functions also raises safety and privacy concerns, such as invasion of privacy by drone peeping, threats to national security from flights over sensitive areas (government organs, military installations, airport surroundings), and accidents caused by improper drone operation.
YOLOv3 is a deep-learning object detection network widely used for recognition and detection in single-frame images; compared with traditional object detection methods it offers both higher detection precision and higher detection speed. Tracking-by-detection is a common target tracking approach: target recognition and detection are performed on every frame, completing tracking over the video sequence. However, YOLOv3 places high demands on the training samples; once a captured target or background is not represented in the training set, YOLOv3 cannot detect the target, and tracking fails.
The SORT multi-object tracking algorithm builds efficiently on target detection, using Kalman filtering for motion prediction and the Hungarian algorithm for data association, but its accuracy is low under occlusion.
Disclosure of Invention
To overcome the deficiencies of the prior art, the present disclosure provides a real-time detection and tracking system and method for a moving target.
In a first aspect, the present disclosure provides a real-time detection and tracking system for a moving target.
The real-time detection and tracking system for a moving target comprises: a detection device;
the detection device is connected to a pan-tilt platform on which a camera is mounted; the camera stores acquired images in a memory, and the memory is connected to the detection device;
the detection device processes the images collected by the camera to complete automatic focusing of the camera, and also processes the images collected by the camera to complete real-time detection and tracking of the moving target.
Further, the automatic focusing of the camera is completed by the Sobel edge-based autofocus algorithm described below, and the tracking of the moving target in the images collected by the camera is completed by the SORT multi-object tracking algorithm.
Further, the real-time detection of the moving target in the images collected by the camera is completed by the YOLOv3 deep-learning algorithm.
In a second aspect, the present disclosure provides a real-time detection and tracking method for a moving target.
The real-time detection and tracking method for a moving target comprises the following steps:
after the system is powered on and starts working, the camera completes automatic focusing according to an autofocus algorithm;
the camera captures video of the object to be detected and tracked, and the acquired images of the object are stored in the memory;
the detection device processes the images in the memory to obtain the tracking result for the object.
Further, the detection device processes the images in the memory to obtain the tracking result for the object; the specific steps are as follows:
the PS (Processing System) end of the MPSoC chip of the detection device reads an image shot by the camera from the memory and transmits it to the PL (Programmable Logic) end; the PL end preprocesses the image, sends the preprocessed image over the AXI bus to the DPU (deep-learning processing unit) on the PL end for convolution processing, and returns the convolution result to the PS end; the Darknet-53 network structure of the YOLOv3 algorithm is deployed on the DPU;
the PS end receives the processing result sent by the PL end and processes it to obtain a detection box;
the detection box initializes the target tracking algorithm, which predicts the motion trajectory of the target to obtain a tracking box; the detection box and the tracking box are combined to obtain the final target box;
a control instruction is generated from the coordinates of the final target box and sent to the pan-tilt platform, which drives the camera to rotate and capture the image at the next viewing angle.
Compared with the prior art, the beneficial effects of this disclosure are:
1. combining the YOLOv3 deep-learning algorithm with the SORT multi-object tracking algorithm to detect and track unmanned aerial vehicles gives high detection precision and good robustness, and helps avoid problems such as occlusion and tracking loss;
2. with the MPSoC hardware platform, the multi-core heterogeneous ARM + FPGA architecture allows the deep-learning algorithm to run in real time;
3. the monitoring equipment is compact and requires little manual intervention or control.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain it; they do not limit the disclosure.
Fig. 1 is a schematic structural diagram of a first embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a second embodiment of the present disclosure;
Fig. 3 is a schematic diagram of the Darknet-53 network architecture of the YOLOv3 algorithm according to the second embodiment of the disclosure.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and "comprising", and any variations thereof, cover a non-exclusive inclusion: a process, method, system, article or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to it.
Embodiment 1
This embodiment provides a real-time detection and tracking system for a moving target.
As shown in Fig. 1, the real-time detection and tracking system for a moving target includes: a detection device;
the detection device is connected to a pan-tilt platform on which a camera is mounted; the camera stores acquired images in a memory, and the memory is connected to the detection device;
the detection device processes the images collected by the camera to complete automatic focusing of the camera, and also processes the images collected by the camera to complete real-time detection and tracking of the moving target.
Further, the automatic focusing of the camera is completed by the Sobel edge-based autofocus algorithm described below, and the tracking of the moving target in the images collected by the camera is completed by the SORT multi-object tracking algorithm.
Further, the real-time detection of the moving target in the images collected by the camera is completed by the YOLOv3 deep-learning algorithm.
Further, the detection device includes an MPSoC chip provided with a PS (Processing System) end and a PL (Programmable Logic) end, the two being connected by an AXI bus.
The PS end includes: an ARM Cortex-A53 quad-core application processing system (clock rate up to 1.5 GHz), an ARM Cortex-R5 dual-core real-time processing system (clock rate up to 600 MHz), and a GPU (Graphics Processing Unit, clock rate up to 667 MHz);
the PL end, i.e. an FPGA (Field-Programmable Gate Array), includes a DPU (Deep-Learning Processing Unit) and an IPU (Image Preprocessing Unit).
Further, the camera can rotate through 360 degrees to shoot within the monitored range.
Further, the detection device is also connected to a power supply and to a display; the display shows the processing results from the PS end.
Further, the PS end of the MPSoC chip of the detection device reads an image shot by the camera from the memory and transmits it to the PL end; the PL end preprocesses the image, sends the preprocessed image over the AXI bus to the DPU on the PL end for convolution processing, and returns the convolution result to the PS end;
the PS end receives the processing result sent by the PL end, processes it to obtain a detection box, performs the initialization operation with the detection box, and predicts the motion trajectory of the target to obtain a tracking box;
the detection box and the tracking box are combined to obtain the final target box; a control instruction is generated from the coordinates of the final target box and sent to the pan-tilt platform, which drives the camera to rotate and capture the image at the next viewing angle.
Further, preprocessing the image includes: image normalization, image scaling, and image chromaticity-space transformation (RGB to BGR).
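The patent gives no preprocessing code; the following is a minimal Python/OpenCV sketch of the three listed steps, in which the 416 x 416 input resolution is an assumption taken from the standard YOLOv3 configuration rather than from this disclosure.

import cv2
import numpy as np

def preprocess(frame_rgb, size=(416, 416)):
    # Image scaling: resize the captured frame to the network input size.
    img = cv2.resize(frame_rgb, size, interpolation=cv2.INTER_LINEAR)
    # Chromaticity-space transformation: reorder the RGB channels to BGR.
    img = img[:, :, ::-1]
    # Image normalization: map pixel values from [0, 255] to [0, 1].
    return img.astype(np.float32) / 255.0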
Further, the Darknet-53 network of the YOLOv3 algorithm is deployed on the DPU. The DPU receives the preprocessed image, and after processing by the Darknet-53 network it produces the yolo-layer data of two network branches, which are sent to the PS end of the MPSoC chip. At the PS end, the yolo-layer data of the two branches are thresholded, screened and sorted to obtain the coordinates of the final detection box; the PS end then performs the initialization operation with the final detection-box coordinates, predicts the motion trajectory of the target to obtain a tracking box, and combines the tracking box with the final detection box to obtain the final target box.
The architecture of the Darknet-53 network of the YOLOv3 algorithm is shown in Fig. 3.
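As an illustration of the PS-end post-processing (thresholding, screening and sorting the decoded boxes), here is a hedged Python sketch; the (x1, y1, x2, y2) box format and the 0.5/0.45 thresholds are assumptions, not values from the patent.

def box_iou(a, b):
    # Boxes are (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def postprocess(boxes, scores, score_thr=0.5, nms_thr=0.45):
    # Screen: drop boxes below the confidence threshold.
    kept = [(b, s) for b, s in zip(boxes, scores) if s >= score_thr]
    # Sort by score, then apply greedy non-maximum suppression.
    kept.sort(key=lambda t: t[1], reverse=True)
    final = []
    for b, s in kept:
        if all(box_iou(b, f[0]) < nms_thr for f in final):
            final.append((b, s))
    return final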
Further, the PS end generates the control instruction from the coordinates of the final target box and sends it to the pan-tilt platform: the rotation angle and speed for the next viewing angle are computed from the final target-box coordinates and transmitted to the platform.
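The mapping from box coordinates to a rotation command is not specified in the patent; the sketch below shows one plausible proportional scheme in Python, where the field-of-view values and the gain on the speed term are illustrative assumptions.

def pan_tilt_command(box, frame_w, frame_h, fov_h_deg=60.0, fov_v_deg=40.0):
    # Center of the final target box, boxes as (x1, y1, x2, y2).
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    # Normalized offset of the target from the image center, in [-0.5, 0.5].
    dx = cx / frame_w - 0.5
    dy = cy / frame_h - 0.5
    # Rotation angles that bring the target back onto the optical axis.
    pan_deg = dx * fov_h_deg          # positive: rotate right
    tilt_deg = -dy * fov_v_deg        # image y grows downward
    # Rotate faster the farther the target sits from the center.
    speed = min(1.0, 2.0 * max(abs(dx), abs(dy)))
    return pan_deg, tilt_deg, speed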
Preferably, the memory is a DDR/SDRAM memory module.
Preferably, the display is a VGA/HDMI display module.
It should be understood that the memory is connected to the MPSoC chip by a bus.
It should be understood that the display is connected to the MPSoC chip by a DP line.
The power supply module provides operating power for the system.
Embodiment 2
This embodiment provides a real-time detection and tracking method for a moving target.
the real-time detection and tracking method for the moving target comprises the following steps:
s101: after the system is electrified and starts working, the camera finishes automatic focusing according to an automatic focusing algorithm;
s102: the camera carries out video acquisition on the tracked object to be detected, and the obtained image of the tracked object to be detected is stored in a memory;
s103: and the detection equipment processes the image in the memory to obtain a tracking result of the object to be tracked.
Further, as shown in Fig. 2, in S103 the detection device processes the images in the memory to obtain the tracking result for the object; the specific steps are as follows:
the PS end of the MPSoC chip of the detection device reads an image shot by the camera from the memory and transmits it to the PL end; the PL end preprocesses the image, sends the preprocessed image over the AXI bus to the DPU (deep-learning processing unit) on the PL end for convolution processing, and returns the convolution result to the PS end; the Darknet-53 network structure of the YOLOv3 algorithm is deployed on the DPU;
the PS end receives the processing result sent by the PL end and processes it to obtain a detection box;
the detection box initializes the target tracking algorithm, which predicts the motion trajectory of the target to obtain a tracking box; the detection box and the tracking box are combined to obtain the final target box;
a control instruction is generated from the coordinates of the final target box and sent to the pan-tilt platform, which drives the camera to rotate and capture the image at the next viewing angle.
The method further comprises:
S104: the display shows the position of the detected object in the current frame.
The object to be detected and tracked includes, but is not limited to, unmanned aerial vehicles and similar devices.
Further, in S101, after the system is powered on and starts working, the camera completes automatic focusing according to the autofocus algorithm. The specific steps are as follows (a code sketch follows these steps):
S1011: convert the image chromaticity space, i.e. convert the acquired image from an RGB image into a grayscale image;
S1012: perform edge detection on the grayscale image with the Sobel edge detection algorithm to obtain an edge map; compute the average gray value of the edge map and record this mean;
S1013: judge whether the current image is the first frame; if it is, control the camera to rotate. If it is not, compare the average gray value of the current image with that of the previous frame:
if the average gray value of the current image is smaller than that of the previous frame, the rotation direction is wrong and the image is becoming more and more blurred, so the camera is controlled to rotate in the opposite direction;
if the average gray value of the current image is larger than that of the previous frame, the camera is controlled to continue rotating;
S1014: repeat S1013; the optimal image is obtained after N iterations of rotation, where N is a preset positive integer.
It should be understood that for the first frame the camera is simply rotated; the direction (left or right) is not limited here, and the rotation step is 1.
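A minimal Python/OpenCV sketch of this autofocus loop follows, using Sobel edge magnitude as the sharpness measure; capture_frame() and rotate(step) are hypothetical callables standing in for the camera read-out and the rotation drive, which the patent leaves to the hardware.

import cv2

def sharpness(frame_rgb):
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)  # S1011: RGB to gray
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)              # S1012: Sobel edges
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return float(cv2.magnitude(gx, gy).mean())          # average gray value

def autofocus(capture_frame, rotate, n_iters=20, step=1):
    prev = None
    for _ in range(n_iters):          # S1014: N iterative rotations
        cur = sharpness(capture_frame())
        if prev is not None and cur < prev:
            step = -step              # S1013: image got blurrier, reverse
        rotate(step)                  # first frame or sharper: keep rotating
        prev = cur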
Further, the detection box initializes the target tracking algorithm, which predicts the motion trajectory of the target to obtain a tracking box; the detection box and the tracking box are combined to obtain the final target box. The specific steps are as follows (an association sketch follows step S1036):
S1031: judge whether the image containing the detection box is the first frame;
if it is the first frame, establish a Kalman tracker from the detection box and go to S1032;
otherwise, predict the position of the tracked object in the next frame with the Kalman filter and go to S1032;
S1032: compute the Intersection over Union (IOU) of the detection box and the tracking box;
S1033: when the IOU of a detection box and a tracking box is larger than a set threshold, they are taken to contain the same target, and the Hungarian algorithm performs linear assignment between detection boxes and tracking boxes to obtain matched pairs;
S1034: judge whether a matched pair was obtained; if so, update the Kalman tracker with the matching result and go to S1035; otherwise, go to S1035 without updating;
S1035: judge whether a new target appears among the detection boxes; if a new target appears and persists over M consecutive frames, initialize one new Kalman tracker and proceed to the next frame; otherwise the target is considered unreliable, no tracker is updated, and tracking proceeds to the next frame;
S1036: the matched pairs obtained in S1033 give the final target box.
The above description covers only preferred embodiments of the present disclosure and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure shall fall within its protection scope.

Claims (10)

1. A real-time detection and tracking system for a moving target, characterized by comprising: a detection device;
the detection device is connected to a pan-tilt platform on which a camera is mounted; the camera stores acquired images in a memory, and the memory is connected to the detection device;
the detection device processes the images collected by the camera to complete automatic focusing of the camera, and also processes the images collected by the camera to complete real-time detection and tracking of the moving target.
2. The system of claim 1, wherein the automatic focusing of the camera is completed by a Sobel edge-based autofocus algorithm, and the tracking of the moving target in the images collected by the camera is completed by the SORT multi-object tracking algorithm.
3. The system of claim 1, wherein the real-time detection of the moving target in the images collected by the camera is accomplished by the YOLOv3 deep-learning algorithm.
4. The system of claim 1, wherein the PS end of the MPSoC chip of the detection device reads an image shot by the camera from the memory and transmits it to the PL end; the PL end preprocesses the image, sends the preprocessed image over the AXI bus to the DPU (deep-learning processing unit) on the PL end for convolution processing, and returns the convolution result to the PS end;
the PS end receives the processing result sent by the PL end, processes it to obtain a detection box, performs the initialization operation with the detection box, and predicts the motion trajectory of the target to obtain a tracking box;
the detection box and the tracking box are combined to obtain the final target box; a control instruction is generated from the coordinates of the final target box and sent to the pan-tilt platform, which drives the camera to rotate and capture the image at the next viewing angle.
5. The system of claim 4, wherein after the system is powered on and starts working, the camera completes automatic focusing according to an autofocus algorithm, the specific steps being:
converting the image chromaticity space, i.e. converting the acquired image from an RGB image into a grayscale image;
performing edge detection on the grayscale image with the Sobel edge detection algorithm to obtain an edge map; computing the average gray value of the edge map and recording the mean;
judging whether the current image is the first frame, and if so, controlling the camera to rotate; if not, comparing the average gray value of the current image with that of the previous frame:
if the average gray value of the current image is smaller than that of the previous frame, the rotation direction is wrong and the image is becoming more and more blurred, so the camera is controlled to rotate in the opposite direction;
if the average gray value of the current image is larger than that of the previous frame, the camera is controlled to continue rotating;
obtaining the optimal image after N iterations of rotation, where N is a preset positive integer.
6. The system of claim 4, wherein the detection box initializes the target tracking algorithm, which predicts the motion trajectory of the target to obtain a tracking box, and the detection box and the tracking box are combined to obtain the final target box, the specific steps being:
judging whether the image containing the detection box is the first frame; if so, establishing a Kalman tracker from the detection box; otherwise, predicting the position of the tracked object in the next frame with the Kalman filter;
computing the Intersection over Union (IOU) of the detection box and the tracking box;
when the IOU of a detection box and a tracking box is larger than a set threshold, taking them to contain the same target, and performing linear assignment between detection boxes and tracking boxes with the Hungarian algorithm to obtain matched pairs;
judging whether a matched pair was obtained; if so, updating the Kalman tracker with the matching result; otherwise, not updating;
judging whether a new target appears among the detection boxes; if a new target appears and persists over M consecutive frames, initializing one new Kalman tracker and proceeding to the next frame; otherwise considering the target unreliable, updating no tracker, and proceeding to the next frame;
taking the matched pairs so obtained as the final target box.
7. A real-time detection and tracking method for a moving target, characterized by comprising the following steps:
after the system is powered on and starts working, the camera completes automatic focusing according to an autofocus algorithm;
the camera captures video of the object to be detected and tracked, and the acquired images of the object are stored in a memory;
the detection device processes the images in the memory to obtain the tracking result for the object.
8. The method of claim 7, wherein the detection device processes the images in the memory to obtain the tracking result for the object, the specific steps being:
the PS end of the MPSoC chip of the detection device reads an image shot by the camera from the memory and transmits it to the PL end; the PL end preprocesses the image, sends the preprocessed image over the AXI bus to the DPU (deep-learning processing unit) on the PL end for convolution processing, and returns the convolution result to the PS end; the Darknet-53 network structure of the YOLOv3 algorithm is deployed on the DPU;
the PS end receives the processing result sent by the PL end and processes it to obtain a detection box;
the detection box initializes the target tracking algorithm, which predicts the motion trajectory of the target to obtain a tracking box; the detection box and the tracking box are combined to obtain the final target box;
a control instruction is generated from the coordinates of the final target box and sent to the pan-tilt platform, which drives the camera to rotate and capture the image at the next viewing angle.
9. The method of claim 8, wherein after the system is powered on and starts working, the camera completes automatic focusing according to an autofocus algorithm, the specific steps being:
converting the image chromaticity space, i.e. converting the acquired image from an RGB image into a grayscale image;
performing edge detection on the grayscale image with the Sobel edge detection algorithm to obtain an edge map; computing the average gray value of the edge map and recording the mean;
judging whether the current image is the first frame, and if so, controlling the camera to rotate; if not, comparing the average gray value of the current image with that of the previous frame:
if the average gray value of the current image is smaller than that of the previous frame, the rotation direction is wrong and the image is becoming more and more blurred, so the camera is controlled to rotate in the opposite direction;
if the average gray value of the current image is larger than that of the previous frame, the camera is controlled to continue rotating;
obtaining the optimal image after N iterations of rotation, where N is a preset positive integer.
10. The method of claim 8, wherein the detection box initializes the target tracking algorithm, which predicts the motion trajectory of the target to obtain a tracking box, and the detection box and the tracking box are combined to obtain the final target box, the specific steps being:
judging whether the image containing the detection box is the first frame; if so, establishing a Kalman tracker from the detection box; otherwise, predicting the position of the tracked object in the next frame with the Kalman filter;
computing the Intersection over Union (IOU) of the detection box and the tracking box;
when the IOU of a detection box and a tracking box is larger than a set threshold, taking them to contain the same target, and performing linear assignment between detection boxes and tracking boxes with the Hungarian algorithm to obtain matched pairs;
judging whether a matched pair was obtained; if so, updating the Kalman tracker with the matching result; otherwise, not updating;
judging whether a new target appears among the detection boxes; if a new target appears and persists over M consecutive frames, initializing one new Kalman tracker and proceeding to the next frame; otherwise considering the target unreliable, updating no tracker, and proceeding to the next frame;
taking the matched pairs so obtained as the final target box.
CN202010392558.4A 2020-05-11 2020-05-11 Real-time detection and tracking system and method for moving target Pending CN111583307A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010392558.4A CN111583307A (en) 2020-05-11 2020-05-11 Real-time detection and tracking system and method for moving target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010392558.4A CN111583307A (en) 2020-05-11 2020-05-11 Real-time detection and tracking system and method for moving target

Publications (1)

Publication Number Publication Date
CN111583307A true CN111583307A (en) 2020-08-25

Family

ID=72122823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010392558.4A Pending CN111583307A (en) 2020-05-11 2020-05-11 Real-time detection and tracking system and method for moving target

Country Status (1)

Country Link
CN (1) CN111583307A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898122A (en) * 2018-07-03 2018-11-27 河南亚视软件技术有限公司 A kind of Intelligent human-face recognition methods
CN110399808A (en) * 2019-07-05 2019-11-01 桂林安维科技有限公司 A kind of Human bodys' response method and system based on multiple target tracking
CN110717403A (en) * 2019-09-16 2020-01-21 国网江西省电力有限公司电力科学研究院 Face multi-target tracking method
CN111008994A (en) * 2019-11-14 2020-04-14 山东万腾电子科技有限公司 Moving target real-time detection and tracking system and method based on MPSoC

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宋志杰; 宋薇; 章亚男; 沈林勇: "Automatic focus adjustment for a trinocular microscopic vision system" (三目显微视觉系统自动对焦调整) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113176268A (en) * 2021-05-18 2021-07-27 哈尔滨理工大学 Wind power blade surface damage detection method based on cloud deck shooting image
CN113298053A (en) * 2021-07-26 2021-08-24 季华实验室 Multi-target unmanned aerial vehicle tracking identification method and device, electronic equipment and storage medium
CN114943955A (en) * 2022-07-25 2022-08-26 山东广通汽车科技股份有限公司 Automatic unloading control method for semitrailer
CN115107653A (en) * 2022-07-29 2022-09-27 山东浪潮科学研究院有限公司 Electronic rearview mirror system based on FPGA
CN116991182A (en) * 2023-09-26 2023-11-03 北京云圣智能科技有限责任公司 Unmanned aerial vehicle holder control method, device, system, computer device and medium
CN116991182B (en) * 2023-09-26 2023-12-22 北京云圣智能科技有限责任公司 Unmanned aerial vehicle holder control method, device, system, computer device and medium

Similar Documents

Publication Publication Date Title
CN111583307A (en) Real-time detection and tracking system and method for moving target
US20220114739A1 (en) Real-time visual object tracking for unmanned aerial vehicles (uavs)
US10699125B2 (en) Systems and methods for object tracking and classification
CN110998594B (en) Method and system for detecting motion
CN107016690B (en) Unmanned aerial vehicle intrusion detection and identification system and method based on vision
KR101645722B1 (en) Unmanned aerial vehicle having Automatic Tracking and Method of the same
US8116527B2 (en) Using video-based imagery for automated detection, tracking, and counting of moving objects, in particular those objects having image characteristics similar to background
US8457356B2 (en) Method and system of video object tracking
CN109741369B (en) Method and system for robot to track target pedestrian
WO2017034689A1 (en) System and method for laser depth map sampling
CN112785628B (en) Track prediction method and system based on panoramic view angle detection tracking
Everding et al. Low-latency line tracking using event-based dynamic vision sensors
CN109508636A (en) Vehicle attribute recognition methods, device, storage medium and electronic equipment
CN111814752A (en) Indoor positioning implementation method, server, intelligent mobile device and storage medium
CN112069879A (en) Target person following method, computer-readable storage medium and robot
Tulpan et al. Experimental evaluation of four feature detection methods for close range and distant airborne targets for Unmanned Aircraft Systems applications
Liu et al. SETR-YOLOv5n: A Lightweight Low-Light Lane Curvature Detection Method Based on Fractional-Order Fusion Model
Suto Real-time lane line tracking algorithm to mini vehicles
Palvanov et al. DHCNN for visibility estimation in foggy weather conditions
Padole et al. Wigner distribution based motion tracking of human beings using thermal imaging
US10549853B2 (en) Apparatus, system, and method for determining an object's location in image video data
CN115147809A (en) Obstacle detection method, device, equipment and storage medium
Kaimkhani et al. UAV with Vision to Recognise Vehicle Number Plates
CN112507965A (en) Target identification method and system of electronic lookout system
CN113408325A (en) Method and device for identifying surrounding environment of vehicle and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination