CN111402288A - Target detection tracking method and device - Google Patents

Target detection tracking method and device

Info

Publication number
CN111402288A
Authority
CN
China
Prior art keywords
target
detection
image
targets
filtering operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010225115.6A
Other languages
Chinese (zh)
Inventor
吴飞红
黄晓峰
邢卫国
闫野鹤
陈科
贾惠柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Boya Hongtu Video Technology Co ltd
Original Assignee
Hangzhou Boya Hongtu Video Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Boya Hongtu Video Technology Co ltd filed Critical Hangzhou Boya Hongtu Video Technology Co ltd
Priority to CN202010225115.6A
Publication of CN111402288A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30242 - Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target detection tracking method and apparatus. The method comprises: performing target detection on a first image at the current moment to obtain a first number of detection targets; if the first number is larger than a preset number threshold, ranking the first number of detection targets by priority; performing a filtering operation on the detection targets in descending order of priority, and stopping when the number of filtering operations reaches the preset number threshold; and determining the detection targets that have not undergone the filtering operation as targets not of interest. With this scheme, the processing time of each image is bounded by capping the maximum number of filtering operations, which improves image processing efficiency and effectively improves the real-time performance of target detection and tracking.

Description

Target detection tracking method and device
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a target detection tracking method and device.
Background
The main functions of machine vision are target recognition and tracking. Compared with manual or traditional mechanical methods, a machine vision system offers a series of advantages such as high speed, high precision and high accuracy. Among all machine sensing systems, machine vision carries the largest amount of information and is one of the most complex and challenging research fields in computer applications. To meet increasingly diverse computational demands, ever more powerful and efficient computing systems are needed, which has driven the development of multi-core heterogeneous platforms (e.g., multi-core heterogeneous chips).
Existing target detection and tracking on such multi-core heterogeneous platforms struggles to meet real-time requirements under multi-channel real-time tracking because of excessive computational load.
Disclosure of Invention
In view of this, embodiments of the present invention provide a target detection and tracking method and apparatus, so as to overcome the problem that real-time performance is difficult to guarantee because of excessive computational load.
An embodiment of a first aspect of the present invention provides a target detection and tracking method, including:
performing target detection on a first image at the current moment to obtain a first number of detection targets;
if the first number is larger than a preset number threshold, ranking the first number of detection targets by priority;
performing a filtering operation on the detection targets in descending order of priority, and stopping when the number of filtering operations reaches the preset number threshold; the filtering operation refers to judging whether a detection target is a target of interest or not; if it is a target of interest, the category to which it belongs is further determined;
and determining the detection targets that have not undergone the filtering operation as targets not of interest.
An embodiment of a second aspect of the present invention provides a target detection and tracking apparatus, including:
the detection module is used for performing target detection on a first image at the current moment to obtain a first number of detection targets;
the sorting module is used for ranking the first number of detection targets by priority if the first number is greater than a preset number threshold;
the filtering module is used for performing a filtering operation on the detection targets in descending order of priority, stopping when the number of filtering operations reaches the preset number threshold; the filtering operation refers to judging whether a detection target is a target of interest or not; if it is a target of interest, the category to which it belongs is further determined; and the detection targets that have not undergone the filtering operation are determined as targets not of interest.
With this scheme, target detection is performed on the image at the current moment to obtain the corresponding number of detection targets; if this number is larger than the preset number threshold, all detection targets are ranked by priority and the filtering operation is applied to them in descending order of priority, stopping once the number of filtering operations reaches the preset number threshold; the detection targets that were never filtered are directly determined as targets not of interest. The processing time of each image is thus bounded by capping the maximum number of filtering operations, which improves image processing efficiency and effectively improves the real-time performance of target detection and tracking.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of a target detection and tracking method of the present invention;
FIG. 2 is a schematic flow chart of the detection and tracking algorithm employed in the present invention;
FIG. 3 shows a schematic diagram of the length of time that each Graph runs;
FIG. 4 shows a schematic diagram of a single set (DA + DB) of DSP pipeline scheduling;
fig. 5 is a block diagram of an object detecting and tracking device according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
These and other aspects of embodiments of the invention will be apparent with reference to the following description and attached drawings. In the description and drawings, particular embodiments of the invention are disclosed in detail as being indicative of some of the ways in which the principles of the embodiments may be practiced, but it is understood that the scope of the embodiments is not limited correspondingly. On the contrary, the embodiments include all changes, modifications and equivalents coming within the spirit and scope of the appended claims.
Multi-channel real-time detection and tracking on a multi-core heterogeneous chip platform faces problems such as difficulty in meeting real-time requirements due to excessive computational load, and excessive memory occupation caused by storing key frame image data. To address the real-time problem, this application works on two fronts, the algorithm and pipeline processing: the algorithm bounds its processing time by capping the maximum number of filter operations, and the pipeline uses a pipelining technique based on the OpenVX software framework to coordinate the modules. To address the excessive memory occupation caused by key frame storage, key frames are compressed with an algorithm such as JPEG before being stored, which greatly reduces the memory space that must be allocated for key frame storage.
The following describes a target detection and tracking method and apparatus proposed by the embodiments of the present invention with reference to the accompanying drawings.
First, a target detection and tracking method provided by an embodiment of the present invention is described. Fig. 1 is a flowchart of a target detection and tracking method according to an embodiment of the present invention.
As shown in fig. 1, the target detection and tracking method provided in the embodiment of the present invention may run on hardware such as a multi-core heterogeneous chip and may be implemented on the OpenVX software framework. The multi-core heterogeneous chip may include CPU and DSP processors; specifically, the method may be completed through cooperative scheduling of one CPU and four DSPs (DA0, DA1, DB0, DB1). DA0 and DA1 are AI-dedicated DSPs suited to processing neural network (NN) algorithms; DB0 and DB1 are general-purpose DSPs suited to both conventional algorithm processing and NN algorithm processing, although DB-type DSPs are less capable at NN algorithms than DA-type DSPs. A DSP (Digital Signal Processor) is a microprocessor specialized for digital signal processing that can quickly perform various digital signal processing algorithms in real time.
Fig. 2 is a schematic flow chart of the detection and tracking algorithm employed in the present invention. The algorithm flow is as follows: the YUV image of the current frame is input and converted into an RGB image; the RGB image is scaled to a fixed size and fed into the detector to obtain the detection targets Det_object_t of this frame (4 classes: four-wheel vehicle, three-wheel vehicle, pedestrian, and two-wheel vehicle); target association is then performed with the tracking targets trk_object_{t-1} of the previous frame, yielding the following three types of targets:
the first type: matched target pair (Matched _ obj _ calls)t&t-1) I.e. the same target (i.e. the target on tracking) existing in both the previous and the next frames;
for each matched target pair, updating the information of the target from the time t-1 to the time t, then judging whether the target is a possible key target in the current frame, if the target is the possible key target, extracting the image of the frame where the target is located from the RGB image of the frame, zooming to a fixed size, sending the image to a filter, if the filter judges that the target is a normal target (interested target), indicating that the target is the key target in the frame, updating the key target information at the moment, and needing to update the YUV input image of the frame to a key frame buffer (buffer).
The second type: unmatched tracking targets (Unmatched_old_objs_{t-1}), i.e., targets that exist in the previous frame but are not detected in the current frame.
The consecutive-loss count of each unmatched tracking target is incremented by 1; if the count exceeds the loss threshold, the target is judged to have disappeared from the scene. For a lost target whose track is long enough, its key target information in the scene, namely the key target frame and the corresponding key frame image, needs to be output.
The third type: unmatched detection targets (Unmatched_new_objs_t), i.e., new targets that do not exist in the previous frame and appear in the current frame.
For each unmatched detection target, the image content inside the target frame is extracted from the RGB image of this frame, scaled to a fixed size, and sent to the filter (which removes abnormal targets, i.e., targets not of interest, and distinguishes detailed categories). If the filter judges it to be a normal target, the target is a new target and needs to be added to the tracking targets for management. Meanwhile, it must be judged whether it is a possible key target; if so, it is directly judged to be a key target (no repeated judgment is needed because it has already passed the filter), the key target information is stored, and the YUV input image is stored in the key frame buffer.
In the flow of the detection and tracking algorithm shown in fig. 2, apart from the general logic judgment parts, the portions that mainly consume computing resources are: YUV-to-RGB conversion plus image scaling, the target detector, the filter, and target association. The target detector and the filter use AI algorithms; the target detector has the largest computational load and is processed on a DA-type DSP, while a single filter operation is relatively lighter and is processed on a DB-type DSP. YUV-to-RGB conversion plus image scaling and target association use conventional algorithms and can only be processed on a DB-type DSP. The logic judgment parts, key frame storage and result output are handled by the CPU.
The software platform on which the detection and tracking method of the present invention is implemented is based on OpenVX. When the algorithm flow shown in fig. 2 is implemented on this OpenVX-based platform, it is divided into 4 graphs (a concept in the OpenVX specification) according to the execution order and the core on which each part runs, as shown in fig. 2:
graph 0: YUV-to-RGB conversion followed by image scaling, running on a DB-type DSP;
graph 1: the target detection algorithm, running on an AI-dedicated DA-type DSP;
graph 2: target association, the filter and the inter-algorithm logic judgment (collectively called tracking), running on a DB-type DSP;
graph 3: updating key target and key frame information and outputting results, running on the CPU.
The running time of each Graph is shown in fig. 3. Graph0 consists of conventional image conversion and scaling algorithms and takes about 4 ms for a given input resolution. Graph1 is the target detection neural network; it takes the longest, but its cost is fixed, currently about 13 ms. The duration of Graph2 depends on the number of filter operations and therefore varies with the scene; it is not fixed, but is below 8 ms in typical scenes. Graph3 runs on the CPU and contains only a few simple logic decisions, so it is very short, currently around 400 us.
With continued reference to the detection and tracking flow shown in fig. 2, if processing is serial, i.e., one frame goes from Graph0 through Graph3 before the next frame starts at Graph0, a typical scene requires about 25 ms per frame; at 25 fps each channel allows 40 ms per frame, so at most 40/25 = 1.6 channels can be processed in real time. With this serial method only one DSP runs at any moment while the other three are idle, so DSP utilization is extremely low. To improve DSP utilization and algorithm throughput, an efficient pipelining technique is added to the OpenVX-based software platform.
The pipelining technique is as follows: a classic Directed Acyclic Graph (DAG) and Directed Cyclic Graph (DCG) are used as the core data structures for implementing the pipeline; an AOV network (Activity On Vertex network) is used as the logical representation of the tasks in the application; and a BFS (Breadth First Search) algorithm schedules the AOV network, realizing fast scheduling of the DAG vertices and their mapping onto the DSPs.
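The following is a minimal sketch of the kind of BFS (Kahn-style) scheduling of an AOV network described above: a vertex becomes ready once all of its predecessors have completed and is then dispatched to its assigned core. The Vertex structure, the core names and the dispatch() stand-in are assumptions for illustration; the description names only the data structures and the search algorithm.

```cpp
#include <iostream>
#include <queue>
#include <string>
#include <vector>

struct Vertex {
    std::string name;            // e.g. "Graph0: YUV->RGB + scaling"
    std::string core;            // e.g. "DB0", "DA0", "CPU"
    std::vector<int> successors; // indices of vertices that depend on this one
};

// Stand-in for submitting work to a DSP core or the CPU.
void dispatch(const Vertex& v) { std::cout << v.name << " -> " << v.core << "\n"; }

void bfs_schedule(const std::vector<Vertex>& g) {
    std::vector<int> indegree(g.size(), 0);
    for (const auto& v : g)
        for (int s : v.successors) ++indegree[s];

    std::queue<int> ready;                   // vertices with no unfinished predecessors
    for (int i = 0; i < (int)g.size(); ++i)
        if (indegree[i] == 0) ready.push(i);

    while (!ready.empty()) {
        int i = ready.front(); ready.pop();
        dispatch(g[i]);                      // in a real pipeline this runs asynchronously
        for (int s : g[i].successors)
            if (--indegree[s] == 0) ready.push(s);
    }
}

int main() {
    std::vector<Vertex> g = {
        {"Graph0: YUV->RGB + scaling", "DB0", {1}},
        {"Graph1: target detection",   "DA0", {2}},
        {"Graph2: tracking + filter",  "DB0", {3}},
        {"Graph3: key frame / output", "CPU", {}},
    };
    bfs_schedule(g);
    return 0;
}
```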
As an example, the 4 DSP cores are first divided into two groups (DA + DB); the two groups are independent of each other and each runs one instance of the detection and tracking algorithm. Then, for each (DA + DB) group, the pipeline processing shown in fig. 4 can be realized with the pipelining technique. As can be seen from fig. 4, with pipelining the time to process one frame is only the roughly 13 ms of Graph1, so one group of DSPs can theoretically process 3 channels in real time (25 fps) and 2 groups can process 6 channels. In practice, however, the CPU also spends considerable time on pipeline scheduling, so the theoretical 6 channels are hard to reach, but 4 channels can be processed in real time.
As can be seen from the algorithm flowchart of fig. 2, the number of filter operations required to complete detection and tracking of one frame of image is not fixed; it depends on the number of targets in the scene and their states. If the filter is run too many times, Graph2 takes longer, which may hurt the real-time performance of detection and tracking. Therefore, the following method is further proposed, as shown in fig. 1, comprising the steps of:
step S101: and carrying out target detection on the first image at the current moment to obtain a first number of detection targets.
According to some embodiments of the present application, before step S101, the image to be detected at the current moment may be preprocessed to obtain the first image, where the preprocessing includes at least one of image format conversion and image size scaling.
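As a rough illustration of this preprocessing step, the sketch below uses OpenCV as a stand-in for the conversion and scaling that Graph0 performs on a DB-type DSP. The NV12 input layout and the 416x416 network input size are assumptions; the description only states that the YUV image is converted to RGB and scaled to a fixed size.

```cpp
#include <opencv2/imgproc.hpp>

// Convert a YUV frame to RGB and scale it to the detector's fixed input size.
// yuv_nv12 is assumed to be an NV12 (a YUV420 layout) buffer wrapped in a cv::Mat
// of height 3*H/2; the 416x416 size is a placeholder for the real network input.
cv::Mat preprocess(const cv::Mat& yuv_nv12, int net_w = 416, int net_h = 416) {
    cv::Mat rgb, scaled;
    cv::cvtColor(yuv_nv12, rgb, cv::COLOR_YUV2RGB_NV12);  // YUV -> RGB
    cv::resize(rgb, scaled, cv::Size(net_w, net_h));      // scale to fixed size
    return scaled;
}
```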
Step S102: if the first number is larger than a preset number threshold, rank the first number of detection targets by priority.
Step S103: perform the filtering operation on the detection targets in descending order of priority, and stop when the number of filtering operations reaches the preset number threshold; determine the detection targets that have not undergone the filtering operation as targets not of interest.
The filtering operation refers to judging whether a detection target is a target of interest or not; if it is a target of interest, the category to which it belongs is further determined.
The number of filtering operations required to complete detection and tracking of one frame of image is not fixed; it depends on the number of targets in the scene and their states.
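A minimal sketch of steps S102 and S103 follows: rank the detections by priority, run the filter on at most a fixed number of them, and mark the rest as not of interest without filtering. The Detection fields, the score-based priority() and the trivial run_filter() stand-in are assumptions; in the actual system the filter is a neural network running on a DB-type DSP, and the priority rules are the ones detailed later in this description.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Detection {
    float score = 0.f;        // detector confidence
    int   category = -1;      // refined by the filter for targets of interest
    bool  of_interest = false;
    bool  filtered = false;   // whether the filter was actually run on this target
};

// Placeholder ordering: higher detector score means higher priority.
static float priority(const Detection& d) { return d.score; }

// Trivial stand-in for the NN filter: decides interest and refines the category.
static bool run_filter(Detection& d) { d.category = 0; return d.score > 0.5f; }

void filter_with_cap(std::vector<Detection>& dets, std::size_t max_filter_ops) {
    if (dets.size() > max_filter_ops) {
        std::sort(dets.begin(), dets.end(),
                  [](const Detection& a, const Detection& b) {
                      return priority(a) > priority(b);
                  });
    }
    std::size_t ops = 0;
    for (auto& d : dets) {
        if (ops >= max_filter_ops) {   // cap reached: remaining targets are
            d.of_interest = false;     // treated as not of interest directly
            continue;
        }
        d.filtered = true;
        d.of_interest = run_filter(d); // also assigns d.category when true
        ++ops;
    }
}
```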
Before step S102, the method may further include: performing target association between the first number of detection targets and at least one tracking target obtained at the previous moment.
After step S103, the method may further include: for a detection target that is not associated with any tracking target, if the filtering operation judges it to be a target of interest, further judging whether it is a key target; and if so, storing the first image as the key frame corresponding to that detection target. If the first image was obtained by format conversion of the image to be detected, the image to be detected is stored instead.
Specifically, the detection targets may be prioritized according to their corresponding detection scores, for example by giving higher priority to higher detection scores.
For example, unmatched detection targets may first be divided into two parts according to the target detection scores: those of 20000 or more and those below 20000.
The part of 20000 or more is subdivided as follows:
1.1 Four-wheel vehicles: because their category needs to be classified finely, they are sorted by detection score from high to low, and the part exceeding the preset number threshold is directly judged as invalid targets;
1.2 Pedestrians, two-wheel vehicles, three-wheel vehicles: sorted by detection score from low to high, and the part exceeding the preset number threshold is directly judged as valid targets and as key targets.
The part below 20000 is subdivided as follows:
1.3 Pedestrians, two-wheel vehicles, three-wheel vehicles: sorted by detection score from high to low, and the part exceeding the preset number threshold is directly judged as invalid targets;
1.4 Four-wheel vehicles: sorted by detection score from high to low, and the part exceeding the preset number threshold is directly judged as invalid targets.
For the detection targets in matched target pairs, since their key frames need to be updated, the priority is as follows:
2.1 detection targets that do not yet have a key frame;
2.2 detection targets that already have a key frame which needs to be updated.
The priorities run from high to low in the above numbering order, i.e., 1.1 > 1.2 > 1.3 > 1.4 > 2.1 > 2.2.
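The tier ordering just listed can be written down as a small function, shown below as an illustration. The Candidate fields and the numeric tier values are assumptions; in particular, the description compares the detection score against 20000 without stating the scale of that value, so the score field here is only a placeholder for whatever quantity the 20000 threshold applies to.

```cpp
// Lower returned tier = higher priority (tier 1 corresponds to 1.1, tier 6 to 2.2).
enum class TargetClass { FourWheeler, ThreeWheeler, Pedestrian, TwoWheeler };

struct Candidate {
    TargetClass cls;
    float score;        // compared against the 20000 threshold, also orders targets within a tier
    bool  matched;      // associated with an existing tracking target
    bool  has_keyframe; // a key frame is already stored for this target
};

int priority_tier(const Candidate& c) {
    if (!c.matched) {                          // unmatched detection targets: tiers 1.1 to 1.4
        bool high = c.score >= 20000.f;
        bool four = (c.cls == TargetClass::FourWheeler);
        if (high) return four ? 1 : 2;         // 1.1 four-wheelers, 1.2 other classes
        return four ? 4 : 3;                   // 1.3 other classes, 1.4 four-wheelers
    }
    return c.has_keyframe ? 6 : 5;             // 2.1 no key frame yet, 2.2 key frame to update
}
```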
Specifically, the first image may be compressed and then stored. Alternatively, it is first judged whether the first image has already been stored as the key frame of another detection target; if so, the number of key targets corresponding to the first image is recorded; if not, the first image is compressed and stored. Alternatively, the key frames of multi-channel target detection and tracking may be stored in the same key frame buffer.
Each new key target in each channel needs its key frame image stored. In general, most targets in a scene have a corresponding key frame, so the number of targets in the scene directly determines the number of key frames to be stored: the more targets, the more key frames and the more memory required. However, the memory space available in the system-on-chip is limited, so the space required for key frames must be controlled as much as possible. The strategies adopted are as follows:
The key frame YUV image is JPEG-compressed before being stored. A frame in YUV420 format is about 3 MB (for example, a 1920 x 1080 frame occupies 1920 x 1080 x 1.5 bytes, roughly 3.1 MB), and JPEG compression can keep it below 512 KB, about 1/6 of the original size.
Multiple channels share one key frame buffer. Because the number of targets may differ greatly between the scenes of different channels, allocating equally sized but independent key frame buffers to each channel would waste space in channels with few targets while leaving channels with many targets short of space. Sharing the same key frame buffer across channels therefore uses memory more effectively.
For different key targets that share the same key frame, the key frame is stored only once; by recording the number of key targets that reference it, the key frame can be added, updated and deleted correctly.
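A minimal sketch of such a shared, reference-counted key frame buffer is given below. The FrameId key, the map-based storage and the jpeg_compress() stand-in are assumptions for illustration; the description states the strategy (compress once, store once, count the key targets that reference each frame) but not its data structures.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

using FrameId = std::uint64_t;   // e.g. channel id and frame number packed together

// Stand-in for a real JPEG encoder call; here it just copies the buffer.
static std::vector<std::uint8_t> jpeg_compress(const std::vector<std::uint8_t>& yuv420) {
    return yuv420;
}

class KeyFrameBuffer {
public:
    // Called when a key target selects this frame as its key frame.
    void add_reference(FrameId id, const std::vector<std::uint8_t>& yuv420) {
        auto it = frames_.find(id);
        if (it == frames_.end())
            frames_.emplace(id, Entry{jpeg_compress(yuv420), 1}); // first user: compress and store once
        else
            ++it->second.refs;                                    // already stored: just count the new user
    }

    // Called when a key target is removed or its key frame is replaced.
    void release(FrameId id) {
        auto it = frames_.find(id);
        if (it != frames_.end() && --it->second.refs == 0)
            frames_.erase(it);                                    // last user gone: free the space
    }

private:
    struct Entry { std::vector<std::uint8_t> jpeg; int refs; };
    std::unordered_map<FrameId, Entry> frames_;
};
```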
Fig. 5 is a block diagram illustrating an object detection and tracking apparatus according to an embodiment of the present invention. As shown in fig. 5, an embodiment of the present invention further provides an object detection and tracking apparatus 100, where the apparatus 100 may include:
a detection module 101, configured to perform target detection on the first image at the current moment to obtain a first number of detection targets;
a sorting module 102, configured to rank the first number of detection targets by priority if the first number is greater than a preset number threshold;
a filtering module 103, configured to perform the filtering operation on the detection targets in descending order of priority, stopping when the number of filtering operations reaches the preset number threshold; the filtering operation refers to judging whether a detection target is a target of interest or not; if it is a target of interest, the category to which it belongs is further determined; and the detection targets that have not undergone the filtering operation are determined as targets not of interest.
According to an embodiment of the present invention, the sorting module is specifically configured to:
rank the first number of detection targets by priority according to the detection scores corresponding to the detection targets.
According to an embodiment of the present invention, the filtering module is specifically configured to: before the first number of detection targets are ranked by priority when the first number is greater than the preset number threshold, perform target association between the first number of detection targets and at least one tracking target obtained at the previous moment;
and after the filtering operation has been performed on the detection targets in descending order of priority and stopped when the number of filtering operations reaches the preset number threshold, for a detection target that is not associated with any tracking target, if the filtering operation judges it to be a target of interest, further judge whether it is a key target; and if so, store the first image as the key frame corresponding to that detection target.
According to an embodiment of the present invention, the filtering module is specifically configured to:
compress and store the first image.
According to an embodiment of the present invention, the filtering module is specifically configured to:
judge whether the first image has already been stored as the key frame of another detection target;
if so, record the number of key targets corresponding to the first image;
and if not, compress and store the first image.
According to an embodiment of the present invention, the filtering module is specifically configured to: store the key frames of multi-channel target detection and tracking in the same key frame buffer.
According to an embodiment of the present invention, the detection module is specifically configured to, before performing target detection on the first image at the current time, perform preprocessing on the image to be detected at the current time to obtain the first image. The pre-processing includes at least one of image format conversion, image size scaling.
According to one embodiment of the invention, the device employs an OpenVX software framework.
The specific working principle and benefits of the target detection and tracking device provided by the embodiment of the present invention are similar to those of the target detection and tracking method provided by the embodiment of the present invention, and will not be described herein again.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solutions of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and these simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention do not describe every possible combination.
Those skilled in the art can understand that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.

Claims (10)

1. A target detection tracking method, characterized by comprising the following steps:
performing target detection on a first image at the current moment to obtain a first number of detection targets;
if the first number is larger than a preset number threshold, ranking the first number of detection targets by priority;
performing a filtering operation on the detection targets in descending order of priority, and stopping when the number of filtering operations reaches the preset number threshold; the filtering operation refers to judging whether a detection target is a target of interest or not; if it is a target of interest, the category to which it belongs is further determined;
and determining the detection targets that have not undergone the filtering operation as targets not of interest.
2. The method of claim 1, wherein ranking the first number of detection targets by priority comprises:
ranking the first number of detection targets by priority according to the detection scores corresponding to the detection targets.
3. The method of claim 1, wherein before ranking the first number of detection targets by priority if the first number is greater than a preset number threshold, the method further comprises: performing target association between the first number of detection targets and at least one tracking target obtained at the previous moment;
and after performing the filtering operation on the detection targets in descending order of priority and stopping when the number of filtering operations reaches the preset number threshold, the method further comprises:
for a detection target that is not associated with any tracking target, if the filtering operation judges it to be a target of interest, further judging whether it is a key target; and if so, storing the first image as the key frame corresponding to the detection target.
4. The method according to claim 3, wherein storing the first image as the key frame corresponding to the detection target comprises:
compressing and storing the first image.
5. The method according to claim 3, wherein storing the first image as the key frame corresponding to the detection target comprises:
judging whether the first image has already been stored as the key frame of another detection target;
if so, recording the number of key targets corresponding to the first image;
and if not, compressing and storing the first image.
6. The method according to any one of claims 3 to 5, further comprising: storing the key frames of multi-channel target detection and tracking in the same key frame buffer.
7. The method of claim 1, wherein before performing target detection on the first image at the current moment, the method further comprises:
preprocessing the image to be detected at the current moment to obtain the first image.
8. The method of claim 7, wherein the pre-processing comprises at least one of image format conversion and image size scaling.
9. The method of claim 1, wherein the method is used under the OpenVX software framework.
10. An object detection tracking apparatus, comprising:
a detection module, configured to perform target detection on a first image at the current moment to obtain a first number of detection targets;
a sorting module, configured to rank the first number of detection targets by priority if the first number is greater than a preset number threshold;
a filtering module, configured to perform a filtering operation on the detection targets in descending order of priority, stopping when the number of filtering operations reaches the preset number threshold; the filtering operation refers to judging whether a detection target is a target of interest or not; if it is a target of interest, the category to which it belongs is further determined; and the detection targets that have not undergone the filtering operation are determined as targets not of interest.
CN202010225115.6A 2020-03-26 2020-03-26 Target detection tracking method and device Pending CN111402288A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010225115.6A CN111402288A (en) 2020-03-26 2020-03-26 Target detection tracking method and device

Publications (1)

Publication Number Publication Date
CN111402288A 2020-07-10

Family

ID=71431240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010225115.6A Pending CN111402288A (en) 2020-03-26 2020-03-26 Target detection tracking method and device

Country Status (1)

Country Link
CN (1) CN111402288A (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110058708A1 (en) * 2008-03-14 2011-03-10 Sony Computer Entertainment Inc. Object tracking apparatus and object tracking method
JP2010281782A (en) * 2009-06-08 2010-12-16 Mitsubishi Electric Corp Target tracking apparatus
US20110285845A1 (en) * 2010-05-21 2011-11-24 Honeywell International Inc. Distant face recognition system
US20110304633A1 (en) * 2010-06-09 2011-12-15 Paul Beardsley display with robotic pixels
CN102186064A (en) * 2011-05-30 2011-09-14 无锡中星微电子有限公司 Distributed video monitoring system and monitoring method
US20130215131A1 (en) * 2012-02-16 2013-08-22 Canon Kabushiki Kaisha Image generating apparatus and control method therefor
CN103731855A (en) * 2012-10-11 2014-04-16 中国移动通信集团上海有限公司 Method and device for screening high-data service hot spot cells
US20140119598A1 (en) * 2012-10-31 2014-05-01 Qualcomm Incorporated Systems and Methods of Merging Multiple Maps for Computer Vision Based Tracking
US20140334668A1 (en) * 2013-05-10 2014-11-13 Palo Alto Research Center Incorporated System and method for visual motion based object segmentation and tracking
US20150063628A1 (en) * 2013-09-04 2015-03-05 Xerox Corporation Robust and computationally efficient video-based object tracking in regularized motion environments
US20160350938A1 (en) * 2014-02-07 2016-12-01 Safran Electronics & Defense Method for detecting and tracking targets
CN103886617A (en) * 2014-03-07 2014-06-25 华为技术有限公司 Method and device for detecting moving object
US20160196523A1 (en) * 2015-01-05 2016-07-07 Alliance Enterprises Inc. Goal management system
CN106845385A (en) * 2017-01-17 2017-06-13 腾讯科技(上海)有限公司 The method and apparatus of video frequency object tracking
CN106934817A (en) * 2017-02-23 2017-07-07 中国科学院自动化研究所 Based on multiattribute multi-object tracking method and device
CN107992366A (en) * 2017-12-26 2018-05-04 网易(杭州)网络有限公司 Method, system and the electronic equipment that multiple destination objects are detected and tracked
CN108470332A (en) * 2018-01-24 2018-08-31 博云视觉(北京)科技有限公司 A kind of multi-object tracking method and device
CN109783028A (en) * 2019-01-16 2019-05-21 Oppo广东移动通信有限公司 Optimization method, device, storage medium and the intelligent terminal of I/O scheduling
CN110298380A (en) * 2019-05-22 2019-10-01 北京达佳互联信息技术有限公司 Image processing method, device and electronic equipment
CN110706256A (en) * 2019-09-27 2020-01-17 杭州博雅鸿图视频技术有限公司 Detection tracking algorithm optimization method based on multi-core heterogeneous platform

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HUA-JUN SONG ET AL: "Target tracking algorithm based on optical flow method using corner detection", MULTIMEDIA TOOLS AND APPLICATIONS, vol. 52, 28 January 2010 (2010-01-28), pages 121 - 131, XP019880019, DOI: 10.1007/s11042-010-0464-8 *
李利乐 et al.: "Current status and progress of moving object detection technology", 南阳师范学院学报, no. 09, 26 September 2009 (2009-09-26), pages 79 - 82 *
郎晓彤: "Moving object detection and tracking algorithm with a hybrid model", 现代电子技术, no. 03, 1 February 2020 (2020-02-01), pages 70 - 73 *
金鑫 et al.: "Multi-target tracking statistics technology in complex situations", 计算机科学, no. 06, 15 June 2013 (2013-06-15), pages 268 - 271 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001457A (en) * 2020-07-14 2020-11-27 浙江大华技术股份有限公司 Image preprocessing method, device, system and computer readable storage medium
CN116091552A (en) * 2023-04-04 2023-05-09 上海鉴智其迹科技有限公司 Target tracking method, device, equipment and storage medium based on deep SORT
CN116935290A (en) * 2023-09-14 2023-10-24 南京邮电大学 Heterogeneous target detection method and system for high-resolution array camera in airport scene
CN116935290B (en) * 2023-09-14 2023-12-12 南京邮电大学 Heterogeneous target detection method and system for high-resolution array camera in airport scene

Similar Documents

Publication Publication Date Title
CN111402288A (en) Target detection tracking method and device
CN112183471A (en) Automatic detection method and system for standard wearing of epidemic prevention mask of field personnel
EP4080416A1 (en) Adaptive search method and apparatus for neural network
Ren et al. A novel squeeze YOLO-based real-time people counting approach
CN112016461A (en) Multi-target behavior identification method and system
CN110276756A (en) Road surface crack detection method, device and equipment
CN112132071A (en) Processing method, device and equipment for identifying traffic jam and storage medium
US20230206485A1 (en) Target detection method based on heterogeneous platform, and terminal device and storage medium
CN108985221A (en) Video clip detection method, device, equipment and storage medium
WO2023179133A1 (en) Target algorithm selection method and apparatus, and electronic device and storage medium
CN112491891A (en) Network attack detection method based on hybrid deep learning in Internet of things environment
CN113762314A (en) Smoke and fire detection method and device
CN117456167A (en) Target detection algorithm based on improved YOLOv8s
Li et al. Multi-scale traffic sign detection algorithm based on improved YOLO_V4
CN114615495A (en) Model quantization method, device, terminal and storage medium
CN117132910A (en) Vehicle detection method and device for unmanned aerial vehicle and storage medium
CN112132207A (en) Target detection neural network construction method based on multi-branch feature mapping
CN115984946A (en) Face recognition model forgetting method and system based on ensemble learning
Mao Real-time small-size pixel target perception algorithm based on embedded system for smart city
CN117496131B (en) Electric power operation site safety behavior identification method and system
CN113793627B (en) Attention-based multi-scale convolution voice emotion recognition method and device
CN113743602B (en) Method for improving post-processing speed of model
CN115529197B (en) Policy control method, device, equipment and storage medium for large-scale data
Chen et al. An improved network for pedestrian-vehicle detection based on YOLOv7
CN118260054A (en) Acquisition method, training method, task processing method and related devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination