CN115546254A - Double-process multi-target tracking method - Google Patents

Double-process multi-target tracking method

Info

Publication number
CN115546254A
CN115546254A
Authority
CN
China
Prior art keywords
result
main line
tracker
target tracking
correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211398601.3A
Other languages
Chinese (zh)
Inventor
王文
张春龙
宛敏红
李特
林哲远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202211398601.3A priority Critical patent/CN115546254A/en
Publication of CN115546254A publication Critical patent/CN115546254A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/54 - Interprogram communication
    • G06F9/544 - Buffers; Shared memory; Pipes
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a double-process multi-target tracking method comprising two branches: a main line tracker running in the main process and an auxiliary line corrector running in a subprocess. The main line tracker branch uses a high-speed, low-precision model to perform multi-target tracking, while the auxiliary line corrector branch uses a low-speed, high-precision model to correct the main line tracker's results. The results of the two branches are fused by a fuser to obtain the final multi-target tracking result. The invention improves the tracking precision of the main line tracker without reducing its tracking speed.

Description

Double-process multi-target tracking method
Technical Field
The invention relates to the field of multi-target tracking in computer vision, and in particular to a double-process multi-target tracking method.
Background
Multi-target tracking is a key technology in computer vision, with high application value across many scenarios, and its importance has grown with the rapid development of artificial intelligence in recent years. In video surveillance, multi-target tracking enables statistical analysis of pedestrian flow and the tracking and localization of suspects; in intelligent robotics, it enables real-time avoidance of pedestrians, vehicles, and other obstacles; in the military field, it enables precise strikes on different targets.
The prior art has at least the following problems:
Existing multi-target tracking techniques struggle to balance speed and precision: high-speed techniques are generally imprecise, and high-precision techniques are generally slow. Designing a multi-target tracking technique that is both fast and precise is therefore an urgent problem to be solved.
Disclosure of Invention
The invention provides a double-process multi-target tracking method to overcome the defects of the prior art, aiming to solve the technical problem that speed and precision are difficult to balance in the related art.
The invention provides a double-process multi-target tracking method comprising the following steps:
S1, in the main process, acquiring the current frame image and its target detection result;
S2, in the main process, inputting the image and its target detection result to the main line tracker to obtain the main line tracker result;
S3, in the main process, inputting the image and the main line tracker result to the auxiliary line corrector running in a subprocess;
S4, in the subprocess, the auxiliary line corrector judging whether the main line tracker result is correct, obtaining a correction result, and storing it in shared memory;
S5, in the main process, the fuser fusing the main line tracker result and the correction result to obtain the final multi-target tracking result.
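As a concrete illustration of steps S1-S5, the following Python sketch wires a fast main-process loop to a slow subprocess corrector over a pipe. The tracker and the identity model are stubbed out (the worker here returns no corrections), and all names are illustrative assumptions, not from the patent:

```python
import multiprocessing as mp

def corrector_worker(conn):
    """Subprocess (S4): receive (frame_id, tracks), reply with corrections."""
    while True:
        msg = conn.recv()
        if msg is None:          # shutdown sentinel
            break
        frame_id, tracks = msg
        # A real corrector would run the slow identity model here;
        # this stub confirms every ID (no corrections).
        corrections = []         # e.g. [(old_id, new_id), ...]
        conn.send((frame_id, corrections))

def main_loop(frames):
    """Main process: S1-S3 and S5. `frames` is [(frame_id, tracks), ...],
    where tracks stand in for the fast main-line tracker output."""
    parent, child = mp.Pipe()
    worker = mp.Process(target=corrector_worker, args=(child,))
    worker.start()
    results, pending = [], False
    for frame_id, tracks in frames:
        if not pending:                      # S3: hand off only when idle
            parent.send((frame_id, tracks))
            pending = True
        corrections = []
        if pending and parent.poll():        # S5: non-blocking fetch
            _, corrections = parent.recv()
            pending = False
        id_map = dict(corrections)
        results.append([(id_map.get(i, i), box) for i, box in tracks])
    parent.send(None)
    worker.join()
    return results
```

The main loop never blocks on the corrector: when no correction has arrived yet, the fast result is emitted unchanged, matching the design described in the abstract.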
Further, in step S2, the main line tracker uses a high-speed low-precision multi-target tracking model, and the auxiliary line corrector uses a low-speed high-precision identity feature extraction model.
Further, step S4 specifically includes:
s41, extracting features of a main line tracker result by using the identity feature extraction model to form a tracking feature set;
s42, matching the tracking feature set with a historical cache feature set to obtain a correction result and storing the correction result into the shared memory;
s43, updating the historical cache feature set by using the modified tracking feature set.
Further, in step S42, the tracking feature set is matched with the historical cache feature set to obtain the correction result according to the formula:
m = f(T, H)  (1)
where f(·,·) denotes a matching function between two feature sets, T denotes the tracking feature set, H denotes the historical cache feature set, and m denotes the correction result, which can be written as m = [⟨i, j⟩, …], indicating that the ID of a target with ID i in the main line tracker result should be corrected to j.
Further, step S5 specifically includes:
detecting whether the correction result exists in the shared memory, if not, directly taking the result of the main line tracker as a final multi-target tracking result; and if so, acquiring the correction result from the shared memory, and fusing the correction result with the result of the main line tracker to obtain a final multi-target tracking result.
Further, the correction result is obtained from the shared memory and fused with the main line tracker result to give the final multi-target tracking result, according to the formula:
r = g(n_t, m_{t-Δt})  (2)
where t denotes the current time, n_t denotes the main line tracker result at the current time, and m_{t-Δt} denotes the correction result obtained from the shared memory at the current time; because the auxiliary line corrector uses a low-speed, high-precision identity feature extraction model, the correction result available at the current time lags by a delay Δt. g(·,·) denotes a fusion function that corrects the main line tracker result at the current time according to the correction result, and r denotes the final multi-target tracking result.
The invention also includes a dual-process multi-target tracking device, comprising:
the image acquisition module is used for acquiring a current frame image and a target detection result thereof in a main process;
a result obtaining module of the main line tracker, which is used for inputting the image and the target detection result thereof to the main line tracker in the main process and obtaining the result of the main line tracker;
the auxiliary line corrector input module is used for inputting the image and the main line tracker result to an auxiliary line corrector of a subprocess in the main process;
a main line tracker result correction module, configured to, in the sub-process, determine whether the main line tracker result is correct by the auxiliary line corrector, obtain a correction result, and store the correction result in the shared memory;
and the fusion module is used for fusing the result of the main line tracker and the correction result by the fusion device in the main process to be used as a final multi-target tracking result.
The invention also comprises a double-process multi-target tracking system comprising a memory and one or more processors. Executable code is stored in the memory; when the one or more processors execute the code, the system implements the double-process multi-target tracking method described above.
The present invention also includes a computer readable storage medium having stored thereon a program which, when executed by a processor, implements a dual-process multi-target tracking method of the present invention.
The invention has the following beneficial effects:
according to the embodiment, the tracking precision of the tracker can be improved by the method without reducing the tracking speed of the tracker aiming at the problem of multi-target tracking. Because the main line tracker of the embodiment runs in the main process, the overall tracking speed is the same as that of the main line tracker; the auxiliary line corrector of the embodiment operates in the sub-process, so that the speed of the main line tracker is not influenced, the auxiliary line corrector uses a low-speed high-precision identity feature extraction model, and the tracking result of the main line tracker using high speed and low precision can be periodically corrected, so that the final tracking precision is achieved. The invention can improve the tracking precision of the main line tracker without reducing the tracking speed of the main line tracker.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow diagram illustrating a method for dual-process multi-target tracking in accordance with an exemplary embodiment.
FIG. 2 is a flow diagram illustrating the calculation of a correction result by the auxiliary line corrector, according to an example embodiment.
FIG. 3 illustrates a process by which the fuser outputs a final result when no corrected result is present in the shared memory, according to an example embodiment.
FIG. 4 is a process by which the fuser outputs a final result when there is a modified result in the shared memory, according to an example embodiment.
Fig. 5 is a schematic structural diagram of a dual-process multi-target tracking apparatus according to an embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if," as used herein, may be interpreted as "when," "upon," or "in response to determining," depending on the context.
Fig. 1 is a flowchart illustrating a dual-process multi-target tracking method according to an exemplary embodiment, as shown in fig. 1, which may include the following steps:
step S11: and acquiring the current frame image and the target detection result thereof in the main process.
Specifically, the target detection result may be the bounding rectangle of a tracked target; the tracked target may be a pedestrian, a vehicle, or the like. The target detection result may be produced by a target detection algorithm, preferably one based on a deep neural network, such as YOLOv3 or Mask R-CNN.
Step S12: in the main process, input the image and its target detection result to the main line tracker to obtain the main line tracker result.
Specifically, the main line tracker is a tracker using a tracking-by-detection framework; preferably, a high-speed multi-target tracker such as SORT or DeepSORT is used. The main line tracker result may be the bounding rectangle of each tracked target together with its ID.
Step S13: in the main process, input the image and the main line tracker result to the auxiliary line corrector running in a subprocess.
Step S14: in the subprocess, the auxiliary line corrector judges whether the main line tracker result is correct, obtains a correction result, and stores it in the shared memory.
Specifically, the auxiliary line corrector uses a low-speed, high-precision identity feature extraction model. The model is computationally heavy and slow, but highly accurate, and can extract the identity features of tracked targets. Preferably, in applications where pedestrians are the tracked targets, a high-accuracy pedestrian re-identification model can be used; alternatively, a face recognition model may be used.
The shared memory is used to pass the correction result to the fuser in the main process; preferably, pipe communication is used.
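As a minimal sketch of this hand-off, a one-slot multiprocessing queue can stand in for the shared memory (the same overwrite-and-poll pattern applies to a duplex pipe, which the text prefers); the function names are illustrative:

```python
import multiprocessing as mp
from queue import Empty

def publish_correction(slot, correction):
    """Corrector side: drop any stale result first, so the slot always
    holds only the newest correction."""
    try:
        slot.get_nowait()
    except Empty:
        pass
    slot.put(correction)

def fetch_correction(slot):
    """Fuser side: non-blocking read. None means no correction is ready,
    in which case the main line result is passed through unchanged."""
    try:
        return slot.get_nowait()
    except Empty:
        return None
```

Because the fuser only polls, the main process never waits on the slow corrector, which is the key property the dual-process design relies on.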
Fig. 2 is a flowchart illustrating a correction result calculated by the assistant line corrector according to an exemplary embodiment, and as shown in fig. 2, the method may include the following steps:
s41: extracting features of a main line tracker result by using the identity feature extraction model to form a tracking feature set;
s42: matching the tracking feature set with a historical cache feature set to obtain a correction result and storing the correction result in the shared memory;
specifically, the tracking feature set and the history cache feature set are feature sets of a plurality of tracked targets, and the matching is to determine whether a target in the tracking feature set and a target in the history cache feature set are the same target. The matched calculation formula is as follows:
m=f(T,H) (1)
wherein f (·,) represents a matching function of two feature sets, T represents the tracking feature set, H represents the historical cache feature set, and m represents a revised result, which can be written as m = [ < i, j >,. Multidot. ], which indicates that the ID of a target with ID i in the main-line tracker result should be revised as j. Preferably, a distance matrix between the tracking feature set and the historical cache feature set can be calculated by using Euclidean distance, and a matching relation between the two feature sets is obtained by using a Hungarian matching algorithm.
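The matching function f can be sketched as follows. For illustration, the Hungarian solver is replaced by brute-force optimal assignment (which gives the same optimum for the few targets in one frame), features are plain tuples, and the `max_dist` gate is an assumed confidence threshold, not part of the patent:

```python
from itertools import permutations
from math import dist  # Euclidean distance between two points (Python 3.8+)

def match_features(tracked, history, max_dist=0.5):
    """Return m = [(i, j), ...]: track ID i should be corrected to history ID j.

    `tracked` and `history` map target ID -> feature vector. For the small
    per-frame sets, exhaustive assignment matches the Hungarian optimum.
    """
    t_ids, h_ids = list(tracked), list(history)
    best, best_cost = [], float("inf")
    k = min(len(t_ids), len(h_ids))
    for h_perm in permutations(h_ids, k):
        pairs = list(zip(t_ids, h_perm))
        cost = sum(dist(tracked[i], history[j]) for i, j in pairs)
        if cost < best_cost:
            best, best_cost = pairs, cost
    # Keep only close matches where the ID actually changes
    return [(i, j) for i, j in best
            if i != j and dist(tracked[i], history[j]) <= max_dist]
```

In practice one would compute the full distance matrix and call a Hungarian implementation such as `scipy.optimize.linear_sum_assignment`, as the preferred embodiment suggests.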
S43: and updating the historical caching feature set by using the modified tracking feature set.
Step S15: in the main process, the fusion device fuses the main line tracker result and the correction result to be used as a final multi-target tracking result.
Specifically, the fuser first checks whether a correction result exists in the shared memory. If not, the main line tracker result is taken directly as the final multi-target tracking result; if so, the correction result is fetched from the shared memory and fused with the main line tracker result to give the final multi-target tracking result. The fusion formula is:
r = g(n_t, m_{t-Δt})  (2)
where t denotes the current time, n_t denotes the main line tracker result at the current time, and m_{t-Δt} denotes the correction result obtained from the shared memory at the current time; because the auxiliary line corrector uses a low-speed, high-precision identity feature extraction model, the correction result available at the current time lags by a delay Δt. g(·,·) denotes a fusion function that corrects the main line tracker result at the current time according to the correction result, and r denotes the final multi-target tracking result.
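A minimal sketch of the fusion function g, assuming the correction result is a list of (old_id, new_id) pairs as in formula (1) and that an empty shared memory yields None:

```python
def fuse(tracks_t, corrections):
    """g(n_t, m_{t-Δt}): apply a (possibly delayed) correction to the
    current main line tracker result.

    tracks_t    -- [(track_id, bbox), ...] from the fast tracker at time t
    corrections -- [(old_id, new_id), ...] computed by the corrector at t-Δt,
                   or None when the shared memory holds no result yet
    """
    if not corrections:
        return tracks_t                      # pass the fast result through
    id_map = dict(corrections)
    return [(id_map.get(tid, tid), box) for tid, box in tracks_t]
```

Only the IDs are rewritten; the bounding boxes from the fast tracker are kept, so the delayed correction never stalls or alters the per-frame geometry.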
FIG. 3 shows the process by which the fuser outputs the final result when no correction result is present in the shared memory, according to an exemplary embodiment. As shown in FIG. 3, at time t the auxiliary line corrector is still computing the correction of the main line tracker result from time t-Δt. After obtaining the main line tracker result at time t, the fuser detects that no correction result exists in the shared memory and directly outputs the time-t main line tracker result as the final multi-target tracking result.
FIG. 4 shows the process by which the fuser outputs the final result when a correction result is present in the shared memory, according to an example embodiment. As shown in FIG. 4, at time t the auxiliary line corrector has finished computing the correction of the main line tracker result from time t-Δt and has stored it in the shared memory. After obtaining the main line tracker result at time t, the fuser detects that a correction result exists in the shared memory and fuses the time-t main line tracker result with the time-(t-Δt) correction result to give the final multi-target tracking result.
Fig. 5 shows a dual-process multi-target tracking apparatus according to the present invention, which includes:
the image acquisition module is used for acquiring a current frame image and a target detection result thereof in a main process;
a result obtaining module of the main line tracker, which is used for inputting the image and the target detection result thereof to the main line tracker in the main process and obtaining the result of the main line tracker;
the auxiliary line corrector input module is used for inputting the image and the main line tracker result to an auxiliary line corrector of a subprocess in the main process;
the main line tracker result correction module is used for judging whether the main line tracker result is correct or not by the auxiliary line corrector in the subprocess, obtaining a correction result and storing the correction result in the shared memory;
and the fusion module is used for fusing the main line tracker result and the correction result by the fusion device in the main process to obtain a final multi-target tracking result.
The invention also comprises a double-process multi-target tracking system which comprises a memory and one or more processors, wherein executable codes are stored in the memory, and when the one or more processors execute the executable codes, the double-process multi-target tracking system is used for realizing the double-process multi-target tracking method.
The present invention also includes a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements a dual-process multi-target tracking method of the present invention.
At the hardware level, the dual-process multi-target tracking device or system of the invention comprises a processor, an internal bus, a network interface, memory, and non-volatile storage, and may also comprise hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into memory and runs it to implement the method described in FIG. 1. Of course, besides a software implementation, the invention does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the processing flow is not limited to logic units and may also be hardware or logic devices.
Improvements to a technology were once clearly distinguishable as hardware improvements (e.g., improvements to the circuit structure of diodes, transistors, switches, etc.) or software improvements (improvements to a method flow). With the development of technology, however, many of today's method-flow improvements can be regarded as direct improvements to hardware circuit structures: designers almost always obtain a corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Thus it cannot be said that an improvement of a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field-Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program a digital system onto a single PLD by themselves, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit. Moreover, instead of manually making integrated-circuit chips, such programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL), of which there is not one but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), with VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog being the most commonly used at present.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application-Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller as pure computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for realizing various functions may also be regarded as structures within the hardware component, or even as both software modules for implementing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the units may be implemented in one or more of software and/or hardware in implementing the invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, Random-Access Memory (RAM), and/or non-volatile memory in a computer-readable medium, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random-Access Memory (SRAM), Dynamic Random-Access Memory (DRAM), other types of Random-Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments of the invention are described in a progressive manner; the same or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is substantially similar to the method embodiment, its description is brief; for relevant points, refer to the corresponding parts of the method embodiment.

Claims (10)

1. A double-process multi-target tracking method is characterized by comprising the following steps:
s11, acquiring a current frame image and a target detection result thereof in a main process;
s12, inputting the image and the target detection result thereof to a main line tracker in a main process to obtain a main line tracker result;
s13, inputting the image and the main line tracker result to an auxiliary line corrector of a subprocess in the main process;
s14, in the subprocess, the auxiliary line corrector judges whether the result of the main line tracker is correct or not, obtains a correction result and stores the correction result in a shared memory;
and S15, in the main process, the fusion device fuses the main line tracker result and the correction result to be used as a final multi-target tracking result.
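Steps S11-S15 of claim 1 can be sketched as a two-process pipeline. The sketch below is illustrative only, assuming Python's multiprocessing module in place of the unspecified process and shared-memory machinery; the trackers are trivial stand-ins (`main_line_track` merely numbers detections, and the corrector returns an identity mapping), not the patented models.

```python
import multiprocessing as mp

def main_line_track(frame, detections):
    # Stand-in for the high-speed, low-precision main line tracker (S12):
    # assign each detection a provisional track ID.
    return [(i, box) for i, box in enumerate(detections)]

def auxiliary_corrector(task_queue, shared_corrections):
    # Sub-process (S14): stand-in for the low-speed, high-precision
    # auxiliary line corrector; here it emits an identity mapping.
    while True:
        item = task_queue.get()
        if item is None:          # sentinel: shut down
            break
        t, frame, result = item
        shared_corrections[t] = [(tid, tid) for tid, _ in result]

def run_pipeline(frames_with_dets):
    manager = mp.Manager()
    shared = manager.dict()       # stands in for the shared memory
    tasks = mp.Queue()
    worker = mp.Process(target=auxiliary_corrector, args=(tasks, shared))
    worker.start()
    outputs = []
    for t, (frame, dets) in enumerate(frames_with_dets):
        result = main_line_track(frame, dets)            # S12
        tasks.put((t, frame, result))                    # S13
        latest = max(shared.keys(), default=None)        # S15: fuse the most
        if latest is not None:                           # recent correction,
            mapping = dict(shared[latest])               # which may lag by Δt
            result = [(mapping.get(i, i), box) for i, box in result]
        outputs.append(result)
    tasks.put(None)
    worker.join()
    return outputs
```

Because the corrector here only ever returns identity mappings, the fused output equals the provisional main line result; a real corrector would remap IDs whenever it detects an identity switch.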
2. The method according to claim 1, wherein the main line tracker in step S12 uses a high-speed low-precision multi-target tracking model.
3. The method of claim 1, wherein the auxiliary line corrector of step S13 uses a low-speed high-accuracy identity feature extraction model.
4. The method according to claim 1, wherein step S14 specifically comprises:
s41, extracting features of a main line tracker result by using the identity feature extraction model to form a tracking feature set;
s42, matching the tracking feature set with a historical cache feature set to obtain a correction result and storing the correction result into the shared memory;
s43, updating the historical cache feature set by using the modified tracking feature set.
5. The method according to claim 3, wherein the tracking feature set and the history cache feature set in step S42 are matched to obtain the correction result according to the formula:
m=f(T,H) (1)
wherein f(·,·) represents a matching function of two feature sets, T represents the tracking feature set, H represents the historical cache feature set, and m represents the correction result, which can be written as m = [⟨i, j⟩, ...], indicating that the target with ID i in the main line tracker result should have its ID corrected to j.
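The patent does not specify a concrete f(·,·); one plausible realization is greedy matching on cosine similarity between the two feature sets, recording a pair ⟨i, j⟩ only when the best cached identity differs from the current track ID. The similarity threshold below is an assumption:

```python
import numpy as np

def match_features(T, H, threshold=0.6):
    """A possible f(T, H): match tracking features to cached identities.

    T: {track_id: feature vector}    (tracking feature set)
    H: {history_id: feature vector}  (historical cache feature set)
    Returns m = [(i, j), ...]: track ID i should be corrected to j.
    """
    m, used = [], set()
    for i, ft in T.items():
        best_j, best_sim = None, threshold
        for j, fh in H.items():
            if j in used:
                continue
            sim = float(np.dot(ft, fh)
                        / (np.linalg.norm(ft) * np.linalg.norm(fh)))
            if sim > best_sim:
                best_j, best_sim = j, sim
        if best_j is not None and best_j != i:   # only record real changes
            m.append((i, best_j))
            used.add(best_j)
    return m
```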
6. The method according to claim 1, wherein step S15 specifically comprises:
detecting whether the correction result exists in the shared memory, if not, directly taking the main line tracker result as a final multi-target tracking result; and if so, acquiring the correction result from the shared memory, and fusing the correction result with the main line tracker result to be used as a final multi-target tracking result.
7. The method according to claim 5, wherein the correction result is obtained from the shared memory and fused with the main line tracker result to form the final multi-target tracking result according to the formula:
r = g(n_t, m_{t-Δt}) (2)
wherein t represents the current time; n_t represents the main line tracker result at the current moment; m_{t-Δt} represents the correction result obtained from the shared memory at the current moment, which carries a time delay of length Δt because the auxiliary line corrector uses a low-speed high-precision identity feature extraction model; g(·,·) represents a fusion function that corrects the main line tracker result at the current moment according to the correction result; and r represents the final multi-target tracking result.
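A minimal sketch of the fusion function g of formula (2), also covering the no-correction branch of claim 6; the list-of-pairs representations of results and corrections are assumptions:

```python
def fuse(n_t, m_prev):
    """g(n_t, m_{t-Δt}): apply a possibly delayed correction.

    n_t:    main line tracker result at time t, [(track_id, box), ...]
    m_prev: correction from the shared memory, [(i, j), ...] or None
    """
    if not m_prev:        # claim 6: no correction in the shared memory,
        return list(n_t)  # use the main line tracker result directly
    mapping = dict(m_prev)
    return [(mapping.get(tid, tid), box) for tid, box in n_t]
```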
8. A double-process multi-target tracking device, characterized by comprising:
the image acquisition module is used for acquiring a current frame image and a target detection result thereof in a main process;
a result obtaining module of the main line tracker, which is used for inputting the image and the target detection result thereof to the main line tracker in the main process and obtaining the result of the main line tracker;
the auxiliary line corrector input module is used for inputting the image and the main line tracker result to an auxiliary line corrector of a subprocess in the main process;
a main line tracker result correction module, configured to, in the sub-process, determine whether the main line tracker result is correct by the auxiliary line corrector, obtain a correction result, and store the correction result in the shared memory;
and the fusion module is used for fusing the result of the main line tracker and the correction result by the fusion device in the main process to be used as a final multi-target tracking result.
9. A double-process multi-target tracking system, characterized by comprising a memory in which executable code is stored and one or more processors, wherein the one or more processors, when executing the executable code, implement the double-process multi-target tracking method according to any one of claims 1-6.
10. A computer-readable storage medium, having stored thereon a program which, when executed by a processor, implements the double-process multi-target tracking method according to any one of claims 1 to 6.
CN202211398601.3A 2022-11-09 2022-11-09 Double-process multi-target tracking method Pending CN115546254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211398601.3A CN115546254A (en) 2022-11-09 2022-11-09 Double-process multi-target tracking method


Publications (1)

Publication Number Publication Date
CN115546254A true CN115546254A (en) 2022-12-30

Family

ID=84720270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211398601.3A Pending CN115546254A (en) 2022-11-09 2022-11-09 Double-process multi-target tracking method

Country Status (1)

Country Link
CN (1) CN115546254A (en)

Similar Documents

Publication Publication Date Title
CN108320296B (en) Method, device and equipment for detecting and tracking target object in video
CN112801229B (en) Training method and device for recognition model
CN108334892B (en) Vehicle type identification method, device and equipment based on convolutional neural network
CN111260726A (en) Visual positioning method and device
CN112001456B (en) Vehicle positioning method and device, storage medium and electronic equipment
CN111238450B (en) Visual positioning method and device
CN111508258A (en) Positioning method and device
CN111311634A (en) Face image detection method, device and equipment
CN111062372B (en) Method and device for predicting obstacle track
CN114419679B (en) Data analysis method, device and system based on wearable device data
CN112861831A (en) Target object identification method and device, storage medium and electronic equipment
CN111238523A (en) Method and device for predicting motion trail
CN112966577B (en) Method and device for model training and information providing
CN111192303A (en) Point cloud data processing method and device
CN112990099B (en) Method and device for detecting lane line
CN117197781B (en) Traffic sign recognition method and device, storage medium and electronic equipment
CN112883871A (en) Model training and unmanned vehicle motion strategy determining method and device
CN116434787B (en) Voice emotion recognition method and device, storage medium and electronic equipment
CN111798489A (en) Feature point tracking method, device, medium and unmanned device
CN115546254A (en) Double-process multi-target tracking method
CN112712561B (en) Picture construction method and device, storage medium and electronic equipment
CN112734851B (en) Pose determination method and device
CN113673436A (en) Behavior recognition and model training method and device
CN114187355A (en) Image calibration method and device
CN114926437A (en) Image quality evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination