CN110532883B - Improvement of on-line tracking algorithm by off-line tracking algorithm - Google Patents

Improvement of on-line tracking algorithm by off-line tracking algorithm

Info

Publication number
CN110532883B
CN110532883B (application CN201910695584.1A)
Authority
CN
China
Prior art keywords
tracking
offline
line
tracked
online
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910695584.1A
Other languages
Chinese (zh)
Other versions
CN110532883A (en)
Inventor
苏智辉
陈思静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910695584.1A priority Critical patent/CN110532883B/en
Priority to PCT/CN2019/117538 priority patent/WO2021017283A1/en
Publication of CN110532883A publication Critical patent/CN110532883A/en
Application granted granted Critical
Publication of CN110532883B publication Critical patent/CN110532883B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an offline-based online tracking method and device and a computer-readable storage medium. The method comprises the following steps: acquiring an online video in real time, acquiring each frame of the video that contains the target to be tracked, preprocessing each frame to generate an initial result for the target to be tracked, and correcting the initial result through a pre-trained offline tracking model to generate a final result. By combining the offline and online tracking methods, the method, on one hand, uses the offline tracking method to trace back and revise earlier results, improving the accuracy of processing real-time online video streams; on the other hand, it retains the online tracking method's ability to process the real-time online video stream.

Description

Improvement of on-line tracking algorithm by off-line tracking algorithm
Technical Field
The present invention relates to the field of computer technologies, and in particular to an offline-based online tracking method and apparatus and a computer-readable storage medium.
Background
Target tracking, also known as visual target tracking, is an important research direction in the current machine vision field. In general, target tracking detects an object of interest in an image sequence in order to extract, identify and track it, yielding the motion-state parameters of the target to be tracked (such as position, speed, acceleration and motion trajectory). The target can then be further processed and analyzed to understand the behavior of the moving object and to provide reference data for other technical fields (such as visual navigation, pose estimation and motion analysis). Target tracking has wide application in intelligent monitoring, human-computer interaction, robot navigation and similar fields. In these applications, target tracking is the basis on which a robot perceives and reacts to the external environment, and is the key to understanding the image.
Current target tracking techniques fall into two main categories: offline and online. The central idea of offline methods is to link the per-frame detections of an object into short tracking segments and then merge those segments using more reliable features; representative offline methods include the minimum-cost network-flow algorithm, energy-minimization methods and the minimum complete-graph algorithm. Because offline methods use more information from preceding and following frames, they can trace back and revise earlier results and ultimately achieve better accuracy. Online methods, by contrast, start from matching targets between the current frame and the next — that is, a result must be produced immediately when the current frame arrives. Online methods can therefore process both real-time video streams and offline video with good real-time performance, which gives them a place in practical applications; traditional online methods mostly apply Kalman filtering, particle filtering or Markov decision processes. However, the accuracy of online methods tends to be lower than that of offline methods.
Disclosure of Invention
The invention provides an offline-based online tracking method and device and a computer-readable storage medium, whose main aim is to improve the accuracy of processing real-time online video streams.
In order to achieve the above object, the present invention provides an offline-based online tracking method, the method comprising:
step A: acquiring online videos in real time;
and (B) step (B): acquiring each frame of image containing a target to be tracked from an online video;
step C: preprocessing each frame of image to generate an initial result of a target to be tracked;
step D: correcting the initial result through a pre-trained offline tracking model; and
Step E: generating a final result of the target to be tracked.
Optionally, the step C includes:
step C1: presetting parameters of video acquisition equipment;
step C2: collecting environmental parameters of the current scene, where the environmental parameters may include, but are not limited to: illumination, hue, noise, etc.;
step C3: denoising and normalizing each frame of image based on the video acquisition equipment parameters and the current scene environment parameters;
step C4: an initial result of the object to be tracked is generated.
Optionally, the step C further includes:
step C5: and setting a tracking area for each frame of image subjected to denoising and normalization, wherein the tracking area is polygonal, and the tracking area is a detection area comprising an object to be tracked.
Optionally, the environmental parameter comprises illumination, hue, or noise.
Optionally, the step D includes:
step D1: and inputting the initial result into a pre-trained offline tracking model.
Optionally, the step D further includes:
step D2: judging whether the initial result needs to be corrected or not; if the initial result is judged to need to be corrected, executing a step D3;
step D3: and correcting the initial result through a pre-trained offline tracking model.
Optionally, before executing the step D, the online tracking method based on the offline tracking algorithm further includes: and pre-training an offline tracking model, wherein the pre-trained offline tracking model comprises an offline tracking algorithm.
Optionally, the principle of the offline tracking algorithm is:
taking a tracking object in each frame of the video as a node;
obtaining similarity measurement between every two objects to be tracked through fusion of a pedestrian re-recognition model and a motion model;
wherein a smaller similarity measure indicates that the two tracked objects are more similar.
In order to achieve the above object, the present invention also provides an off-line based on-line tracking apparatus, the apparatus including a memory and a processor, the memory storing an off-line based on-line tracking program executable on the processor, the off-line based on-line tracking program implementing the off-line based on-line tracking method as described above when executed by the processor.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon an off-line based on-line tracking program executable by one or more processors to implement the steps of the off-line based on-line tracking method as described above.
In the offline-based online tracking method, device and computer-readable storage medium, an online video is first acquired in real time; then each frame of the video that contains the target to be tracked is acquired, each frame is preprocessed to generate an initial result for the target to be tracked, and the initial result is corrected through a pre-trained offline tracking model to generate a final result. By combining the offline and online tracking methods, the method, on one hand, uses the offline tracking method to trace back and revise earlier results, improving the accuracy of processing real-time online video streams; on the other hand, it retains the online tracking method's ability to process the real-time online video stream.
Drawings
FIG. 1 is a schematic flow chart of an online tracking method based on offline according to an embodiment of the invention;
FIG. 2 is a schematic flow chart of step C in FIG. 1;
FIG. 3 is a schematic flow chart of step D in FIG. 1;
FIG. 4 is a schematic diagram illustrating an exemplary embodiment of an off-line based on-line tracking device according to the present invention;
FIG. 5 is a schematic diagram of an online tracking program based on offline according to an embodiment of the invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides an online tracking method based on offline type. Referring to fig. 1, a flow chart of an online tracking method based on offline according to an embodiment of the invention is shown. The method may be performed by an apparatus, which may be implemented in software and/or hardware.
In this embodiment, the online tracking method based on offline comprises:
step A: acquiring online videos in real time;
and (B) step (B): acquiring each frame of image containing a target to be tracked from an online video;
step C: preprocessing each frame of image to generate an initial result of a target to be tracked;
step D: correcting the initial result through a pre-trained offline tracking model; and
Step E: generating a final result of the target to be tracked.
Further, the source of the online video in the step A is a video image acquired by the video acquisition device.
Further, the preprocessing in step C may include, but is not limited to: eliminating the influence of illumination, environmental hue, noise, etc. in different scenes through processing methods such as denoising and normalization, combined with preset video-acquisition-device parameters and the environmental parameters of the current scene.
Referring to fig. 2, in more detail, the step C includes:
step C1: presetting parameters of video acquisition equipment;
step C2: collecting environmental parameters of the current scene, where the environmental parameters may include, but are not limited to: illumination, hue, noise, etc.;
step C3: denoising and normalizing each frame of image based on the video acquisition equipment parameters and the current scene environment parameters;
step C4: an initial result of the object to be tracked is generated.
Further, the step C further includes:
step C5: and setting a tracking area for each frame of image subjected to denoising and normalization. The tracking area may be a polygon of any shape, and the tracking area is a detection area including an object to be tracked.
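The preprocessing of steps C3 and C5 can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the patent does not specify the denoising method, so a simple 3×3 box filter stands in for it, min-max scaling stands in for the normalization, and the polygonal tracking region is represented by a hypothetical boolean `mask` argument.

```python
import numpy as np

def preprocess_frame(frame, mask=None):
    """Denoise and normalize one frame (steps C3/C5, illustrative only).

    `frame` is an HxW (or HxWxC) uint8 array; `mask` is an optional
    boolean array marking the polygonal tracking region.
    """
    img = frame.astype(np.float64)
    # crude 3x3 box-filter denoising (a stand-in for the unspecified
    # denoising method of step C3)
    pad_width = [(1, 1), (1, 1)] + [(0, 0)] * (img.ndim - 2)
    padded = np.pad(img, pad_width, mode="edge")
    denoised = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            denoised += padded[1 + dy : 1 + dy + img.shape[0],
                               1 + dx : 1 + dx + img.shape[1]]
    denoised /= 9.0
    # min-max normalization to [0, 1]
    lo, hi = denoised.min(), denoised.max()
    norm = (denoised - lo) / (hi - lo) if hi > lo else np.zeros_like(denoised)
    if mask is not None:
        norm = np.where(mask, norm, 0.0)  # keep only the tracking region
    return norm
```
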
Further, the target to be tracked in the step C may be a person, an animal, a plant, an object, or the like. The person may be, but is not limited to, a pedestrian, a person in an operative state, a person riding a car, a person driving a car, or a person on a preset vehicle, etc. The animal may be, but is not limited to, a cat, dog, pig, bird, fish, etc. The plant may be, but is not limited to, flowers, grass, trees, and the like. The object may be, but is not limited to, a computer, a code scanning device, a balloon, or the like, having a certain shape.
In this embodiment, a pedestrian is taken as the example of the target to be tracked.
Further, before executing the step D, the online tracking method based on the offline tracking algorithm further includes: the offline tracking model is trained in advance. The pre-trained offline tracking model includes an offline tracking algorithm.
Specifically, the principle of the offline tracking algorithm is as follows:
and taking the tracked object in each frame of the video as a node, and fusing a Person Re-identification (ReID) model and a motion model to obtain the similarity measurement between every two objects to be tracked. Wherein a smaller similarity measure indicates that the two tracked objects are more similar.
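The fusion of the ReID model and the motion model into one pairwise measure can be sketched as below. Everything here is an assumption for illustration — the patent does not give the fusion rule, so a simple weighted sum of a ReID distance and a Euclidean motion term is used, with a hypothetical weight `alpha`; consistent with the text, smaller values mean the two detections are more similar.

```python
import math

def fused_similarity(reid_dist, pos_a, pos_b, alpha=0.5):
    """Fuse a person re-identification (ReID) distance with a motion term
    into one similarity measure between two detections; smaller means
    more likely the same object.  `alpha` and the Euclidean motion term
    are illustrative assumptions, not the patent's actual fusion."""
    motion = math.dist(pos_a, pos_b)  # distance between box centers
    return alpha * reid_dist + (1 - alpha) * motion
```
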
A brief description of two concepts from the current target-detection field, Intersection-over-Union (IoU) and Selective Search:
1. The intersection-over-union is the overlap ratio between a generated candidate bounding box and the ground-truth bounding box, i.e. the ratio of their intersection to their union; the ideal case is complete overlap, i.e. a ratio of 1.
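The intersection-over-union just described can be computed directly from two axis-aligned boxes; a short sketch (the `(x1, y1, x2, y2)` box representation is an assumption):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # intersection area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A fully overlapping pair gives the ideal ratio of 1, and disjoint boxes give 0.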
2. Selective Search uses three diversification strategies:
by using various color spaces with different invariance properties;
by using different similarity measures; and
by using different initialization regions.
In one embodiment, different similarity measures are combined, for example the following four:
1) Color similarity: S(color), where S(color) denotes the color-similarity measure;
2) Texture similarity: S(texture), where S(texture) denotes the texture-similarity measure;
3) Size similarity: S(size), where S(size) denotes the size-similarity measure;
4) Fit similarity: S(fit), where S(fit) denotes the fit-similarity measure.
The four measures are combined into one strategy in a weighted manner: S = a×S(color) + b×S(texture) + c×S(size) + d×S(fit).
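The weighted combination above is a one-liner; in this sketch the equal default weights are an illustrative assumption — the patent only says the four measures are combined "in a certain mode":

```python
def combined_similarity(s_color, s_texture, s_size, s_fit,
                        a=0.25, b=0.25, c=0.25, d=0.25):
    """S = a*S(color) + b*S(texture) + c*S(size) + d*S(fit).
    The equal default weights are illustrative, not specified by the patent."""
    return a * s_color + b * s_texture + c * s_size + d * s_fit
```
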
In practical applications, targets to be tracked within the same frame cannot be associated with each other; moreover, if target A and target B are the same object and target A and target C are also the same object, then target B and target C must be the same object. Under these constraints, the tracking problem is converted into a binary programming problem. In this embodiment, Gurobi — a large-scale mathematical programming optimizer — is used to solve the binary programming problem. The binary programming is applied together with the inter-frame difference method: because the targets in the scene are moving, their positions differ between image frames. The inter-frame difference method performs a difference operation on two or three temporally consecutive frames, subtracting the corresponding pixels of the frames and examining the absolute value of the grayscale difference; when this absolute value exceeds a certain threshold, a moving object is detected, realizing the detection function. The specific principle is as follows: denote the n-th and (n-1)-th frames as fn and fn-1, and the gray values of their corresponding pixels as fn(x, y) and fn-1(x, y); subtracting the gray values of corresponding pixels and taking the absolute value yields the difference image Dn:
Dn(x, y) = |fn(x, y) - fn-1(x, y)|
A threshold T is then set and each pixel is binarized according to the following rule, yielding the binarized image Rn', in which points with gray value 255 are foreground (target-to-be-tracked) points and points with gray value 0 are background points; connectivity analysis on Rn' finally yields the image Rn containing the complete moving target:
Rn'(x, y) = 255 when Dn(x, y) > T;
Rn'(x, y) = 0 otherwise.
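The inter-frame difference and binarization steps map directly to array operations; a minimal two-frame sketch (the subsequent connectivity analysis is omitted):

```python
import numpy as np

def frame_difference(f_prev, f_curr, threshold):
    """Inter-frame difference: Dn = |fn - fn-1|, binarized at threshold T.
    Returns the binary image Rn' with foreground = 255, background = 0."""
    # widen to int32 first so the subtraction of uint8 frames cannot wrap
    d = np.abs(f_curr.astype(np.int32) - f_prev.astype(np.int32))
    return np.where(d > threshold, 255, 0).astype(np.uint8)
```
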
Further, the step D includes:
step D1: inputting the initial result into a pre-trained offline tracking model;
step D2: judging whether the initial result needs to be corrected or not; if the initial result is judged to need to be corrected, executing a step D3; if it is determined that the initial result does not need to be corrected, step D4 is performed.
Step D3: correcting the initial result through a pre-trained offline tracking model;
step D4: no correction is made to the initial result.
In this embodiment, after the initial result is corrected by the pre-trained offline tracking model, a final result of the target to be tracked is generated.
Specifically, in one embodiment, the principle of step D is as follows: the generated initial result is recalculated every preset number of frames (not every frame, to balance computational cost against real-time performance — for example, every four frames), and the offline tracking algorithm of the offline tracking model then performs continuous tracking calculation on the current result to obtain the final result. More specifically, when the current similarity value recalculated by the offline tracking algorithm is smaller than the previous similarity value, the initial result is judged to need correction. It will be appreciated that in this embodiment the offline tracking algorithm is an existing offline tracking algorithm, and the embodiment of the present invention does not particularly limit which one is used.
Further, in the present embodiment, taking a pedestrian as an example, the final result of the target to be tracked generated in the step E includes the position and the number of pedestrians.
According to the online tracking method based on the offline type, firstly, online videos are collected in real time, then, each frame of image containing a target to be tracked in the online videos is obtained, preprocessing is carried out on each frame of image obtained to generate an initial result of the target to be tracked, and then, correction processing is carried out on the initial result through a pre-trained offline tracking model to generate a final result. According to the method, by combining the offline tracking method and the online tracking method, on one hand, the result before forward backtracking and modification is realized through the offline tracking method, so that the accuracy of processing real-time online video streams is improved; on the other hand, the real-time online video stream is processed by an online tracking method.
The invention also provides an offline-based online tracking device. Referring to fig. 4, an internal structure diagram of an offline-based online tracking device according to an embodiment of the present invention is shown. The offline-based online tracking device can be a PC (Personal Computer) or a terminal device such as a smart phone, a tablet computer or a portable computer. The device comprises at least a memory 11, a processor 12, a network interface 13 and a communication bus 14.
The memory 11 includes at least one type of readable storage medium including flash memory, a hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of an off-line based on-line tracking device, such as a hard disk of the off-line based on-line tracking device. The memory 11 may in other embodiments also be an external storage device based on an off-line online tracking device, for example based on a plug-in hard disk provided on an off-line online tracking device, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like. Further, the memory 11 may also include both an internal memory unit and an external memory device of the off-line based on-line tracking device. The memory 11 may be used not only for storing application software installed in an off-line-based on-line tracking device and various types of data, such as codes of an off-line-based on-line tracking program, etc., but also for temporarily storing data that has been output or is to be output.
The processor 12 may in some embodiments be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor or other data processing chip for running program code or processing data stored in the memory 11, e.g. executing an off-line based on-line tracking program or the like.
The network interface 13 may optionally comprise a standard wired interface or a wireless interface (e.g., a WI-FI interface), and is typically used to establish a communication connection between the offline-based online tracking device and other electronic devices.
The communication bus 14 is used to enable connection communications between these components.
Fig. 4 shows only an off-line based on-line tracking device with components 11 to 14 and an off-line based on-line tracking procedure, it will be understood by those skilled in the art that the configuration shown in fig. 4 does not constitute a limitation of an off-line based on-line tracking device, and may include fewer or more components than shown, or may combine some components, or a different arrangement of components.
In the embodiment of the online tracking device based on offline as shown in fig. 4, the memory 11 stores an online tracking program based on offline; the processor 12 implements the following steps when executing the online tracking program based on offline stored in the memory 11:
step A: acquiring online videos in real time;
and (B) step (B): acquiring each frame of image containing a target to be tracked from an online video;
step C: preprocessing each frame of image to generate an initial result of a target to be tracked;
step D: correcting the initial result through a pre-trained offline tracking model; and
Step E: generating a final result of the target to be tracked.
The online tracking program based on the offline type can be divided into one or more functional modules according to different functions of the online tracking program. One or more modules are stored in the memory 11 and executed by one or more processors (processor 12 in this embodiment) to perform the present invention, and the modules referred to herein are a series of computer program instruction segments capable of performing specific functions for describing the execution of an off-line based on-line tracking program in an off-line based on-line tracking device.
For example, referring to fig. 5, a program module diagram of an off-line based on-line tracking program in an embodiment of an off-line based on-line tracking device according to the present invention is shown, where the off-line based on-line tracking program may be divided into a video capturing module 31, a frame image capturing module 32, a preprocessing module 33, an off-line tracking module 34 and a result generating module 35, which are exemplary:
the video acquisition module 31 is used for acquiring online videos in real time;
a frame image acquisition module 32, configured to acquire each frame image containing the target to be tracked from the online video;
a preprocessing module 33, configured to preprocess each acquired frame of image to generate an initial result of the target to be tracked;
the offline tracking module 34 is configured to correct the initial result through a pre-trained offline tracking model;
a result generation module 35 for generating a final result of the object to be tracked.
The functions or operation steps implemented when the program modules such as the video capturing module 31, the frame image obtaining module 32, the preprocessing module 33, the offline tracking module 34, and the result generating module 35 are executed are substantially the same as those of the foregoing embodiments, and will not be described herein.
Fig. 5 shows only an off-line based on-line tracking device with modules 31-35 and an off-line based on-line tracking program, it will be understood by those skilled in the art that the configuration shown in fig. 5 is not limiting of the off-line based on-line tracking device and may include fewer or more modules than shown, or some modules in combination, or a different arrangement of modules.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integration can be realized in a form of hardware or a form of hardware plus a software functional module.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium having stored thereon an off-line based on-line tracking program executable by one or more processors to implement the operations of:
step A: acquiring online videos in real time;
and (B) step (B): acquiring each frame of image containing a target to be tracked from an online video;
step C: preprocessing each frame of image to generate an initial result of a target to be tracked;
step D: correcting the initial result through a pre-trained offline tracking model; and
Step E: generating a final result of the target to be tracked.
The computer-readable storage medium of the present invention is substantially the same as the embodiments of the off-line based on-line tracking apparatus and method described above, and will not be described in detail herein.
It should be noted that, the foregoing reference numerals of the embodiments of the present invention are merely for describing the embodiments, and do not represent the advantages and disadvantages of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of other like elements in a process, apparatus, article or method that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit its scope; any equivalent structures or equivalent process transformations made using the contents of this specification, applied directly or indirectly in other related technical fields, likewise fall within the scope of the invention.

Claims (7)

1. An offline-based online tracking method, characterized by comprising the following steps:
step A: acquiring online videos in real time;
step B: acquiring each frame of image containing a target to be tracked from the online video;
step C: preprocessing each frame of image to generate an initial result of a target to be tracked; comprising the following steps:
step C1: presetting parameters of video acquisition equipment;
step C2: collecting environmental parameters of a current scene; wherein the environmental parameters include: illumination, hue, or noise;
step C3: denoising and normalizing each frame of image based on the video acquisition equipment parameters and the current scene environment parameters;
step C4: generating an initial result of a target to be tracked;
step D: re-calculating the similarity value between every two targets to be tracked by utilizing a pre-trained offline tracking model for the generated initial result every other preset number of frames; when the recalculated current similarity value is smaller than the previous similarity value, correcting the initial result through the offline tracking model; a kind of electronic device with high-pressure air-conditioning system
Step E: producing a final result of the object to be tracked.
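The claimed flow (steps A to E) can be sketched as an online tracking loop with a periodic offline correction pass. This is an illustrative sketch only: `OnlineTracker`, `OfflineModel`, and `correct_every` are hypothetical placeholders standing in for the patent's actual models, which are not disclosed in code form.

```python
class OnlineTracker:
    """Stand-in online tracker: returns an initial per-frame result."""
    def update(self, frame):
        return {"frame": frame}          # initial result (steps B-C)

class OfflineModel:
    """Stand-in pre-trained offline model: re-scores accumulated results."""
    def __init__(self):
        self.correction_passes = 0
    def correct(self, results):
        self.correction_passes += 1      # a real model would fix identity switches here
        return results

def track(frames, online_tracker, offline_model, correct_every=3):
    results = []
    for i, frame in enumerate(frames):
        results.append(online_tracker.update(frame))   # steps B-C: initial result
        if (i + 1) % correct_every == 0:               # step D: periodic offline pass
            results = offline_model.correct(results)
    return results                                     # step E: final result
```

In this reading, the online tracker stays cheap and real-time, while the offline model only runs every `correct_every` frames to revise the accumulated trajectories.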
2. The offline-based online tracking method according to claim 1, wherein the step C further comprises:
step C5: setting a tracking area for each denoised and normalized frame of image, wherein the tracking area is a polygon and is a detection area containing the target to be tracked.
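A polygonal tracking area as in step C5 implies a membership test for detections. One common way to decide whether a detection centre lies inside an arbitrary polygon is the ray-casting test; the patent does not specify how the region test is performed, so the helper below is only a hypothetical sketch.

```python
def in_tracking_area(point, polygon):
    """Ray-casting point-in-polygon test.

    `point` is an (x, y) detection centre; `polygon` is a list of (x, y)
    vertices of the tracking area. Returns True if the point is inside.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of the point.
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside
```

Detections whose centres fall outside the polygon would simply be discarded before the tracking step.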
3. The offline-based online tracking method as claimed in claim 1 or 2, wherein the environmental parameters comprise illumination, hue, or noise.
4. The offline-based online tracking method of claim 1, wherein before performing step D, the method further comprises: pre-training an offline tracking model, wherein the pre-trained offline tracking model comprises an offline tracking algorithm.
5. The offline-based online tracking method of claim 4, wherein the offline tracking algorithm is based on the following principle:
taking each tracked object in each frame of the video as a node;
obtaining a similarity measure between every two objects to be tracked by fusing a pedestrian re-identification model and a motion model;
wherein a smaller similarity measure indicates that the two tracked objects are more similar.
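One plausible reading of the fusion in claim 5 combines an appearance distance computed from pedestrian re-identification features with a motion distance between predicted positions. The cosine and Euclidean terms and the weight `w` below are illustrative assumptions, since the patent does not disclose the exact fusion formula; consistent with the claim, a smaller value means the two objects are more similar.

```python
import math

def fused_similarity(reid_a, reid_b, pos_a, pos_b, w=0.7):
    """Fuse an appearance (re-ID) distance with a motion distance.

    Smaller return values indicate more similar objects, matching the
    convention in claim 5. `w` weights appearance against motion.
    """
    # Appearance term: cosine distance between re-ID feature vectors.
    dot = sum(a * b for a, b in zip(reid_a, reid_b))
    norm_a = math.sqrt(sum(a * a for a in reid_a))
    norm_b = math.sqrt(sum(b * b for b in reid_b))
    appearance = 1.0 - dot / (norm_a * norm_b)
    # Motion term: Euclidean distance between predicted positions.
    motion = math.dist(pos_a, pos_b)
    return w * appearance + (1.0 - w) * motion
```

With each per-frame object as a graph node, these pairwise values would serve as edge weights for the offline association step.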
6. An offline-based online tracking device, comprising a memory and a processor, the memory storing an offline-based online tracking program executable on the processor, wherein the program, when executed by the processor, implements the offline-based online tracking method of any one of claims 1-5.
7. A computer-readable storage medium having stored thereon an offline-based online tracking program executable by one or more processors to implement the steps of the offline-based online tracking method of any one of claims 1 to 5.
CN201910695584.1A 2019-07-30 2019-07-30 Improvement of on-line tracking algorithm by off-line tracking algorithm Active CN110532883B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910695584.1A CN110532883B (en) 2019-07-30 2019-07-30 Improvement of on-line tracking algorithm by off-line tracking algorithm
PCT/CN2019/117538 WO2021017283A1 (en) 2019-07-30 2019-11-12 Offline method-based online tracking method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910695584.1A CN110532883B (en) 2019-07-30 2019-07-30 Improvement of on-line tracking algorithm by off-line tracking algorithm

Publications (2)

Publication Number Publication Date
CN110532883A CN110532883A (en) 2019-12-03
CN110532883B true CN110532883B (en) 2023-09-01

Family

ID=68661109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910695584.1A Active CN110532883B (en) 2019-07-30 2019-07-30 Improvement of on-line tracking algorithm by off-line tracking algorithm

Country Status (2)

Country Link
CN (1) CN110532883B (en)
WO (1) WO2021017283A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814725A (en) * 2020-07-20 2020-10-23 北京华正明天信息技术股份有限公司 Early warning method for judging ignition of monitoring video based on CNN + LSTM + MLP combined neural network
CN112131966A (en) * 2020-09-01 2020-12-25 深圳中兴网信科技有限公司 Mud truck monitoring method and system and storage medium
CN112612768B (en) * 2020-12-11 2022-09-16 上海哔哩哔哩科技有限公司 Model training method and device
CN112861971A (en) * 2021-02-07 2021-05-28 启迪云控(上海)汽车科技有限公司 Cross-point road side perception target tracking method and system
CN113658210A (en) * 2021-09-02 2021-11-16 西安中科西光航天科技有限公司 Front-end real-time target tracking method based on Jetson NX platform
CN113744299B (en) * 2021-09-02 2022-07-12 上海安维尔信息科技股份有限公司 Camera control method and device, electronic equipment and storage medium
CN114092516B (en) * 2021-11-08 2024-05-14 国汽智控(北京)科技有限公司 Multi-target tracking detection method, device, equipment and medium
CN114495612B (en) * 2021-12-15 2023-12-26 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) Online simulation training device for infrared tracking warning equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101325691A (en) * 2007-06-14 2008-12-17 清华大学 Method and apparatus for tracing a plurality of observation model with fusion of differ durations
CN104134078A (en) * 2014-07-22 2014-11-05 华中科技大学 Automatic selection method for classifiers in people flow counting system
CN107679455A (en) * 2017-08-29 2018-02-09 平安科技(深圳)有限公司 Target tracker, method and computer-readable recording medium
CN109447121A (en) * 2018-09-27 2019-03-08 清华大学 A kind of Visual Sensor Networks multi-object tracking method, apparatus and system
CN109636829A (en) * 2018-11-24 2019-04-16 华中科技大学 A kind of multi-object tracking method based on semantic information and scene information

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN103218816B (en) * 2013-04-18 2016-05-04 中山大学 A kind of crowd density estimation method and people flow rate statistical method based on video analysis
TWI543117B (en) * 2014-06-18 2016-07-21 台達電子工業股份有限公司 Method for recognizing and locating object
CN109800624A (en) * 2018-11-27 2019-05-24 上海眼控科技股份有限公司 A kind of multi-object tracking method identified again based on pedestrian

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
CN101325691A (en) * 2007-06-14 2008-12-17 清华大学 Method and apparatus for tracing a plurality of observation model with fusion of differ durations
WO2008151577A1 (en) * 2007-06-14 2008-12-18 Tsinghua University Tracking method and device adopting a series of observation models with different lifespans
CN104134078A (en) * 2014-07-22 2014-11-05 华中科技大学 Automatic selection method for classifiers in people flow counting system
CN107679455A (en) * 2017-08-29 2018-02-09 平安科技(深圳)有限公司 Target tracker, method and computer-readable recording medium
WO2019041519A1 (en) * 2017-08-29 2019-03-07 平安科技(深圳)有限公司 Target tracking device and method, and computer-readable storage medium
CN109447121A (en) * 2018-09-27 2019-03-08 清华大学 A kind of Visual Sensor Networks multi-object tracking method, apparatus and system
CN109636829A (en) * 2018-11-24 2019-04-16 华中科技大学 A kind of multi-object tracking method based on semantic information and scene information

Also Published As

Publication number Publication date
WO2021017283A1 (en) 2021-02-04
CN110532883A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN110532883B (en) Improvement of on-line tracking algorithm by off-line tracking algorithm
US10719743B2 (en) License plate reader using optical character recognition on plural detected regions
US11195038B2 (en) Device and a method for extracting dynamic information on a scene using a convolutional neural network
CN110569702B (en) Video stream processing method and device
US9111148B2 (en) Unsupervised learning of feature anomalies for a video surveillance system
US8175333B2 (en) Estimator identifier component for behavioral recognition system
Laradji et al. Where are the masks: Instance segmentation with image-level supervision
US8300924B2 (en) Tracker component for behavioral recognition system
CN112257502A (en) Pedestrian identification and tracking method and device for surveillance video and storage medium
US9805271B2 (en) Scene preset identification using quadtree decomposition analysis
US9113143B2 (en) Detecting and responding to an out-of-focus camera in a video analytics system
Gurram et al. Monocular depth estimation through virtual-world supervision and real-world sfm self-supervision
CN112949366B (en) Obstacle identification method and device
WO2021063476A1 (en) Method for training a generative adversarial network, modified image generation module and system for detecting features in an image
CN110599516A (en) Moving target detection method and device, storage medium and terminal equipment
CN111353429A (en) Interest degree method and system based on eyeball turning
CN117292338B (en) Vehicle accident identification and analysis method based on video stream analysis
US20190325306A1 (en) Device and a method for processing data sequences using a convolutional neural network
CN107862314B (en) Code spraying identification method and device
CN111259700A (en) Method and apparatus for generating gait recognition model
CN117392638A (en) Open object class sensing method and device for serving robot scene
KR20200123324A (en) A method for pig segmentation using connected component analysis and yolo algorithm
CN115880662A (en) 3D target detection method for autonomous driving by utilizing synergistic effect of heterogeneous sensors
Razzok et al. Pedestrian detection under weather conditions using conditional generative adversarial network
WO2022126355A1 (en) Image-based processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant