WO2021232652A1 - Target tracking method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Target tracking method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2021232652A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
frame
image
information
target detection
Prior art date
Application number
PCT/CN2020/117751
Other languages
English (en)
French (fr)
Inventor
苏翔博
袁宇辰
孙昊
Original Assignee
Beijing Baidu Netcom Science and Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co., Ltd.
Priority to US17/776,155 (US20220383535A1)
Priority to EP20936648.3A (EP4044117A4)
Priority to JP2022527078A (JP7375192B2)
Priority to KR1020227025087A (KR20220110320A)
Publication of WO2021232652A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30241 Trajectory

Definitions

  • the present disclosure relates to the field of artificial intelligence, and in particular to the field of computer vision technology.
  • In the related art, for target tracking in a real-time video stream, a detector can first be used to extract all target detection frames in the current frame image, and then all target detection frames can be associated and matched with existing trajectories to obtain the new trajectories of the targets under the current frame image.
  • However, if the motion state of the target changes drastically, such as sudden movement after a long period of rest, a sudden stop during movement, or a significant change in moving speed, the detection frame of the target will fail to match the existing trajectory position, resulting in tracking failure.
  • the embodiments of the present disclosure provide a target tracking method, device, electronic equipment, and computer-readable storage medium, so as to solve the current problem of tracking failure when the motion state of the tracking target changes sharply.
  • In a first aspect, embodiments of the present disclosure provide a target tracking method, including: performing target detection on a current frame image to obtain first information of a target detection frame in the current frame image, the first information being used to indicate a first position and a first size; performing target tracking using Kalman filtering to obtain second information of a target tracking frame in the current frame image, the second information being used to indicate a second position and a second size; performing fault-tolerant correction on a prediction error covariance matrix in the Kalman filtering to obtain a corrected covariance matrix; calculating a Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix; and performing association matching on the target detection frame and the target tracking frame in the current frame image according to the Mahalanobis distance.
  • In this way, the Mahalanobis distance between the target detection frame and the target tracking frame can be calculated based on the prediction error covariance matrix after the fault-tolerant correction, so that the Mahalanobis distance can be maintained within a reasonable range even when the target motion state changes sharply, and the robustness of target tracking under different motion states can be enhanced.
  • In a second aspect, embodiments of the present disclosure provide a target tracking device, including:
  • a detection module, configured to perform target detection on the current frame image to obtain first information of the target detection frame in the current frame image, where the first information is used to indicate a first position and a first size;
  • a tracking module, configured to perform target tracking using Kalman filtering to obtain second information of the target tracking frame in the current frame image, where the second information is used to indicate a second position and a second size;
  • a correction module, configured to perform fault-tolerant correction on the prediction error covariance matrix in the Kalman filtering to obtain the corrected covariance matrix;
  • a first calculation module, configured to calculate the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix;
  • a matching module, configured to associate and match the target detection frame and the target tracking frame in the current frame image according to the Mahalanobis distance.
  • the embodiments of the present disclosure also provide an electronic device, including:
  • at least one processor; and
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the target tracking method as described above.
  • the embodiments of the present disclosure also provide a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to make the computer execute the target tracking method as described above.
  • In the embodiments of the present disclosure, the Mahalanobis distance between the target detection frame and the target tracking frame can be calculated based on the fault-tolerant-corrected prediction error covariance matrix, so that even if the target motion state changes sharply, the Mahalanobis distance can be maintained within a reasonable range; thus, when the target detection frame and target tracking frame in the current frame image are associated and matched according to the Mahalanobis distance, the robustness of target tracking under different motion states can be enhanced.
  • The technical means adopted are: performing target detection on the current frame image to obtain the first information of the target detection frame in the current frame image, where the first information is used to indicate the first position and the first size; performing target tracking using Kalman filtering to obtain the second information of the target tracking frame in the current frame image, where the second information is used to indicate the second position and the second size; performing fault-tolerant correction on the prediction error covariance matrix in the Kalman filtering to obtain the corrected covariance matrix; calculating the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix; and associating and matching the target detection frame and target tracking frame in the current frame image according to the Mahalanobis distance. This overcomes the present technical problem that tracking easily fails when the motion state of the tracking target changes sharply, and thereby achieves the technical effect of enhancing the robustness of target tracking under different motion states.
  • Fig. 1 is a flowchart of a target tracking method according to an embodiment of the present disclosure;
  • Fig. 2 is a flowchart of the target tracking process in a specific example of the present disclosure;
  • Fig. 3 is a block diagram of a tracking device used to implement the target tracking method of an embodiment of the present disclosure;
  • Fig. 4 is a block diagram of an electronic device used to implement the target tracking method of an embodiment of the present disclosure.
  • Fig. 1 is a flowchart of a target tracking method provided by an embodiment of the present disclosure. The method is applied to an electronic device. As shown in Fig. 1, the method includes the following steps:
  • Step 101 Perform target detection on a current frame image to obtain first information of a target detection frame in the current frame image.
  • the first information is used to indicate the first position and the first size, that is, indicate the position information (such as coordinate information) and size information of the target contained in the corresponding target detection frame.
  • the first information can be expressed as (x, y, w, h), where x represents the abscissa of the upper-left corner of the target detection frame, y represents the ordinate of the upper-left corner, w represents the width, and h represents the height of the target detection frame; further, x, y, w, and h may all be in pixels, corresponding to the region of the target in the image.
  • the foregoing process of performing target detection on the current frame image may include: inputting the current frame image into a target detection model (also called a target detector) to obtain the first information of the target detection frame in the current frame image.
  • the number of target detection frames obtained through target detection can be multiple, that is, a series of target detection frames can be obtained after target detection, and each target detection frame contains coordinate information and size information of the corresponding target.
  • the above-mentioned target detection model can be obtained by training with deep-learning-based methods in the related art, and can be any of the following: a Single Shot MultiBox Detector (SSD) model, a Single-Shot Refinement Neural Network for Object Detection (RefineDet) model, a MobileNet-based Single Shot MultiBox Detector (MobileNet-SSD) model built on efficient convolutional neural networks for mobile vision applications, a You Only Look Once: Unified, Real-Time Object Detection (YOLO) model, and so on.
  • when a target detection model is used for target detection, if the model was obtained by training on preprocessed images, the current frame image needs to be preprocessed in the same way before target detection is performed on it. For example, the current frame image is scaled to a fixed size (such as 512*512) and a uniform RGB mean value (such as [104,117,123]) is subtracted, to ensure consistency with the training samples used during model training and to enhance the robustness of the model.
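  • As an illustration only, a minimal preprocessing sketch along these lines might look as follows. OpenCV and NumPy are assumed; the fixed size 512*512 and the RGB mean [104,117,123] are the example values from the text, and `detector` in the trailing comment is a hypothetical model handle, not an API defined by this disclosure.

```python
import cv2
import numpy as np

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """Scale the current frame to a fixed size and subtract a uniform
    channel mean, mirroring the preprocessing used at training time."""
    resized = cv2.resize(frame_bgr, (512, 512))      # fixed input size
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)   # detector expects RGB
    return rgb.astype(np.float32) - np.array([104.0, 117.0, 123.0], dtype=np.float32)

# boxes = detector(preprocess(frame))  # hypothetical detector returning (x, y, w, h) boxes
```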
  • the above-mentioned current frame image may be an image in a real-time video stream from a surveillance camera or a camera in another scene.
  • the above targets can be pedestrians, vehicles, etc.
  • Step 102: Use Kalman filtering to perform target tracking to obtain second information of the target tracking frame in the current frame image.
  • the second information is used to indicate the second position and the second size, that is, the position information (such as coordinate information) and size information of the target contained in the corresponding target tracking frame.
  • the second information can be expressed as (x, y, w, h), where x represents the abscissa of the upper-left corner of the target tracking frame, y represents the ordinate of the upper-left corner, w represents the width, and h represents the height of the target tracking frame; further, x, y, w, and h may all be in pixels, corresponding to the region of the target in the image.
  • target tracking using Kalman filtering can be understood as: predicting the position and size at which the target may appear in the current frame image, based on the existing motion state of the target trajectory.
  • the target trajectory can be expressed as target detection frames on different frames of images belonging to the same target in several frames of images before the current frame of image.
  • Each target trajectory corresponds to a Kalman filter.
  • the Kalman filter is initialized with the detection frame in which the target first appears, and after the association matching of each frame image is completed, the Kalman filter is corrected with the matched target detection frame.
  • For a newly obtained frame image (such as the current frame image), prediction can be performed with the Kalman filters of all stored target trajectories to obtain the positions where the target trajectories are predicted to appear in the current frame image, as well as the prediction error covariance matrix Σ of the Kalman filter.
  • the prediction error covariance matrix Σ can be chosen as a matrix of dimension 4x4, which describes the error covariance between the predicted value and the true value in target tracking.
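  • For illustration, one per-trajectory prediction step could be sketched as follows. A standard constant-velocity Kalman filter over the state (x, y, w, h, vx, vy, vw, vh) is assumed; the state layout and the process-noise value are illustrative assumptions of this sketch, not prescribed by this disclosure.

```python
import numpy as np

DIM = 4                                   # measured quantities: x, y, w, h
F = np.eye(2 * DIM)                       # constant-velocity transition
F[:DIM, DIM:] = np.eye(DIM)               # position/size += velocity
H = np.hstack([np.eye(DIM), np.zeros((DIM, DIM))])  # observe x, y, w, h only
Q = np.eye(2 * DIM) * 1e-2                # process noise (illustrative)

def predict(mean: np.ndarray, P: np.ndarray):
    """One Kalman prediction step for one trajectory.

    Returns the updated state and state covariance, the predicted tracking
    frame mu = (x, y, w, h), and the 4x4 prediction error covariance Sigma
    of the predicted measurement, as described above."""
    mean = F @ mean
    P = F @ P @ F.T + Q
    mu = H @ mean
    sigma = H @ P @ H.T
    return mean, P, mu, sigma
```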
  • Step 103 Perform fault-tolerant correction on the prediction error covariance matrix in the Kalman filter to obtain a corrected covariance matrix.
  • Step 104 Calculate the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information and the corrected covariance matrix.
  • the above-mentioned fault-tolerant correction of the prediction error covariance matrix in the Kalman filtering is mainly intended to improve the Mahalanobis distance calculation formula, so that the Mahalanobis distance between the target detection frame and the target tracking frame calculated by the improved formula can be maintained within a reasonable range even when the target motion state changes drastically.
  • the above-mentioned fault-tolerant correction method can be set based on actual requirements, and there is no restriction here.
  • Step 105 Perform association matching on the target detection frame and target tracking frame in the current frame image according to the Mahalanobis distance.
  • in this step, a graph matching algorithm such as the Hungarian algorithm can be used to perform association matching between target detection frames and target tracking frames, so as to obtain several matched pairings of target detection frames and target tracking frames.
  • the target detection frame and the target tracking frame in a pairing belong to the same target trajectory and the same target, and can be assigned the same target identity (ID).
  • after the association matching is completed, the new target trajectories under the current frame image can be obtained, including updating existing target trajectories, deleting existing target trajectories, and/or adding new target trajectories.
  • the process of association matching in this step may include: when the Mahalanobis distance is less than or equal to a preset threshold, determining that the corresponding target detection frame and target tracking frame match; or, when the Mahalanobis distance is greater than the preset threshold, determining that the corresponding target detection frame and target tracking frame do not match. In other words, the smaller the Mahalanobis distance between a target detection frame and a target tracking frame, the more likely the two belong to the same target. Thus, performing association matching by comparing the distance with the preset threshold makes the matching process easy to implement.
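  • As a sketch, threshold-gated association over a precomputed Mahalanobis distance matrix could look as follows. SciPy's Hungarian solver `scipy.optimize.linear_sum_assignment` is assumed to be available, and the threshold value 9.0 is an arbitrary placeholder for the preset threshold mentioned above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_by_distance(dist: np.ndarray, threshold: float = 9.0):
    """dist[i, j]: Mahalanobis distance between track i and detection j.
    Returns matched (track, detection) pairs plus the unmatched indices."""
    rows, cols = linear_sum_assignment(dist)            # minimize total distance
    pairs = [(i, j) for i, j in zip(rows, cols) if dist[i, j] <= threshold]
    matched_t = {i for i, _ in pairs}
    matched_d = {j for _, j in pairs}
    unmatched_t = [i for i in range(dist.shape[0]) if i not in matched_t]
    unmatched_d = [j for j in range(dist.shape[1]) if j not in matched_d]
    return pairs, unmatched_t, unmatched_d
```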
  • the target tracking method of the embodiment of the present disclosure can calculate the Mahalanobis distance between the target detection frame and the target tracking frame based on the fault-tolerant-corrected prediction error covariance matrix, so that even when the target motion state changes sharply, the Mahalanobis distance can be maintained within a reasonable range; thus, when the target detection frame and target tracking frame in the current frame image are associated and matched according to the Mahalanobis distance, the robustness of target tracking under different motion states can be enhanced.
  • In multi-target tracking, the Mahalanobis distance calculation formula in the related art can be as follows:

$$D_M(X,\mu)=\sqrt{(X-\mu)^T\,\Sigma^{-1}\,(X-\mu)}$$

  • where μ represents the mean (x, y, w, h) of the Kalman filter, i.e., the coordinates and width-height dimensions, in the current frame image, of the target predicted by the Kalman filter (i.e., the target tracking frame); Σ represents the prediction error covariance matrix of the Kalman filter; and X represents the coordinates and width-height dimensions of the target detection frame in the current frame image, a variable describing the current actual motion state (x, y, w, h) of a certain target.
  • When a target maintains the same motion state for a period of time (for example, remaining stationary for a long time or keeping the same moving speed for a long time), the covariance Σ of the Kalman filter is small and Σ^-1 is large; that is, the deviation between the predicted value and the true value is considered small, and the prediction tends to assume that the target trajectory will keep its original motion state in the next frame. If the motion state of the target then changes suddenly, (X-μ) becomes large and, with Σ^-1 large, the calculated Mahalanobis distance D_M becomes abnormally large, leading to subsequent matching errors; once D_M exceeds a preset threshold, the target detection frame X is considered not to belong to the trajectory corresponding to that Kalman filter, and tracking fails.
  • In one embodiment, the process of calculating the Mahalanobis distance in the foregoing step 104 may be: calculating the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image using the following improved formula:

$$D_{Mnew}(X,\mu)=\sqrt{(X-\mu)^T\,(\Sigma+\alpha E)^{-1}\,(X-\mu)}$$

  • where X represents the first information of the target detection frame in the current frame image, for example including position information and size information, and can be expressed as (x, y, w, h); μ represents the second information of the target tracking frame in the current frame image obtained based on Kalman filtering, likewise including position information and size information, and can be expressed as (x, y, w, h); Σ represents the prediction error covariance matrix of the Kalman filter; (Σ+αE) represents the corrected covariance matrix, α is a preset coefficient greater than 0, and E represents the identity matrix.
  • With this formula, the calculated Mahalanobis distance can be maintained within a reasonable range even when the target motion state changes sharply, thereby enhancing the robustness of target tracking under different motion states.
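  • A minimal sketch of the corrected distance, with NumPy assumed and `alpha` standing for the preset coefficient α (its default value here is arbitrary):

```python
import numpy as np

def mahalanobis_corrected(x: np.ndarray, mu: np.ndarray,
                          sigma: np.ndarray, alpha: float = 1.0) -> float:
    """D_Mnew(X, mu) = sqrt((X - mu)^T (Sigma + alpha*E)^(-1) (X - mu)).

    Adding alpha*E regularizes Sigma, so the distance stays bounded even
    when the filter is very confident (Sigma small) and the target's
    motion state changes abruptly."""
    d = x - mu
    corrected = sigma + alpha * np.eye(sigma.shape[0])  # Sigma + alpha*E
    return float(np.sqrt(d @ np.linalg.solve(corrected, d)))
```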
  • In the embodiments of the present disclosure, to enhance the accuracy of association matching, other similarity measures that assist association matching, such as appearance feature similarity and shape contour similarity, can be combined with the calculated Mahalanobis distance to form a similarity matching matrix, and association matching can be performed based on that matrix. Optionally, after the foregoing step 104, the method further includes:
  • calculating a distance similarity matrix M_D according to the Mahalanobis distance, where the value at row i, column j of M_D represents the distance similarity between the i-th target tracking frame and the j-th target detection frame in the current frame image; for example, the distance similarity can be the reciprocal of the Mahalanobis distance D_Mnew between the i-th target tracking frame and the j-th target detection frame, i.e., D_Mnew^-1, or another value obtained by processing D_Mnew, as long as it reflects similarity;
  • calculating an appearance deep-feature similarity matrix M_A, where the value at row i, column j of M_A represents the cosine similarity cos(F_i, F_j) between the appearance deep feature F_i, in the previous frame image, corresponding to the i-th target tracking frame and the appearance deep feature F_j of the j-th target detection frame; the appearance deep feature F can be extracted from the corresponding frame image using a deep convolutional neural network (such as a residual network, ResNet);
  • determining a similarity matching matrix according to M_D and M_A.
  • the foregoing step 105 may include: performing association matching on the target detection frame and the target tracking frame in the current frame image according to the similarity matching matrix.
  • In one implementation, when determining the similarity matching matrix, weighted averaging can be used to fuse M_D and M_A; for example, the similarity matching matrix equals aM_D + bM_A, where a and b are the weights of M_D and M_A respectively and may be set in advance based on actual demand.
  • In another implementation, when performing association matching according to the similarity matching matrix, the Hungarian algorithm can be used to perform bipartite graph matching, so as to obtain one-to-one matching results between target detection frames and target tracking frames.
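  • For illustration, the cosine-similarity matrix and the weighted fusion could be sketched as follows. The weights a and b and the bounded 1/(1+D_Mnew) variant of the distance similarity are assumptions of this sketch; the negated fused similarity can then be fed as a cost matrix to the assignment step shown earlier.

```python
import numpy as np

def cosine_similarity_matrix(track_feats: np.ndarray, det_feats: np.ndarray) -> np.ndarray:
    """M_A[i, j] = cos(F_i, F_j) between track and detection appearance features."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return t @ d.T

def fuse_similarity(m_d: np.ndarray, m_a: np.ndarray,
                    a: float = 0.5, b: float = 0.5) -> np.ndarray:
    """Similarity matching matrix = a*M_D + b*M_A (weighted average)."""
    return a * m_d + b * m_a

# m_d = 1.0 / (1.0 + dist)             # bounded variant of D_Mnew^-1
# cost = -fuse_similarity(m_d, m_a)    # similarity -> cost for the Hungarian step
```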
  • Understandably, in multi-target tracking, severe occlusion between foreground and background targets may occur; since a target close to the camera occludes most of the area of a target far from the camera, tracking errors and wrong results in subsequent frame images may follow. To overcome this problem, the embodiments of the present disclosure propose using the front-back topological relationship of targets to perform constrained matching.
  • Due to the perspective relationship, in an image captured by a camera device, the center point of the lower edge of the detection frame of a target on the ground can be regarded as the grounding point of the target; the closer this point is to the bottom of the image, the closer the target is to the camera, and vice versa.
  • For two target detection frames, when the intersection-over-union between them is greater than a certain threshold, the corresponding two targets can be considered severely occluded.
  • Based on the positions of the targets' grounding points, the front-back relationship between two targets can be judged: the target close to the camera is the foreground occluding target, and the target far from the camera is the occluded background target.
  • the front-back relationships among all occluded targets can be called the front-back topological relationship of the targets.
  • the consistency of the front-back topological relationship can be defined as follows: in consecutive frames (images), if two targets A and B are severely occluded in the previous frame, with target A being the foreground occluding target and target B the occluded background target, then in the next frame, if A and B are still severely occluded, target A is still the foreground occluding target and target B the occluded background target.
  • When multiple targets in the current frame image are severely occluded, the front-back topological relationship between the target trajectories in the previous frame can be obtained, and the consistency of the front-back topological relationship can be used to constrain the association matching, making the matching more accurate.
  • Optionally, after the above step 105, the method may further include: obtaining a topological relationship matrix M_T1 of the current frame image, and obtaining a topological relationship matrix M_T2 of the previous frame image of the current frame image; multiplying M_T1 and M_T2 element by element to obtain a topology change matrix M_0; and using M_0 to correct the matching results of the target detection frames in the current frame image.
  • the above correction processing can be understood as follows: if the front-back relationship between the i-th target and the j-th target has changed between the previous frame and the current frame, the detection frames matched to the i-th target and the j-th target in the current frame are exchanged, so as to correct the association matching results in the target tracking process.
  • In this way, the consistency constraint on the front-back topological relationship of occluding objects in adjacent frame images can be used to enhance the reliability of matching when targets are severely occluded, thereby ensuring that the target tracking process proceeds smoothly.
  • For example, when obtaining M_T1 and M_T2, the center point (x+w/2, y+h) of the lower edge of a target detection frame can be used as the grounding point of the corresponding target; according to the perspective principle, the greater the ordinate y+h, the closer the target is to the camera, and conversely, the farther it is from the camera.
  • When determining the front-back relationship between two targets, the ordinates of the center points of the lower edges of the corresponding target detection frames can be compared. Taking M_T1 as an example, the value at row i, column j represents the front-back relationship t between the i-th target and the j-th target in the current frame image: if the two targets have an occlusion relationship and y_i+h_i < y_j+h_j, then t = -1, indicating that the i-th target is in front of the j-th target; if they have an occlusion relationship and y_i+h_i > y_j+h_j, then t = 1, indicating that the i-th target is behind the j-th target; and when there is no occlusion relationship, t = 0. M_T2 can be set in the same way as M_T1.
  • In the topology change matrix M_0 obtained by element-wise multiplication of M_T1 and M_T2, if the i-th target and the j-th target are both matched correctly, the value at row i, column j of M_0 is 0 or 1, meaning that the front-back relationship between the i-th target and the j-th target has not changed; if the value at row i, column j of M_0 is -1, it means that, due to a matching error, the front-back relationship of the i-th target and the j-th target has changed between the two adjacent frames. In this case, the detection frames matched by the two targets in the current frame image can be exchanged, so as to correct the corresponding target trajectories and ensure that the tracking proceeds smoothly.
  • Optionally, whether an occlusion relationship exists between two targets can be determined using the intersection over union (IoU) of the corresponding detection frames and tracking frames.
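  • For example, a standard IoU over (x, y, w, h) boxes might be computed as follows; the occlusion threshold 0.3 is an arbitrary placeholder, since the disclosure only speaks of "a certain threshold".

```python
def iou(box_a, box_b) -> float:
    """Intersection over union of two (x, y, w, h) boxes."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    ix = max(0.0, min(xa + wa, xb + wb) - max(xa, xb))
    iy = max(0.0, min(ya + ha, yb + hb) - max(ya, yb))
    inter = ix * iy
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0

def severely_occluded(box_a, box_b, thresh: float = 0.3) -> bool:
    return iou(box_a, box_b) > thresh
```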
  • the applicable scenarios of the embodiments of the present disclosure include, but are not limited to, continuous tracking of targets such as pedestrians and/or vehicles in scenarios such as smart cities, smart transportation, smart retail, etc., to obtain information such as the location, identity, movement state, and historical trajectory of the target.
  • As shown in Fig. 2, the corresponding target tracking process may include the following steps:
  • S21: Obtain a real-time video stream from a surveillance camera or a camera in another scene;
  • S22: Extract the current frame image from the real-time video stream and preprocess it, for example by scaling it to a fixed size and subtracting a uniform RGB mean value;
  • S23: Input the preprocessed current frame image into a preset target detector, which outputs a series of target detection frames, each containing the coordinate and size information of a target;
  • S24: Use Kalman filtering to perform target tracking, and obtain the coordinate and size information of the targets contained in the target tracking frames in the current frame image;
  • S25: Calculate the Mahalanobis distance between the target detection frames and the target tracking frames in the current frame image with the aid of the improved Mahalanobis distance calculation formula; for the specific process, refer to the foregoing description;
  • S26: Perform association matching on the target detection frames and target tracking frames in the current frame image according to the Mahalanobis distance obtained in S25, for example using the Hungarian algorithm for bipartite graph matching;
  • S27: Apply a consistency constraint to the association matching results using the front-back topological relationship of targets in adjacent frame images;
  • S28: End the tracking process for the current frame image, extract the next image frame, and repeat S22 to S27 until the video stream ends; a target trajectory that has not matched any detection frame within a certain time (more than a certain number of image frames) can be marked as having left the scene and excluded from future association matching.
  • FIG. 3 is a schematic structural diagram of a target tracking device provided by an embodiment of the present disclosure.
  • the target tracking device 30 includes:
  • the detection module 31 is configured to perform target detection on the current frame image to obtain first information of the target detection frame in the current frame image, where the first information is used to indicate a first position and a first size;
  • the tracking module 32 is configured to perform target tracking using Kalman filtering to obtain second information of the target tracking frame in the current frame image, where the second information is used to indicate a second position and a second size;
  • the correction module 33 is used to perform error-tolerant correction on the prediction error covariance matrix in the Kalman filter to obtain the corrected covariance matrix;
  • the first calculation module 34 is configured to calculate the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix;
  • the matching module 35 is configured to associate and match the target detection frame and the target tracking frame in the current frame image according to the Mahalanobis distance.
  • the first calculation module 34 is specifically configured to calculate the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image using the following formula:

$$D_{Mnew}(X,\mu)=\sqrt{(X-\mu)^T\,(\Sigma+\alpha E)^{-1}\,(X-\mu)}$$

  • where X represents the first information, μ represents the second information, Σ represents the prediction error covariance matrix in the Kalman filtering, (Σ+αE) represents the corrected covariance matrix, α is a preset coefficient greater than 0, and E represents the identity matrix.
  • the matching module 35 is specifically configured to: when the Mahalanobis distance is less than or equal to a preset threshold, determine that the corresponding target detection frame and target tracking frame match; or, when the Mahalanobis distance is greater than the preset threshold, determine that the corresponding target detection frame and target tracking frame do not match.
  • the target tracking device 30 further includes:
  • an obtaining module, configured to obtain the topological relationship matrix M_T1 of the current frame image, and obtain the topological relationship matrix M_T2 of the previous frame image of the current frame image;
  • a second calculation module, configured to multiply M_T1 and M_T2 element by element to obtain a topology change matrix M_0;
  • a processing module, configured to use M_0 to correct the matching results of the target detection frames in the current frame image; where the value at row i, column j of M_T1 represents the front-back relationship between the i-th target and the j-th target in the current frame image, the value at row i, column j of M_T2 represents that relationship in the previous frame image, and the value at row i, column j of M_0 represents whether that relationship has changed in the current frame image compared with the previous frame image.
  • the target tracking device 30 further includes:
  • a third calculation module, configured to calculate a distance similarity matrix M_D according to the Mahalanobis distance, where the value at row i, column j of M_D represents the distance similarity between the i-th target tracking frame and the j-th target detection frame in the current frame image;
  • a fourth calculation module, configured to calculate an appearance deep-feature similarity matrix M_A, where the value at row i, column j of M_A represents the cosine similarity between the appearance deep feature, in the previous frame image, corresponding to the i-th target tracking frame and the appearance deep feature of the j-th target detection frame;
  • a determining module, configured to determine a similarity matching matrix according to M_D and M_A;
  • the matching module 35 is specifically configured to: perform association matching on the target detection frame and the target tracking frame in the current frame image according to the similarity matching matrix.
  • Understandably, the target tracking device 30 of the embodiment of the present disclosure can implement the various processes implemented in the method embodiment shown in Fig. 1 and achieve the same beneficial effects; to avoid repetition, details are not described herein again.
  • the present disclosure also provides an electronic device and a readable storage medium.
  • FIG. 4 it is a block diagram of an electronic device used to implement the target tracking method of an embodiment of the present disclosure.
  • Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices can also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementation of the present disclosure described and/or required herein.
  • the electronic device includes one or more processors 401, a memory 402, and interfaces for connecting various components, including a high-speed interface and a low-speed interface.
  • the various components are connected to each other using different buses, and can be installed on a common motherboard or installed in other ways as needed.
  • the processor may process instructions executed in the electronic device, including instructions stored in or on the memory to display graphical information of the GUI on an external input/output device (such as a display device coupled to an interface).
  • an external input/output device such as a display device coupled to an interface.
  • In other implementations, if desired, multiple processors and/or multiple buses can be used together with multiple memories.
  • multiple electronic devices can be connected, and each device provides part of the necessary operations (for example, as a server array, a group of blade servers, or a multi-processor system).
  • a processor 401 is taken as an example.
  • the memory 402 is a non-transitory computer-readable storage medium provided by this disclosure.
  • the memory stores instructions executable by at least one processor, so that the at least one processor executes the target tracking method provided in the present disclosure.
  • the non-transitory computer-readable storage medium of the present disclosure stores computer instructions for causing a computer to execute the target tracking method provided by the present disclosure.
  • the memory 402 can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the target tracking method in the embodiments of the present disclosure (for example, the detection module 31, the tracking module 32, the correction module 33, the first calculation module 34, and the matching module 35 shown in Fig. 3).
  • the processor 401 executes various functional applications and data processing of the server by running non-transient software programs, instructions, and modules stored in the memory 402, that is, implements the target tracking method in the foregoing method embodiment.
  • the memory 402 may include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created by the use of an electronic device, and the like.
  • the memory 402 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
  • the memory 402 may optionally include memories remotely provided with respect to the processor 401, and these remote memories may be connected to the electronic device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the electronic device of the target tracking method may further include: an input device 403 and an output device 404.
  • the processor 401, the memory 402, the input device 403, and the output device 404 may be connected by a bus or in other ways. In FIG. 4, the connection by a bus is taken as an example.
  • the input device 403 can receive input numeric or character information and generate key signal inputs related to the user settings and function control of the electronic device for the target tracking method; it may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, a joystick, or another input device.
  • the output device 404 may include a display device, an auxiliary lighting device (for example, LED), a tactile feedback device (for example, a vibration motor), and the like.
  • the display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
  • Various implementations of the systems and techniques described herein can be realized in digital electronic circuit systems, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that can be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a dedicated or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit the data and instructions to the storage system, the at least one input device, and the at least one output device.
  • The terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (for example, magnetic disks, optical disks, memories, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including machine-readable media that receive machine instructions as machine-readable signals.
  • The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • To provide interaction with a user, the systems and techniques described here can be implemented on a computer that has: a display device (for example, a CRT (cathode-ray tube) or LCD (liquid-crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer.
  • Other types of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, voice input, or tactile input).
  • the systems and technologies described herein can be implemented in a computing system that includes back-end components (for example, as a data server), or a computing system that includes middleware components (for example, an application server), or a computing system that includes front-end components (for example, a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and technologies described herein), or a computing system that includes any combination of such back-end components, middleware components, or front-end components.
  • the components of the system can be connected to each other through any form or medium of digital data communication (for example, a communication network). Examples of communication networks include: local area network (LAN), wide area network (WAN), and the Internet.
  • the computer system can include clients and servers.
  • the client and server are generally far away from each other and usually interact through a communication network.
  • the relationship between the client and the server is generated by computer programs that run on the corresponding computers and have a client-server relationship with each other.
  • According to the technical solutions of the embodiments of the present disclosure, the Mahalanobis distance between the target detection frame and the target tracking frame can be calculated based on the prediction error covariance matrix after the fault-tolerant correction, so that even when the target motion state changes sharply, the Mahalanobis distance can be maintained within a reasonable range; thus, when the target detection frame and target tracking frame in the current frame image are associated and matched according to the Mahalanobis distance, the robustness of target tracking under different motion states can be enhanced.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A target tracking method and apparatus, an electronic device, and a computer-readable storage medium, relating to the field of computer vision technology. The specific scheme is: performing target detection on a current frame image to obtain first information of a target detection frame, the first information indicating a first position and a first size (101); performing target tracking using Kalman filtering to obtain second information of a target tracking frame in the current frame image, the second information indicating a second position and a second size (102); performing fault-tolerant correction on a prediction error covariance matrix in the Kalman filtering to obtain a corrected covariance matrix (103); calculating a Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix (104); and performing association matching on the target detection frame and the target tracking frame in the current frame image according to the Mahalanobis distance (105).

Description

Target tracking method and apparatus, electronic device, and computer-readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
The present disclosure claims priority to Chinese Patent Application No. 202010443892.8, filed in China on May 22, 2020, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of artificial intelligence, and in particular to the field of computer vision technology.
BACKGROUND
In the related art, for target tracking in a real-time video stream, a detector can first be used to extract all target detection frames in the current frame image, and then all target detection frames are associated and matched with existing trajectories to obtain the new trajectories of the targets under the current frame image. However, if the motion state of a target changes drastically, for example sudden movement after a long period of rest, a sudden stop during movement, or an obvious change in moving speed, the detection frame of the target will fail to match the position of the existing trajectory, resulting in tracking failure.
SUMMARY
Embodiments of the present disclosure provide a target tracking method and apparatus, an electronic device, and a computer-readable storage medium, so as to solve the current problem that tracking easily fails when the motion state of a tracking target changes sharply.
To solve the above technical problem, the present disclosure is implemented as follows:
In a first aspect, an embodiment of the present disclosure provides a target tracking method, including:
performing target detection on a current frame image to obtain first information of a target detection frame in the current frame image, the first information being used to indicate a first position and a first size;
performing target tracking using Kalman filtering to obtain second information of a target tracking frame in the current frame image, the second information being used to indicate a second position and a second size;
performing fault-tolerant correction on a prediction error covariance matrix in the Kalman filtering to obtain a corrected covariance matrix;
calculating a Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix; and
performing association matching on the target detection frame and the target tracking frame in the current frame image according to the Mahalanobis distance.
In this way, the Mahalanobis distance between the target detection frame and the target tracking frame can be calculated based on the fault-tolerant-corrected prediction error covariance matrix, so that even when the motion state of the target changes sharply, the Mahalanobis distance can be maintained within a reasonable range; therefore, when the target detection frame and the target tracking frame in the current frame image are associated and matched according to this Mahalanobis distance, the robustness of target tracking under different motion states can be enhanced.
In a second aspect, an embodiment of the present disclosure provides a target tracking apparatus, including:
a detection module configured to perform target detection on a current frame image to obtain first information of a target detection frame in the current frame image, the first information being used to indicate a first position and a first size;
a tracking module configured to perform target tracking using Kalman filtering to obtain second information of a target tracking frame in the current frame image, the second information being used to indicate a second position and a second size;
a correction module configured to perform fault-tolerant correction on a prediction error covariance matrix in the Kalman filtering to obtain a corrected covariance matrix;
a first calculation module configured to calculate a Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix; and
a matching module configured to perform association matching on the target detection frame and the target tracking frame in the current frame image according to the Mahalanobis distance.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
at least one processor; and
a memory communicatively connected with the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the target tracking method described above.
In a fourth aspect, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to execute the target tracking method described above.
One embodiment of the above application has the following advantage or beneficial effect: the Mahalanobis distance between the target detection frame and the target tracking frame can be calculated based on the fault-tolerant-corrected prediction error covariance matrix, so that even when the motion state of the target changes sharply, the Mahalanobis distance can be maintained within a reasonable range; therefore, when the target detection frame and the target tracking frame in the current frame image are associated and matched according to this Mahalanobis distance, the robustness of target tracking under different motion states can be enhanced. The technical means adopted are: performing target detection on the current frame image to obtain the first information of the target detection frame in the current frame image, the first information being used to indicate the first position and the first size; performing target tracking using Kalman filtering to obtain the second information of the target tracking frame in the current frame image, the second information being used to indicate the second position and the second size; performing fault-tolerant correction on the prediction error covariance matrix in the Kalman filtering to obtain the corrected covariance matrix; calculating the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix; and performing association matching on the target detection frame and the target tracking frame in the current frame image according to the Mahalanobis distance. This overcomes the present technical problem that tracking easily fails when the motion state of the tracking target changes sharply, thereby achieving the technical effect of enhancing the robustness of target tracking under different motion states.
Other effects of the above optional implementations will be described below in conjunction with specific embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings are provided for a better understanding of the present solution and do not constitute a limitation of the present disclosure, in which:
Fig. 1 is a flowchart of a target tracking method according to an embodiment of the present disclosure;
Fig. 2 is a flowchart of the target tracking process in a specific example of the present disclosure;
Fig. 3 is a block diagram of a tracking apparatus used to implement the target tracking method of an embodiment of the present disclosure;
Fig. 4 is a block diagram of an electronic device used to implement the target tracking method of an embodiment of the present disclosure.
DETAILED DESCRIPTION
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments of the present disclosure to facilitate understanding, which should be regarded as merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted in the following description.
The terms "first", "second", and the like in the specification and claims of the present disclosure are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the present disclosure described herein can be implemented in an order other than those illustrated or described herein. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to such process, method, product, or device.
Please refer to Fig. 1, which is a flowchart of a target tracking method provided by an embodiment of the present disclosure. The method is applied to an electronic device. As shown in Fig. 1, the method includes the following steps:
Step 101: Perform target detection on a current frame image to obtain first information of a target detection frame in the current frame image.
In this embodiment, the first information is used to indicate a first position and a first size, that is, the position information (such as coordinate information) and size information of the target contained in the corresponding target detection frame. For example, the first information can be expressed as (x, y, w, h), where x represents the abscissa of the upper-left corner of the target detection frame, y represents the ordinate of the upper-left corner of the target detection frame, w represents the width of the target detection frame, and h represents the height of the target detection frame; further, x, y, w, and h may all be in pixels, corresponding to the region of a target in the image.
Optionally, the above process of performing target detection on the current frame image may include: inputting the current frame image into a target detection model (also called a target detector) to obtain the first information of the target detection frame in the current frame image. Understandably, the number of target detection frames obtained through target detection can be more than one; that is, a series of target detection frames can be obtained after target detection, each containing the coordinate information and size information of the corresponding target. The above target detection model can be obtained by training with deep-learning-based methods in the related art, and can be any of the following: a Single Shot MultiBox Detector (SSD) model, a Single-Shot Refinement Neural Network for Object Detection (RefineDet) model, a MobileNet-based Single Shot MultiBox Detector (MobileNet-SSD) model built on efficient convolutional neural networks for mobile vision applications, a You Only Look Once: Unified, Real-Time Object Detection (YOLO) model, and so on.
In one embodiment, when a target detection model is used for target detection, if the target detection model was trained on preprocessed images, the current frame image needs to be preprocessed before target detection is performed on it, for example by scaling the current frame image to a fixed size (such as 512*512) and subtracting a uniform RGB mean value (such as [104,117,123]), so as to ensure consistency with the training samples used during model training and enhance the robustness of the model.
In another embodiment, the above current frame image may be an image in a real-time video stream from a surveillance camera or a camera in another scene. The above targets may be pedestrians, vehicles, and the like.
Step 102: Use Kalman filtering to perform target tracking to obtain second information of a target tracking frame in the current frame image.
In this embodiment, the second information is used to indicate a second position and a second size, that is, the position information (such as coordinate information) and size information of the target contained in the corresponding target tracking frame. For example, the second information can be expressed as (x, y, w, h), where x represents the abscissa of the upper-left corner of the target tracking frame, y represents the ordinate of the upper-left corner of the target tracking frame, w represents the width of the target tracking frame, and h represents the height of the target tracking frame; further, x, y, w, and h may all be in pixels, corresponding to the region of a target in the image.
The above target tracking using Kalman filtering can be understood as predicting the position and size at which a target may appear in the current frame image based on the existing motion state of the target trajectory. A target trajectory can be expressed as the target detection frames, on different frame images, that belong to the same target in several frame images before the current frame image. Each target trajectory corresponds to one Kalman filter, which is initialized with the detection frame in which the target first appears and, after the association matching of each frame image is completed, is corrected with the matched target detection frame. For a newly obtained frame image (such as the current frame image), prediction can be performed with the Kalman filters of all stored target trajectories to obtain the positions where the target trajectories are predicted to appear in the current frame image, as well as the prediction error covariance matrix Σ of the Kalman filter. The prediction error covariance matrix Σ can be chosen as a matrix of dimension 4x4, which describes the error covariance between the predicted value and the true value in target tracking.
Step 103: Perform fault-tolerant correction on the prediction error covariance matrix in the Kalman filtering to obtain a corrected covariance matrix.
Step 104: Calculate a Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix.
Understandably, the above fault-tolerant correction of the prediction error covariance matrix in the Kalman filtering is mainly intended to improve the Mahalanobis distance calculation formula, so that the Mahalanobis distance between the target detection frame and the target tracking frame calculated by the improved formula can be maintained within a reasonable range even when the motion state of the target changes drastically. The manner of the above fault-tolerant correction can be set based on actual requirements and is not restricted here.
Step 105: Perform association matching on the target detection frame and the target tracking frame in the current frame image according to the Mahalanobis distance.
Optionally, in this step, a graph matching algorithm such as the Hungarian algorithm can be used to perform association matching between target detection frames and target tracking frames, so as to obtain several matched pairings of target detection frames and target tracking frames. The target detection frame and the target tracking frame in a pairing belong to the same target trajectory and the same target, and can be assigned the same target identity (ID). After the association matching is completed, the new target trajectories under the current frame image can be obtained, including updating existing target trajectories, deleting existing target trajectories, and/or adding new target trajectories.
Optionally, the association matching in this step may include: when the Mahalanobis distance is less than or equal to a preset threshold, determining that the corresponding target detection frame and target tracking frame match; or, when the Mahalanobis distance is greater than the preset threshold, determining that the corresponding target detection frame and target tracking frame do not match. In other words, the smaller the Mahalanobis distance between a target detection frame and a target tracking frame, the more likely the two belong to the same target. Thus, performing association matching by comparing the distance with a preset threshold makes the matching process easy to implement.
With the target tracking method of the embodiment of the present disclosure, the Mahalanobis distance between the target detection frame and the target tracking frame can be calculated based on the fault-tolerant-corrected prediction error covariance matrix, so that even when the motion state of the target changes sharply, the Mahalanobis distance can be maintained within a reasonable range; therefore, when the target detection frame and the target tracking frame in the current frame image are associated and matched according to this Mahalanobis distance, the robustness of target tracking under different motion states can be enhanced.
In multi-target tracking, the Mahalanobis distance calculation formula in the related art can be as follows:

$$D_M(X,\mu)=\sqrt{(X-\mu)^T\,\Sigma^{-1}\,(X-\mu)}$$

where μ represents the mean (x, y, w, h) of the Kalman filter, i.e., the coordinates and width-height dimensions, in the current frame image, of the target predicted by the Kalman filter (i.e., the target tracking frame); Σ represents the prediction error covariance matrix of the Kalman filter; and X represents the coordinates and width-height dimensions of the target detection frame in the current frame image, a variable describing the current actual motion state (x, y, w, h) of a certain target. When a target maintains the same motion state for a period of time (for example, remaining stationary for a long time or keeping the same moving speed for a long time), the covariance Σ of the Kalman filter is small and Σ^-1 is large; that is, the deviation between the predicted value and the true value is considered small, and the prediction tends to assume that the target trajectory will keep its original motion state in the next frame. When the target keeps its original motion state, (X-μ) is close to 0, and the Mahalanobis distance D_M calculated with a large Σ^-1 is small; when the motion state of the target changes suddenly, the value of (X-μ) becomes large, and the Mahalanobis distance D_M calculated with a large Σ^-1 becomes abnormally large, leading to subsequent matching errors. When the calculated Mahalanobis distance D_M is greater than a preset threshold, the target detection frame X is considered not to belong to the trajectory corresponding to that Kalman filter, and tracking fails.
In one embodiment, the process of calculating the Mahalanobis distance in the above step 104 may be:
calculating the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image using the following formula (which can be understood as the improved Mahalanobis distance calculation formula):

$$D_{Mnew}(X,\mu)=\sqrt{(X-\mu)^T\,(\Sigma+\alpha E)^{-1}\,(X-\mu)}$$

where X represents the first information of the target detection frame in the current frame image, for example including position information and size information, expressible as (x, y, w, h); μ represents the second information of the target tracking frame in the current frame image obtained based on Kalman filtering, likewise including position information and size information, expressible as (x, y, w, h); Σ represents the prediction error covariance matrix of the Kalman filter; (Σ+αE) represents the corrected covariance matrix, α is a preset coefficient greater than 0, and E represents the identity matrix.
An analysis of the above improved Mahalanobis distance calculation formula shows the following.
When α > 0, the following inequalities (1) to (3) always hold:

$$\Sigma<\Sigma+\alpha E \quad (1)$$

$$\Sigma^{-1}>(\Sigma+\alpha E)^{-1} \quad (2)$$

$$(X-\mu)^T\,\Sigma^{-1}\,(X-\mu)>(X-\mu)^T\,(\Sigma+\alpha E)^{-1}\,(X-\mu) \quad (3)$$

Based on the above inequality (3), it can be obtained that D_M(X, μ) > D_Mnew(X, μ).
In addition, the following inequalities (4) to (7) hold:

$$\alpha\Sigma<\Sigma+\alpha E \quad (4)$$

$$(\alpha\Sigma)^{-1}>(\Sigma+\alpha E)^{-1} \quad (5)$$

$$(X-\mu)^T\,(\alpha\Sigma)^{-1}\,(X-\mu)>(X-\mu)^T\,(\Sigma+\alpha E)^{-1}\,(X-\mu) \quad (6)$$

$$\frac{1}{\sqrt{\alpha}}\,D_M(X,\mu)>D_{Mnew}(X,\mu) \quad (7)$$

Based on the above inequality (7), it can be obtained that:

$$D_{Mnew}(X,\mu)<\frac{1}{\sqrt{\alpha}}\,D_M(X,\mu)$$

That is, for any X, D_Mnew < D_M, and the smaller Σ is, the larger the deviation between the two. When a target maintains the same motion state for a period of time (for example, remaining stationary for a long time or keeping the same moving speed for a long time), the covariance Σ of the Kalman filter is small. When the target keeps its original motion state, (X-μ) is close to 0, and the calculated D_Mnew value is smaller than D_M. When the motion state of the target changes suddenly, the value of (X-μ) becomes large, but compared with D_M the calculated D_Mnew is constrained to a smaller value.
Thus, with the above improved Mahalanobis distance calculation formula, even when the motion state of the target changes sharply, the calculated Mahalanobis distance can be maintained within a reasonable range, thereby enhancing the robustness of target tracking under different motion states.
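As a quick numerical sanity check of the relation D_Mnew < D_M (a sketch with NumPy; the covariance and the difference vector are random, and α = 1 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
sigma = A @ A.T + 1e-3 * np.eye(4)   # random symmetric positive-definite covariance
d = rng.normal(size=4)               # plays the role of (X - mu)
alpha = 1.0

d_m = np.sqrt(d @ np.linalg.solve(sigma, d))
d_mnew = np.sqrt(d @ np.linalg.solve(sigma + alpha * np.eye(4), d))
assert d_mnew < d_m                  # inequality (3) in square-root form
print(d_m, d_mnew)
```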
In the embodiments of the present disclosure, to enhance the accuracy of association matching, on the basis of the calculated Mahalanobis distance, other similarity measures that assist association matching, such as appearance feature similarity and shape contour similarity, can also be combined to form a similarity matching matrix, and association matching can be performed based on this similarity matching matrix. Optionally, after the above step 104, the method further includes:
calculating a distance similarity matrix M_D according to the Mahalanobis distance, where the value at row i, column j of M_D represents the distance similarity between the i-th target tracking frame and the j-th target detection frame in the current frame image; for example, the distance similarity can be the reciprocal of the Mahalanobis distance D_Mnew between the i-th target tracking frame and the j-th target detection frame, i.e., D_Mnew^-1, or a value obtained by processing the Mahalanobis distance D_Mnew in another way, as long as it reflects similarity;
calculating an appearance deep-feature similarity matrix M_A, where the value at row i, column j of M_A represents the cosine similarity cos(F_i, F_j) between the appearance deep feature F_i, in the previous frame image, corresponding to the i-th target tracking frame and the appearance deep feature F_j of the j-th target detection frame; the appearance deep feature F can be extracted from the corresponding frame image using a deep convolutional neural network (such as a residual network, ResNet);
determining a similarity matching matrix according to M_D and M_A.
The above step 105 may include: performing association matching on the target detection frame and the target tracking frame in the current frame image according to the similarity matching matrix.
In one embodiment, when determining the similarity matching matrix, M_D and M_A can be fused by weighted averaging; for example, the similarity matching matrix equals aM_D + bM_A, where a and b are the weights of M_D and M_A respectively and can be preset based on actual demand.
In another embodiment, when performing association matching on the target detection frame and the target tracking frame in the current frame image according to the similarity matching matrix, the Hungarian algorithm can be used to perform bipartite graph matching, so as to obtain one-to-one matching results between target detection frames and target tracking frames.
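For illustration, the appearance deep feature F could be extracted with an off-the-shelf residual network as follows. torchvision's ResNet-18 is assumed merely as one convenient backbone; in practice, a re-identification network trained for the target class would typically be used, so this is a sketch rather than the method prescribed by the disclosure.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()    # keep the 512-d pooled feature
backbone.eval()

to_tensor = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def appearance_feature(crop_rgb) -> torch.Tensor:
    """Embed an H x W x 3 uint8 crop of a detection frame."""
    x = to_tensor(crop_rgb).unsqueeze(0)
    f = backbone(x).squeeze(0)
    return f / f.norm()              # unit-normalize for cosine similarity
```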
Understandably, in multi-target tracking, severe occlusion between foreground and background targets may occur; because a target close to the camera occludes most of the area of a target far from the camera, target tracking errors may result, and wrong tracking results may be obtained in subsequent frame images. To overcome this problem, the embodiments of the present disclosure propose using the front-back topological relationship of targets for constrained matching.
Due to the perspective relationship, in an image captured by a camera device (such as a camera), the center point of the lower edge of the detection frame of a target on the ground can be regarded as the grounding point of the target; the closer this point is to the bottom of the image, the closer the target can be considered to be to the camera, and conversely, the farther from the camera. For two target detection frames, when the intersection-over-union between them is greater than a certain threshold, the corresponding two targets can be considered severely occluded. Based on the positions of the targets' grounding points, the front-back relationship between two targets can be judged: the target close to the camera is the foreground occluding target, and the target far from the camera is the occluded background target. The front-back relationships among all occluded targets can be called the front-back topological relationship of the targets. The consistency of the front-back topological relationship can be defined as follows: in consecutive frames (images), if two targets A and B are severely occluded in the previous frame, with target A being the foreground occluding target and target B the occluded background target, then in the next frame, if targets A and B are still severely occluded, target A is still the foreground occluding target and target B is the occluded background target. When multiple targets in the current frame image are severely occluded, the front-back topological relationship between the target trajectories in the previous frame can be obtained, and the consistency of the front-back topological relationship can be used to constrain the association matching, making the matching more accurate.
Optionally, after the above step 105, the method may further include:
obtaining a topological relationship matrix M_T1 of the current frame image, and obtaining a topological relationship matrix M_T2 of the previous frame image of the current frame image;
multiplying M_T1 and M_T2 element by element to obtain a topology change matrix M_0;
using M_0 to correct the matching results of the target detection frames in the current frame image.
Here, the value at row i, column j of M_T1 represents the front-back relationship between the i-th target and the j-th target in the current frame image; the value at row i, column j of M_T2 represents the front-back relationship between the i-th target and the j-th target in the previous frame image; and the value at row i, column j of M_0 represents whether the front-back relationship between the i-th target and the j-th target in the current frame image has changed compared with the previous frame image. The above correction processing can be understood as follows: if the front-back relationship between the i-th target and the j-th target has changed between the previous frame and the current frame, the detection frames matched to the i-th target and the j-th target in the current frame are exchanged, so as to correct the association matching results in the target tracking process.
In this way, the consistency constraint on the front-back topological relationship of occluding objects in adjacent frame images can be used to enhance the reliability of matching when targets are severely occluded, thereby ensuring that the target tracking process proceeds smoothly.
For example, when obtaining M_T1 and M_T2, the center point (x+w/2, y+h) of the lower edge of a target detection frame can be used as the grounding point of the corresponding target; according to the perspective principle, the greater the ordinate y+h, the closer the target is to the camera, and conversely, the farther it is from the camera. When determining the front-back relationship between two targets, the ordinates of the center points of the lower edges of the corresponding target detection frames can be compared. Taking M_T1 as an example, the value at row i, column j represents the front-back relationship t between the i-th target and the j-th target in the current frame image: if the i-th target and the j-th target have an occlusion relationship and y_i+h_i < y_j+h_j, then t = -1, indicating that the i-th target is in front of the j-th target; if the i-th target and the j-th target have an occlusion relationship and y_i+h_i > y_j+h_j, then t = 1, indicating that the i-th target is behind the j-th target; and when there is no occlusion relationship between the i-th target and the j-th target, t = 0. M_T2 can be set in the same way as M_T1. In this way, in the topology change matrix M_0 obtained by element-wise multiplication of M_T1 and M_T2, if the i-th target and the j-th target are both matched correctly, the value at row i, column j of M_0 is 0 or 1, meaning that the front-back relationship between the i-th target and the j-th target has not changed; whereas if the value at row i, column j of M_0 is -1, it means that, due to a matching error, the front-back relationship of the i-th target and the j-th target has changed between the two adjacent frames. In this case, the detection frames matched to these two targets in the current frame image can be exchanged, so as to correct the corresponding target trajectories and ensure that the tracking process proceeds smoothly.
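A minimal sketch of this topology bookkeeping (NumPy assumed; `occluded[i][j]` stands for the IoU-based occlusion test described in the next paragraph, and `matched_boxes` maps target indices to their matched detection frames):

```python
import numpy as np

def topology_matrix(boxes, occluded) -> np.ndarray:
    """M_T[i, j] in {-1, 0, 1}: front-back relation of targets i and j.

    boxes[i] = (x, y, w, h); the grounding-point ordinate is y + h, and
    the sign encodes which of the two grounding points is lower."""
    n = len(boxes)
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and occluded[i][j]:
                yi = boxes[i][1] + boxes[i][3]
                yj = boxes[j][1] + boxes[j][3]
                m[i, j] = -1.0 if yi < yj else 1.0
    return m

def correct_matches(m_t1, m_t2, matched_boxes):
    """M_0 = M_T1 * M_T2 element-wise; an entry of -1 flags a swap."""
    m0 = m_t1 * m_t2
    for i, j in zip(*np.where(m0 == -1)):
        if i < j:                    # each symmetric pair is swapped once
            matched_boxes[i], matched_boxes[j] = matched_boxes[j], matched_boxes[i]
    return matched_boxes
```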
Optionally, whether an occlusion relationship exists between two targets can be determined using the intersection over union (IoU) of the corresponding detection frames and tracking frames.
The applicable scenarios of the embodiments of the present disclosure include, but are not limited to, the continuous tracking of targets such as pedestrians and/or vehicles in scenarios such as smart cities, smart transportation, and smart retail, to obtain information such as the position, identity, motion state, and historical trajectory of a target.
The target tracking process in a specific example of the present disclosure is described below with reference to Fig. 2.
As shown in Fig. 2, the corresponding target tracking process may include the following steps:
S21: Obtain a real-time video stream from a surveillance camera or a camera in another scene;
S22: Extract the current frame image from the real-time video stream and preprocess it, for example by scaling it to a fixed size and subtracting a uniform RGB mean value;
S23: Input the preprocessed current frame image into a preset target detector, which outputs a series of target detection frames, each containing the coordinate and size information of a target;
S24: Use Kalman filtering to perform target tracking, and obtain the coordinate and size information of the targets contained in the target tracking frames in the current frame image;
S25: Calculate the Mahalanobis distance between the target detection frames and the target tracking frames in the current frame image with the aid of the improved Mahalanobis distance calculation formula; for the specific process, refer to the foregoing description;
S26: Perform association matching on the target detection frames and target tracking frames in the current frame image according to the Mahalanobis distance obtained in S25, for example using the Hungarian algorithm for bipartite graph matching;
S27: Apply a consistency constraint to the association matching results using the front-back topological relationship of targets in adjacent frame images;
S28: End the tracking process for the current frame image, extract the next image frame, and repeat the above S22 to S27 until the video stream ends. A target trajectory that exists in the records but has not been matched to any detection frame within a certain time (more than a certain number of image frames) can be marked as having left the scene and no longer participates in the association matching process in the future.
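Putting the steps together, the per-frame loop could be sketched at pseudocode level as follows. The helpers `preprocess`, `detect`, `predict_tracks`, `associate`, `enforce_topology`, and `new_track` stand for steps S22 to S27 above and are hypothetical names introduced for this sketch, not APIs defined by the disclosure.

```python
MAX_MISSES = 30   # frames a trajectory may stay unmatched before it is marked as departed

def track_stream(frames, tracks):
    for frame in frames:                        # S22: next frame from the stream
        image = preprocess(frame)               # S22: resize + mean subtraction
        detections = detect(image)              # S23: target detection frames
        predictions = predict_tracks(tracks)    # S24: Kalman prediction
        pairs, lost, new = associate(detections, predictions)   # S25-S26
        enforce_topology(tracks, pairs)         # S27: front-back consistency
        for i, j in pairs:
            tracks[i].update(detections[j])     # correct the matched Kalman filter
        for j in new:
            tracks.append(new_track(detections[j]))
        for i in lost:
            tracks[i].misses += 1               # S28: count unmatched frames
        tracks = [t for t in tracks if t.misses <= MAX_MISSES]
    return tracks
```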
Please refer to Fig. 3, which is a schematic structural diagram of a target tracking apparatus provided by an embodiment of the present disclosure. As shown in Fig. 3, the target tracking apparatus 30 includes:
a detection module 31 configured to perform target detection on a current frame image to obtain first information of a target detection frame in the current frame image, the first information being used to indicate a first position and a first size;
a tracking module 32 configured to perform target tracking using Kalman filtering to obtain second information of a target tracking frame in the current frame image, the second information being used to indicate a second position and a second size;
a correction module 33 configured to perform fault-tolerant correction on a prediction error covariance matrix in the Kalman filtering to obtain a corrected covariance matrix;
a first calculation module 34 configured to calculate a Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix;
a matching module 35 configured to perform association matching on the target detection frame and the target tracking frame in the current frame image according to the Mahalanobis distance.
Optionally, the first calculation module 34 is specifically configured to calculate the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image using the following formula:

$$D_{Mnew}(X,\mu)=\sqrt{(X-\mu)^T\,(\Sigma+\alpha E)^{-1}\,(X-\mu)}$$

where X represents the first information, μ represents the second information, Σ represents the prediction error covariance matrix in the Kalman filtering, (Σ+αE) represents the corrected covariance matrix, α is a preset coefficient greater than 0, and E represents the identity matrix.
Optionally, the matching module 35 is specifically configured to: when the Mahalanobis distance is less than or equal to a preset threshold, determine that the corresponding target detection frame and target tracking frame match; or, when the Mahalanobis distance is greater than the preset threshold, determine that the corresponding target detection frame and target tracking frame do not match.
Optionally, the target tracking apparatus 30 further includes:
an obtaining module configured to obtain a topological relationship matrix M_T1 of the current frame image, and obtain a topological relationship matrix M_T2 of the previous frame image of the current frame image;
a second calculation module configured to multiply M_T1 and M_T2 element by element to obtain a topology change matrix M_0;
a processing module configured to use M_0 to correct the matching results of the target detection frames in the current frame image;
where the value at row i, column j of M_T1 represents the front-back relationship between the i-th target and the j-th target in the current frame image; the value at row i, column j of M_T2 represents the front-back relationship between the i-th target and the j-th target in the previous frame image; and the value at row i, column j of M_0 represents whether the front-back relationship between the i-th target and the j-th target in the current frame image has changed compared with the previous frame image.
Optionally, the target tracking apparatus 30 further includes:
a third calculation module configured to calculate a distance similarity matrix M_D according to the Mahalanobis distance, where the value at row i, column j of M_D represents the distance similarity between the i-th target tracking frame and the j-th target detection frame in the current frame image;
a fourth calculation module configured to calculate an appearance deep-feature similarity matrix M_A, where the value at row i, column j of M_A represents the cosine similarity between the appearance deep feature, in the previous frame image, corresponding to the i-th target tracking frame and the appearance deep feature of the j-th target detection frame;
a determining module configured to determine a similarity matching matrix according to M_D and M_A;
the matching module 35 being specifically configured to:
perform association matching on the target detection frame and the target tracking frame in the current frame image according to the similarity matching matrix.
Understandably, the target tracking apparatus 30 of the embodiment of the present disclosure can implement the various processes implemented in the method embodiment shown in Fig. 1 and achieve the same beneficial effects; to avoid repetition, details are not repeated here.
According to embodiments of the present disclosure, the present disclosure further provides an electronic device and a readable storage medium.
As shown in Fig. 4, it is a block diagram of an electronic device for implementing the target tracking method of an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices can also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
As shown in Fig. 4, the electronic device includes: one or more processors 401, a memory 402, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are connected to each other by different buses and can be installed on a common motherboard or installed in other ways as needed. The processor can process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other implementations, if desired, multiple processors and/or multiple buses can be used together with multiple memories. Likewise, multiple electronic devices can be connected, with each device providing some of the necessary operations (for example, as a server array, a group of blade servers, or a multi-processor system). In Fig. 4, one processor 401 is taken as an example.
The memory 402 is a non-transitory computer-readable storage medium provided by the present disclosure. The memory stores instructions executable by at least one processor, so that the at least one processor executes the target tracking method provided by the present disclosure. The non-transitory computer-readable storage medium of the present disclosure stores computer instructions for causing a computer to execute the target tracking method provided by the present disclosure.
As a non-transitory computer-readable storage medium, the memory 402 can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the target tracking method in the embodiments of the present disclosure (for example, the detection module 31, the tracking module 32, the correction module 33, the first calculation module 34, and the matching module 35 shown in Fig. 3). The processor 401 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 402, that is, implements the target tracking method in the above method embodiment.
The memory 402 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created by the use of the electronic device, and the like. In addition, the memory 402 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 402 may optionally include memories provided remotely with respect to the processor 401, and these remote memories may be connected to the electronic device through a network. Examples of the above network include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the target tracking method may further include an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403, and the output device 404 may be connected by a bus or in other ways; in Fig. 4, connection by a bus is taken as an example.
The input device 403 can receive input numeric or character information and generate key signal inputs related to the user settings and function control of the electronic device for the target tracking method, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, a joystick, or another input device. The output device 404 may include a display device, an auxiliary lighting device (for example, an LED), a tactile feedback device (for example, a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
Various implementations of the systems and techniques described herein can be realized in digital electronic circuit systems, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that can be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a dedicated or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit the data and instructions to the storage system, the at least one input device, and the at least one output device.
These computer programs (also called programs, software, software applications, or code) include machine instructions for a programmable processor and can be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (for example, magnetic disks, optical disks, memories, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including machine-readable media that receive machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide interaction with a user, the systems and techniques described herein can be implemented on a computer that has: a display device (for example, a CRT (cathode-ray tube) or LCD (liquid-crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, voice input, or tactile input).
The systems and techniques described herein can be implemented in a computing system that includes back-end components (for example, as a data server), or a computing system that includes middleware components (for example, an application server), or a computing system that includes front-end components (for example, a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end components, middleware components, or front-end components. The components of the system can be connected to each other by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
The computer system can include clients and servers. A client and a server are generally remote from each other and usually interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other.
According to the technical solutions of the embodiments of the present disclosure, the Mahalanobis distance between the target detection frame and the target tracking frame can be calculated based on the fault-tolerant-corrected prediction error covariance matrix, so that even when the motion state of the target changes sharply, the Mahalanobis distance can be maintained within a reasonable range; therefore, when the target detection frame and the target tracking frame in the current frame image are associated and matched according to this Mahalanobis distance, the robustness of target tracking under different motion states can be enhanced.
It should be understood that the various forms of processes shown above can be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure can be executed in parallel, sequentially, or in a different order, as long as the results expected by the technical solutions disclosed in the present disclosure can be achieved, and no limitation is imposed herein.
The above specific implementations do not constitute a limitation on the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions can be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall be included within the protection scope of the present disclosure.

Claims (12)

  1. 一种目标跟踪方法,包括:
    对当前帧图像进行目标检测,得到所述当前帧图像中的目标检测框的第一信息,所述第一信息用于表示第一位置和第一尺寸;
    利用卡尔曼滤波进行目标跟踪,得到所述当前帧图像中的目标跟踪框的第二信息,所述第二信息用于表示第二位置和第二尺寸;
    对卡尔曼滤波中的预测误差协方差矩阵进行容错修正,得到修正后的协方差矩阵;
    根据所述第一信息、所述第二信息以及所述修正后的协方差矩阵,计算所述当前帧图像中的目标检测框和目标跟踪框之间的马氏距离;
    根据所述马氏距离,对所述当前帧图像中的目标检测框和目标跟踪框进行关联匹配。
  2. 根据权利要求1所述的方法,其中,所述根据所述第一信息、所述第二信息以及所述修正后的协方差矩阵,计算所述当前帧图像中的目标检测框和目标跟踪框之间的马氏距离,包括:
    利用如下公式,计算所述当前帧图像中的目标检测框和目标跟踪框之间的马氏距离:
    Figure PCTCN2020117751-appb-100001
    其中,X表示所述第一信息,μ表示所述第二信息,Σ表示所述卡尔曼滤波中的预测误差协方差矩阵,(∑+αE)表示所述修正后的协方差矩阵,α为大于0的预设系数,E表示单位矩阵。
  3. 根据权利要求1所述的方法,其中,所述根据所述马氏距离,对所述当前帧图像中的目标检测框和目标跟踪框进行关联匹配,包括:
    当所述马氏距离小于或等于预设阈值时,确定对应的目标检测框和目标跟踪框之间匹配;或者,当所述马氏距离大于所述预设阈值时,确定对应的目标检测框和目标跟踪框之间不匹配。
  4. 根据权利要求1所述的方法,还包括:
    获取所述当前帧图像的拓扑关系矩阵M T1,和获取所述当前帧图像的上 一帧图像的拓扑关系矩阵M T2
    将所述M T1和所述M T2进行逐元素相乘,得到拓扑变化矩阵M 0
    利用所述M 0,对所述当前帧图像中的目标检测框的匹配结果进行修正处理;
    其中,所述M T1中第i行第j列的值表示,所述当前帧图像中第i个目标与第j个目标的前后关系;所述M T2中第i行第j列的值表示,所述上一帧图像中第i个目标与第j个目标的前后关系;所述M 0中第i行第j列的值表示相比于所述上一帧图像,所述当前帧图像中的第i个目标与第j个目标的前后关系是否发生了变化。
  5. The method according to claim 1, wherein after the calculating the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image, the method further comprises:
    calculating a distance similarity matrix M_D according to the Mahalanobis distance, wherein the value in row i, column j of M_D represents a distance similarity between an i-th target tracking frame and a j-th target detection frame in the current frame image;
    calculating an appearance deep-feature similarity matrix M_A, wherein the value in row i, column j of M_A represents a cosine similarity between the appearance deep feature, in the previous frame image, corresponding to the i-th target tracking frame and the appearance deep feature of the j-th target detection frame; and
    determining a similarity matching matrix according to M_D and M_A;
    the performing association matching between the target detection frames and the target tracking frames in the current frame image according to the Mahalanobis distance comprises:
    performing association matching between the target detection frames and the target tracking frames in the current frame image according to the similarity matching matrix.
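A sketch of how such a fusion might look. The claim does not specify how distances are mapped to similarities, how M_D and M_A are combined, or which assignment algorithm consumes the similarity matching matrix, so the exponential mapping, the equal weighting, and the Hungarian solver (scipy.optimize.linear_sum_assignment) below are all assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(dist, feats_trk, feats_det, w=0.5):
    """dist[i, j]: corrected Mahalanobis distance, track i vs detection j.
    feats_trk / feats_det: appearance deep features per track / detection."""
    m_d = np.exp(-dist)                       # distance similarity M_D (mapping assumed)
    t = feats_trk / np.linalg.norm(feats_trk, axis=1, keepdims=True)
    d = feats_det / np.linalg.norm(feats_det, axis=1, keepdims=True)
    m_a = t @ d.T                             # cosine similarity M_A
    m = w * m_d + (1.0 - w) * m_a             # similarity matching matrix (weights assumed)
    rows, cols = linear_sum_assignment(-m)    # maximize total similarity
    return list(zip(rows, cols))
```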
  6. A target tracking apparatus, comprising:
    a detection module configured to perform target detection on a current frame image to obtain first information of target detection frames in the current frame image, the first information representing a first position and a first size;
    a tracking module configured to perform target tracking by Kalman filtering to obtain second information of target tracking frames in the current frame image, the second information representing a second position and a second size;
    a correction module configured to perform fault-tolerance correction on a prediction error covariance matrix in the Kalman filtering to obtain a corrected covariance matrix;
    a first calculation module configured to calculate a Mahalanobis distance between a target detection frame and a target tracking frame in the current frame image according to the first information, the second information, and the corrected covariance matrix; and
    a matching module configured to perform association matching between the target detection frames and the target tracking frames in the current frame image according to the Mahalanobis distance.
  7. The apparatus according to claim 6, wherein
    the first calculation module is specifically configured to calculate the Mahalanobis distance between the target detection frame and the target tracking frame in the current frame image by the following formula:
    $\sqrt{(X-\mu)^{T}(\Sigma+\alpha E)^{-1}(X-\mu)}$
    wherein X represents the first information, μ represents the second information, Σ represents the prediction error covariance matrix in the Kalman filtering, (Σ+αE) represents the corrected covariance matrix, α is a preset coefficient greater than 0, and E represents an identity matrix.
  8. The apparatus according to claim 6, wherein
    the matching module is specifically configured to: when the Mahalanobis distance is less than or equal to a preset threshold, determine that the corresponding target detection frame and target tracking frame match; or, when the Mahalanobis distance is greater than the preset threshold, determine that the corresponding target detection frame and target tracking frame do not match.
  9. The apparatus according to claim 6, further comprising:
    an obtaining module configured to obtain a topology relation matrix M_T1 of the current frame image and obtain a topology relation matrix M_T2 of a previous frame image of the current frame image;
    a second calculation module configured to multiply M_T1 and M_T2 element-wise to obtain a topology change matrix M_0;
    a processing module configured to correct matching results of the target detection frames in the current frame image by using M_0;
    wherein the value in row i, column j of M_T1 represents a front-behind relation between an i-th target and a j-th target in the current frame image; the value in row i, column j of M_T2 represents a front-behind relation between the i-th target and the j-th target in the previous frame image; and the value in row i, column j of M_0 represents whether the front-behind relation between the i-th target and the j-th target in the current frame image has changed compared with the previous frame image.
  10. The apparatus according to claim 6, further comprising:
    a third calculation module configured to calculate a distance similarity matrix M_D according to the Mahalanobis distance, wherein the value in row i, column j of M_D represents a distance similarity between an i-th target tracking frame and a j-th target detection frame in the current frame image;
    a fourth calculation module configured to calculate an appearance deep-feature similarity matrix M_A, wherein the value in row i, column j of M_A represents a cosine similarity between the appearance deep feature, in the previous frame image, corresponding to the i-th target tracking frame and the appearance deep feature of the j-th target detection frame; and
    a determination module configured to determine a similarity matching matrix according to M_D and M_A;
    the matching module is specifically configured to:
    perform association matching between the target detection frames and the target tracking frames in the current frame image according to the similarity matching matrix.
  11. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-5.
  12. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the method according to any one of claims 1-5.
PCT/CN2020/117751 2020-05-22 2020-09-25 Target tracking method and apparatus, electronic device, and computer-readable storage medium WO2021232652A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US17/776,155 US20220383535A1 (en) 2020-05-22 2020-09-25 Object Tracking Method and Device, Electronic Device, and Computer-Readable Storage Medium
EP20936648.3A EP4044117A4 (en) 2020-05-22 2020-09-25 TARGET TRACKING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
JP2022527078A JP7375192B2 (ja) 2020-05-22 2020-09-25 ターゲット追跡方法、装置、電子機器、コンピュータ読み取り可能な記憶媒体及びコンピュータプログラムプロダクト
KR1020227025087A KR20220110320A (ko) 2020-05-22 2020-09-25 오브젝트 추적 방법, 오브젝트 추적 장치, 전자 기기 및 컴퓨터 판독가능 저장 매체

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010443892.8 2020-05-22
CN202010443892.8A CN111640140B (zh) 2020-05-22 2020-05-22 Target tracking method and apparatus, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021232652A1 true WO2021232652A1 (zh) 2021-11-25

Family

ID=72331521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/117751 WO2021232652A1 (zh) 2020-05-22 2020-09-25 Target tracking method and apparatus, electronic device, and computer-readable storage medium

Country Status (6)

Country Link
US (1) US20220383535A1 (zh)
EP (1) EP4044117A4 (zh)
JP (1) JP7375192B2 (zh)
KR (1) KR20220110320A (zh)
CN (1) CN111640140B (zh)
WO (1) WO2021232652A1 (zh)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640140B (zh) 2020-05-22 2022-11-25 北京百度网讯科技有限公司 Target tracking method and apparatus, electronic device, and computer-readable storage medium
CN112257502A (zh) * 2020-09-16 2021-01-22 深圳微步信息股份有限公司 Pedestrian recognition and tracking method and apparatus for surveillance video, and storage medium
CN112270302A (zh) * 2020-11-17 2021-01-26 支付宝(杭州)信息技术有限公司 Limb control method and apparatus, and electronic device
CN112419368A (zh) * 2020-12-03 2021-02-26 腾讯科技(深圳)有限公司 Trajectory tracking method and apparatus for a moving target, device, and storage medium
CN112488058A (zh) * 2020-12-17 2021-03-12 北京比特大陆科技有限公司 Face tracking method, apparatus, device, and storage medium
CN112528932B (zh) * 2020-12-22 2023-12-08 阿波罗智联(北京)科技有限公司 Method and apparatus for optimizing position information, roadside device, and cloud control platform
CN112800864B (zh) * 2021-01-12 2024-05-07 北京地平线信息技术有限公司 Target tracking method and apparatus, electronic device, and storage medium
CN114764814A (zh) * 2021-01-12 2022-07-19 富泰华工业(深圳)有限公司 Plant height determination method and apparatus, electronic device, and medium
CN112785625B (zh) * 2021-01-20 2023-09-22 北京百度网讯科技有限公司 Target tracking method and apparatus, electronic device, and storage medium
CN112785630A (zh) * 2021-02-02 2021-05-11 宁波智能装备研究院有限公司 Method and system for handling abnormal multi-target trajectories in micromanipulation
CN112836684B (zh) * 2021-03-09 2023-03-10 上海高德威智能交通系统有限公司 Method, apparatus, and device for calculating target scale change rate based on driver assistance
CN112907636B (zh) * 2021-03-30 2023-01-31 深圳市优必选科技股份有限公司 Multi-target tracking method and apparatus, electronic device, and readable storage medium
CN113177968A (zh) * 2021-04-27 2021-07-27 北京百度网讯科技有限公司 Target tracking method and apparatus, electronic device, and storage medium
CN113223083B (zh) * 2021-05-27 2023-08-15 北京奇艺世纪科技有限公司 Position determination method and apparatus, electronic device, and storage medium
CN113326773A (zh) * 2021-05-28 2021-08-31 北京百度网讯科技有限公司 Recognition model training method, recognition method, apparatus, device, and storage medium
JP7482090B2 (ja) * 2021-08-27 2024-05-13 株式会社東芝 Estimation device, estimation method, and program
CN113763431B (zh) * 2021-09-15 2023-12-12 深圳大学 Target tracking method and system, electronic device, and storage medium
CN114001976B (zh) * 2021-10-19 2024-03-12 杭州飞步科技有限公司 Method, apparatus, and device for determining control error, and storage medium
CN114549584A (zh) * 2022-01-28 2022-05-27 北京百度网讯科技有限公司 Information processing method and apparatus, electronic device, and storage medium
CN115223135B (zh) * 2022-04-12 2023-11-21 广州汽车集团股份有限公司 Parking space tracking method and apparatus, vehicle, and storage medium
CN114881982A (zh) * 2022-05-19 2022-08-09 广州敏视数码科技有限公司 Method, apparatus, and medium for reducing false detections in ADAS target detection
CN116129350B (zh) * 2022-12-26 2024-01-16 广东高士德电子科技有限公司 Intelligent monitoring method, apparatus, device, and medium for safe data center operations
CN115908498B (zh) * 2022-12-27 2024-01-02 清华大学 Multi-target tracking method and apparatus based on optimal category matching
CN115995062B (zh) * 2023-03-22 2023-08-04 西南交通大学 Method and system for identifying abnormal clamp nuts on catenary electrical connection wires
CN116563769B (zh) * 2023-07-07 2023-10-20 南昌工程学院 Video target recognition and tracking method, system, computer, and storage medium
CN117351039B (zh) * 2023-12-06 2024-02-02 广州紫为云科技有限公司 Nonlinear multi-target tracking method based on feature queries

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5229126B2 (ja) * 2009-06-17 2013-07-03 日本電気株式会社 Target tracking processor and error covariance matrix correction method used therein
US9552648B1 (en) * 2012-01-23 2017-01-24 Hrl Laboratories, Llc Object tracking with integrated motion-based object detection (MogS) and enhanced kalman-type filtering
CN109785368B (zh) * 2017-11-13 2022-07-22 腾讯科技(深圳)有限公司 Target tracking method and apparatus
CN110348332B (zh) * 2019-06-24 2023-03-28 长沙理工大学 Real-time trajectory extraction method for motor vehicles, non-motor vehicles, and pedestrians in traffic video scenes
CN110544272B (zh) * 2019-09-06 2023-08-04 腾讯科技(深圳)有限公司 Face tracking method and apparatus, computer device, and storage medium
CN111192296A (zh) * 2019-12-30 2020-05-22 长沙品先信息技术有限公司 Pedestrian multi-target detection and tracking method based on video surveillance

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103281476A (zh) * 2013-04-22 2013-09-04 中山大学 Automatic tracking method for moving targets in television images
US20150055829A1 (en) * 2013-08-23 2015-02-26 Ricoh Company, Ltd. Method and apparatus for tracking object
CN107516303A (zh) * 2017-09-01 2017-12-26 成都通甲优博科技有限责任公司 Multi-target tracking method and system
CN109635657A (zh) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Target tracking method, apparatus, device, and storage medium
CN109816690A (zh) * 2018-12-25 2019-05-28 北京飞搜科技有限公司 Multi-target tracking method and system based on deep features
CN111640140A (zh) * 2020-05-22 2020-09-08 北京百度网讯科技有限公司 Target tracking method and apparatus, electronic device, and computer-readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4044117A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063452A (zh) * 2022-06-13 2022-09-16 中国船舶重工集团公司第七0七研究所九江分部 Pan-tilt camera tracking method for maritime targets
CN115063452B (zh) * 2022-06-13 2024-03-26 中国船舶重工集团公司第七0七研究所九江分部 Pan-tilt camera tracking method for maritime targets
CN115082713A (zh) * 2022-08-24 2022-09-20 中国科学院自动化研究所 Target detection frame extraction method, system, and device incorporating spatial contrast information
CN115082713B (zh) * 2022-08-24 2022-11-25 中国科学院自动化研究所 Target detection frame extraction method, system, and device incorporating spatial contrast information

Also Published As

Publication number Publication date
JP7375192B2 (ja) 2023-11-07
EP4044117A1 (en) 2022-08-17
CN111640140A (zh) 2020-09-08
JP2023500969A (ja) 2023-01-11
CN111640140B (zh) 2022-11-25
US20220383535A1 (en) 2022-12-01
KR20220110320A (ko) 2022-08-05
EP4044117A4 (en) 2023-11-29

Similar Documents

Publication Publication Date Title
WO2021232652A1 (zh) Target tracking method and apparatus, electronic device, and computer-readable storage medium
Bai et al. Adaptive dilated network with self-correction supervision for counting
EP3822857B1 (en) Target tracking method, device, electronic apparatus and storage medium
US11694436B2 (en) Vehicle re-identification method, apparatus, device and storage medium
US20210350146A1 (en) Vehicle Tracking Method, Apparatus, and Electronic Device
US11514607B2 (en) 3-dimensional reconstruction method, 3-dimensional reconstruction device, and storage medium
US11514676B2 (en) Method and apparatus for detecting region of interest in video, device and medium
CN110659600B (zh) Object detection method, apparatus, and device
US11688177B2 (en) Obstacle detection method and device, apparatus, and storage medium
CN111814633B (zh) Display scene detection method, apparatus, device, and storage medium
CN110675635B (zh) Method and apparatus for obtaining camera extrinsic parameters, electronic device, and storage medium
WO2022213857A1 (zh) Action recognition method and apparatus
EP4080470A2 (en) Method and apparatus for detecting living face
WO2019157922A1 (zh) Image processing method and apparatus, and AR device
JP7150074B2 (ja) Edge-based augmented reality three-dimensional tracking and registration method, apparatus, and electronic device
US20210256725A1 (en) Target detection method, device, electronic apparatus and storage medium
KR20220153667A (ko) Feature extraction method, apparatus, electronic device, storage medium, and computer program
CN113409368B (zh) Mapping method and apparatus, computer-readable storage medium, and electronic device
US20220004812A1 (en) Image processing method, method for training pre-training model, and electronic device
CN111191619A (zh) Method, apparatus, and device for detecting dashed lane line segments, and readable storage medium
CN112734800A (zh) Multi-target tracking system and method based on joint detection and representation extraction
CN111832459A (zh) Target detection method, apparatus, device, and storage medium
Dowson et al. Metric mixtures for mutual information (M³I) tracking
CN115880776B (zh) Method for determining keypoint information, and method and apparatus for generating an offline action library
CN111402333B (zh) Parameter estimation method, apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20936648; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2022527078; Country of ref document: JP; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2020936648; Country of ref document: EP; Effective date: 20220510)
ENP Entry into the national phase (Ref document number: 20227025087; Country of ref document: KR; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)