WO2018099268A1 - Target tracking method, device and storage medium - Google Patents

Target tracking method, device and storage medium

Info

Publication number
WO2018099268A1
WO2018099268A1 (PCT/CN2017/111175)
Authority
WO
WIPO (PCT)
Prior art keywords
target
target tracking
preset
tracking frame
frame
Prior art date
Application number
PCT/CN2017/111175
Other languages
English (en)
French (fr)
Inventor
张兆丰
牟永强
田第鸿
Original Assignee
深圳云天励飞技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳云天励飞技术有限公司 filed Critical 深圳云天励飞技术有限公司
Publication of WO2018099268A1 publication Critical patent/WO2018099268A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present invention relates to the field of artificial intelligence, and in particular to a target tracking method, device and storage medium.
  • Because face detection is slow and tracking is fast, a real-time face recognition system typically extracts only some of the frames from the camera for face detection and tracks the detected targets on the remaining frames. Under the premise of real-time operation, this keeps missed face detections to a minimum and stores the different face images detected for the same person under a single target. For each person within the monitoring range, one or a small number of high-quality face images can be selected and sent to the back end for processing, which avoids transmitting every detected face to the back end and increasing computational overhead.
  • the embodiment of the invention provides a target tracking method, device and storage medium, so as to improve the target tracking speed and accuracy.
  • In a first aspect, an embodiment of the present invention provides a target tracking method, including: acquiring a target image, where the target image includes at least one target object; tracking the target image by using an optical flow tracking algorithm based on a preset target tracking frame set to determine N target tracking frames in the target image, where N is a positive integer; detecting M target object frames in the target image by using an image detection algorithm, where M is a positive integer; and matching the N target tracking frames with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  • In a second aspect, an embodiment of the present invention provides a target tracking apparatus, including:
  • An acquiring module configured to acquire a target image, where the target image includes at least one target object
  • a determining module configured to track the target image by using an optical flow tracking algorithm to determine N target tracking frames in the target image based on the preset target tracking frame set, where N is a positive integer;
  • a detecting module configured to detect, by using an image detection algorithm, M target object frames in the target image, where M is a positive integer;
  • an update module configured to match the N target tracking frames with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  • It can be seen that, in the technical solution provided by the embodiments of the present invention, a target image is acquired, the target image including at least one target object; the target image is tracked by using an optical flow tracking algorithm based on a preset target tracking frame set to determine N target tracking frames in the target image, N being a positive integer; M target object frames in the target image are detected by using an image detection algorithm, M being a positive integer; and the N target tracking frames are matched with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  • the target tracking frame is matched with the target object frame according to the Hungarian algorithm to update the preset target tracking frame set, so that the preset target tracking frame can be updated according to the target object, and the target tracking accuracy is improved.
  • FIG. 1 is a schematic flow chart of a first embodiment of a target tracking method according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of tracking an object image by using an optical flow tracking algorithm according to an embodiment of the present invention
  • FIG. 3 is a schematic flow chart of a second embodiment of a target tracking method according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a first embodiment of a target tracking apparatus according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a determining module according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a second embodiment of a target tracking apparatus according to an embodiment of the present invention.
  • the embodiment of the invention provides a target tracking method and device, so as to improve the target tracking speed and accuracy.
  • the target image includes at least one target object; tracking the target image by using an optical flow tracking algorithm based on the preset target tracking frame set to determine N target tracking frames in the target image, the N a positive integer; detecting, by using an image detection algorithm, M target object frames in the target image, the M being a positive integer; matching the N target tracking frames with the M target object frames based on a Hungarian algorithm Updating the preset target tracking frame set.
  • FIG. 1 is a schematic flowchart diagram of a first embodiment of a target tracking method according to an embodiment of the present invention.
  • the target tracking method provided by the embodiment of the present invention includes the following steps:
  • the target image may refer to each frame image acquired from the video stream, and preferably, the image includes an image of a human face.
  • the target object refers to a feature in the target image that needs attention, for example, if the target image is a face image, the target object may be a human face.
  • In the embodiment of the present invention, a video stream is acquired by installing a camera in a target area or location, and the video stream is then decoded to obtain individual video frames, that is, target images, which are then subjected to image processing.
  • In the embodiment of the present invention, the camera can be installed at the entrance of a residential community, a school gate, an entry/exit checkpoint, and the like.
  • For example, in one example of the present invention, to count the number of people passing through a checkpoint, a camera may be installed at the checkpoint, the video stream captured by the camera is acquired and decoded to obtain target images, and people are then counted based on the target objects in the target images, that is, the face objects. However, because the same face object may appear in different frames of the video stream, the target tracking method provided by the embodiment of the present invention may be used to track the target objects and deduplicate them, in order to prevent repeated counting and improve counting accuracy.
  • S102 Track the target image by using an optical flow tracking algorithm based on the preset target tracking frame set to determine N target tracking frames in the target image, where N is a positive integer.
  • The preset target tracking frame set refers to the set of preset target tracking frames corresponding to targets that appeared in target images preceding the current target image. For example, when counting the number of people at a certain gate, a target image is acquired at a certain moment, but the target face may already have appeared in earlier target images; faces in the current frame that repeat earlier target faces therefore need to be deduplicated, and the preset face tracking frames can be used to identify and filter out these repeated faces.
  • The N target tracking frames in the target image are the frames obtained by tracking the targets in the target image with the optical flow tracking algorithm; a target tracking frame is the tracking frame of a target object in the target image. For example, if the target image is a face image, the target tracking frame is a tracking frame of the target face.
  • FIG. 2 is a schematic flowchart of tracking an object image by using an optical flow tracking algorithm according to an embodiment of the present invention, including:
  • the first target feature point refers to a feature point related to the target in the previous target image of the target image.
  • Specifically, feature points that are easy to track are extracted within the target tracking frame of the previous target image. Further, grid nodes may be extracted, or the tracking quality of each pixel may be computed and points that are easy to track selected from them, while ensuring a certain distance between the points.
  • the target feature point may be a face feature point.
  • Specifically, after the first target feature points in the previous target image are obtained, the optical flow may be calculated based on the first target feature points, and the optical flow information is then used to obtain the second target feature points, that is, the positions of the first target feature points in the current target image.
  • For example, if there are three face feature points in the previous target image, the corresponding three face feature points in the current target image can be obtained by calculating the optical flow.
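  • By way of illustration only, the following Python sketch shows one way the feature points inside the previous frame's tracking box could be propagated to the current frame with OpenCV's pyramidal Lucas-Kanade optical flow. It is not the patent's own implementation; the function name, the (x, y, w, h) box format and the parameter values are assumptions made for the example.

```python
import cv2
import numpy as np

def propagate_points(prev_gray, curr_gray, box):
    """Propagate feature points from the previous frame's tracking box into
    the current frame with pyramidal Lucas-Kanade (forward-only) optical flow."""
    x, y, w, h = [int(round(v)) for v in box]
    mask = np.zeros(prev_gray.shape, dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255
    # Select points inside the previous tracking box that are easy to track,
    # keeping a minimum distance between them.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                       qualityLevel=0.01, minDistance=5,
                                       mask=mask)
    if prev_pts is None:
        return None, None
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                      prev_pts, None)
    ok = status.ravel() == 1
    return prev_pts[ok].reshape(-1, 2), curr_pts[ok].reshape(-1, 2)
```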
  • the target tracking frame refers to a tracking frame with a characteristic shape for tracking the target feature points, so as to track the target.
  • Further, specifically, the displacement of each feature point between its position in the previous frame and in the current frame is calculated, the displacements are sorted by magnitude, and the median displacement is taken as the distance by which the tracking frame is moved. The pairwise distances between the feature points in the previous frame are calculated, as are the pairwise distances between the feature points in the current frame; the matrices formed by these distances in the two frames clearly have the same dimensions, so the corresponding distances in the two frames are divided element by element to obtain quotients, the quotients are sorted by magnitude, and the median quotient is taken as the scaling ratio of the tracking frame.
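  • A minimal sketch of the median-based box update just described is given below, assuming point arrays of shape (K, 2) and boxes as (x, y, w, h); per-axis medians are used for the displacement, which is one reasonable reading of the description.

```python
import numpy as np

def update_box(prev_pts, curr_pts, box):
    """Shift and rescale a tracking box (x, y, w, h) using the median point
    displacement and the median ratio of pairwise point distances."""
    dx, dy = np.median(curr_pts - prev_pts, axis=0)

    def pairwise(pts):
        diff = pts[:, None, :] - pts[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        return dist[np.triu_indices(len(pts), k=1)]

    scale = 1.0
    if len(prev_pts) >= 2:
        ratios = pairwise(curr_pts) / np.maximum(pairwise(prev_pts), 1e-6)
        scale = float(np.median(ratios))

    x, y, w, h = box
    cx, cy = x + w / 2.0 + dx, y + h / 2.0 + dy   # move the box centre
    w, h = w * scale, h * scale                   # rescale the box
    return (cx - w / 2.0, cy - h / 2.0, w, h)
```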
  • the correlation is a measure to more accurately represent the degree of similarity between the preset target tracking frame and the target tracking frame.
  • Specifically, acquiring the correlation between a preset target tracking frame in the preset target tracking frame set and the target tracking frame includes: scaling the target tracking frame and the preset target tracking frame to the same size; and calculating the correlation between the preset target tracking frame and the target tracking frame based on a normalized cross correlation (NCC) function. Evaluating the correlation with NCC makes the correlation computation more accurate and improves tracking accuracy.
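  • A minimal sketch of such a correlation computation follows; the patent does not specify the exact NCC variant or patch size, so the common zero-mean form and a 24x24 patch are assumptions of the example.

```python
import cv2
import numpy as np

def ncc(patch_a, patch_b, size=(24, 24)):
    """Zero-mean normalized cross correlation between two image patches,
    after scaling both patches to the same size."""
    a = cv2.resize(patch_a, size).astype(np.float32).ravel()
    b = cv2.resize(patch_b, size).astype(np.float32).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```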
  • Specifically, when NCC is used to evaluate the similarity, the preset target tracking frame set is updated when the NCC value is higher than the preset threshold. Further, an update frequency may be configured for the preset target tracking frame set: if the current frame is a template update frame, the image of the target tracking frame is assigned to the preset target tracking frame. For example, for video at 25 frames per second, the template update frequency can be once every 3 frames, which makes it possible to judge reliably whether the target is occluded.
  • For example, in one example of the present invention, suppose there are 3 face tracking frames in the preset face tracking frame set, that is, 3 different face images. The first face feature points are obtained in the previous target image, the second face feature points of the current target image are calculated from them, and face tracking frames are obtained based on the second face feature points. The correlations between the 3 obtained face tracking frames and the 3 preset face tracking frames are then calculated, and when a correlation is greater than a certain threshold, the corresponding preset face tracking frame in the preset face tracking frame set is replaced with the face tracking frame corresponding to the second feature points.
  • the image detection algorithm may be, for example, a Sift feature matching algorithm, or may be another image detection algorithm.
  • three target face frames in a face image can be accurately detected by the Sift feature matching algorithm.
  • Optionally, the image detection algorithm may be applied to only some of the frames in the video stream. For example, one frame out of every 10 video frames is selected for detection, which reduces detection time while still improving tracking accuracy. Thus, face detection is performed only when the current frame is a face detection frame.
  • When matching the N target tracking frames with the M target object frames, the overlap between a target tracking frame and a target object frame is calculated with the following formula:
  • overlap(r_face, r_tracker) = area(r_face ∩ r_tracker) / area(r_face ∪ r_tracker)
  • where r_face refers to the face frame and r_tracker refers to the tracking frame.
  • Specifically, a weight matrix characterizing the relationship between the face frames and the tracking frames is constructed from the overlap values, and the Hungarian algorithm is then used to find the maximum-weight bipartite matching, in which the sum of the overlaps between the matched target object frames and target tracking frames is maximized, each target object frame matches at most one target tracking frame, and each target tracking frame matches at most one target object frame. This matching can be regarded as the best match between the target object frames and the target tracking frames; a sketch of this matching step is given below.
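  • The sketch below illustrates one way this overlap-weighted assignment could be computed, using scipy's linear_sum_assignment as the Hungarian-style solver; the (x, y, w, h) box format and the intersection-over-union overlap are assumptions carried over from the example above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Overlap (intersection over union) of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def match(tracking_boxes, detection_boxes):
    """Maximum-weight bipartite matching between tracking and detection boxes."""
    if not tracking_boxes or not detection_boxes:
        return []
    weights = np.array([[iou(t, d) for d in detection_boxes]
                        for t in tracking_boxes])
    rows, cols = linear_sum_assignment(weights, maximize=True)
    # Keep only pairs that actually overlap.
    return [(r, c) for r, c in zip(rows, cols) if weights[r, c] > 0]
```

  • Here linear_sum_assignment with maximize=True returns the pairing whose total weight is largest, so each target object frame is matched to at most one target tracking frame and vice versa, as described above.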
  • Specifically, matching the N target tracking frames with the M target object frames based on the Hungarian algorithm to update the preset target tracking frame set includes: matching the N target tracking frames with the M target object frames based on the Hungarian algorithm to determine the successfully matched target object frames among the M target object frames, the unmatched target object frames, and the unmatched preset target tracking frames in the preset target tracking frame set; adding the unmatched target object frames to the preset target tracking frame set, deleting the unmatched preset target tracking frames from the preset target tracking frame set, and replacing the preset target tracking frames corresponding to the successfully matched target object frames with those matched target object frames, so as to update the preset target tracking frame set.
  • For example, in one example of the present invention, when a face image needs to be tracked, the face tracking frames in the face image are first determined using the optical flow tracking method and the preset face tracking frames are updated, yielding 5 updated preset face tracking frames. The image detection algorithm then detects 4 accurate face object frames in the face image, and these 4 face object frames are matched against the 5 preset face tracking frames using the Hungarian algorithm. Suppose 3 of the 5 preset frames match 3 of the 4 face object frames successfully, one of the 4 face object frames has never appeared among the 5 preset frames, and the remaining 2 preset frames cannot be matched to any of the 4 face object frames. Then the 3 successfully matched face object frames replace the corresponding 3 preset frames, the 1 face object frame that did not appear in the preset set is added to it, and the 2 unmatched preset frames are deleted from it, so that the final preset face tracking frame set contains 4 preset face tracking frames.
  • It can be understood that adding the unmatched target object frames to the preset target tracking frame set, deleting the unmatched preset target tracking frames from the preset target tracking frame set, and replacing the preset target tracking frames corresponding to the successfully matched target object frames with those matched target object frames updates the preset target tracking frame set, which makes subsequent tracking of the targets more accurate, saves tracking time, and improves target tracking efficiency. A sketch of this update step follows.
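  • A minimal sketch of this update of the preset tracking frame set, assuming the matching pairs produced by the sketch above, might look as follows.

```python
def update_track_set(preset_boxes, detection_boxes, matches):
    """Rebuild the preset tracking frame set from a matching result.

    matches: list of (preset_index, detection_index) pairs from the
    Hungarian matching step.
    """
    matched_dets = {d for _, d in matches}
    matched_presets = {p for p, _ in matches}

    updated = [detection_boxes[d] for _, d in matches]          # replace matched
    updated += [box for d, box in enumerate(detection_boxes)
                if d not in matched_dets]                        # add new targets
    dropped = [box for p, box in enumerate(preset_boxes)
               if p not in matched_presets]                      # removed tracks
    return updated, dropped
```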
  • It can be seen that, in the solution of this embodiment, a target image is acquired, the target image including at least one target object; the target image is tracked by using an optical flow tracking algorithm based on the preset target tracking frame set to determine N target tracking frames in the target image, N being a positive integer; M target object frames in the target image are detected by using an image detection algorithm, M being a positive integer; and the N target tracking frames are matched with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  • the target tracking frame is matched with the target object frame according to the Hungarian algorithm to update the preset target tracking frame set, so that the preset target tracking frame can be updated according to the target object, and the target tracking accuracy is improved.
  • FIG. 3 is a schematic flowchart diagram of a second embodiment of a target tracking method according to an embodiment of the present invention.
  • the target tracking method provided by the embodiment of the present invention includes the following steps:
  • S301: Acquire a target image, where the target image includes at least one target object.
  • S302: Track the target image by using an optical flow tracking algorithm based on the preset target tracking frame set to determine N target tracking frames in the target image, where N is a positive integer.
  • S303: Determine whether the target image is a preset detection frame. If the target image is a preset detection frame, step S304 is performed; if not, the process returns to step S301. Specifically, a preset detection frame may be selected at intervals of a certain number of frames, on which the target is further detected using the image detection algorithm.
  • S304: Detect M target object frames in the target image by using an image detection algorithm, where M is a positive integer.
  • S305: Match the N target tracking frames with the M target object frames based on the Hungarian algorithm to determine the successfully matched target object frames among the M target object frames, the unmatched target object frames, and the unmatched preset target tracking frames in the preset target tracking frame set.
  • S306: Add the unmatched target object frames to the preset target tracking frame set.
  • S307: Delete the unmatched preset target tracking frames from the preset target tracking frame set.
  • S308: Replace the preset target tracking frames corresponding to the successfully matched target object frames with those target object frames.
  • Further, after step S308 is performed, that is, after the preset target tracking frames are updated, the process returns to step S301, so that subsequent face tracking using the preset target tracking frames gives better results.
  • It should be noted that steps S306, S307, and S308, which update the preset target tracking frame set, have no strict order; an end-to-end sketch of this detection-and-tracking loop is given below.
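  • Tying the steps together, the following schematic sketch shows what the loop S301-S308 could look like; it reuses the helper sketches above, and detect_faces is a hypothetical placeholder detector, with detection on every 10th frame as in the earlier example.

```python
def run_tracking(frames, detect_faces, detect_every=10):
    """Schematic detection-and-tracking loop over grayscale frames.

    detect_faces(frame) is a placeholder detector returning a list of
    (x, y, w, h) face boxes; propagate_points, update_box, match and
    update_track_set are the sketches shown earlier.
    """
    preset_boxes = []                      # preset target tracking frame set
    prev = None
    for idx, frame in enumerate(frames):
        # S302: propagate every preset box with optical flow.
        tracked = []
        if prev is not None:
            for box in preset_boxes:
                p0, p1 = propagate_points(prev, frame, box)
                if p0 is not None and len(p0) > 0:
                    tracked.append(update_box(p0, p1, box))
        # S303/S304: run the detector only on preset detection frames.
        if idx % detect_every == 0:
            detections = detect_faces(frame)
            # S305-S308: Hungarian matching, then rebuild the preset set.
            pairs = match(tracked, detections)
            preset_boxes, _dropped = update_track_set(tracked, detections, pairs)
        else:
            preset_boxes = tracked
        prev = frame
    return preset_boxes
```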
  • It can be seen that, in the solution of this embodiment, a target image is acquired, the target image including at least one target object; the target image is tracked by using an optical flow tracking algorithm based on the preset target tracking frame set to determine N target tracking frames in the target image, N being a positive integer; M target object frames in the target image are detected by using an image detection algorithm, M being a positive integer; and the N target tracking frames are matched with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  • the target tracking frame is matched with the target object frame according to the Hungarian algorithm to update the preset target tracking frame set, so that the preset target tracking frame can be updated according to the target object, and the target tracking accuracy is improved.
  • the embodiment of the invention further provides a target tracking device, including:
  • An acquiring module configured to acquire a target image, where the target image includes at least one target object
  • a determining module configured to track the target image by using an optical flow tracking algorithm to determine N target tracking frames in the target image based on the preset target tracking frame set, where N is a positive integer;
  • a detecting module configured to detect, by using an image detection algorithm, M target object frames in the target image, where M is a positive integer;
  • an update module configured to match the N target tracking frames with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  • Specifically, referring to FIG. 4, FIG. 4 is a schematic structural diagram of a first embodiment of a target tracking device according to an embodiment of the present invention, which is used to implement the target tracking method disclosed in the embodiments of the present invention. As shown in FIG. 4, a target tracking apparatus 400 provided by an embodiment of the present invention may include:
  • an acquisition module 410, a determination module 420, a detection module 430, and an update module 440.
  • the obtaining module 410 is configured to acquire a target image, where the target image includes at least one target object.
  • the target image may refer to each frame image acquired from the video stream, and preferably, the image includes an image of a human face.
  • the target object refers to a feature in the target image that needs attention, for example, if the target image is a face image, the target object may be a human face.
  • a video stream is acquired by installing a camera in a target area or a location, and then decoding the video stream to obtain a video image of one frame, that is, a target image, from the video stream, and then The target image is image processed.
  • the camera can be installed at a cell door, a school gate, an entrance and exit gate, and the like.
  • a camera may be installed at the gateway position, then the video stream captured by the camera is acquired, and the video stream is decoded to obtain a target image, and then based on The target object in the target image, that is, the face object, performs the person counting, but in the process of counting based on the face object, since the same face object may exist in different frames in the video stream, in order to prevent repeated counting, it may be used.
  • the target tracking method provided by the embodiment of the present invention tracks the target object to deduplicate and improves the counting accuracy.
  • the determining module 420 is configured to track the target image by using an optical flow tracking algorithm to determine N target tracking frames in the target image based on the preset target tracking frame set, where N is a positive integer.
  • the preset target tracking frame set refers to a preset target tracking frame set corresponding to the preset target appearing in the target image before the target image at the moment. For example, if a target image is acquired at a certain time in order to count the number of people at a certain gate, but since the target face may appear in the target image before the time, it is necessary to The face of the target face is deduplicated so that the preset face tracking frame can be used to determine the repeated preset face and filter out.
  • the N target tracking frames in the target image are used to track the target in the target image by using an optical flow tracking algorithm, where the target tracking frame refers to a target tracking frame of the target object in the target image, for example, if the target image is a human For the face image, the target tracking frame is the target face image tracking frame.
  • FIG. 5 is a schematic structural diagram of a determining module according to an embodiment of the present invention.
  • the determining module 420 includes:
  • the extracting unit 421 is configured to extract the first target feature point in the last target image of the target image.
  • the first target feature point refers to a feature point related to the target in the previous target image of the target image.
  • the method of extracting the mesh nodes may be adopted, and the tracking performance of each pixel point may be calculated, and some points that are easy to track are selected from the points, and a certain distance between the points is ensured.
  • the target feature point may be a face feature point.
  • The acquiring unit 422 is configured to acquire, based on optical flow, the second target feature points corresponding to the target feature points in the target image.
  • Specifically, after the first target feature points in the previous target image are obtained, the optical flow may be calculated based on the first target feature points, and the optical flow information is then used to obtain the second target feature points, that is, the positions of the first target feature points in the current target image.
  • For example, if there are three face feature points in the previous target image, the corresponding three face feature points in the current target image can be obtained by calculating the optical flow.
  • the acquiring unit 422 is further configured to acquire a target tracking frame of the target image based on the second target feature point.
  • the target tracking frame refers to a tracking frame with a characteristic shape for tracking the target feature points, so as to track the target.
  • Further, specifically, the displacement of each feature point between its position in the previous frame and in the current frame is calculated, the displacements are sorted by magnitude, and the median displacement is taken as the distance by which the tracking frame is moved. The pairwise distances between the feature points in the previous frame are calculated, as are the pairwise distances between the feature points in the current frame; the matrices formed by these distances in the two frames clearly have the same dimensions, so the corresponding distances in the two frames are divided element by element to obtain quotients, the quotients are sorted by magnitude, and the median quotient is taken as the scaling ratio of the tracking frame.
  • the obtaining unit 422 is further configured to acquire a correlation between the preset target tracking frame in the preset target tracking frame set and the target tracking frame.
  • the correlation is a measure to more accurately represent the degree of similarity between the preset target tracking frame and the target tracking frame.
  • Specifically, the acquiring unit 422 acquiring the correlation between a preset target tracking frame in the preset target tracking frame set and the target tracking frame is specifically: scaling the target tracking frame and the preset target tracking frame to the same size; and calculating the correlation between the preset target tracking frame and the target tracking frame based on a normalized cross correlation (NCC) function.
  • The updating unit 423 is configured to, when the correlation between a preset target tracking frame in the preset target tracking frame set and the target tracking frame is greater than or equal to a preset threshold, replace the corresponding target tracking frame in the preset target tracking frame set with the target tracking frame, to update the preset target tracking frame set.
  • Specifically, when NCC is used to evaluate the similarity, the preset target tracking frame set is updated when the NCC value is higher than the preset threshold. Further, an update frequency may be configured for the preset target tracking frame set: if the current frame is a template update frame, the image of the target tracking frame is assigned to the preset target tracking frame. For example, for video at 25 frames per second, the template update frequency can be once every 3 frames, which makes it possible to judge reliably whether the target is occluded.
  • For example, in one example of the present invention, suppose there are 3 face tracking frames in the preset face tracking frame set, that is, 3 different face images. The first face feature points are obtained in the previous target image, the second face feature points of the current target image are calculated from them, and face tracking frames are obtained based on the second face feature points. The correlations between the 3 obtained face tracking frames and the 3 preset face tracking frames are then calculated, and when a correlation is greater than a certain threshold, the corresponding preset face tracking frame in the preset face tracking frame set is replaced with the face tracking frame corresponding to the second feature points.
  • the detecting module 430 is configured to detect M target object frames in the target image by using an image detection algorithm, where the M is a positive integer.
  • the image detection algorithm may be, for example, a Sift feature matching algorithm, or may be another image detection algorithm.
  • three target face frames in a face image can be accurately detected by the Sift feature matching algorithm.
  • only partial frames in the video image stream may be further detected using an image detection algorithm. For example, one frame of video images is selected for detection in 10 frames of video images. Therefore, while improving the tracking accuracy, the detection time can also be reduced. Thus, when the current frame is a face detection frame, face detection is performed.
  • the updating module 440 is configured to match the N target tracking frames with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  • When matching the N target tracking frames with the M target object frames, the overlap between a target tracking frame and a target object frame is calculated with the same formula as above, overlap(r_face, r_tracker) = area(r_face ∩ r_tracker) / area(r_face ∪ r_tracker), where r_face refers to the face frame and r_tracker refers to the tracking frame.
  • Specifically, a weight matrix characterizing the relationship between the face frames and the tracking frames is constructed from the overlap values, and the Hungarian algorithm is then used to find the maximum-weight bipartite matching, in which the sum of the overlaps between the matched target object frames and target tracking frames is maximized, each target object frame matches at most one target tracking frame, and each target tracking frame matches at most one target object frame.
  • the matching can be considered as the best match between the target object frame and the target tracking frame.
  • the update module 440 includes:
  • a determining unit 441 configured to match the N target tracking frames with the M target object frames based on a Hungarian algorithm to determine that the target object frames in the M target object frames match successfully, and the target objects that do not match successfully a preset target tracking frame that is not successfully matched in the box and the preset target tracking frame set;
  • an updating subunit 442, configured to add the unmatched target object frames to the preset target tracking frame set, delete the unmatched preset target tracking frames from the preset target tracking frame set, and replace the preset target tracking frames corresponding to the successfully matched target object frames with those target object frames, to update the preset target tracking frame set.
  • For example, in one example of the present invention, when a face image needs to be tracked, the face tracking frames in the face image are first determined using the optical flow tracking method and the preset face tracking frames are updated, yielding 5 updated preset face tracking frames. The image detection algorithm then detects 4 accurate face object frames in the face image, and these 4 face object frames are matched against the 5 preset face tracking frames using the Hungarian algorithm. Suppose 3 of the 5 preset frames match 3 of the 4 face object frames successfully, one of the 4 face object frames has never appeared among the 5 preset frames, and the remaining 2 preset frames cannot be matched to any of the 4 face object frames. Then the 3 successfully matched face object frames replace the corresponding 3 preset frames, the 1 face object frame that did not appear in the preset set is added to it, and the 2 unmatched preset frames are deleted from it, so that the final preset face tracking frame set contains 4 preset face tracking frames.
  • It can be understood that adding the unmatched target object frames to the preset target tracking frame set, deleting the unmatched preset target tracking frames from the preset target tracking frame set, and replacing the preset target tracking frames corresponding to the successfully matched target object frames with those matched target object frames updates the preset target tracking frame set, which makes subsequent tracking of the targets more accurate, saves tracking time, and improves target tracking efficiency.
  • It can be seen that, in the solution of this embodiment, the target tracking device 400 acquires a target image, the target image including at least one target object; the target tracking device 400 then tracks the target image by using an optical flow tracking algorithm based on the preset target tracking frame set to determine N target tracking frames in the target image, N being a positive integer, and detects M target object frames in the target image by using an image detection algorithm, M being a positive integer; finally, the target tracking device 400 matches the N target tracking frames with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  • the target tracking frame is matched with the target object frame according to the Hungarian algorithm to update the preset target tracking frame set, so that the preset target tracking frame can be updated according to the target object, and the target tracking accuracy is improved.
  • the target tracking device 400 is presented in the form of a unit.
  • A "unit" herein may refer to an application-specific integrated circuit (ASIC), a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other devices that can provide the above functionality.
  • Referring to FIG. 6, FIG. 6 is a schematic structural diagram of a second embodiment of a target tracking device according to an embodiment of the present invention, which is used to implement the target tracking method disclosed in the embodiments of the present invention.
  • the target tracking device 600 may include at least one bus 601, at least one processor 602 connected to the bus 601, and at least one memory 603 connected to the bus 601.
  • the processor 602 calls, by using the bus 601, code stored in the memory for acquiring a target image, where the target image includes at least one target object; and the target image is performed by using an optical flow tracking algorithm based on the preset target tracking frame set. Tracking to determine N target tracking frames in the target image, the N being a positive integer; detecting M target object frames in the target image by using an image detection algorithm, the M being a positive integer; based on a Hungarian algorithm The N target tracking frames are matched with the M target object frames to update the preset target tracking frame set.
  • Optionally, in some possible implementations of the present invention, the processor 602 matching the N target tracking frames with the M target object frames based on the Hungarian algorithm to update the preset target tracking frame set includes: matching the N target tracking frames with the M target object frames based on the Hungarian algorithm to determine the successfully matched target object frames among the M target object frames, the unmatched target object frames, and the unmatched preset target tracking frames in the preset target tracking frame set; and adding the unmatched target object frames to the preset target tracking frame set, deleting the unmatched preset target tracking frames from the preset target tracking frame set, and replacing the preset target tracking frames corresponding to the successfully matched target object frames with those target object frames, to update the preset target tracking frame set.
  • Optionally, in some possible implementations of the present invention, the processor 602 tracking the target image by using an optical flow tracking algorithm based on the preset target tracking frame set to determine N target tracking frames in the target image includes: extracting first target feature points in the previous target image of the target image; acquiring, based on optical flow, second target feature points corresponding to the target feature points in the target image; acquiring a target tracking frame of the target image based on the second target feature points; acquiring a correlation between a preset target tracking frame in the preset target tracking frame set and the target tracking frame; and, when the correlation between the preset target tracking frame and the target tracking frame is greater than or equal to a preset threshold, replacing the corresponding target tracking frame in the preset target tracking frame set with the target tracking frame, to update the preset target tracking frame set.
  • Optionally, in some possible implementations of the present invention, the processor 602 acquiring the correlation between a preset target tracking frame in the preset target tracking frame set and the target tracking frame includes: scaling the target tracking frame and the preset target tracking frame to the same size; and calculating the correlation between the preset target tracking frame and the target tracking frame based on a normalized cross correlation (NCC) function.
  • Optionally, in some possible implementations of the present invention, the target object is a face object.
  • It can be seen that, in the solution of this embodiment, the target tracking device 600 acquires a target image, the target image including at least one target object; the target tracking device 600 then tracks the target image by using an optical flow tracking algorithm based on the preset target tracking frame set to determine N target tracking frames in the target image, N being a positive integer, and detects M target object frames in the target image by using an image detection algorithm, M being a positive integer; finally, the target tracking device 600 matches the N target tracking frames with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  • the target tracking frame is matched with the target object frame according to the Hungarian algorithm to update the preset target tracking frame set, so that the preset target tracking frame can be updated according to the target object, and the target tracking accuracy is improved.
  • the target tracking device 600 is presented in the form of a unit.
  • A "unit" herein may refer to an application-specific integrated circuit (ASIC), a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other devices that can provide the above functionality.
  • the embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium can store a program, and the program includes some or all of the steps of any target tracking method described in the foregoing method embodiments.
  • the disclosed apparatus may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division.
  • there may be another division manner for example, multiple units or components may be combined or may be Integrate into another system, or some features can be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention which is essential or contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes: a U disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, and the like. .

Landscapes

  • Image Analysis (AREA)

Abstract

A target tracking method, device and storage medium. The method includes: acquiring a target image, the target image including at least one target object (S101); tracking the target image by using an optical flow tracking algorithm based on a preset target tracking frame set to determine N target tracking frames in the target image, N being a positive integer (S102); detecting M target object frames in the target image by using an image detection algorithm, M being a positive integer (S103); and matching the N target tracking frames with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set (S104). By matching the target tracking frames with the target object frames based on the Hungarian algorithm to update the preset target tracking frame set, the method allows the preset target tracking frames to be updated according to the target objects, improving target tracking accuracy.

Description

目标跟踪方法、装置及存储介质
本申请要求于2016年11月29日提交中国专利局,申请号为201611075159.5、发明名称为“一种目标跟踪方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及人工智能领域,具体涉及一种目标跟踪方法、装置及存储介质。
背景技术
因人脸检测的速度较慢,跟踪的速度较快,在实时人脸识别系统中,往往只抽取来自摄像机的部分帧的图像进行人脸检测,在其他帧的图像上对检测到的目标作跟踪,在保证实时的前提下,可使得系统尽量不出现人脸的漏检,并将同一人所检测到的不同人脸图像作为同一个目标进行存储。对每个监控范围内的人员,可选取一张或者少量优质人脸图像传入后台处理,防止全部检测到的人脸都传到后台,增加计算开销。
目前为了实现目标跟踪,主要基于光流跟踪算法来实现。较为流行的是使用双向光流来保证跟踪的可靠性,虽然可靠性增加了,但计算耗时较多。并且光流跟踪本身在帧率较高时(如25fps),对遮挡不敏感,经常会出现人流交叉穿行导致的跟踪框漂移的问题,当跟踪框漂移后,人脸框也很容易与跟踪框错配,导致目标跟踪准确度低。
发明内容
本发明实施例提供了一种目标跟踪方法、装置及存储介质,以期可以提高目标跟踪速度与准确度。
第一方面,本发明实施例提供一种目标跟踪方法,包括:
获取目标图像,所述目标图像中包括至少一个目标对象;
基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数;
利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数;
基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。
第二方面,本发明实施例提供一种目标跟踪方法装置,包括:
获取模块,用于获取目标图像,所述目标图像中包括至少一个目标对象;
确定模块,用于基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数;
检测模块,用于利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数;
更新模块,用于基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。
可以看出,本发明实施例所提供的技术方案中,获取目标图像,所述目标图像中包括至少一个目标对象;基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数;利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数;基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。本发明实施例通过基于匈牙利算法将目标跟踪框与目标对象框进行匹配以更新预设目标跟踪框集合,从而使得预设目标跟踪框能根据目标对象进行更新,提高目标跟踪准确率。
进一步的,通过使用单向光流跟踪算法,减少了计算开销,提高目标跟踪效率。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例提供的一种目标跟踪方法的第一实施例流程示意图;
图2示出了本发明实施例提供的一种基于光流法跟踪算法对目标图像进行跟踪的流程示意图;
图3是本发明实施例提供的一种目标跟踪方法的第二实施例流程示意图;
图4是本发明实施例提供的一种目标跟踪装置的第一实施例的结构示意图;
图5示出了本发明实施例提供的一种确定模块的结构示意图;
图6是本发明实施例提供的一种目标跟踪装置的第二实施例的结构示意图。
具体实施方式
本发明实施例提供了一种目标跟踪方法及装置,以期可以提高目标跟踪速度与准确度。
为了使本技术领域的人员更好地理解本发明方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分的实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本发明保护的范围。
本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”和“第三”等是用于区别不同对象,而非用于描述特定顺序。此外,术语“包括”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
本发明实施例提供的一种目标跟踪方法,包括:
获取目标图像,所述目标图像中包括至少一个目标对象;基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数;利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数;基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。
参见图1,图1是本发明实施例提供的一种目标跟踪方法的第一实施例流程示意图。如图1所示,本发明实施例提供的目标跟踪方法包括以下步骤:
S101、获取目标图像,所述目标图像中包括至少一个目标对象。
其中,目标图像可以是指从视频流中获取到的各帧图像,优选地,该图像包括人脸的图像。目标对象是指该目标图像中需要关注的特征,例如,若该目标图像为人脸图像,该目标对象可以为人脸。
在本发明实施例中,通过在目标区域或位置安装摄像头来获取视频流,再对该视频流进行解码,以从该视频流中获取一帧帧的视频图像,也即目标图像,再对该目标图像进行图像处理。
在本发明实施例中,可以在小区门口、学校门口、进出关口等位置安装该摄像头。
举例说明,在本发明的一个示例中,若为了统计某一关口的人数量,可以在关口位置安装一摄像头,然后获取摄像头拍摄的视频流,并对视频流进行解码得到目标图像,然后再基于该目标图像中的目标对象,也即人脸对象进行人物计数,但基于人脸对象进行计数的过程中,由于视频流中的不同帧可能存在同一人脸对象,所以为了防止重复计数,可以使用本发明实施例提供的目标跟踪方法对目标对象进行跟踪以去重,提高计数准确率。
S102、基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数。
其中,预设目标跟踪框集合是指在该时刻目标图像之前出现在目标图像中的预设目标所对应的预设目标跟踪框集合。例如,若为了统计某一关口的人数量,在某一时刻获取到一帧目标图像,但由于在该时刻之前的目标图像中可能出现过目标人脸,从而需要对该帧目标图像中与之前的目标人脸重复的人脸进行去重,从而可以使用预设人脸跟踪框来确定重复的预设人脸并进行滤除。
其中,目标图像中的N个目标跟踪框是指利用光流法跟踪算法跟踪到目标图像中的目标,该目标跟踪框是指目标图像中的目标对象的目标跟踪框,例如,若目标图像为人脸图像,则该目标跟踪框为目标人脸图像跟踪框。
具体地,参见图2,图2示出了本发明实施例提供的一种基于光流法跟踪算法对目标图像进行跟踪的流程示意图,包括:
S201、在所述目标图像的上一个目标图像中提取第一目标特征点。
其中,第一目标特征点是指目标图像的上一个目标图像中的与目标相关的特征点。
具体地,在上一帧目标图像的目标跟踪框内,提取易于跟踪的特征点。
更进一步,具体地,可以采用提取网格节点的方式,也可以计算每个像素 点的跟踪性能,再从中选取一些易于跟踪的点,并保证各点之间有一定的距离。
在本发明实施例中,若该目标图像为人脸图像,则该目标特征点可以为人脸特征点。
S202、基于光流获取所述目标特征点在所述目标图像中对应的第二目标特征点。
具体地,当获取到目标图像的上一个目标图像中的第一目标特征点后,可以基于该第一目标特征点,计算光流,并利用光流信息,即可得到第一目标特征点在该目标图像中的第二目标特征点。
例如,若上一目标图像中存在3个人脸目标特征点,则可以通过计算光流,得到该目标图像中这3个人脸目标特征点。
S203、基于所述第二目标特征点获取所述目标图像的目标跟踪框。
其中,目标跟踪框是指为了对目标特征点进行跟踪的一个具有特征形状的跟踪框,以便于对目标进行跟踪。
更进一步,具体地,计算每个特征点在上一帧与当前帧中位置的差值,按差值的大小进行排序,取中间的差值作为跟踪框移动的距离。计算上一帧中每个特征点之间的距离,同时计算当前帧中每个特征点之间的距离,显然两帧中由特征点距离构成的矩阵的维度是一致的,将两帧中对应的距离两两相除得到商值,将商值按大小排序,取中间的商值作为跟踪框的缩放比例。
S204、获取所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度。
其中,该相关度是为了更为准确地表示预设目标跟踪框与目标跟踪框之间相似程度的一个度量。
具体地,获取所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度,包括:
将所述目标跟踪框与所述预设目标跟踪缩放至相同尺寸;
基于归一化相似性度量函数(Normalized cross correlation,简称NCC)计算所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度。
可以理解,通过以NCC来评价预设目标跟踪框与所述目标跟踪框之间的相关度,从而使得相关度计算更为准确,提高跟踪准确度。
具体地，该NCC可采用如下标准的零均值归一化互相关形式表示：
NCC(T1, T2) = Σ[(T1(x,y) − μ1)(T2(x,y) − μ2)] / sqrt( Σ(T1(x,y) − μ1)² · Σ(T2(x,y) − μ2)² )
其中T1、T2为缩放至相同尺寸的目标跟踪框图像与预设目标跟踪框图像，μ1、μ2分别为二者的像素均值。
S205、在所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度大于或等于预设阈值时,利用所述目标跟踪框替换所述预设目标跟踪框集合中相应的目标跟踪框,以更新所述预设目标跟踪框集合。
具体地，当利用NCC来评价相似度时，当NCC值高于预设阈值时，对预设目标跟踪框集合进行更新。
更进一步,具体地,可以设置利用预设目标跟踪框集合更新频率,若处于模板更新帧,则将目标跟踪框的图像赋值给预设目标跟踪框。例如,对于25帧 每秒的视频,模板更新频率可以为每3帧1次,从而可以很好的判断目标是否遮挡。
举例说明,在本发明的一个示例中,若在预设人脸跟踪框集合中有3个人脸跟踪框,也即3个不同的人脸图像,然后在上一帧目标图像中获取第一人脸特征点,并基于该人脸特征点计算该目标图像的第二人脸特征点,并基于该第二人脸特征点得到人脸跟踪框,然后再计算得到的3个人脸跟踪框与预设的3个人脸跟踪框的相关度,当相关度大于一定的阈值时,将该第二特征点对应的人脸跟踪框更新预设人脸跟踪框集合对应的预设人脸跟踪框。
可以理解,由于当相关度大于一定的阈值时,则证明该预设目标跟踪框集合中的预设目标跟踪框与当前图像的目标跟踪框匹配,但很显然当前图像中检测到的目标跟踪框相对预设目标跟踪框更为准确,所以此时利用当前图像的目标跟踪框去替换预设目标跟踪框,将使得后续对目标的跟踪更为准确,提高目标跟踪准确率。
S103、利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数。
具体地,该图像检测算法例如可以为Sift特征匹配算法,也可以为其它图像检测算法。
举例说明,在本发明的一个示例中,可以通过Sift特征匹配算法准确地检测出来人脸图像中的3个目标人脸框。
可选地,可以只对视频图像流中的部分帧利用图像检测算法进一步进行检测。例如,对10帧视频图像中选择一帧视频图像进行检测。从而在提高跟踪准确率的同时,也可以减少检测时间。从而当当前帧为人脸检测帧,则进行人脸检测。
S104、基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。
其中,对N个目标跟踪框与所述M个目标对象框进行匹配时,也即利用如下公式计算目标跟踪框与目标对象框之间的重叠度:
overlap(r_face, r_tracker) = area(r_face ∩ r_tracker) / area(r_face ∪ r_tracker)
其中，r_face指人脸框，r_tracker指跟踪框。
具体地,当利用重叠度构建表征人脸框与跟踪框之间关系的权重矩阵,再利用匈牙利算法,找到最大有权二分图,此时目标对象框与目标跟踪框的重叠度总和最大,且一个目标对象框至多匹配到一个目标跟踪框,一个目标跟踪框也至多匹配一个目标对象框,该匹配可认为是目标对象框与目标跟踪框的最佳匹配。
具体地,基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合,包括:
基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以确定所述M个目标对象框中匹配成功的目标对象框、未匹配成功的目标对象框以及所述预设目标跟踪框集合中未匹配成功的预设目标跟踪框;
将所述未匹配成功的目标对象框加入所述预设目标跟踪框集合、将所述未 匹配成功的预设目标跟踪框从所述预设目标跟踪框删除,以及将所述匹配成功的目标对象框替换与所述匹配成功的目标对象框对应的预设目标跟踪框,以更新所述预设目标跟踪框集合。
举例说明,在本发明的一个示例中,当需要跟踪某个人脸图像时,首先利用光流跟踪法确定出来该人脸图像中的人脸对象框并对预设人脸跟踪框进行更新得到更新后的5个人脸对象,然后再利用图像检测算法检测出来该人脸图像中的准确的4个人脸对象框,并将该4个人脸对象框与5个预设人脸跟踪框利用匈牙利算法进行匹配,若该5个预设人脸对象框中有3个与这4个人脸对象框匹配成功,4个人脸对象框中有一个人脸对象框在5个预设人脸对象框中从未出现,从而还有2个预设人脸对象框不能与这4个人脸对象框中的任何一个人脸对象框匹配,则将匹配成功的这3个人脸对象框更新原来的预设人脸对象框中的3个预设人脸对象框,将这1个未在预设人脸对象框中出现的人脸对象框加入该预设人脸对象框,并将该2个未匹配成功预设人脸对象框从原来的预设人脸对象框中删除,最后这个预设人脸对象框集合中将包括4个预设人脸对象框。
可以理解,将所述未匹配成功的目标对象框加入所述预设目标跟踪框集合、将所述未匹配成功的预设目标跟踪框从所述预设目标跟踪框删除,以及将所述匹配成功的目标对象框替换与所述匹配成功的目标对象框对应的预设目标跟踪框,以更新所述预设目标跟踪框集合,从而可以使得后续对目标的跟踪更为准确,以及节约跟踪时间,提高目标跟踪效率。
需要说明,对预设目标跟踪框集合进行更新的三个步骤没有先后顺序。
可以看出,本实施例的方案中,获取目标图像,所述目标图像中包括至少一个目标对象;基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数;利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数;基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。本发明实施例通过基于匈牙利算法将目标跟踪框与目标对象框进行匹配以更新预设目标跟踪框集合,从而使得预设目标跟踪框能根据目标对象进行更新,提高目标跟踪准确率。
进一步的,通过使用单向光流跟踪算法,减少了计算开销,提高目标跟踪效率。
参见图3,图3是本发明实施例提供的一种目标跟踪方法的第二实施例流程示意图。图3所示的方法中,与图1所示方法相同或类似的内容可以参考图1中的详细描述,此处不再赘述。如图3所示,本发明实施例提供的目标跟踪方法包括以下步骤:
S301、获取目标图像,所述目标图像中包括至少一个目标对象。
S302、基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数。
S303、判断所述目标图像是否为预设检测帧。
可选地,若该目标图像为预设检测帧时,执行步骤S304。
可选地,若该目标图像不为预设检测帧时,执行返回执行步骤S301。
具体地,可以间隔一定帧数选择一幅预设检测帧用于检测进一步利用图像检测算法检测目标。
S304、利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数。
S305、基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以确定所述M个目标对象框中匹配成功的目标对象框、未匹配成功的目标对象框以及所述预设目标跟踪框集合中未匹配成功的预设目标跟踪框。
S306、将所述未匹配成功的目标对象框加入所述预设目标跟踪框集合。
S307、将所述未匹配成功的预设目标跟踪框从所述预设目标跟踪框删除。
S308、将所述匹配成功的目标对象框替换与所述匹配成功的目标对象框对应的预设目标跟踪框。
更进一步地,当执行完步骤S308后,也即对预设目标跟踪框进行更新后,再转入执行步骤S301,从而使得后续利用该预设目标跟踪框进行人脸跟踪得到的效果更优。
需要说明,上述更新预设目标跟踪框的步骤S306、S307以及S308没有严格的先后顺序。
可以看出,本实施例的方案中,获取目标图像,所述目标图像中包括至少一个目标对象;基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数;利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数;基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。本发明实施例通过基于匈牙利算法将目标跟踪框与目标对象框进行匹配以更新预设目标跟踪框集合,从而使得预设目标跟踪框能根据目标对象进行更新,提高目标跟踪准确率。
进一步的,通过使用单向光流跟踪算法,减少了计算开销,提高目标跟踪效率。
本发明实施例还提供一种目标跟踪装置,包括:
获取模块,用于获取目标图像,所述目标图像中包括至少一个目标对象;
确定模块,用于基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数;
检测模块,用于利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数;
更新模块,用于基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。
具体地,请参见图4,图4是本发明实施例提供的一种目标跟踪装置的第一实施例的结构示意图,用于实现本发明实施例公开的一种目标跟踪方法。其中, 如图4所示,本发明实施例提供的一种目标跟踪装置400可以包括:
获取模块410、确定模块420、检测模块430和更新模块440。
其中,获取模块410,用于获取目标图像,所述目标图像中包括至少一个目标对象。
其中,目标图像可以是指从视频流中获取到的各帧图像,优选地,该图像包括人脸的图像。目标对象是指该目标图像中需要关注的特征,例如,若该目标图像为人脸图像,该目标对象可以为人脸。
在本发明实施例中,通过在目标区域或位置安装摄像头来获取视频流,再对该视频流进行解码,以从该视频流中获取一帧帧的视频图像,也即目标图像,再对该目标图像进行图像处理。
在本发明实施例中,可以在小区门口、学校门口、进出关口等位置安装该摄像头。
举例说明,在本发明的一个示例中,若为了统计某一关口的人数量,可以在关口位置安装一摄像头,然后获取摄像头拍摄的视频流,并对视频流进行解码得到目标图像,然后再基于该目标图像中的目标对象,也即人脸对象进行人物计数,但基于人脸对象进行计数的过程中,由于视频流中的不同帧可能存在同一人脸对象,所以为了防止重复计数,可以使用本发明实施例提供的目标跟踪方法对目标对象进行跟踪以去重,提高计数准确率。
确定模块420,用于基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数。
其中,预设目标跟踪框集合是指在该时刻目标图像之前出现在目标图像中的预设目标所对应的预设目标跟踪框集合。例如,若为了统计某一关口的人数量,在某一时刻获取到一帧目标图像,但由于在该时刻之前的目标图像中可能出现过目标人脸,从而需要对该帧目标图像中与之前的目标人脸重复的人脸进行去重,从而可以使用预设人脸跟踪框来确定重复的预设人脸并进行滤除。
其中,目标图像中的N个目标跟踪框是指利用光流法跟踪算法跟踪到目标图像中的目标,该目标跟踪框是指目标图像中的目标对象的目标跟踪框,例如,若目标图像为人脸图像,则该目标跟踪框为目标人脸图像跟踪框。
具体地,参见图5,图5示出了本发明实施例提供的一种确定模块的结构示意图,如图5所示,该确定模块420,包括:
提取单元421,用于在所述目标图像的上一个目标图像中提取第一目标特征点。
其中,第一目标特征点是指目标图像的上一个目标图像中的与目标相关的特征点。
具体地,在上一帧目标图像的目标跟踪框内,提取易于跟踪的特征点。
更进一步,具体地,可以采用提取网格节点的方式,也可以计算每个像素点的跟踪性能,再从中选取一些易于跟踪的点,并保证各点之间有一定的距离。
在本发明实施例中,若该目标图像为人脸图像,则该目标特征点可以为人脸特征点。
获取单元422,用于基于光流获取所述目标特征点在所述目标图像中对应的 第二目标特征点。
具体地,当获取到目标图像的上一个目标图像中的第一目标特征点后,可以基于该第一目标特征点,计算光流,并利用光流信息,即可得到第一目标特征点在该目标图像中的第二目标特征点。
例如,若上一目标图像中存在3个人脸目标特征点,则可以通过计算光流,得到该目标图像中这3个人脸目标特征点。
所述获取单元422,还用于基于所述第二目标特征点获取所述目标图像的目标跟踪框。
其中,目标跟踪框是指为了对目标特征点进行跟踪的一个具有特征形状的跟踪框,以便于对目标进行跟踪。
更进一步,具体地,计算每个特征点在上一帧与当前帧中位置的差值,按差值的大小进行排序,取中间的差值作为跟踪框移动的距离。计算上一帧中每个特征点之间的距离,同时计算当前帧中每个特征点之间的距离,显然两帧中由特征点距离构成的矩阵的维度是一致的,将两帧中对应的距离两两相除得到商值,将商值按大小排序,取中间的商值作为跟踪框的缩放比例。
所述获取单元422,还用于获取所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度。
其中,该相关度是为了更为准确地表示预设目标跟踪框与目标跟踪框之间相似程度的一个度量。
具体地,所述获取单元422获取所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度具体为:
将所述目标跟踪框与所述预设目标跟踪缩放至相同尺寸;
基于归一化相似性度量函数（Normalized cross correlation，简称NCC）计算所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度。
可以理解,通过以NCC来评价预设目标跟踪框与所述目标跟踪框之间的相关度,从而使得相关度计算更为准确,提高跟踪准确度。
具体地，该NCC可采用如下标准的零均值归一化互相关形式表示：
NCC(T1, T2) = Σ[(T1(x,y) − μ1)(T2(x,y) − μ2)] / sqrt( Σ(T1(x,y) − μ1)² · Σ(T2(x,y) − μ2)² )
其中T1、T2为缩放至相同尺寸的目标跟踪框图像与预设目标跟踪框图像，μ1、μ2分别为二者的像素均值。
更新单元423,用于在所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度大于或等于预设阈值时,利用所述目标跟踪框替换所述预设目标跟踪框集合中相应的目标跟踪框,以更新所述预设目标跟踪框集合。
具体地，当利用NCC来评价相似度时，当NCC值高于预设阈值时，对预设目标跟踪框集合进行更新。
更进一步,具体地,可以设置利用预设目标跟踪框集合更新频率,若处于模板更新帧,则将目标跟踪框的图像赋值给预设目标跟踪框。例如,对于25帧 每秒的视频,模板更新频率可以为每3帧1次,从而可以很好的判断目标是否遮挡。
举例说明,在本发明的一个示例中,若在预设人脸跟踪框集合中有3个人脸跟踪框,也即3个不同的人脸图像,然后在上一帧目标图像中获取第一人脸特征点,并基于该人脸特征点计算该目标图像的第二人脸特征点,并基于该第二人脸特征点得到人脸跟踪框,然后再计算得到的3个人脸跟踪框与预设的3个人脸跟踪框的相关度,当相关度大于一定的阈值时,将该第二特征点对应的人脸跟踪框更新预设人脸跟踪框集合对应的预设人脸跟踪框。
可以理解,由于当相关度大于一定的阈值时,则证明该预设目标跟踪框集合中的预设目标跟踪框与当前图像的目标跟踪框匹配,但很显然当前图像中检测到的目标跟踪框相对预设目标跟踪框更为准确,所以此时利用当前图像的目标跟踪框去替换预设目标跟踪框,将使得后续对目标的跟踪更为准确,提高目标跟踪准确率。
检测模块430,用于利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数。
具体地,该图像检测算法例如可以为Sift特征匹配算法,也可以为其它图像检测算法。
举例说明,在本发明的一个示例中,可以通过Sift特征匹配算法准确地检测出来人脸图像中的3个目标人脸框。
可选地,可以只对视频图像流中的部分帧利用图像检测算法进一步进行检测。例如,对10帧视频图像中选择一帧视频图像进行检测。从而在提高跟踪准确率的同时,也可以减少检测时间。从而当当前帧为人脸检测帧,则进行人脸检测。
更新模块440,用于基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。
其中,对N个目标跟踪框与所述M个目标对象框进行匹配时,也即利用如下公式计算目标跟踪框与目标对象框之间的重叠度:
overlap(r_face, r_tracker) = area(r_face ∩ r_tracker) / area(r_face ∪ r_tracker)
其中，r_face指人脸框，r_tracker指跟踪框。
具体地,当利用重叠度构建表征人脸框与跟踪框之间关系的权重矩阵,再利用匈牙利算法,找到最大有权二分图,此时目标对象框与目标跟踪框的重叠度总和最大,且一个目标对象框至多匹配到一个目标跟踪框,一个目标跟踪框也至多匹配一个目标对象框,该匹配可认为是目标对象框与目标跟踪框的最佳匹配。
具体地,所述更新模块440包括:
确定单元441,用于基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以确定所述M个目标对象框中匹配成功的目标对象框、未匹配成功的目标对象框以及所述预设目标跟踪框集合中未匹配成功的预设目标跟踪框;
更新子单元442,用于将所述未匹配成功的目标对象框加入所述预设目标跟 踪框集合、将所述未匹配成功的预设目标跟踪框从所述预设目标跟踪框删除,以及将所述匹配成功的目标对象框替换与所述匹配成功的目标对象框对应的预设目标跟踪框,以更新所述预设目标跟踪框集合。
举例说明,在本发明的一个示例中,当需要跟踪某个人脸图像时,首先利用光流跟踪法确定出来该人脸图像中的人脸对象框并对预设人脸跟踪框进行更新得到更新后的5个人脸对象,然后再利用图像检测算法检测出来该人脸图像中的准确的4个人脸对象框,并将该4个人脸对象框与5个预设人脸跟踪框利用匈牙利算法进行匹配,若该5个预设人脸对象框中有3个与这4个人脸对象框匹配成功,4个人脸对象框中有一个人脸对象框在5个预设人脸对象框中从未出现,从而还有2个预设人脸对象框不能与这4个人脸对象框中的任何一个人脸对象框匹配,则将匹配成功的这3个人脸对象框更新原来的预设人脸对象框中的3个预设人脸对象框,将这1个未在预设人脸对象框中出现的人脸对象框加入该预设人脸对象框,并将该2个未匹配成功预设人脸对象框从原来的预设人脸对象框中删除,最后这个预设人脸对象框集合中将包括4个预设人脸对象框。
可以理解,将所述未匹配成功的目标对象框加入所述预设目标跟踪框集合、将所述未匹配成功的预设目标跟踪框从所述预设目标跟踪框删除,以及将所述匹配成功的目标对象框替换与所述匹配成功的目标对象框对应的预设目标跟踪框,以更新所述预设目标跟踪框集合,从而可以使得后续对目标的跟踪更为准确,以及节约跟踪时间,提高目标跟踪效率。
可以看出,本实施例的方案中,目标跟踪装置400获取目标图像,所述目标图像中包括至少一个目标对象;目标跟踪装置400再基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数;并利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数;最后目标跟踪装置400基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。本发明实施例通过基于匈牙利算法将目标跟踪框与目标对象框进行匹配以更新预设目标跟踪框集合,从而使得预设目标跟踪框能根据目标对象进行更新,提高目标跟踪准确率。
进一步的,通过使用单向光流跟踪算法,减少了计算开销,提高目标跟踪效率。
在本实施例中,目标跟踪装置400是以单元的形式来呈现。这里的“单元”可以指特定应用集成电路(application-specific integrated circuit,ASIC),执行一个或多个软件或固件程序的处理器和存储器,集成逻辑电路,和/或其他可以提供上述功能的器件。
可以理解的是,本实施例的目标跟踪装置400的各功能单元的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。
参见图6,图6是本发明实施例提供的一种目标跟踪装置的第二实施例的结 构示意图,用于实现本发明实施例公开的图像识别方法。其中,该目标跟踪装置600可以包括:至少一个总线601、与总线601相连的至少一个处理器602以及与总线601相连的至少一个存储器603。
其中,处理器602通过总线601,调用存储器中存储的代码以用于获取目标图像,所述目标图像中包括至少一个目标对象;基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数;利用图像检测算法检测所述目标图像中的M个目标对象框,所述M为正整数;基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。
可选地,在本发明的一些可能的实施方式中,所述处理器602基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合,包括:
基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以确定所述M个目标对象框中匹配成功的目标对象框、未匹配成功的目标对象框以及所述预设目标跟踪框集合中未匹配成功的预设目标跟踪框;
将所述未匹配成功的目标对象框加入所述预设目标跟踪框集合、将所述未匹配成功的预设目标跟踪框从所述预设目标跟踪框删除,以及将所述匹配成功的目标对象框替换与所述匹配成功的目标对象框对应的预设目标跟踪框,以更新所述预设目标跟踪框集合。
可选地,在本发明的一些可能的实施方式中,所述处理器602基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,包括:
在所述目标图像的上一个目标图像中提取第一目标特征点;
基于光流获取所述目标特征点在所述目标图像中对应的第二目标特征点;
基于所述第二目标特征点获取所述目标图像的目标跟踪框;
获取所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度;
在所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度大于或等于预设阈值时,利用所述目标跟踪框替换所述预设目标跟踪框集合中相应的目标跟踪框,以更新所述预设目标跟踪框集合。
可选地,在本发明的一些可能的实施方式中,所述处理器602获取所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度,包括:
将所述目标跟踪框与所述预设目标跟踪缩放至相同尺寸;
基于归一化相似性度量函数NCC计算所述预设目标跟踪框集合中的预设目标跟踪框与所述目标跟踪框之间的相关度。
可选地,在本发明的一些可能的实施方式中,所述目标对象为人脸对象。
可以看出,本实施例的方案中,目标跟踪装置600获取目标图像,所述目标图像中包括至少一个目标对象;目标跟踪装置600再基于预设目标跟踪框集合利用光流法跟踪算法对目标图像进行跟踪以确定所述目标图像中的N个目标跟踪框,所述N为正整数;并利用图像检测算法检测所述目标图像中的M个目 标对象框,所述M为正整数;最后目标跟踪装置600基于匈牙利算法将所述N个目标跟踪框与所述M个目标对象框进行匹配以更新所述预设目标跟踪框集合。本发明实施例通过基于匈牙利算法将目标跟踪框与目标对象框进行匹配以更新预设目标跟踪框集合,从而使得预设目标跟踪框能根据目标对象进行更新,提高目标跟踪准确率。
进一步的,通过使用单向光流跟踪算法,减少了计算开销,提高目标跟踪效率。
在本实施例中,目标跟踪装置600是以单元的形式来呈现。这里的“单元”可以指特定应用集成电路(application-specific integrated circuit,ASIC),执行一个或多个软件或固件程序的处理器和存储器,集成逻辑电路,和/或其他可以提供上述功能的器件。
可以理解的是,本实施例的目标跟踪装置600的各功能单元的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。
本发明实施例还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,该程序执行时包括上述方法实施例中记载的任何目标跟踪方法的部分或全部步骤。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明的各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (12)

  1. A target tracking method, characterized in that the method comprises:
    acquiring a target image, wherein the target image includes at least one target object;
    tracking the target image by using an optical flow tracking algorithm based on a preset target tracking frame set to determine N target tracking frames in the target image, wherein N is a positive integer;
    detecting M target object frames in the target image by using an image detection algorithm, wherein M is a positive integer;
    matching the N target tracking frames with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  2. The method according to claim 1, characterized in that matching the N target tracking frames with the M target object frames based on the Hungarian algorithm to update the preset target tracking frame set comprises:
    matching the N target tracking frames with the M target object frames based on the Hungarian algorithm to determine successfully matched target object frames among the M target object frames, unmatched target object frames, and unmatched preset target tracking frames in the preset target tracking frame set;
    adding the unmatched target object frames to the preset target tracking frame set, deleting the unmatched preset target tracking frames from the preset target tracking frame set, and replacing the preset target tracking frames corresponding to the successfully matched target object frames with the successfully matched target object frames, to update the preset target tracking frame set.
  3. The method according to claim 1 or 2, characterized in that tracking the target image by using an optical flow tracking algorithm based on a preset target tracking frame set to determine N target tracking frames in the target image comprises:
    extracting first target feature points in a previous target image of the target image;
    acquiring, based on optical flow, second target feature points corresponding to the target feature points in the target image;
    acquiring a target tracking frame of the target image based on the second target feature points;
    acquiring a correlation between a preset target tracking frame in the preset target tracking frame set and the target tracking frame;
    when the correlation between the preset target tracking frame in the preset target tracking frame set and the target tracking frame is greater than or equal to a preset threshold, replacing the corresponding target tracking frame in the preset target tracking frame set with the target tracking frame, to update the preset target tracking frame set.
  4. The method according to claim 3, characterized in that acquiring the correlation between the preset target tracking frame in the preset target tracking frame set and the target tracking frame comprises:
    scaling the target tracking frame and the preset target tracking frame to the same size;
    calculating the correlation between the preset target tracking frame in the preset target tracking frame set and the target tracking frame based on a normalized cross correlation (NCC) function.
  5. The method according to claim 4, characterized in that the target object is a face object.
  6. A target tracking device, characterized in that the device comprises:
    an acquisition module, configured to acquire a target image, wherein the target image includes at least one target object;
    a determination module, configured to track the target image by using an optical flow tracking algorithm based on a preset target tracking frame set to determine N target tracking frames in the target image, wherein N is a positive integer;
    a detection module, configured to detect M target object frames in the target image by using an image detection algorithm, wherein M is a positive integer;
    an update module, configured to match the N target tracking frames with the M target object frames based on a Hungarian algorithm to update the preset target tracking frame set.
  7. The device according to claim 6, characterized in that the update module comprises:
    a determination unit, configured to match the N target tracking frames with the M target object frames based on the Hungarian algorithm to determine successfully matched target object frames among the M target object frames, unmatched target object frames, and unmatched preset target tracking frames in the preset target tracking frame set;
    an update unit, configured to add the unmatched target object frames to the preset target tracking frame set, delete the unmatched preset target tracking frames from the preset target tracking frame set, and replace the preset target tracking frames corresponding to the successfully matched target object frames with the successfully matched target object frames, to update the preset target tracking frame set.
  8. The device according to claim 6 or 7, characterized in that the determination module comprises:
    an extraction unit, configured to extract first target feature points in a previous target image of the target image;
    an acquisition unit, configured to acquire, based on optical flow, second target feature points corresponding to the target feature points in the target image;
    the acquisition unit being further configured to acquire a target tracking frame of the target image based on the second target feature points;
    the acquisition unit being further configured to acquire a correlation between a preset target tracking frame in the preset target tracking frame set and the target tracking frame;
    an update unit, configured to, when the correlation between the preset target tracking frame in the preset target tracking frame set and the target tracking frame is greater than or equal to a preset threshold, replace the corresponding target tracking frame in the preset target tracking frame set with the target tracking frame, to update the preset target tracking frame set.
  9. The device according to claim 8, characterized in that the acquisition unit acquiring the correlation between the preset target tracking frame in the preset target tracking frame set and the target tracking frame is specifically:
    scaling the target tracking frame and the preset target tracking frame to the same size;
    calculating the correlation between the preset target tracking frame in the preset target tracking frame set and the target tracking frame based on a normalized cross correlation (NCC) function.
  10. The device according to claim 9, characterized in that the target object is a face object.
  11. A target tracking device, characterized in that the target tracking device comprises a processor, the processor being configured to implement the target tracking method according to any one of claims 1 to 5 when executing a computer program stored in a memory.
  12. A computer readable storage medium having computer instructions stored thereon, characterized in that the computer instructions, when executed by a processor, implement the target tracking method according to any one of claims 1 to 5.
PCT/CN2017/111175 2016-11-29 2017-11-15 目标跟踪方法、装置及存储介质 WO2018099268A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611075159.5A CN106803263A (zh) 2016-11-29 2016-11-29 一种目标跟踪方法及装置
CN201611075159.5 2016-11-29

Publications (1)

Publication Number Publication Date
WO2018099268A1 true WO2018099268A1 (zh) 2018-06-07

Family

ID=58983962

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2017/087728 WO2018099032A1 (zh) 2016-11-29 2017-06-09 一种目标跟踪方法及装置
PCT/CN2017/111175 WO2018099268A1 (zh) 2016-11-29 2017-11-15 目标跟踪方法、装置及存储介质

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/087728 WO2018099032A1 (zh) 2016-11-29 2017-06-09 一种目标跟踪方法及装置

Country Status (2)

Country Link
CN (1) CN106803263A (zh)
WO (2) WO2018099032A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111369590A (zh) * 2020-02-27 2020-07-03 北京三快在线科技有限公司 多目标跟踪方法、装置、存储介质及电子设备

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803263A (zh) * 2016-11-29 2017-06-06 深圳云天励飞技术有限公司 一种目标跟踪方法及装置
CN108563982B (zh) * 2018-01-05 2020-01-17 百度在线网络技术(北京)有限公司 用于检测图像的方法和装置
WO2020019353A1 (zh) * 2018-07-27 2020-01-30 深圳市大疆创新科技有限公司 跟踪控制方法、设备、计算机可读存储介质
CN109325467A (zh) * 2018-10-18 2019-02-12 广州云从人工智能技术有限公司 一种基于视频检测结果的车辆跟踪方法
CN109635657B (zh) * 2018-11-12 2023-01-06 平安科技(深圳)有限公司 目标跟踪方法、装置、设备及存储介质
CN109598743B (zh) * 2018-11-20 2021-09-03 北京京东尚科信息技术有限公司 行人目标跟踪方法、装置及设备
CN111382628B (zh) * 2018-12-28 2023-05-16 成都云天励飞技术有限公司 同行判定方法及装置
CN111612813A (zh) * 2019-02-26 2020-09-01 北京海益同展信息科技有限公司 人脸追踪方法与装置
CN111551938B (zh) * 2020-04-26 2022-08-30 北京踏歌智行科技有限公司 一种基于矿区环境的无人驾驶技术感知融合方法
CN111696128B (zh) * 2020-05-27 2024-03-12 南京博雅集智智能技术有限公司 一种高速多目标检测跟踪和目标图像优选方法及存储介质
CN112528925B (zh) * 2020-12-21 2024-05-07 深圳云天励飞技术股份有限公司 行人跟踪、图像匹配方法及相关设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020159635A1 (en) * 2001-04-25 2002-10-31 International Business Machines Corporation Methods and apparatus for extraction and tracking of objects from multi-dimensional sequence data
CN101212658A (zh) * 2007-12-21 2008-07-02 北京中星微电子有限公司 一种目标跟踪方法及装置
CN104217417A (zh) * 2013-05-31 2014-12-17 张伟伟 一种视频多目标跟踪的方法及装置
WO2015052896A1 (ja) * 2013-10-09 2015-04-16 日本電気株式会社 乗車人数計測装置、乗車人数計測方法およびプログラム記録媒体
CN105243654A (zh) * 2014-07-09 2016-01-13 北京航空航天大学 一种多飞机跟踪方法及系统
CN106803263A (zh) * 2016-11-29 2017-06-06 深圳云天励飞技术有限公司 一种目标跟踪方法及装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101393609B (zh) * 2008-09-18 2013-02-13 北京中星微电子有限公司 一种目标检测跟踪方法和装置
CN103020578A (zh) * 2011-09-20 2013-04-03 佳都新太科技股份有限公司 一种基于二部图匹配的智能多目标跟踪算法
CN104063885A (zh) * 2014-07-23 2014-09-24 山东建筑大学 一种改进的运动目标检测与跟踪方法
US10664705B2 (en) * 2014-09-26 2020-05-26 Nec Corporation Object tracking apparatus, object tracking system, object tracking method, display control device, object detection device, and computer-readable medium
CN105931269A (zh) * 2016-04-22 2016-09-07 海信集团有限公司 一种视频中的目标跟踪方法及装置
CN106127807A (zh) * 2016-06-21 2016-11-16 中国石油大学(华东) 一种实时的视频多类多目标跟踪方法


Also Published As

Publication number Publication date
WO2018099032A1 (zh) 2018-06-07
CN106803263A (zh) 2017-06-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17875393

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.10.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17875393

Country of ref document: EP

Kind code of ref document: A1