CN114219830A - Target tracking method, terminal and computer readable storage medium - Google Patents

Target tracking method, terminal and computer readable storage medium

Info

Publication number
CN114219830A
CN114219830A (application CN202111320883.0A)
Authority
CN
China
Prior art keywords
target object
video frame
preset
position information
target
Prior art date
Legal status
Pending
Application number
CN202111320883.0A
Other languages
Chinese (zh)
Inventor
吴思铭
李璐一
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202111320883.0A
Publication of CN114219830A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Abstract

The invention provides a target tracking method, a terminal, and a computer-readable storage medium. The target tracking method performs target detection on an acquired first video frame, the first video frame being captured by a first image acquisition device; in response to a preset first target object not being detected in the first video frame, performs target detection on an acquired second video frame, the second video frame being captured by a second image acquisition device, wherein the first monitoring area of the first image acquisition device is a subset of the second monitoring area of the second image acquisition device; and in response to a second target object being detected in the second video frame that is identical to the first target object, determines the position information of the second target object in the second monitoring area as the position information of the first target object. By determining the position information of the second target object in the second monitoring area as the position information of the first target object, the method and device keep track of the first target object even when it disappears from the first device's view.

Description

Target tracking method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of image recognition technologies, and in particular, to a target tracking method, a terminal, and a computer-readable storage medium.
Background
Gun-ball linkage, radar-ball linkage, and panoramic close-up tracking, which are currently common in the surveillance field, all rely on devices such as radars or fixed cameras to monitor an area while a pan-tilt camera provides close-up magnification. This monitoring mode, based on linking several cameras, reduces tracking real-time performance to some extent, and the coordinate calibration of every device must be redone whenever monitoring devices in an area are added or removed. A target tracking system based on a single monitoring camera identifies and tracks the target with a standalone pan-tilt monitoring device; it avoids the coordinate calibration and linkage control processes brought by linking multiple monitoring devices and improves tracking real-time performance and accuracy to some extent, but it handles common tracking problems such as occlusion poorly. During tracking, if the target is briefly lost or occluded, the single monitoring camera cannot continue tracking, and the pan-tilt parameters of the current device must still be used to call other monitoring devices for linked tracking of the tracked target.
Disclosure of Invention
The invention mainly solves the technical problem of providing a target tracking method, a terminal, and a computer-readable storage medium, addressing the prior-art problem that a single monitoring camera cannot continue tracking a target that is briefly lost or occluded.
In order to solve the above technical problem, the first technical solution adopted by the invention is to provide a target tracking method including: performing target detection on an acquired first video frame, the first video frame being captured by a first image acquisition device; in response to a preset first target object not being detected in the first video frame, performing target detection on an acquired second video frame, the second video frame being captured by a second image acquisition device, wherein the first monitoring area of the first image acquisition device is a subset of the second monitoring area of the second image acquisition device; and in response to a second target object being detected in the second video frame that is identical to the first target object, determining the position information of the second target object in the second monitoring area as the position information of the first target object.
The target tracking method includes: in response to a preset first target object being detected in the first video frame, outputting the position information of the first target object in the first video frame.
Outputting the position information of the first target object in response to it being detected in the first video frame includes: judging whether a preset feature map of the first target object is stored for the previous video frame of the first video frame; and if the preset feature map is stored for the previous video frame, determining that the first video frame is not the first frame image containing the first target object.
Outputting the position information of the first target object in response to it being detected in the first video frame includes: detecting the first video frame to obtain a candidate target object and its position information; performing feature extraction on the candidate target object to obtain a first feature map; judging whether the similarity between the first feature map and the preset feature map of the first target object stored for the previous video frame is greater than a first threshold; if the similarity is greater than the first threshold, determining that the candidate target object is the first target object; and outputting the position information of the first target object in the first video frame.
Outputting the position information of the first target object in response to it being detected in the first video frame further includes: if no preset feature map is stored for the previous video frame, determining that the first video frame is the first frame image containing the first target object, and outputting the position information of the first target object in the first video frame.
Performing target detection on the acquired second video frame in response to the preset first target object not being detected in the first video frame includes: if the preset first target object is not detected in the first video frame, acquiring at least one second video frame within a preset time period and performing target detection on the at least one second video frame. The preset time period either includes the capture time of the first video frame plus a first duration, or consists of a second duration after the capture time of the first video frame.
Before determining the position information of the second target object in the second monitoring area as the position information of the first target object, the method includes: performing feature extraction on the second target object to obtain a second feature map, and judging whether the similarity between the second feature map and the preset feature map exceeds a second threshold. Determining the position information of the second target object in the second monitoring area as the position information of the first target object includes: if the similarity between the second feature map and the preset feature map exceeds the second threshold, determining the position information of the second target object in the second monitoring area as the position information of the first target object.
The target tracking method further includes: if the similarity between the second feature map and the preset feature map does not exceed the second threshold, deleting the preset feature map and outputting preset position information.
In order to solve the above technical problems, the second technical solution adopted by the present invention is: there is provided a terminal comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor being configured to execute the computer program to implement the steps of the above target tracking method.
In order to solve the above technical problems, the third technical solution adopted by the present invention is: there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above target tracking method.
The invention has the beneficial effects that: different from the prior art, a target tracking method, a terminal, and a computer-readable storage medium are provided. The target tracking method performs target detection on an acquired first video frame captured by a first image acquisition device; in response to a preset first target object not being detected in the first video frame, performs target detection on an acquired second video frame captured by a second image acquisition device, wherein the first monitoring area of the first image acquisition device is a subset of the second monitoring area of the second image acquisition device; and in response to a second target object being detected in the second video frame that is identical to the first target object, determines the position information of the second target object in the second monitoring area as the position information of the first target object. Thus, when the preset first target object is not detected in the first video frame acquired by the first device, target detection is performed on the second video frame acquired by the second device; when comparison shows that the detected second target object is the same as the preset first target object, the position of the second target object in the second monitoring area is taken as the position of the first target object. This prevents the target object from being lost when it disappears from the first device's view and thereby improves the stability of tracking.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a target tracking method provided by the present invention;
FIG. 2 is a schematic flow chart diagram illustrating a target tracking method according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of one embodiment of a terminal provided by the present invention;
FIG. 4 is a schematic block diagram of one embodiment of a computer-readable storage medium provided by the present invention.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The term "and/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
In order to make those skilled in the art better understand the technical solution of the present invention, a target tracking method provided by the present invention is further described in detail below with reference to the accompanying drawings and the detailed description.
In this application, the image acquisition equipment is a binocular (dual-lens) camera comprising a first image acquisition device and a second image acquisition device. The two devices track the target object simultaneously from the same viewing angle but at different zoom levels; for example, they are mounted on the same unit. Each device corresponds to one vision sensor and is connected to its own main chip. The target object may be a pedestrian, a vehicle, or the like. The first image acquisition device is the main lens, a zoom lens that calibrates and tracks the target object within a small field of view to capture more target detail; that is, the first image acquisition device images the first monitoring area. The second image acquisition device is an auxiliary lens, which may be a fixed-focus or zoom lens, and tracks the target object within a large field of view; that is, the second image acquisition device images the second monitoring area. The first monitoring area of the first image acquisition device is a subset of the second monitoring area of the second image acquisition device; that is, the first monitoring area is a partial area of the second monitoring area, which facilitates capturing more target detail.
The lens centers of the first and second image acquisition devices correspond to the same position in a PTZ (Pan/Tilt/Zoom) coordinate system. That is, the two devices track the same target object simultaneously, and the coordinates of the target object in an image captured by the first device are the same as its coordinates in an image captured by the second device; both are the coordinates of the target object in the second monitoring area. Therefore, when the first image acquisition device fails to capture an image containing the preset target object, the second image acquisition device captures an image, and the position of the preset target object in the second monitoring area can be determined from its position in that image.
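Because the two lens centers coincide in the PTZ coordinate system, a detection in either lens's image can stand in for the target's position in the shared (second) monitoring area. Below is a minimal sketch of turning a detection box into such a position; the normalized-center convention and the function name are assumptions, since the patent does not specify a coordinate format:

```python
def bbox_center_normalized(bbox, frame_w, frame_h):
    """Convert a detection box (x, y, w, h) in pixels to a normalized
    center point in [0, 1] x [0, 1].  With both lens centers at the same
    PTZ position, this point can serve as the target's position in the
    common (second) monitoring area.  Illustrative convention only."""
    x, y, w, h = bbox
    return ((x + w / 2) / frame_w, (y + h / 2) / frame_h)
```

For example, a box at (100, 50) of size 200 x 100 in a 1920 x 1080 frame maps to roughly (0.104, 0.093).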
Referring to fig. 1, fig. 1 is a schematic flow chart of a target tracking method according to the present invention. This embodiment provides a target tracking method suitable for tracking a target object in a dual-lens scene: without linking separate monitoring devices, images are acquired on a local pan-tilt in a dual-lens manner, and the motion of the local pan-tilt is controlled according to its motion parameters to track the target object. The target tracking method includes the following steps.
S11: perform target detection on the acquired first video frame, the first video frame being captured by a first image acquisition device.
Specifically, the first video frame is detected to obtain a candidate target object and its position information; feature extraction is performed on the candidate target object to obtain a first feature map; and it is judged whether the similarity between the first feature map and the preset feature map of the first target object stored for the previous video frame is greater than a first threshold. If the similarity is greater than the first threshold, the candidate target object is determined to be the first target object, and the position information of the first target object in the first video frame is output. If the similarity is not greater than the first threshold, it is determined that the preset first target object is not detected in the first video frame.
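The detect-and-verify gate of step S11 can be sketched as follows; the cosine similarity measure and the 0.8 threshold are illustrative assumptions, as the patent fixes neither a metric nor a threshold value:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flattened feature maps."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def is_first_target(candidate_feat, preset_feat, first_threshold=0.8):
    """S11 gate: accept the candidate as the preset first target object
    only when its feature map is similar enough to the stored preset one."""
    return cosine_similarity(candidate_feat, preset_feat) > first_threshold
```

An identical feature vector scores 1.0 and passes the gate; an orthogonal one scores 0.0 and is rejected, which triggers the fallback to the second video frame.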
S12: in response to the preset first target object not being detected in the first video frame, perform target detection on the acquired second video frame, the second video frame being captured by a second image acquisition device.
Specifically, if the preset first target object is not detected in the first video frame, at least one second video frame within a preset time period is acquired, and target detection is performed on the second video frame(s) to obtain at least one second target object and its position information. The preset time period either includes the capture time of the first video frame plus a first duration, or consists of a second duration after the capture time of the first video frame.
In this embodiment, feature extraction is performed on the second target object to obtain a second feature map, and it is judged whether the similarity between the second feature map and the preset feature map exceeds a second threshold. If it does, the second target object is determined to be the first target object. If it does not, the preset feature map is deleted and preset position information is output.
S13: in response to detecting the second target object in the second video frame, and the second target object is the same as the first target object, determining the position information of the second target object in the second monitoring area as the position information of the first target object.
Specifically, if the similarity between the second feature map and the preset feature map exceeds a second threshold, it is determined that the second target object is the first target object, and the position information of the second target object is determined as the position information of the first target object.
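Steps S12 and S13 together amount to a fallback search over the wide-view frames. A minimal sketch follows, with the detector, feature extractor, and similarity measure injected as stand-ins, since the patent leaves their concrete implementations open:

```python
def recover_target(second_frames, preset_feat, detect, extract, similarity,
                   second_threshold=0.8):
    """S12-S13 sketch: when the narrow-view frame loses the target, scan
    the wide-view frames captured within the preset time period.
    `detect(frame)` yields (object, position) pairs, `extract(object)`
    returns a feature map, and `similarity(a, b)` scores two feature
    maps; all three, and the 0.8 threshold, are illustrative stand-ins.
    Returns (position, feature_map) of the first match, or (None, None)
    when no match is found and the stored preset feature map should be
    deleted."""
    for frame in second_frames:
        for obj, pos in detect(frame):
            feat = extract(obj)
            if similarity(feat, preset_feat) > second_threshold:
                return pos, feat
    return None, None
```

On a match, the returned position is the target's position in the second monitoring area, and the returned feature map can replace the stored preset one.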
This embodiment provides a target tracking method: perform target detection on an acquired first video frame captured by a first image acquisition device; in response to a preset first target object not being detected in the first video frame, perform target detection on an acquired second video frame captured by a second image acquisition device, wherein the first monitoring area of the first image acquisition device is a subset of the second monitoring area of the second image acquisition device; and in response to a second target object being detected in the second video frame that is identical to the first target object, determine the position information of the second target object in the second monitoring area as the position information of the first target object. Thus, when the first target object disappears from the narrow view of the first image acquisition device, its position is recovered from the wide view of the second image acquisition device, which prevents the target object from being lost and improves the stability of tracking.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a target tracking method according to an embodiment of the present invention. The target tracking method is suitable for tracking a target object in a dual-lens scene: without linking separate monitoring devices, images are acquired on a local pan-tilt in a dual-lens manner, and the motion of the local pan-tilt is controlled according to its motion parameters to track the target object. The target tracking method includes the following steps.
S201: a first video frame is acquired.
Specifically, the current video frame is acquired by the first image acquisition device and used as the first video frame. The first video frame is captured within a small field of view. It may contain the first target object, or it may not, i.e., the target is briefly lost or occluded by other targets. The first target object is the preset target object, i.e., the target object set to be tracked; it may be a person or a vehicle.
S202: perform target detection on the first video frame to obtain a candidate target object and its position information.
Specifically, target detection is performed on the first video frame through a target detection network to obtain a detection box for the candidate target object, i.e., the candidate target object in the first video frame and its position information. In a specific embodiment, the detection box can be obtained by running the first video frame through one of the Fast R-CNN, YOLO, or SSD detection networks.
S203: perform feature extraction on the candidate target object to obtain a first feature map.
Specifically, feature extraction is performed on the detected candidate target object through a feature extraction network to obtain the first feature map corresponding to the candidate target object. In a specific embodiment, the first feature map may be obtained by applying one of Histogram of Oriented Gradients (HOG), Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), or SuperPoint to the candidate target object inside the detection box.
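As a toy illustration of what an HOG-style feature map contains, the sketch below builds an unnormalized gradient-orientation histogram over a grayscale patch; a real system would use one of the full extractors named above (HOG, SIFT, SURF, or SuperPoint), with cell blocks and normalization that are omitted here:

```python
import math

def orientation_histogram(gray, bins=9):
    """Toy HOG-style descriptor: an unnormalized histogram of gradient
    orientations over a 2-D grayscale patch (a list of rows of
    intensities).  Each interior pixel votes into one of `bins`
    orientation bins, weighted by its gradient magnitude."""
    h = [0.0] * bins
    rows, cols = len(gray), len(gray[0])
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = gray[i][j + 1] - gray[i][j - 1]   # horizontal gradient
            gy = gray[i + 1][j] - gray[i - 1][j]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned angle
            h[min(int(ang / (180 / bins)), bins - 1)] += mag
    return h
```

The resulting fixed-length vector is the kind of "feature map" whose similarity to the stored preset one is judged in step S204.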
S204: judge whether the similarity between the first feature map and the preset feature map of the first target object stored for the previous video frame is greater than a first threshold.
Specifically, to confirm whether the detected candidate target object is the preset first target object, it must be determined whether the first feature map corresponding to the candidate target object matches the preset feature map of the first target object, i.e., whether the similarity between the first feature map and the preset feature map is greater than the first threshold. In one embodiment, when the first video frame is not the first frame image containing the first target object, the similarity is calculated between the first feature map and the preset feature map of the first target object stored for the previous video frame; the preset feature map stored for any historical video frame before the first video frame may also be used. In another embodiment, when the first video frame is the first frame image containing the first target object, the similarity is calculated between the first feature map and a pre-stored preset feature map of the first target object.
In one embodiment, whether the first feature map matches the preset feature map may be determined using the Fast Library for Approximate Nearest Neighbors (FLANN), brute-force (BF) matching, or the Euclidean distance.
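A brute-force match of the kind mentioned here can be sketched in a few lines; the Euclidean metric is from the text, while the distance threshold and the descriptor-set representation are illustrative assumptions:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def brute_force_match(query_descs, stored_descs, max_dist=0.5):
    """Toy brute-force (BF) matcher: pair each query descriptor with its
    nearest stored descriptor by Euclidean distance, keeping only pairs
    closer than `max_dist`.  A production system would use OpenCV's
    BFMatcher or FLANN, as the text suggests."""
    matches = []
    for qi, q in enumerate(query_descs):
        di, d = min(enumerate(euclidean(q, s) for s in stored_descs),
                    key=lambda t: t[1])
        if d <= max_dist:
            matches.append((qi, di, d))
    return matches
```

The fraction of query descriptors that find a close stored partner can then serve as the similarity score compared against the first threshold.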
If the similarity between the first feature map and the preset feature map is greater than the first threshold, jump to step S205; if it is not greater than the first threshold, jump to step S207.
S205: determine that the candidate target object is the first target object.
Specifically, if the similarity between the first feature map and the preset feature map is greater than the first threshold, it indicates that the first feature map is the same as the preset feature map, and it is determined that the candidate target object corresponding to the first feature map is the preset first target object.
S206: output the position information of the first target object in the first video frame.
Specifically, when the candidate target object is determined to be a preset first target object, it indicates that the first target object is detected in the first video frame, and the position information of the first target object in the first video frame is output as the position information of the first target object in the second monitoring area, so as to track the first target object.
S207: determining that the preset first target object is not detected in the first video frame.
Specifically, if the similarity between the first feature map and the preset feature map is not greater than the first threshold, it indicates that the first feature map is different from the preset feature map, and it is determined that the candidate target object corresponding to the first feature map is not the preset first target object. If all the candidate target objects detected in the first video frame obtained by comparison are not the preset first target object, it is indicated that the first target object is lost in the first video frame or is blocked by other targets.
Because the first image acquisition device images only a small field of view, once the first target object is lost from the first video frame, the first device cannot continue tracking it; the second image acquisition device, which images a large field of view, is needed to determine the position information of the first target object.
S208: a second video frame is acquired.
Specifically, the second image acquisition device acquires all video frames within a time period that includes the capture time of the first video frame and a first duration thereafter as second video frames; alternatively, it acquires all video frames within a second duration after the capture time of the first video frame. There may be one second video frame or several. The second video frames cover a large field of view, which makes it easier to obtain an image containing the target object.
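Selecting the second video frames for either variant of the preset time period can be sketched as a timestamp filter; the `(timestamp, frame)` pair representation is an assumption:

```python
def frames_in_window(frames, t_loss, duration, include_loss_time=True):
    """S208 sketch: pick the wide-view frames whose timestamps fall in
    the preset period.  With `include_loss_time` True the window is
    [t_loss, t_loss + duration] (loss time plus a first duration);
    otherwise it is (t_loss, t_loss + duration] (a second duration
    strictly after the loss time).  `frames` is a list of
    (timestamp, frame) pairs."""
    if include_loss_time:
        return [f for t, f in frames if t_loss <= t <= t_loss + duration]
    return [f for t, f in frames if t_loss < t <= t_loss + duration]
```

The selected frames are then fed to the detector in step S209, one by one, until the target is re-identified or the window is exhausted.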
S209: perform target detection on the second video frame to obtain at least one second target object and its position information.
Specifically, target detection is performed on the second video frame through a target detection network to obtain a detection box for the second target object, i.e., the second target object in the second video frame and its position information. In a specific embodiment, the detection box can be obtained by running the second video frame through one of the Fast R-CNN, YOLO, or SSD detection networks.
S210: and performing feature extraction on the second target object to obtain a second feature map.
Specifically, feature extraction is performed on the detected second target object through a feature extraction network to obtain a second feature map corresponding to the second target object. In a specific embodiment, the second feature map may be obtained by applying one of the HOG, SIFT, SURF, and SuperPoint feature extractors to the second target object contained in the detection box.
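To illustrate step S210, a toy descriptor is enough: map the pixels inside a detection box to a fixed-length feature vector. A real system would use HOG, SIFT, SURF, or SuperPoint as the text says; the normalized intensity histogram below is only a stand-in:

```python
def histogram_feature(patch, bins=8, max_val=256):
    """Toy appearance descriptor for an image patch (2-D list of grayscale
    values in [0, max_val)): a normalized intensity histogram.  Stand-in
    for HOG/SIFT/SURF/SuperPoint, illustrating only the idea of turning a
    detected patch into a fixed-length feature vector."""
    hist = [0] * bins
    n = 0
    for row in patch:
        for v in row:
            hist[v * bins // max_val] += 1
            n += 1
    return [h / n for h in hist]

patch = [[0, 32], [128, 255]]
print(histogram_feature(patch, bins=4))  # [0.5, 0.0, 0.25, 0.25]
```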
S211: judging whether the similarity between the second feature map and the preset feature map exceeds a second threshold.
Specifically, to confirm whether the detected second target object is the preset first target object, it must be judged whether the second feature map corresponding to the second target object matches the preset feature map corresponding to the preset first target object, i.e., whether the similarity between the second feature map and the preset feature map is greater than the second threshold. The second threshold may be the same as or different from the first threshold.
If the similarity between the second feature map and the preset feature map is greater than the second threshold, proceed to step S212; otherwise, proceed to step S213.
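The S211 comparison can be sketched with any feature-vector similarity; cosine similarity and a threshold of 0.8 are assumptions here, since the patent does not fix a particular measure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_same_target(feat, preset_feat, second_threshold=0.8):
    """True when the second feature map matches the preset feature map of
    the first target object closely enough to be treated as the same target."""
    return cosine_similarity(feat, preset_feat) > second_threshold

print(is_same_target([1.0, 0.0, 0.2], [0.9, 0.1, 0.25]))  # True
print(is_same_target([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))   # False
```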
S212: the position information of the second target object is determined as the position information of the first target object.
Specifically, if the similarity between the second feature map and the preset feature map exceeds the second threshold, the position information of the second target object in the second monitoring area is determined as the position information of the first target object. The position information of the second target object in the second monitoring area and the second feature map are sent to the first image acquisition device, which updates the second feature map as the preset feature map for the first video frame and saves it. The first image acquisition device outputs the position information of the second target object in the second monitoring area and continues tracking the first target object according to the received position information.
S213: and deleting the preset feature map and outputting preset position information.
Specifically, if the similarity between the second feature map and the preset feature map does not exceed the second threshold, the first target object has completely disappeared and no longer needs to be tracked; the preset feature map is deleted, and preset position information indicating that tracking has finished is output. For example, the preset position information is the origin of the coordinate system.
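Steps S212 and S213 together form the handoff decision: adopt a matching second target's position and refresh the stored feature map, or delete the feature map and report the preset "tracking finished" position. A minimal sketch, with a pluggable comparator and a trivial equality comparator for the demo (all names hypothetical):

```python
ORIGIN = (0, 0)  # preset position information meaning "tracking finished"

def handoff(second_objects, preset_feat, similarity, second_threshold=0.8):
    """Decide between steps S212 and S213.

    second_objects: list of (position, feature_map) pairs detected in the
    second monitoring area.  If a detection matches the stored preset
    feature map, its position is adopted as the first target object's
    position and the preset feature map is refreshed (S212); otherwise the
    target is considered gone, the preset feature map is deleted, and the
    origin is reported (S213)."""
    state = {"preset_feat": preset_feat}
    for pos, feat in second_objects:
        if similarity(feat, state["preset_feat"]) > second_threshold:
            state["preset_feat"] = feat   # update the preset feature map
            return pos, state             # S212: hand the position back
    state["preset_feat"] = None           # S213: delete and end tracking
    return ORIGIN, state

sim = lambda a, b: 1.0 if a == b else 0.0  # trivial comparator for the demo
pos, state = handoff([((5, 7), "red-coat")], "red-coat", sim)
print(pos)  # (5, 7)
pos, state = handoff([((5, 7), "blue-coat")], "red-coat", sim)
print(pos)  # (0, 0)
```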
In an optional embodiment, in order to obtain the preset feature map saved for the video frame containing the first target object that is closest to the current video frame, it is judged whether the previous video frame saved a preset feature map corresponding to the first target object. If the previous video frame saved the preset feature map corresponding to the first target object, the first video frame is determined not to be the first frame image containing the first target object. If the previous video frame saved no preset feature map, the first video frame is determined to be the first frame image containing the first target object, and the position information of the first target object in the first video frame is output.
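That first-appearance check amounts to a lookup in a per-frame feature-map store; the dictionary keyed by frame id below is a hypothetical stand-in for however the device actually saves feature maps:

```python
def is_first_appearance(feature_store, prev_frame_id):
    """The current frame is the first frame image containing the target
    iff the previous video frame saved no preset feature map for it.
    feature_store: hypothetical per-frame store, frame id -> feature map."""
    return feature_store.get(prev_frame_id) is None

store = {10: None, 11: [0.5, 0.25, 0.0, 0.25]}
print(is_first_appearance(store, 10))  # True:  frame 11 is a first appearance
print(is_first_appearance(store, 11))  # False: frame 12 is not
```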
This embodiment provides a target tracking method: target detection is performed on an obtained first video frame acquired through a first image acquisition device; in response to the preset first target object not being detected in the first video frame, target detection is performed on an obtained second video frame acquired through a second image acquisition device, where the first monitoring area of the first image acquisition device is a subset of the second monitoring area of the second image acquisition device; and in response to a second target object being detected in the second video frame that is the same as the first target object, the position information of the second target object in the second monitoring area is determined as the position information of the first target object. By falling back to the second video frame when the preset first target object is not found in the first video frame, and adopting the position of a matching second target object as the position of the first target object, the method prevents a target object that disappears from the first image acquisition device's view from being lost, thereby improving the stability of target tracking.
Referring to fig. 3, fig. 3 is a schematic block diagram of an embodiment of a terminal provided by the present invention. The terminal 70 in this embodiment includes a processor 71, a memory 72, and a computer program stored in the memory 72 and executable on the processor 71; when executed by the processor 71, the computer program implements the target tracking method described above, which is not repeated here to avoid duplication.
Referring to fig. 4, fig. 4 is a schematic block diagram of an embodiment of a computer-readable storage medium provided by the present invention.
The embodiment of the present application further provides a computer-readable storage medium 90, where the computer-readable storage medium 90 stores a computer program 901, the computer program 901 includes program instructions, and a processor executes the program instructions to implement the target tracking method provided in the embodiment of the present application.
The computer-readable storage medium 90 may be an internal storage unit of the computer device of the foregoing embodiment, such as a hard disk or a memory of the computer device. The computer-readable storage medium 90 may also be an external storage device of the computer device, such as a plug-in hard disk provided on the computer device, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A target tracking method, characterized in that the target tracking method comprises:
carrying out target detection on an obtained first video frame, wherein the first video frame is acquired by first image acquisition equipment;
in response to that the first video frame does not detect a preset first target object, performing target detection on an obtained second video frame, wherein the second video frame is acquired through second image acquisition equipment; wherein the first monitoring area of the first image capturing device is a subset of the second monitoring area of the second image capturing device;
in response to detecting a second target object in the second video frame, and the second target object is the same as the first target object, determining the position information of the second target object in the second monitoring area as the position information of the first target object.
2. The target tracking method according to claim 1, characterized in that the target tracking method comprises:
and responding to the first video frame to detect the preset first target object, and outputting the position information of the first target object in the first video frame.
3. The target tracking method of claim 2,
the outputting the position information of the first target object in the first video frame in response to the first video frame detecting the preset first target object comprises:
judging whether a previous video frame of the first video frame stores a preset feature map of the first target object;
and if the preset feature map is saved in the last video frame, determining that the first video frame is not a first frame image containing the first target object.
4. The object tracking method according to claim 3,
the outputting the position information of the first target object in the first video frame in response to the first video frame detecting the preset first target object comprises:
detecting the first video frame to obtain a candidate target object and position information of the candidate target object;
extracting the features of the candidate target object to obtain a first feature map;
judging whether the similarity between the first feature map and a preset feature map of the first target object stored in the previous video frame is greater than a first threshold value or not;
if the similarity between the first feature map and the preset feature map is larger than the first threshold, determining that the candidate target object is the first target object;
outputting the position information of the first target object in the first video frame.
5. The object tracking method according to claim 3,
the outputting the position information of the first target object in the first video frame in response to the first video frame detecting the preset first target object further includes:
and if the preset feature map is not saved in the last video frame, determining that the first video frame is the first frame image containing the first target object, and outputting the position information of the first target object in the first video frame.
6. The object tracking method according to claim 3,
the performing, in response to the first video frame not detecting the preset first target object, target detection on the acquired second video frame includes:
if the preset first target object is not detected in the first video frame, acquiring at least one second video frame within a preset time period;
performing target detection on the at least one second video frame;
the preset time period comprises a time period including the acquisition time of the first video frame and a first time length later; or the preset time period comprises a time period comprising a second duration after the first video frame acquisition time.
7. The target tracking method of claim 6,
before the determining, in response to detecting a second target object in the second video frame and the second target object being the same as the first target object, the position information of the second target object in the second monitoring area as the position information of the first target object, the method comprises:
performing feature extraction on the second target object to obtain a second feature map;
judging whether the similarity between the second feature map and the preset feature map exceeds a second threshold;
the determining, in response to detecting a second target object in the second video frame and the second target object being the same as the first target object, position information of the second target object in the second monitoring area as position information of the first target object includes:
and if the similarity between the second feature map and the preset feature map exceeds the second threshold, determining the position information of the second target object in the second monitoring area as the position information of the first target object.
8. The object tracking method according to claim 7, further comprising:
and if the similarity between the second feature map and the preset feature map does not exceed the second threshold, deleting the preset feature map and outputting preset position information.
9. A terminal, characterized in that the terminal comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor being configured to execute the computer program to implement the steps in the target tracking method according to any one of claims 1-8.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the object tracking method according to any one of claims 1 to 8.
CN202111320883.0A 2021-11-09 2021-11-09 Target tracking method, terminal and computer readable storage medium Pending CN114219830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111320883.0A CN114219830A (en) 2021-11-09 2021-11-09 Target tracking method, terminal and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN114219830A true CN114219830A (en) 2022-03-22

Family

ID=80696728




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination