CN109543534B - Method and device for re-detecting lost target in target tracking - Google Patents


Publication number: CN109543534B
Authority: CN (China)
Prior art keywords: search, target, frame, similarity, similar
Legal status: Active
Application number: CN201811231109.0A
Other languages: Chinese (zh)
Other versions: CN109543534A (en)
Inventors: 陈翔宇, 张一帆
Current assignee: Zhongke Fangcun Zhiwei (Nanjing) Technology Co., Ltd.
Original assignee: Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences
Application filed by Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences
Priority to CN201811231109.0A
Publication of CN109543534A, then grant and publication of CN109543534B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42: Higher-level, semantic clustering, classification or understanding of sport video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes

Abstract

An embodiment of the invention provides a method and a device for re-detecting a lost target in target tracking. A similar search box is determined by randomly sampling pixel points of the current frame image, which narrows the search range; the calculation range of the similar search box is then enlarged to obtain a target search area, and the position of the target is determined precisely from the positions of the search boxes of the pixel points in the target search area. By narrowing the search range, this technical scheme reduces the time complexity of target re-detection and efficiently handles target loss during tracking, making the tracker more robust to moving objects.

Description

Method and device for re-detecting lost target in target tracking
Technical Field
Embodiments of the invention relate to the technical field of target tracking, and in particular to a method and a device for re-detecting a lost target in target tracking.
Background
Target tracking is the process of determining the position of a target in consecutive frame images. It is an important topic in computer vision research with wide applications, such as video surveillance, human-computer interaction, and autonomous driving.
During target tracking, conditions appearing in the video, such as background clutter, illumination changes, partial or full occlusion, changes in target pose, and rapid target motion, can cause the target to be lost, so that continuous tracking of the target fails.
When the target is lost, the target search area of the tracking algorithm must be redefined. Because the time complexity of the tracking algorithm is proportional to the image size, performing an exhaustive search over the whole image to re-determine the target search area costs a large amount of time. Since a target tracker demands high processing speed, this increases the tracker's burden and degrades the user experience.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, embodiments of the present invention provide a method and an apparatus for re-detecting a lost target in target tracking.
In view of this, in a first aspect, an embodiment of the present invention provides a method for re-detecting a lost target in target tracking, including:
when a tracking target is lost, randomly selecting a plurality of pixel points on a current frame image as search position points of the target;
selecting one search frame from the search frames corresponding to the plurality of search position points as a similar search frame according to the position of the search frame corresponding to the search position point;
enlarging the calculation range of the similar search box to obtain a search area;
and determining the position of the target according to the position of the search frame corresponding to each pixel point in the search area.
Optionally, selecting one search frame from the search frames corresponding to the plurality of search location points as a similar search frame according to the location of the search frame corresponding to the search location point, where the selecting includes:
respectively calculating the similarity between the search frame corresponding to each of the plurality of search position points and the target frame in the previous frame of image;
comparing the calculated similarity;
and selecting the search box corresponding to the maximum similarity in the calculated similarities as a similar search box.
Optionally, enlarging the calculation range of the similar search box to obtain a search area, including:
padding the similar search box on its top, bottom, left, and right sides by interpolation, so as to expand the calculation range of the similar search box up to a threshold;
and taking the calculation range of the similar search box after padding as a search area.
Optionally, determining the position of the target according to the position of each pixel point in the search area includes:
calculating the similarity between a search frame corresponding to each pixel point in the search area and a target frame in the previous frame of image;
comparing the calculated similarity;
and selecting the position of the search box corresponding to the maximum similarity in the calculated similarities as the position of the target.
Optionally, the similarity is obtained by computing a Euclidean distance, a Manhattan distance, or a Minkowski distance, or by using a twin network.
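As an illustration of these optional similarity measures, the Minkowski family of distances between two image patches can be sketched as follows. This is not part of the disclosed embodiments: the function name, the use of raw grayscale patches as features, and the convention of negating a distance so that larger values mean more similar are all assumptions:

```python
import numpy as np

def patch_similarity(patch_a, patch_b, metric="euclidean", p=3):
    """Similarity between two equally sized patches as a negated
    Minkowski-family distance (larger value = more similar)."""
    diff = np.abs(patch_a.astype(np.float64) - patch_b.astype(np.float64)).ravel()
    if metric == "euclidean":   # Minkowski distance with p = 2
        return -np.sqrt(np.sum(diff ** 2))
    if metric == "manhattan":   # Minkowski distance with p = 1
        return -np.sum(diff)
    if metric == "minkowski":   # general p-norm
        return -np.sum(diff ** p) ** (1.0 / p)
    raise ValueError(f"unknown metric: {metric}")
```

A twin (Siamese) network would replace this hand-crafted distance with a distance in a learned embedding space; its architecture is not specified in the patent.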
In a second aspect, an embodiment of the present invention provides an apparatus for detecting missing targets in target tracking, including:
the random sampling module is used for randomly selecting a plurality of pixel points on the current frame image as target searching position points;
the rough searching module is used for selecting one searching frame from the searching frames corresponding to the plurality of searching position points as a similar searching frame according to the position of the searching frame corresponding to the searching position point;
the search area determining module is used for enlarging the calculation range of the similar search box to obtain a search area;
and the target position determining module is used for determining the position of the target according to the position of the search frame corresponding to each pixel point in the search area.
Optionally, the rough search module includes:
the first similarity calculation module is used for calculating the similarity between the search frame corresponding to each of the plurality of search position points and the target frame in the previous frame image;
the first comparison module is used for comparing the calculated similarity;
and the first selecting module is used for selecting the search box corresponding to the maximum similarity in the calculated similarities as a similar search box.
Optionally, the search area determining module pads the similar search box on its top, bottom, left, and right sides by interpolation, expands the calculation range of the similar search box up to a threshold, and uses the padded calculation range of the similar search box as the search area.
Optionally, the target position determining module includes:
the second calculation module is used for calculating the similarity between a search frame corresponding to each pixel point in the search area and a target frame in a previous frame of image;
the second comparison module is used for comparing the calculated similarity;
and the second selecting module is used for selecting the position of the search box corresponding to the maximum similarity in the calculated similarities as the position of the target.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including:
a processor, a memory, a communication interface, and a bus;
the processor, the memory and the communication interface complete mutual communication through the bus;
the communication interface is used for information transmission between external devices;
the processor is configured to invoke program instructions in the memory to perform the steps of the method of the first aspect.
In a fourth aspect, an embodiment of the present invention also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the steps of the method according to the first aspect.
Compared with the prior art, the method for re-detecting a lost target in target tracking provided by the embodiment of the invention determines a similar search box by randomly sampling pixel points of the current frame image, which narrows the search range; it then enlarges the calculation range of the similar search box to obtain the target search area, and determines the position of the target precisely from the positions of the search boxes of the pixel points in the target search area. By narrowing the search range, this technical scheme reduces the time complexity of target re-detection and efficiently handles target loss during tracking, making the tracker more robust to moving objects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a flowchart of a method for detecting missing targets in target tracking according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an apparatus for detecting missing targets in target tracking according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a search location point and a target point according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of another search location point and target point according to an embodiment of the present invention;
fig. 5 is a schematic diagram of another search location point and target point according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Fig. 1 is a flowchart of a method for detecting missing targets in target tracking according to an embodiment of the present invention, where the method includes: coarse search and fine search;
the rough search includes steps S1-S2 as follows:
s1, after a tracking target is lost, randomly selecting a plurality of pixel points on a current frame image as search position points of the target;
specifically, in this embodiment of the present application, the current frame image is a first frame image collected by a target tracker after the target is lost, and the target in a previous frame image is not lost.
S2, selecting a search frame from the search frames corresponding to the plurality of search position points as a similar search frame according to the position of the search frame corresponding to the search position point;
specifically, in this embodiment of the present application, selecting one search frame from search frames corresponding to a plurality of search position points as a similar search frame according to a position of the search frame corresponding to the search position point includes:
each search position point corresponds to a search box taking the position of the search position point as a center position, and the size of the search box is determined by a target tracking algorithm adopted by target tracking.
Calculating the similarity between the search frame corresponding to each of the plurality of search position points and the target frame in the previous frame image, where the target frame is the search frame centered at the position of the target; specifically, in this embodiment of the application, the similarity can be obtained by computing a Euclidean distance, a Manhattan distance, or a Minkowski distance, or by using a twin network;
comparing the calculated similarity;
and selecting the search frame with the maximum calculated similarity as the similar search frame. Because the search position points are randomly selected, as long as some search position point lies near the true position of the target, the similarity between its search frame and the target frame in the previous frame image will be greater than that of the search frames of the other search position points, so a position near the target, namely the position inside the similar search frame, can be located.
The time complexity of the rough search is related only to the number of selected search position points.
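The rough search of steps S1-S2 can be sketched as follows, under the assumptions that images are grayscale arrays, the search box has the same size as the target box, and the similarity is a negated Euclidean distance; the function and parameter names are illustrative, not the patented implementation:

```python
import numpy as np

def coarse_search(frame, target_patch, num_points=100, rng=None):
    """Rough search: randomly sample centre positions, compare the search
    box at each centre with the target box from the previous frame, and
    return the centre and similarity of the most similar box."""
    rng = np.random.default_rng(rng)
    bh, bw = target_patch.shape[:2]   # search box size = target box size
    H, W = frame.shape[:2]
    tgt = target_patch.astype(np.float64)
    best_sim, best_center = -np.inf, None
    for _ in range(num_points):
        # sample a centre far enough from the border that the box fits
        cy = int(rng.integers(bh // 2, H - (bh - bh // 2) + 1))
        cx = int(rng.integers(bw // 2, W - (bw - bw // 2) + 1))
        patch = frame[cy - bh // 2: cy - bh // 2 + bh,
                      cx - bw // 2: cx - bw // 2 + bw].astype(np.float64)
        sim = -np.linalg.norm(patch - tgt)   # negated Euclidean distance
        if sim > best_sim:
            best_sim, best_center = sim, (cy, cx)
    return best_center, best_sim
```

The loop visits `num_points` positions regardless of image size, which is what keeps the rough search linear in the number of sampled points.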
The fine search includes steps S3-S4 as follows:
s3, enlarging the calculation range of the similar search box to obtain a search area;
Specifically, in this embodiment of the application, the search area is obtained by padding the similar search box. Padding is a commonly used technique in target tracking: on the basis of the similar search box currently being processed, its top, bottom, left, and right are extended by interpolation, thereby expanding the calculation range.
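A minimal sketch of this padding step, assuming the box is enlarged by a fixed factor per side and clipped to the image bounds; the patent only states that the calculation range is expanded to a threshold, so `pad_factor` and all names are assumptions:

```python
def pad_box(center, box_size, image_size, pad_factor=2.0):
    """Expand the box obtained by the rough search: grow it to
    pad_factor times its size around the same centre and clip the
    result to the image bounds."""
    cy, cx = center
    bh, bw = box_size
    H, W = image_size
    half_h = int(bh * pad_factor) // 2
    half_w = int(bw * pad_factor) // 2
    top = max(0, cy - half_h)
    left = max(0, cx - half_w)
    bottom = min(H, cy + half_h)
    right = min(W, cx + half_w)
    return top, left, bottom, right   # the search area after padding
```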
S4, determining the position of the target according to the position of a search frame corresponding to each pixel point in the search area;
specifically, in this embodiment of the present application, determining the position of the target according to the position of each pixel point in the search area includes:
calculating the similarity between the search frame corresponding to each pixel point in the search area and the target frame in the previous frame image; specifically, in this embodiment of the application, the similarity may be obtained by computing a Euclidean distance, a Manhattan distance, or a Minkowski distance, or by using a twin network;
comparing the calculated similarity;
and selecting the position of the search box corresponding to the maximum similarity in the calculated similarities as the position of the target, wherein the position of the search box is the position of the center point of the search box.
The complexity of the fine search is proportional to the search area size after padding.
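The fine search of steps S3-S4 can be sketched as an exhaustive scan of target-sized boxes over the padded search area; as above, the negated Euclidean similarity and all names are illustrative assumptions:

```python
import numpy as np

def fine_search(frame, target_patch, search_area):
    """Fine search: slide a target-sized box over every pixel position
    inside the padded search area and return the centre (row, col) and
    similarity of the most similar box."""
    top, left, bottom, right = search_area
    bh, bw = target_patch.shape[:2]
    tgt = target_patch.astype(np.float64)
    best_sim, best_center = -np.inf, None
    # every top-left corner whose box stays inside the search area
    for y in range(top, bottom - bh + 1):
        for x in range(left, right - bw + 1):
            patch = frame[y:y + bh, x:x + bw].astype(np.float64)
            sim = -np.linalg.norm(patch - tgt)
            if sim > best_sim:
                best_sim = sim
                best_center = (y + bh // 2, x + bw // 2)
    return best_center, best_sim
```

The double loop touches only the padded area, so the cost grows with the area's size rather than with the whole image.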
Compared with the prior art, the invention has the following advantages: 1. Target loss caused by, for example, excessively fast motion during tracking is handled efficiently, making the tracker more robust to moving objects. 2. The time complexity is low: the time required by the rough search is proportional only to the number of randomly selected points, reducing the quadratic time complexity of an exhaustive search over the image to a linear one. 3. The fine search preserves the accuracy of the search, thereby improving the tracking result.
Based on the same inventive concept as the target loss rechecking method in the target tracking, an embodiment of the present invention further provides a device for target loss rechecking in the target tracking, as shown in fig. 2, where the device for target loss rechecking in the target tracking includes:
the random sampling module is used for randomly selecting a plurality of pixel points on the current frame image as target searching position points;
the rough searching module is used for selecting one searching frame from the searching frames corresponding to the plurality of searching position points as a similar searching frame according to the position of the searching frame corresponding to the searching position point;
the search area determining module is used for enlarging the calculation range of the similar search box to obtain a search area;
and the target position determining module is used for determining the position of the target according to the position of the search frame corresponding to each pixel point in the search area.
The coarse search module may include:
the first similarity calculation module is used for calculating the similarity between the search frame corresponding to each of the plurality of search position points and the target frame in the previous frame image;
the first comparison module is used for comparing the calculated similarity;
and the first selecting module is used for selecting the search box corresponding to the maximum similarity in the calculated similarities as a similar search box.
The search determining module may interpolate the upper, lower, left, and right sides of the similar search box by using a padding technique, expand a calculation range of the similar search box to a threshold, and use the calculation range of the similar search box after padding as a search area.
The target location determination module may include:
the second calculation module is used for calculating the similarity between a search frame corresponding to each pixel point in the search area and a target frame in a previous frame of image;
the second comparison module is used for comparing the calculated similarity;
and the second selecting module is used for selecting the position of the search box corresponding to the maximum similarity in the calculated similarities as the position of the target.
One specific example is:
When a signal is received that the target has been lost, the target is no longer in the tracker's current search area, so the target tracker must change the search area and redefine the target search area of the tracking algorithm. To obtain the target search area efficiently, the re-detection is performed in two steps: a coarse search followed by a fine search. First, the approximate position of the target is obtained by the coarse search; then the fine search yields the accurate position of the target, handling target loss caused by, for example, excessively fast motion.
1. Coarse search by random sampling
Firstly, a plurality of search position points are randomly generated on the current frame image as candidate locations of the target. As shown in fig. 3, the white dots in the figure are randomly generated search position points, and the black dot is the target point;
Then, the similarity is calculated between the target frame generated from the previous frame image (in fig. 4, the frame centered on the black dot) and the search frame of each search position point of the current frame (in fig. 4, the frames centered on the white dots). Because the search position points are randomly selected, as long as some search position point lies near the true position of the target, the similarity between its search frame and the target frame in the previous frame image will be greater than that of the other search frames, so a position near the target can be located, namely the position of the search frame shown in fig. 4; this is the result of the rough search. The time complexity of the rough search is related only to the number of random points selected.
2. Fine search using padding method
The position obtained by the rough search is only near the target and is not necessarily the exact position of the target, so the target must be located more precisely: the search frame obtained by the rough search is padded, and the padded region is taken as the target search area (the region inside the dashed frame shown in fig. 5). The similarity between the search frame of each pixel point in the target search area and the target frame of the previous frame is then calculated, and the position of the pixel point with the largest similarity is taken as the position of the target, realizing the precise search. The complexity of the fine search is proportional to the size of the padded area.
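The two-step example above can be sketched end to end as follows; the similarity measure, the padding factor, and all names are illustrative assumptions rather than the patented implementation:

```python
import numpy as np

def redetect(frame, target_patch, num_points=200, pad_factor=2.0, rng=None):
    """Two-stage re-detection: random-sampling rough search, then an
    exhaustive fine search inside the padded best box. Returns the
    estimated target centre (row, col)."""
    rng = np.random.default_rng(rng)
    H, W = frame.shape[:2]
    bh, bw = target_patch.shape[:2]
    tgt = target_patch.astype(np.float64)

    def sim_at(y, x):
        # negated Euclidean distance of the box with top-left corner (y, x)
        return -np.linalg.norm(frame[y:y + bh, x:x + bw].astype(np.float64) - tgt)

    # rough search: cost proportional to num_points only
    ys = rng.integers(0, H - bh + 1, size=num_points)
    xs = rng.integers(0, W - bw + 1, size=num_points)
    by, bx = max(zip(ys, xs), key=lambda p: sim_at(*p))

    # padding: enlarge the best rough box and clip to the image bounds
    half_h, half_w = int(bh * pad_factor) // 2, int(bw * pad_factor) // 2
    cy, cx = by + bh // 2, bx + bw // 2
    top, left = max(0, cy - half_h), max(0, cx - half_w)
    bottom, right = min(H, cy + half_h), min(W, cx + half_w)

    # fine search: exhaustive over every box inside the padded area
    candidates = [(y, x)
                  for y in range(top, bottom - bh + 1)
                  for x in range(left, right - bw + 1)]
    y, x = max(candidates, key=lambda p: sim_at(*p))
    return y + bh // 2, x + bw // 2
```

With enough sampled points, some rough-search box overlaps the target, the padded area then covers it, and the fine search recovers the exact centre.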
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods described in the embodiments of the present invention can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention or the method according to some parts of the embodiments.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for detecting target loss in target tracking is characterized by comprising the following steps:
when a tracking target is lost, randomly selecting a plurality of pixel points on a current frame image as search position points of the target;
selecting one search frame from the search frames corresponding to the plurality of search position points as a similar search frame according to the positions of the search frames corresponding to the search position points;
enlarging the calculation range of the similar search box to obtain a search area;
and determining the position of the target according to the position of the search frame corresponding to each pixel point in the search area.
2. The method according to claim 1, wherein selecting one search frame from the search frames corresponding to the plurality of search position points as the similar search frame according to the position of the search frame corresponding to the search position point comprises:
respectively calculating the similarity between the search frame corresponding to each of the plurality of search position points and the target frame in the previous frame of image;
comparing the calculated similarity;
and selecting the search box corresponding to the maximum similarity in the calculated similarities as a similar search box.
3. The method of claim 1, wherein expanding the calculation range of the similar search box to obtain a search area comprises:
interpolating the upper part, the lower part, the left part and the right part of the similar search box by adopting a padding technology, and expanding the calculation range of the similar search box to a threshold value;
and taking the calculation range of the similar search box after padding as a search area.
4. The method of claim 1, wherein determining the location of the target according to the locations of the pixels in the search area comprises:
calculating the similarity between a search frame corresponding to each pixel point in the search area and a target frame in the previous frame of image;
comparing the calculated similarity;
and selecting the position of the search box corresponding to the maximum similarity in the calculated similarities as the position of the target.
5. The method for detecting target loss in target tracking according to claim 2 or 4, wherein
the similarity is obtained by computing a Euclidean distance, a Manhattan distance, or a Minkowski distance, or by using a twin network.
6. An apparatus for detecting missing target in target tracking, comprising:
the random sampling module is used for randomly selecting a plurality of pixel points on the current frame image as target searching position points;
the rough searching module is used for selecting one searching frame from the searching frames corresponding to the searching position points as a similar searching frame according to the positions of the searching frames corresponding to the searching position points;
the search area determining module is used for enlarging the calculation range of the similar search box to obtain a search area;
and the target position determining module is used for determining the position of the target according to the position of the search frame corresponding to each pixel point in the search area.
7. The apparatus of claim 6, wherein the coarse search module comprises:
the first similarity calculation module is used for calculating the similarity between the search frame corresponding to each of the plurality of search position points and the target frame in the previous frame image;
the first comparison module is used for comparing the calculated similarity;
and the first selecting module is used for selecting the search box corresponding to the maximum similarity in the calculated similarities as a similar search box.
8. The apparatus of claim 6, wherein the search determining module interpolates the similar search frames by using padding technology, so as to expand the calculation range of the similar search frames to a threshold, and uses the calculation range of the similar search frames after padding as the search area.
9. The apparatus of claim 6, wherein the target position determining module comprises:
the second calculation module is used for calculating the similarity between a search frame corresponding to each pixel point in the search area and a target frame in a previous frame of image;
the second comparison module is used for comparing the calculated similarity;
and the second selecting module is used for selecting the position of the search box corresponding to the maximum similarity in the calculated similarities as the position of the target.
10. A mobile terminal, comprising:
a processor, a memory, a communication interface, and a bus;
the processor, the memory and the communication interface complete mutual communication through the bus;
the communication interface is used for information transmission between external devices;
the processor is configured to call program instructions in the memory to perform the steps of the method of any of claims 1-5.
CN201811231109.0A 2018-10-22 2018-10-22 Method and device for re-detecting lost target in target tracking Active CN109543534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811231109.0A CN109543534B (en) 2018-10-22 2018-10-22 Method and device for re-detecting lost target in target tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811231109.0A CN109543534B (en) 2018-10-22 2018-10-22 Method and device for re-detecting lost target in target tracking

Publications (2)

Publication Number Publication Date
CN109543534A CN109543534A (en) 2019-03-29
CN109543534B true CN109543534B (en) 2020-09-01

Family

ID=65844154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811231109.0A Active CN109543534B (en) 2018-10-22 2018-10-22 Method and device for re-detecting lost target in target tracking

Country Status (1)

Country Link
CN (1) CN109543534B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112417939A (en) * 2019-08-21 2021-02-26 南京行者易智能交通科技有限公司 Passenger flow OD data acquisition method and device based on image recognition, mobile terminal equipment, server and model training method
CN110610202B (en) * 2019-08-30 2022-07-26 联想(北京)有限公司 Image processing method and electronic equipment
CN112308106A (en) * 2019-11-15 2021-02-02 北京京邦达贸易有限公司 Image labeling method and system
CN115665552A (en) * 2022-08-19 2023-01-31 重庆紫光华山智安科技有限公司 Cross-mirror tracking method and device, electronic equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1794264A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and system of real time detecting and continuous tracing human face in video frequency sequence
CN104732187A (en) * 2013-12-18 2015-06-24 杭州华为企业通信技术有限公司 Method and equipment for image tracking processing
CN106127161A (en) * 2016-06-29 2016-11-16 深圳市格视智能科技有限公司 Fast target detection method based on cascade multilayer detector
CN107066990A (en) * 2017-05-04 2017-08-18 厦门美图之家科技有限公司 A kind of method for tracking target and mobile device
CN107992791A (en) * 2017-10-13 2018-05-04 西安天和防务技术股份有限公司 Target following failure weight detecting method and device, storage medium, electronic equipment
CN108154524A (en) * 2018-01-16 2018-06-12 中国人民解放军陆军装甲兵学院 Target predicting and tracking method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103259962B (en) * 2013-04-17 2016-02-17 深圳市捷顺科技实业股份有限公司 A kind of target tracking method and relevant apparatus
US9697595B2 (en) * 2014-11-26 2017-07-04 Adobe Systems Incorporated Content aware fill based on similar images
US20170161591A1 (en) * 2015-12-04 2017-06-08 Pilot Ai Labs, Inc. System and method for deep-learning based object tracking
CN107633226B (en) * 2017-09-19 2021-12-24 北京师范大学珠海分校 Human body motion tracking feature processing method
CN108388250B (en) * 2018-03-30 2021-03-05 哈尔滨工程大学 Water surface unmanned ship path planning method based on self-adaptive cuckoo search algorithm

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1794264A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and system of real time detecting and continuous tracing human face in video frequency sequence
CN104732187A (en) * 2013-12-18 2015-06-24 杭州华为企业通信技术有限公司 Method and equipment for image tracking processing
CN106127161A (en) * 2016-06-29 2016-11-16 深圳市格视智能科技有限公司 Fast target detection method based on cascade multilayer detector
CN107066990A (en) * 2017-05-04 2017-08-18 厦门美图之家科技有限公司 A kind of method for tracking target and mobile device
CN107992791A (en) * 2017-10-13 2018-05-04 西安天和防务技术股份有限公司 Target following failure weight detecting method and device, storage medium, electronic equipment
CN108154524A (en) * 2018-01-16 2018-06-12 中国人民解放军陆军装甲兵学院 Target predicting and tracking method

Also Published As

Publication number Publication date
CN109543534A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN109543534B (en) Method and device for re-detecting lost target in target tracking
CN102792317B (en) Based on the Image Feature Detection of the application of multiple property detector
US9298990B2 (en) Object tracking method and device
US8289402B2 (en) Image processing apparatus, image pickup apparatus and image processing method including image stabilization
CN109598744B (en) Video tracking method, device, equipment and storage medium
CN110853076A (en) Target tracking method, device, equipment and storage medium
US10438086B2 (en) Image information recognition processing method and device, and computer storage medium
US20110211233A1 (en) Image processing device, image processing method and computer program
CN104866805B (en) Method and device for real-time tracking of human face
JP6682559B2 (en) Image processing apparatus, image processing method, image processing program, and storage medium
JP2013054429A (en) Object tracking device
US20180352186A1 (en) Method for estimating a timestamp in a video stream and method of augmenting a video stream with information
US11669978B2 (en) Method and device for estimating background motion of infrared image sequences and storage medium
US20190005323A1 (en) Information processing apparatus for tracking processing
CN110689014B (en) Method and device for detecting region of interest, electronic equipment and readable storage medium
CN113628250A (en) Target tracking method and device, electronic equipment and readable storage medium
US11164286B2 (en) Image processing apparatus, image processing method, and storage medium
JP2008085491A (en) Image processor, and image processing method thereof
US11647294B2 (en) Panoramic video data process
WO2021157213A1 (en) Image processing device and image processing method
US9159118B2 (en) Image processing apparatus, image processing system, and non-transitory computer-readable medium
US10346680B2 (en) Imaging apparatus and control method for determining a posture of an object
JP6116916B2 (en) Image detection apparatus, control program, and image detection method
US11790483B2 (en) Method, apparatus, and device for identifying human body and computer readable storage medium
JP6062483B2 (en) Digital camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210329

Address after: Room 203b, building 3, artificial intelligence Industrial Park, 266 Chuangyan Road, Qilin science and Technology Innovation Park, Jiangning District, Nanjing City, Jiangsu Province, 211135

Patentee after: Zhongke Fangcun Zhiwei (Nanjing) Technology Co.,Ltd.

Address before: 211135 3rd floor, building 3, 266 Chuangyan Road, Jiangning District, Nanjing City, Jiangsu Province

Patentee before: NANJING ARTIFICIAL INTELLIGENCE CHIP INNOVATION INSTITUTE, INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES