US9171229B2 - Visual object tracking method - Google Patents

Visual object tracking method

Info

Publication number
US9171229B2
US9171229B2 (application US14/184,829)
Authority
US
United States
Prior art keywords
window
color
tracking method
target
color filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US14/184,829
Other versions
US20150117706A1 (en)
Inventor
Chaur-Heh Hsieh
Shu-Wei Chou
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MING CHUAN UNIVERSITY
Original Assignee
MING CHUAN UNIVERSITY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MING CHUAN UNIVERSITY filed Critical MING CHUAN UNIVERSITY
Assigned to MING CHUAN UNIVERSITY reassignment MING CHUAN UNIVERSITY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHOU, SHU-WEI, HSIEH, CHAUR-HEH
Publication of US20150117706A1 publication Critical patent/US20150117706A1/en
Application granted granted Critical
Publication of US9171229B2 publication Critical patent/US9171229B2/en
Legal status: Expired - Fee Related

Classifications

    • G06K9/6202
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06K9/00624
    • G06K9/4647
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T7/408
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758 Involving statistics of pixels or of feature values, e.g. histogram matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20076 Probabilistic image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]



Abstract

A visual object tracking method includes the steps of: setting an object window having a target in a video image; defining a search window greater than the object window; analyzing an image pixel of the object window to generate a color histogram for defining a color filter which includes a dominant color characteristic of the target; using the color filter to generate an object template and a dominant color map in the object window and the search window respectively, the object template including a shape characteristic of the target, the dominant color map including at least one candidate block; comparing the similarity between the object template and the candidate block to obtain a probability distribution map, and using the probability distribution map to compute the mass center of the target. The method generates the probability map by the color and shape characteristics to compute the mass center.

Description

FIELD OF THE INVENTION
The present invention relates to a visual object tracking method, in particular to the visual object tracking method using both color and shape characteristics to compute the center coordinates of a target accurately.
BACKGROUND OF THE INVENTION
Video tracking plays a very important role in computer vision and has extensive applications in the fields of video surveillance, man-machine interfaces, automobile navigation and intelligent transport systems. However, practical applications of video tracking technology still face many technical difficulties, including complicated backgrounds, non-rigid objects, illumination changes and masked (occluded) objects. These issues make visual object tracking very difficult and affect both the stability and accuracy of tracking. In general, video tracking continuously analyzes and processes characteristics such as the color, shape and lines of an object to estimate the center position and size of a target and thereby achieve the tracking effect.
At present, many different tracking algorithms with different features are available. The Mean Shift algorithm is a highly efficient visual object tracking method, and the CamShift algorithm makes Mean Shift adaptive by automatically adjusting the size of the object window to fit an object whose size changes over time. Because CamShift is a highly efficient and stable target tracking method, it has captured extensive attention.
However, CamShift is an improved version of Mean Shift. Although CamShift is fast, it mainly uses a color characteristic as the basis of tracking: it converts the similarity of colors into a probability and then calculates the mass center from that probability. If the target's background contains similar colors, or a larger object with similar colors is present, tracking of the original object is interfered with and may fail. In other words, an algorithm that uses only color similarity to compute probability is often disturbed by larger objects or backgrounds whose colors resemble those of the target, so misjudgments and tracking errors occur frequently.
In view of the problems above, the inventor of the present invention drew on years of experience in the related industry to conduct extensive research and experiments, and finally invented the visual object tracking method of the present invention to overcome the foregoing problems.
SUMMARY OF THE INVENTION
Therefore, it is a primary objective of the present invention to overcome the technical issues of conventional tracking algorithms, in particular the CamShift algorithm, which simply converts the similarity of colors into a probability to compute the mass center; if the target is situated in a background with similar colors, or large objects with similar colors exist around the target, the tracking result is affected, and failures and errors occur very often. In other words, the conventional tracking algorithms have the drawbacks of poor stability and accuracy.
To achieve the aforementioned objective, the present invention provides a visual object tracking method comprising the steps of: setting an object window having a target in a video image; defining a search window greater than the object window; analyzing the image pixels of the object window to generate a color histogram that defines a color filter, wherein the color filter includes a dominant color characteristic of the target; using the color filter in the object window to generate an object template, wherein the object template includes a shape characteristic of the target; using the color filter in the search window to generate a dominant color map, wherein the dominant color map includes at least one candidate block; comparing the similarity between the object template and the candidate block to obtain a probability distribution map; and using the probability distribution map to compute the mass center of the target.
The visual object tracking method further comprises the steps of adjusting the mass center of the target by a mean shift algorithm to obtain the best center position of the target; and updating the size of the search window according to the best center position.
Wherein, the object window is selected and obtained manually in the step of setting an object window in a video image.
Wherein, the object window is obtained by pre-loading an image and computing the image in the step of setting an object window in a video image.
Wherein, the object window is situated at the center of the search window in the step of setting a search window.
Wherein, the video image is converted into an HSV color space.
Wherein, the object template and the dominant color map are binary images.
Compared with the prior art, which uses only a color characteristic as the basis of tracking and fails to distinguish different objects of the same color, the present invention uses the color filter to exploit both the color and shape characteristics of the target as the basis of tracking, obtaining the mass center of the target effectively. Further, the mean shift algorithm is used to adjust the mass center and update the size of the search window, so that the present invention improves the stability and accuracy of recognition, lowers the total operation cost, and achieves a timely tracking effect.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a color histogram of the present invention;
FIG. 3 is a schematic view of a color filter of the present invention;
FIG. 4 is a schematic view of an object template of the present invention;
FIG. 5 is a schematic view of dominant colors of the present invention; and
FIG. 6 is a schematic view of a probability distribution of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
The aforementioned and other objectives and advantages of the present invention will become clearer in light of the following detailed description of an illustrative embodiment of this invention described in connection with the drawings. It is intended that the embodiments and drawings disclosed herein are to be considered illustrative rather than restrictive.
The processes, features, or functions of the present invention can be implemented by program instructions that perform in an appropriate computing device. With reference to FIGS. 1 to 6 for a visual object tracking method of the present invention, the visual object tracking method comprises the following steps:
capture a sequence of video images by a video camera, wherein the background scene may be changing or said video camera may be moving;
establish a computer having a processor in connection with said video camera, said processor being configured for:
Set an object window 100 in one of the video images, wherein the object window 100 contains a target. The object window 100 is selected manually, or obtained by pre-loading an image and then computing it. In addition, the video image is converted into an HSV color space 200. In this preferred embodiment, the user is assumed to select the object window 100 manually and set its size to w×h.
Define a search window 101 with a size greater than the object window 100, wherein the object window 100 is disposed at the center of the search window 101. The size of the search window 101 is assumed to be (s×w)×(s×h), where s is the magnification ratio, set here to 1.3.
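As a minimal sketch of the search-window computation above (the rounding to whole pixels is an assumption for illustration; the patent specifies only the magnification ratio s = 1.3):

```python
# Hedged sketch: the search window is the object window scaled by the
# magnification ratio s = 1.3 from the preferred embodiment.
def search_window_size(w, h, s=1.3):
    """Return (width, height) of a search window of size (s*w) x (s*h)."""
    return (round(s * w), round(s * h))

print(search_window_size(40, 40))  # a 40x40 object window -> (52, 52)
```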
Analyze the image pixels x of the object window 100 to generate a color histogram 300. The color histogram 300, as shown in FIG. 2, is divided into N bins and used to define a color filter (CF) 400, which keeps M bins (M<N). Mathematical equation 1 is given below:
CF(k) = { 1, if k belongs to a defined dominant color; 0, if k does not belong to a defined dominant color }  (Mathematical Equation 1)
With reference to FIG. 3, M is set to 2, and the color filter 400 includes a dominant color characteristic of the target. One skilled in the art will understand that the color histogram, as shown in FIGS. 2 and 3, contains both percentage and Hue to cover the representative color image pixels.
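The color-filter construction can be sketched as follows. This is an illustrative assumption, not the patented implementation: it builds an N-bin hue histogram from the object window's pixels and keeps the M most frequent bins as the dominant-color filter CF. The bin counts (N = 16, M = 2) and the 0–179 hue range (OpenCV's HSV convention) are assumed values.

```python
import numpy as np

def make_color_filter(hue_pixels, n_bins=16, m_dominant=2):
    """Define CF(k): 1 for the m_dominant most populated hue bins, else 0."""
    hist, _ = np.histogram(hue_pixels, bins=n_bins, range=(0, 180))
    dominant = np.argsort(hist)[-m_dominant:]  # indices of the M largest bins
    cf = np.zeros(n_bins, dtype=np.uint8)
    cf[dominant] = 1                           # CF(k) = 1 for dominant colors
    return cf

hues = np.array([5, 6, 7, 8, 100, 101, 102, 50])  # toy hue samples
cf = make_color_filter(hues)
print(cf)  # bins 0 and 8 (the two most frequent hues) are marked dominant
```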
Use the color filter 400 to analyze each image pixel x(i,j) of the object window 100 and produce an object template (O) 401; the related mathematical equation 2 is given below:
O(i,j) = { 1, if CF(k) = 1 for x(i,j); 0, if CF(k) = 0 for x(i,j) }  (Mathematical Equation 2)
The object template 401 is a binary image. As shown in mathematical equation 2, O(i,j) = 1 indicates that the color of the image pixel at position (i,j) matches the dominant color characteristic. As shown in FIG. 4, the object template 401 includes a shape characteristic of the target.
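A sketch of this template step under stated assumptions: each pixel's hue is mapped to its histogram bin k and looked up in CF, producing the binary template O. The bin-mapping helper and the toy 2×2 hue image are illustrative assumptions.

```python
import numpy as np

def object_template(hue_image, cf, n_bins=16):
    """O(i,j) = CF(k) where k is the hue bin of pixel (i,j)."""
    bins = (hue_image.astype(int) * n_bins // 180).clip(0, n_bins - 1)
    return cf[bins]  # binary image: 1 where the pixel matches a dominant color

cf = np.zeros(16, dtype=np.uint8)
cf[0] = 1                            # assume bin 0 is the dominant color
img = np.array([[5, 100], [8, 90]])  # toy 2x2 hue image
print(object_template(img, cf))      # 1s mark dominant-color pixels
```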
Generate a dominant color map 402 (as shown in FIG. 5) in the search window 101 by the color filter 400, wherein the dominant color map 402 is a binary image and has at least one candidate block (C), and the size of each candidate block must be equal to the size of the object template 401 (which is w×h). For example, the size of the search window 101 is set to 60×60 pixels, and a pixel distance of 10 is defined for the candidate blocks; in other words, a candidate block is produced every 10 pixels. The pixel distance is simply provided to facilitate the computation by the computer, and the present invention is not limited to such an arrangement.
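Enumerating the candidate blocks can be sketched as below; the grid of top-left positions stepped by the pixel distance is an assumption about how candidates are laid out, with the step of 10 taken from the preferred embodiment.

```python
def candidate_positions(search_w, search_h, w, h, step=10):
    """Top-left (m, n) positions of w x h candidate blocks inside the search window."""
    return [(m, n)
            for m in range(0, search_h - h + 1, step)
            for n in range(0, search_w - w + 1, step)]

# A 60x60 search window with a 20x20 template yields a 5x5 grid of candidates.
print(len(candidate_positions(60, 60, 20, 20)))
```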
Compare the similarity between the object template 401 and the candidate block to obtain a probability distribution map 500, and mathematical equation 3 and mathematical equation 4 are given below:
S(O,C) = Σ O(i,j) ⊕ C(i,j)  (Mathematical Equation 3)
P(m,n) = S(O,C) / (w×h)  (Mathematical Equation 4)
Wherein, ⊕ represents the inversion of Exclusive OR (XNOR), and (m,n) represents the position of the candidate block.
With reference to the probability distribution map 500 shown in FIG. 6, different probability values are shown in different colors. The closer a position is to the center of the target, the greater the probability value; the farther from the center, the smaller the probability value. Since the pixel distance of this preferred embodiment is set to 10 pixels, the probability of a candidate block corresponds to its pixel range of (−5~+5).
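Mathematical equations 3 and 4 can be sketched as follows: the XNOR of two binary images is 1 exactly where they agree, so S(O,C) counts matching pixels, and dividing by w×h normalizes the count into a probability. The toy 2×2 arrays are illustrative assumptions.

```python
import numpy as np

def similarity(O, C):
    """S(O, C): number of positions where the binary images agree (XNOR count)."""
    return int(np.sum(O == C))

def probability(O, C):
    """P(m, n) = S(O, C) / (w * h) for a candidate block at (m, n)."""
    h, w = O.shape
    return similarity(O, C) / (w * h)

O = np.array([[1, 0], [1, 1]])  # object template
C = np.array([[1, 0], [0, 1]])  # candidate block
print(probability(O, C))        # 3 of the 4 pixels match, so P = 0.75
```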
According to the probability distribution map 500, the mean shift algorithm is used to adjust the mass center of the target 600 and determine whether the best center position of the target 601 has been obtained; if not, the previous step is repeated to obtain the mass center of the target 600 again; otherwise, the next step is executed.
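A hedged sketch of this mean-shift adjustment: starting from an initial mass center, repeatedly move to the probability-weighted centroid of a small local window until the position stops changing. The window radius, the integer rounding, and the toy probability map are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def mean_shift_center(prob_map, start, radius=2, max_iter=20):
    """Iterate toward the local probability-weighted centroid of prob_map."""
    cy, cx = start
    for _ in range(max_iter):
        # Clip the local window to the map boundaries.
        y0, y1 = max(0, cy - radius), min(prob_map.shape[0], cy + radius + 1)
        x0, x1 = max(0, cx - radius), min(prob_map.shape[1], cx + radius + 1)
        win = prob_map[y0:y1, x0:x1]
        total = win.sum()
        if total == 0:
            break  # no probability mass nearby; keep the current center
        ys, xs = np.indices(win.shape)
        ny = int(round((ys * win).sum() / total)) + y0
        nx = int(round((xs * win).sum() / total)) + x0
        if (ny, nx) == (cy, cx):
            break  # converged to the best center position
        cy, cx = ny, nx
    return cy, cx

prob = np.zeros((9, 9))
prob[6, 6] = 1.0  # a single probability peak
print(mean_shift_center(prob, (4, 4)))  # climbs from (4, 4) to the peak
```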
The size of the search window 700 is updated according to the best center position so that the target can be tracked continuously. After the size of the search window 700 is updated, the same procedure is applied to the next video image, and the color filter is used to track the target accurately and stably. In addition, the computer is further connected to a monitor that simultaneously displays the sequence of video images and the probability distribution map 500. The probability distribution map 500 can be represented not only by a colorful dotted pattern but also by a circle or rectangle frame (e.g., the yellow circle frame in FIG. 6). However it is represented, the probability distribution map 500 is rendered to show the tracking results of the computer analysis.
From the aforementioned description and arrangement, the object template is produced in the object window by the color filter, and the dominant color map is produced in the search window, so that both the dominant color and shape characteristics of the target can be obtained. In addition, the dominant color map has pre-defined candidate blocks, and the similarity between each candidate block and the object template is compared to obtain the probability distribution map, from which the mass center of the target can be obtained effectively. Further, a mean shift algorithm is used to adjust the mass center and update the size of the search window, so that the present invention overcomes interference from objects with similar colors in the video background, thereby improving the stability and accuracy of recognition, lowering the total operation cost, and accomplishing timely tracking.
In summation of the above description, the present invention herein improves over the prior art and further complies with the patent application requirements and is duly submitted to the Patent and Trademark Office for review and granting of the commensurate patent rights.
While the invention has been described by way of example and in terms of a preferred embodiment, it is to be understood that the invention is not limited thereto. To the contrary, it is intended to cover various modifications and similar arrangements and procedures, and the scope of the appended claims therefore should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements and procedures.

Claims (11)

What is claimed is:
1. A visual object tracking method, comprising the steps of:
establishing a computer having a processor, said processor being configured for:
setting an object window having a target in a video image;
defining a search window greater than the object window;
analyzing an image pixel of the object window to generate a color histogram, wherein the color histogram defines a color filter (CF), and the color filter includes a dominant color characteristic of the target;
using the color filter in the object window to generate an object template (O), wherein the object template includes a shape characteristic of the target;
using the color filter in the search window to generate a dominant color map, wherein the dominant color map includes at least one candidate block (C);
comparing the similarity between the object template and the candidate block to obtain a probability distribution map; and
using the probability distribution map to compute the mass center of the target.
2. The visual object tracking method of claim 1, further comprising the steps of: adjusting the mass center of the target by a mean shift algorithm to obtain the best center position of the target; and updating the size of the search window according to the best center position.
3. The visual object tracking method of claim 1, wherein the object window is selected and obtained manually in the step of setting an object window in a video image.
4. The visual object tracking method of claim 1, wherein the object window is obtained by pre-loading an image and computing the image in the step of setting an object window in a video image.
5. The visual object tracking method of claim 1, wherein the object window is situated at the center of the search window in the step of setting a search window.
6. The visual object tracking method of claim 1, wherein the color histogram is divided into N bins, and the color filter is divided into M bins, and M is smaller than N in the step of defining a color filter by using a color histogram, and the mathematical equation of the color filter (CF) is:
CF(k) = { 1, if k belongs to a defined dominant color; 0, if k does not belong to a defined dominant color }.
7. The visual object tracking method of claim 6, wherein the color filter is analyzed based on each image pixel x(i,j) of the object window in the step of using the color filter to generate an object template in the object window, and the mathematical equation of the object window is:
O(i,j) = { 1, if CF(k) = 1 for x(i,j); 0, if CF(k) = 0 for x(i,j) }.
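Per claim 7, the object template is simply the object window binarized through the color filter: each pixel's bin index k is replaced by CF(k). A minimal sketch (representing the window as a 2-D list of bin indices, an assumption of this illustration):

```python
def object_template(window, CF):
    """Apply the color filter to every pixel of the object window,
    yielding the binary object template O with O(i, j) = CF(k)."""
    return [[CF(k) for k in row] for row in window]
```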
8. The visual object tracking method of claim 7, wherein the following mathematical equation is used to compare the similarity between the object template and the candidate block in the step of comparing the similarity between the object template and the candidate block: S(O,C) = Σ O(i,j) ⊕ C(i,j), where ⊕ denotes the inversion of the Exclusive OR (i.e., XNOR).
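The XNOR similarity of claim 8 is just a count of positions where the two binary images agree, since 1 − (a XOR b) is 1 exactly when a = b. An illustrative sketch over 2-D lists of 0/1 values:

```python
def similarity(O, C):
    """XNOR similarity S(O, C): the number of pixel positions where the
    binary object template O and candidate block C have the same value."""
    return sum(1 - (o ^ c)
               for row_o, row_c in zip(O, C)
               for o, c in zip(row_o, row_c))
```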
9. The visual object tracking method of claim 8, wherein the probability distribution map is obtained by the mathematical equation P(m,n) = S(O,C)/(w×h) in the step of obtaining a probability distribution map, and (m,n) represents the position of the candidate block, and w×h represents the size of the object window.
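Combining claims 8 and 9: sliding the w×h template over the dominant color map and normalizing each XNOR similarity by w×h yields the probability distribution map. A hypothetical end-to-end sketch (the exhaustive sliding-window scan is an assumption about how candidate blocks are enumerated):

```python
def probability_map(O, dominant_map):
    """For each candidate block position (m, n) in the binary dominant
    color map, compute P(m, n) = S(O, C) / (w * h), where C is the
    w-by-h block of the map anchored at (m, n)."""
    h, w = len(O), len(O[0])
    H, W = len(dominant_map), len(dominant_map[0])
    P = {}
    for m in range(H - h + 1):
        for n in range(W - w + 1):
            # XNOR similarity between template and candidate block
            s = sum(1 - (O[i][j] ^ dominant_map[m + i][n + j])
                    for i in range(h) for j in range(w))
            P[(m, n)] = s / (w * h)
    return P
```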
10. The visual object tracking method of claim 1, wherein the video image is converted into an HSV color space.
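Claim 10's HSV conversion separates chromatic content (hue, saturation) from intensity, which makes a dominant-color histogram less sensitive to illumination changes. A per-pixel sketch using the Python standard library (the wrapper function is illustrative; `colorsys.rgb_to_hsv` operates on values in [0, 1]):

```python
import colorsys

def to_hsv(rgb_pixel):
    """Convert one 8-bit RGB pixel to HSV, each channel in [0, 1]."""
    r, g, b = (v / 255.0 for v in rgb_pixel)
    return colorsys.rgb_to_hsv(r, g, b)
```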
11. The visual object tracking method of claim 1, wherein the object template and the dominant color map are binary images.
US14/184,829 2013-10-28 2014-02-20 Visual object tracking method Expired - Fee Related US9171229B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
TW102138843A 2013-10-28
TW102138843A TWI497450B (en) 2013-10-28 2013-10-28 Visual object tracking method
TW102138843 2013-10-28

Publications (2)

Publication Number Publication Date
US20150117706A1 US20150117706A1 (en) 2015-04-30
US9171229B2 true US9171229B2 (en) 2015-10-27

Family

ID=52995512

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/184,829 Expired - Fee Related US9171229B2 (en) 2013-10-28 2014-02-20 Visual object tracking method

Country Status (2)

Country Link
US (1) US9171229B2 (en)
TW (1) TWI497450B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10026003B2 (en) 2016-03-08 2018-07-17 Accuware, Inc. Method and arrangement for receiving data about site traffic derived from imaging processing
US10824878B2 (en) 2016-03-08 2020-11-03 Accuware, Inc. Method and arrangement for receiving data about site traffic derived from imaging processing

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
KR102137263B1 (en) * 2014-02-20 2020-08-26 삼성전자주식회사 Image processing apparatus and method
CN106295466A (en) * 2015-05-18 2017-01-04 佳能株式会社 Image processing method and device
US20180082428A1 (en) * 2016-09-16 2018-03-22 Qualcomm Incorporated Use of motion information in video data to track fast moving objects
CN107452015B (en) * 2017-07-28 2020-09-25 南京工业职业技术学院 Target tracking system with re-detection mechanism
CN110276781A (en) * 2018-03-13 2019-09-24 天津工业大学 Motion target tracking method
CN110458045A (en) * 2019-07-22 2019-11-15 浙江大华技术股份有限公司 Acquisition methods, image processing method and the device of response probability histogram
CN110837774A (en) * 2019-09-27 2020-02-25 中科九度(北京)空间信息技术有限责任公司 High-precision identification method for combined target of shoulder-carried rod-shaped objects
CN112288780B (en) * 2020-11-09 2024-01-16 西安工业大学 Multi-feature dynamically weighted target tracking algorithm
CN116975585B (en) * 2023-09-25 2023-12-15 中国人民解放军军事科学院国防科技创新研究院 Method and device for formalized representation of computable instant advantage window
CN117853484B (en) * 2024-03-05 2024-05-28 湖南建工交建宏特科技有限公司 Intelligent bridge damage monitoring method and system based on vision

Citations (1)

Publication number Priority date Publication date Assignee Title
US6363160B1 (en) * 1999-01-22 2002-03-26 Intel Corporation Interface using pattern recognition and tracking

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN101610412B (en) * 2009-07-21 2011-01-19 北京大学 Visual tracking method based on multi-cue fusion
CN101955130B (en) * 2010-09-08 2012-03-07 西安理工大学 Tower crane video monitoring system with automatic tracking and zooming functions and monitoring method
TW201220215A (en) * 2010-11-08 2012-05-16 Hon Hai Prec Ind Co Ltd Suspicious object recognizing and tracking system and method
CN102737385A (en) * 2012-04-24 2012-10-17 中山大学 Video target tracking method based on CAMSHIFT and Kalman filtering



Also Published As

Publication number Publication date
TW201516969A (en) 2015-05-01
TWI497450B (en) 2015-08-21
US20150117706A1 (en) 2015-04-30

Similar Documents

Publication Publication Date Title
US9171229B2 (en) Visual object tracking method
US10049293B2 (en) Pixel-level based micro-feature extraction
Soriano et al. Adaptive skin color modeling using the skin locus for selecting training pixels
US9036039B2 (en) Apparatus and method for acquiring face image using multiple cameras so as to identify human located at remote site
US8718321B2 (en) Method of image processing
Zhang et al. A new haze removal approach for sky/river alike scenes based on external and internal clues
CN107403175A Visual tracking method and visual tracking system for a moving background
US7747079B2 (en) Method and system for learning spatio-spectral features in an image
US8934669B2 (en) Self-adaptive image-based obstacle detection method
CN107240118B (en) Discriminant tracking method based on RGB color histogram
US8811671B2 (en) Image processing apparatus, image processing method, and recording medium
Küçükmanisa et al. Real-time illumination and shadow invariant lane detection on mobile platform
WO2019015344A1 (en) Image saliency object detection method based on center-dark channel priori information
CN105243667A (en) Target re-identification method based on local feature fusion
CN106934338B (en) Long-term pedestrian tracking method based on correlation filter
CN111680699A (en) Air-ground infrared time-sensitive weak small target detection method based on background suppression
KR101921717B1 (en) Face recognition method and facial feature extraction method using local contour patten
US9727780B2 (en) Pedestrian detecting system
US10140555B2 (en) Processing system, processing method, and recording medium
CN109271865B (en) Moving target tracking method based on scattering transformation multilayer correlation filtering
JP2015082287A (en) Image processing apparatus, image processing method, and image processing program
CN108053425B A high-speed correlation filtering target tracking method based on multi-channel features
Qi et al. Cascaded cast shadow detection method in surveillance scenes
Dai et al. Robust and accurate moving shadow detection based on multiple features fusion
Hu et al. Digital video stabilization based on multilayer gray projection

Legal Events

Date Code Title Description
AS Assignment

Owner name: MING CHUAN UNIVERSITY, TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HSIEH, CHAUR-HEH;CHOU, SHU-WEI;REEL/FRAME:032351/0075

Effective date: 20131126

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20231027