CN108416800A - Method for tracking target and device, terminal, computer readable storage medium - Google Patents


Info

Publication number
CN108416800A
CN108416800A (application CN201810203671.6A)
Authority
CN
China
Prior art keywords
target
image frame
model
initial
current image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810203671.6A
Other languages
Chinese (zh)
Inventor
陈万龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Medical Equipment Co Ltd
Original Assignee
Qingdao Hisense Medical Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Medical Equipment Co Ltd filed Critical Qingdao Hisense Medical Equipment Co Ltd
Priority to CN201810203671.6A
Publication of CN108416800A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a target tracking method and apparatus, a terminal, and a computer-readable storage medium, belonging to the field of computer application technology. The method includes: determining a starting search position in a current image frame from the target position in the previous image frame of the current image frame; calculating a target model of the current image frame from the initial search model of the current image frame at the starting search position and the target model of the previous image frame; and determining a target position in the current image frame, taking the starting search position as the starting point, according to the initial target model of the initial image frame and the target model of the current image frame. A target tracking apparatus, a terminal, and a computer-readable storage medium are also provided. The above method, apparatus, terminal, and storage medium can ensure the stability of accurate target tracking.

Description

Target tracking method and device, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of computer application technologies, and in particular, to a target tracking method and apparatus, a terminal, and a computer-readable storage medium.
Background
In a digital operating room, the pictures acquired by a camera can be broadcast live to a classroom, a consultation room, or even a conference site in real time. During an operation, in order to support demonstration, teaching, and remote consultation guidance, and to enhance the interaction between personnel outside the operating room and the chief surgeon, the camera tracks the chief surgeon so that he or she always stays near the center of the camera's field of view.
At present, the mean shift algorithm is an important research direction in the field of target tracking because of its low computational complexity, good stability, and ease of implementation on a computer. In mean-shift-based target tracking, a color kernel histogram or a gray kernel histogram is used to describe the features of the target, and the target position is then searched using the mean shift vector. During visual tracking, the mean shift algorithm requires the target region to overlap between the current image frame and the previous image frame, so the classical mean shift tracking algorithm performs poorly when the target moves rapidly; moreover, under target occlusion, varying scene illumination, and similar conditions, the traditional mean-shift-based visual tracking method lacks robustness.
A digital operating room contains multiple devices, such as an endoscope display, a monitor, and a CT (computed tomography) device, for monitoring patient information in real time. The chief surgeon moves among these devices when checking the relevant information and may move out of the camera's field of view. Because the operating room is cramped and occupied by the chief surgeon, assistants, nurses, an anesthesiologist, and other personnel, occlusion is inevitable, which easily causes tracking loss during the matching operation. In addition, the devices in the operating room emit light of different intensities, so the illumination brightness varies across the scene; and because everyone indoors wears a surgical mask, face recognition is impossible, which easily causes tracking errors during the matching operation.
To address tracking loss, tracking error, and similar phenomena, existing methods update the target model of the current image frame from the target model of the previous image frame at a certain update rate, and then determine the target position in the current image frame. However, because the scene illumination intensity and the camera view angle change, the target model in the current image frame is not exactly the same as in the previous image frame. If the target model of the current image frame is updated only from the target model of the previous image frame, then whenever the tracking position in the previous frame deviates slightly from the actual target, the deviation between adjacent frames accumulates continuously until the target is lost. Therefore, target tracking with the current model-update scheme cannot guarantee the stability of accurate tracking.
Disclosure of Invention
The invention provides a target tracking method and device, a terminal and a computer-readable storage medium, aiming at solving the technical problem of poor stability of accurate target tracking in the related art.
In a first aspect, a target tracking method is provided, including:
determining a starting search position in a current image frame from a target position in a previous image frame of the current image frame;
calculating a target model of the current image frame from an initial search model of the current image frame at the initial search position and a target model of the previous image frame;
and determining a target position in the current image frame by taking the initial searching position as a starting point according to an initial target model of an initial image frame and a target model of the current image frame.
In a second aspect, there is provided a target tracking apparatus, comprising:
a starting search position determining module for determining a starting search position in a current image frame from a target position in a previous image frame of the current image frame;
the current target model calculation module is used for calculating a target model of the current image frame according to an initial search model of the current image frame at the initial search position and a target model of the previous image frame;
and the current target position determining module is used for determining a target position in the current image frame by taking the initial searching position as a starting point according to an initial target model of an initial image frame and a target model of the current image frame.
In a third aspect, a terminal is provided, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium storing a program which, when executed, causes a terminal to perform the method of the first aspect.
The technical scheme provided by the embodiment of the invention can obtain the following beneficial effects:
when the target position is determined for the current image frame, the target model of the current image frame is obtained from the initial search model of the current image frame at the starting search position and the target model of the previous image frame. Because the initial target model of the initial image frame and the target model of the current image frame are considered at the same time when determining the target position, the scheme avoids the problem that arises when the model is updated only from the previous image frame: changes in scene illumination intensity and camera view angle make the tracking position deviate from the actual target, and the deviation between adjacent image frames accumulates continuously until the target is lost. The accuracy of target tracking is therefore improved, and the stability of accurate tracking is guaranteed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a method of target tracking according to an exemplary embodiment.
Fig. 2 is a flow chart of another target tracking method according to the corresponding embodiment of fig. 1.
Fig. 3 is a flowchart illustrating a specific implementation of step S120 in the target tracking method according to the corresponding embodiment of fig. 1.
Fig. 4 is a flowchart illustrating a specific implementation of step S130 in the target tracking method according to the corresponding embodiment of fig. 1.
Fig. 5 is a flow chart illustrating a matching operation by a mean shift method according to an exemplary embodiment.
Fig. 6 is a diagram illustrating a process of performing a matching operation by a mean shift method according to an exemplary embodiment.
Fig. 7 is a flowchart illustrating a specific implementation of step S132 in the target tracking method according to the corresponding embodiment in fig. 4.
Fig. 8 is a schematic diagram illustrating a process of tracking a target in different image frames by using a classical mean shift method under an experimental scenario where the illumination intensity varies according to an exemplary embodiment.
Fig. 9 is a schematic diagram illustrating a process of tracking a target in different image frames according to the present solution in an experimental scenario where the illumination intensity changes according to an exemplary embodiment.
FIG. 10 is a block diagram illustrating a target tracking device according to an exemplary embodiment.
Fig. 11 is a block diagram of another object tracking device shown in accordance with the corresponding embodiment of fig. 10.
Fig. 12 is a block diagram of the current object model calculation module 120 shown in accordance with the corresponding embodiment of fig. 10.
Fig. 13 is a block diagram of the current target position determination module 130 according to the corresponding embodiment of fig. 10.
Fig. 14 is a block diagram of the current target position determination unit 132 according to the corresponding embodiment of fig. 13.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as set forth in the claims below.
Fig. 1 is a flowchart illustrating a target tracking method according to an exemplary embodiment. The execution subject of the target tracking method may be a terminal such as a smart phone or a computer. As shown in fig. 1, the target tracking method may include the following steps.
In step S110, a start search position in the current image frame is determined from a target position in a previous image frame of the current image frame.
The target position is where the target is located in the image frame.
The target position may be a point or an area of various shapes; the possible shapes are not enumerated here.
It should be noted that, when performing target tracking, the target positions are respectively determined in a plurality of image frames having a chronological order.
The current image frame is an image frame currently subject to target tracking.
The previous image frame is a temporally previous image frame with respect to the current image frame.
The current image frame and the previous image frame may each be a single image captured individually or an image frame extracted from a captured video.
It should be noted that, when the current image frame and the previous image frame are both image frames extracted from the acquired video, the current image frame and the previous image frame may be two adjacent image frames in the video, or two image frames selected from the video according to a certain image frame interval.
For example, suppose the image frames of video A in time order are A1, A2, A3, A4, A5, A6, A7, A8, A9, and A10. When the current image frame is A5 and target tracking is performed on A5, the previous image frame may be the adjacent frame A4, or a frame such as A2 selected from the video at a certain frame interval.
It will be appreciated that in determining the position of the target in the current image frame, the position of the target in the previous image frame has already been determined.
Generally, the acquisition time interval between the current image frame and the previous image frame is small, and the distance between the target position of the current image frame and the target position of the previous image frame is small.
Therefore, when the target tracking is performed on the current image frame, the target position in the previous image frame is determined as the initial search position in the current image frame, and the target position in the current image frame is searched in the neighborhood of the initial search position, so that the calculation amount for determining the target position in the current image frame is greatly reduced.
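As a rough numerical illustration of the reduced calculation amount (all sizes here are hypothetical, not taken from the patent), compare an exhaustive full-frame scan with a search restricted to a small neighborhood of the previous target position:

```python
# Hypothetical illustration: candidate search positions for a full-frame
# scan versus a small neighborhood around the previous target position.
frame_w, frame_h = 1920, 1080      # assumed camera resolution
win_w, win_h = 80, 120             # assumed target window size
radius = 20                        # assumed neighborhood radius (pixels)

# Exhaustive scan: every valid top-left corner of the window in the frame.
full_scan = (frame_w - win_w + 1) * (frame_h - win_h + 1)

# Neighborhood scan: corners within `radius` pixels of the previous position.
neighborhood_scan = (2 * radius + 1) ** 2

print(full_scan, neighborhood_scan)  # the neighborhood is orders of magnitude smaller
```

Under these assumed sizes the neighborhood contains roughly a thousandth of the candidate positions of a full scan, which is why the previous frame's target position is a useful starting point.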
In step S120, a target model of the current image frame is calculated by using a starting search model of the current image frame at the starting search position and a target model of the previous image frame.
The starting search model is a feature model describing image features of the current image frame at a starting search position.
The initial search model is established by extracting image features of the initial search position from the current image frame and according to the image features.
The object model of the previous image frame is a feature model describing image features of the object in the previous image frame.
In an exemplary embodiment, the image feature is a color kernel histogram or a grayscale kernel histogram.
Optionally, a certain model weight is set between the initial search model and the target model of the previous image frame, and the target model of the current image frame is obtained through calculation.
In step S130, a target position is determined in the current image frame starting from the initial target model of the initial image frame and the target model of the current image frame, with the start search position as a starting point.
The initial image frame is an image frame that is chronologically the foremost among the plurality of image frames when the target tracking is performed.
The initial object model of the initial image frame is a feature model describing image features of objects in the initial image frame.
When determining the target position in the current image frame from the initial target model of the initial image frame and the target model of the current image frame, several approaches are possible. The target position may be searched in the neighborhood of the starting search position, taking it as the starting point, so that the similarity between the feature model at the candidate position and both the initial target model and the target model of the current image frame is maximized (or the average similarity is maximized); or different weights may be set between the first similarity (to the initial target model) and the second similarity (to the target model of the current image frame), so that the weighted similarity is maximized. Alternatively, a first target position may be searched in the neighborhood of the starting search position according to the initial target model, and a second target position searched in the same neighborhood according to the target model of the current image frame, and the target position then determined from the first target position and the second target position. The manner of determining the target position from the two models is not limited here.
With this method, when the target position of the current image frame is determined, the target model of the current image frame is obtained from the initial search model at the starting search position and the target model of the previous image frame, and the target position is determined from the initial target model of the initial image frame together with the target model of the current image frame. This avoids the problem that, when the model is updated only from the previous image frame, factors such as scene illumination intensity and camera view-angle changes cause the deviation between the tracking position and the actual target to accumulate across adjacent frames until the target is lost. The accuracy of target tracking is thereby improved, and the stability of accurate tracking is guaranteed.
Fig. 2 is another target tracking method according to the corresponding embodiment of fig. 1. As shown in fig. 2, the target tracking method may further include the following steps.
In step S210, an initial target position in the initial image frame is determined by position selection in the initial image frame.
Since the initial image frame has no previous image frame, the initial image frame does not need to be subject to target tracking, but only needs to determine the target position.
It should be noted that the target position in the initial image frame may be determined according to position selection, may also be determined according to face recognition, or the like, may also be determined according to a preset position, or may be determined in other determining manners, which are not described herein one by one.
For example, the user selects a position in the initial image frame by using a mouse, for example, a rectangular region, a circular region, an elliptical region, etc., and then determines an initial target position in the initial image frame according to the position-selected region.
In step S220, an initial target model is built according to image features of the initial image frame at the initial target position.
An initial target model is built by extracting image features located at an initial target position from an initial image frame.
In an exemplary embodiment, the initial target model is established using Mean Shift.
The target location is determined by the user's position selection (e.g., with a mouse) in the initial image frame. Assume x_0 is the central pixel coordinate of the selected region and x_i (i = 1, 2, ..., n) are the coordinates of the pixels in the target template. The kernel histogram model of the target template is then described as:

q_u = C Σ_{i=1}^{n} k(||(x_i - x_0)/h||^2) δ[b(x_i) - u]   (1)

In equation (1), h is a bandwidth factor related to the size of the selected target region; the function b(x_i): R^2 -> {1, 2, ..., m} maps the pixel coordinate x_i to the bin index of its pixel value; u = 1, 2, ..., m denotes a color index; n is the total number of pixels contained in the target region; δ is the Kronecker delta function; k(x) is a kernel profile, commonly Epanechnikov, Gaussian, etc.; and C is a normalization constant chosen so that Σ_{u=1}^{m} q_u = 1, thus:

C = 1 / Σ_{i=1}^{n} k(||(x_i - x_0)/h||^2)
by using the method, before the target position is determined for the current image frame, the initial target position is determined in the initial image frame through position selection in advance, and then the target tracking is performed in the subsequent image frame according to the target at the initial target position.
Optionally, in the target tracking method shown in the corresponding embodiment of fig. 1, step S120 may include the following steps:
and controlling the operation between the initial searching model of the current image frame at the initial searching position and the target model of the previous image frame through a preset model weight to obtain the target model of the current image frame.
The preset model weight is a weight parameter for controlling the weight between different models. The model weight value can be preset and fixed or can be adjustable after being set according to an empirical value.
It can be understood that, according to factors such as the illumination intensity of different application scenes, the visual angle relationship between the target and the camera, the target deformation and the like, in a specific application process, the preset model weight is modified, so that the target tracking accuracy is further improved.
By computing between the initial search model of the current image frame at the initial search position and the target model of the previous image frame with a preset model weight, the target model of the current image frame already contains the features of the target in the previous image frame, which avoids the target tracking deviation caused by rapid changes between image frames in factors such as scene illumination intensity, the view-angle relationship between the target and the camera, and target deformation.
Optionally, fig. 3 is a detailed description of step S120 in the target tracking method according to the corresponding embodiment shown in fig. 1. As shown in fig. 3, the preset model weights include a preset first model weight and a preset second model weight, the target model of the current image frame includes a fast-changing model and a slow-changing model, and the step S120 may include the following steps.
In step S121, the operation between the fast-changing model of the previous image frame and the initial search model is controlled by the preset first model weight, so as to obtain the fast-changing model of the current image frame.
The target model of the current image frame comprises a fast-changing model and a slow-changing model.
The fast-changing model is a feature model which focuses more on describing image features which change faster in the target model, and the slow-changing model is a feature model which focuses more on describing image features which change slower in the target model.
The preset first model weight is a weight parameter for controlling the weight between the fast-changing model of the previous image frame and the initial search model. The first model weight may be preset and fixed, or may be set according to an empirical value and then adjustable.
For example, the fast-changing model q_u^{fast}(t-1) of the previous image frame and the initial search model p_u(t) are combined under the preset first model weight α to obtain the fast-changing model of the current image frame:

q_u^{fast}(t) = (1 - α) q_u^{fast}(t-1) + α p_u(t)

In an exemplary embodiment, the first model weight α is preset to 0.05, and α may be adjusted according to the effect of target tracking during actual target tracking.
In step S122, the operation between the initial target model of the initial image frame and the fast-changing model of the current image frame is controlled by the preset second model weight, so as to obtain the slow-changing model of the current image frame.
Similarly, the preset second model weight is a weight parameter that controls a weight between the initial target model of the initial image frame and the fast-changing model of the current image frame. The second model weight may be preset and fixed, or may be adjustable after being set according to an empirical value.
For example, the initial target model q_u^{0} of the initial image frame and the fast-changing model q_u^{fast}(t) of the current image frame are combined under the preset second model weight β to obtain the slow-changing model of the current image frame:

q_u^{slow}(t) = β q_u^{0} + (1 - β) q_u^{fast}(t)

In an exemplary embodiment, the second model weight β is preset to 0.8, and β may be adjusted according to the effect of target tracking during actual target tracking.
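The two-stage model update can be sketched as follows. The convex-combination form is an assumption consistent with standard mean-shift model updating (the extracted text omits the exact formulas); α = 0.05 and β = 0.8 are the exemplary values given above, and the 16-bin histograms are illustrative:

```python
import numpy as np

ALPHA = 0.05  # preset first model weight (fast-model update rate)
BETA = 0.8    # preset second model weight (pull toward the initial model)

def update_fast(fast_prev, search_model, alpha=ALPHA):
    """Fast-changing model: the previous frame's fast model blended with
    the current frame's initial search model."""
    return (1 - alpha) * fast_prev + alpha * search_model

def update_slow(initial_model, fast_curr, beta=BETA):
    """Slow-changing model: anchored to the initial target model so that
    per-frame deviations cannot accumulate over time."""
    return beta * initial_model + (1 - beta) * fast_curr

rng = np.random.default_rng(0)
q0 = rng.random(16); q0 /= q0.sum()               # initial target model
search = rng.random(16); search /= search.sum()   # current-frame search model

fast = update_fast(q0, search)   # fast model of the current frame
slow = update_slow(q0, fast)     # slow model of the current frame
```

Because each update is a convex combination of unit-sum histograms, both models remain valid (normalized) histograms after every frame.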
With this method, the target model is divided into a fast-changing model and a slow-changing model. When the target model of the current image frame is calculated, the preset model weights control the operations that produce the fast-changing and slow-changing models of the current image frame, and the target position is then determined in the current image frame from the initial target model together with the fast-changing and slow-changing models. Because the image features of the target in the initial image frame and both the faster- and slower-changing image features in the current image frame are fully considered, the deviation between the slowly changing features of different image frames caused by factors such as scene illumination intensity and camera view-angle changes cannot gradually accumulate into a deviation between the tracking position and the actual target, and the target cannot be lost through the continuous accumulation of deviations between adjacent image frames. The accuracy of target tracking is thus improved, and the stability of accurate tracking is guaranteed.
Fig. 4 is a detailed description of step S130 in the target tracking method according to the corresponding embodiment shown in fig. 1. As shown in fig. 4, the step S130 may include the following steps.
In step S131, different search positions are selected from the current image frame with the initial search position as a starting point, and the search model of the current image frame at the search position is respectively matched with the initial target model of the initial image frame and the target model of the current image frame, so as to determine a first target search position and a second target search position in the current image frame.
When different search positions are selected from the current image frame with the initial search position as the starting point and the search model of the current image frame at the search position is matched with the initial target model of the initial image frame and the target model of the current image frame, various methods for performing matching operation on the image, for example, a superimposed image analysis method, an inter-frame image correlation method, and the like, may be used, and the method of the matching operation is not limited herein.
In an exemplary embodiment, the matching operation is performed using the mean shift method. Starting from the starting search position, the mean shift method uses the kernel density estimate to measure the similarity to the initial target model of the initial image frame or to the target model of the current image frame, and estimates the maximum point of the search probability density (i.e., the class center); the colors of all pixels of a class can then be replaced by the color of the class center, thereby smoothing the image. Finally, the class centers are clustered as needed, and regions with too few pixels or with class centers too close together are merged, to avoid over-segmentation caused by too many classes.
Fig. 5 is a flow chart illustrating a matching operation by a mean shift method according to an exemplary embodiment. By using a mean shift method and a target model quWhen matching operation is carried out, the target position in the previous image frame is determined as the initial search position in the current image frame, and the central coordinate of the initial search position is set as y0At y0The neighborhood search iteration of finding the optimal target position (i.e., the center coordinate y of the target search position)1) That is, the maximum value of the calculated similarity (e.g., Bhattacharyya coefficient ρ).
In the current image frame, the feature model of the candidate position $y$ is:

$$p_u(y) = C_h \sum_{i=1}^{n_h} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right) \delta[b(x_i) - u] \tag{3}$$

where $x_i$ denotes the pixel coordinates in the search window, $h$ is the kernel bandwidth, $k(\cdot)$ is the kernel profile, $b(x_i)$ maps pixel $x_i$ to its feature bin $u$, and $\delta$ is the Kronecker delta. The normalization constant is:

$$C_h = \frac{1}{\sum_{i=1}^{n_h} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right)}$$

A Taylor expansion is performed on the Bhattacharyya coefficient $\rho(y) = \sum_u \sqrt{p_u(y)\, q_u}$ around $p_u(y_0)$, approximating it as:

$$\rho(y) \approx \frac{1}{2} \sum_u \sqrt{p_u(y_0)\, q_u} + \frac{1}{2} \sum_u p_u(y) \sqrt{\frac{q_u}{p_u(y_0)}}$$

Substituting equation (3) yields:

$$\rho(y) \approx \frac{1}{2} \sum_u \sqrt{p_u(y_0)\, q_u} + \frac{C_h}{2} \sum_{i=1}^{n_h} w_i\, k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right) \tag{4}$$

where

$$w_i = \sum_u \sqrt{\frac{q_u}{p_u(y_0)}}\, \delta[b(x_i) - u]$$

In equation (4) the first term is independent of $y$, so maximizing the Bhattacharyya coefficient $\rho$ amounts to maximizing the second term, from which the Mean Shift vector can be derived:

$$y_1 = \frac{\sum_{i=1}^{n_h} x_i\, w_i\, g\!\left(\left\|\frac{y_0 - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n_h} w_i\, g\!\left(\left\|\frac{y_0 - x_i}{h}\right\|^2\right)} \tag{5}$$

where $g(x) = -k'(x)$ is the derivative of the kernel profile used in the kernel density estimate. The center coordinate $y_1$ of the target search position in the current image frame is obtained by iterating this update from $y_0$ until convergence.
Fig. 6 is a diagram illustrating a process of performing a matching operation by a mean shift method according to an exemplary embodiment. The MeanShift vector is a normalized probability density gradient. The Mean Shift method is a nonparametric feature space analysis method based on kernel density estimation, and rapidly converges on a local maximum of a probability density function through iteration of adaptive step sizes. The iterative process is as follows: calculating the offset mean value of the current search position, moving to the offset mean value, then taking the offset mean value as a new search position, recalculating the offset mean value, and continuing moving until a certain condition is met.
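The iterative process described above can be sketched as follows. This is a minimal illustration assuming an Epanechnikov kernel profile, for which $g(x) = -k'(x)$ reduces to an indicator of the bandwidth window; the sample coordinates, per-pixel weights, and function names are hypothetical, not taken from the patent.

```python
import numpy as np

def mean_shift(samples, weights, y0, bandwidth=1.0, n_iter=20, tol=1e-3):
    """Iterate y <- sum(w_i * g_i * x_i) / sum(w_i * g_i) until the shift is small.

    samples: (N, d) pixel coordinates; weights: per-pixel weights w_i
    (e.g. from the Bhattacharyya expansion). With an Epanechnikov
    profile, g is simply an indicator of the bandwidth window."""
    y = np.asarray(y0, dtype=float)
    for _ in range(n_iter):
        d2 = np.sum((samples - y) ** 2, axis=1) / bandwidth ** 2
        g = (d2 < 1.0).astype(float)           # g(x) = -k'(x) for Epanechnikov
        denom = np.sum(weights * g)
        if denom == 0:                          # no samples in the window
            break
        y_new = (weights * g) @ samples / denom
        if np.linalg.norm(y_new - y) < tol:     # offset mean converged
            return y_new
        y = y_new
    return y
```

Starting near a cluster of samples, the iteration converges to the weighted cluster center, matching the "compute offset mean, move, repeat" loop in the text.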
In step S132, a target position in the current image frame is determined according to the first target search position and the second target search position.
After the first target search position and the second target search position are found, the midpoint between their center coordinates may be calculated and used as the center point of the target position; alternatively, the first and second target search positions may be weighted according to preset weights to determine the target position in the current image frame; alternatively, the respective weights of the first and second target search positions may be determined from the similarities obtained in the matching operation of step S131, and the target position determined from the weighted positions. The target position in the current image frame may also be determined from the first and second target search positions in other manners, which are not described one by one here.
With the method described above, when the target position of the current image frame is determined, two target search positions are found based on the initial target model of the initial image frame and the target model of the current image frame, each being the position whose search model has the highest similarity with the corresponding model after the matching operation, and the target position of the current image frame is determined from these two target search positions. Since the target model of the initial image frame and the target model of the current image frame are both fully considered, that is, both the faster-changing and the slower-changing image features of the current image frame are taken into account, the finally determined target position is more accurate, target loss caused by continuous accumulation of the deviation of previous image frames is avoided, the accuracy of target tracking is improved, and the stability of accurate target tracking is ensured.
Alternatively, fig. 7 is a detailed description of step S132 in the target tracking method according to the embodiment shown in fig. 4. As shown in fig. 7, the step S132 may include the following steps.
In step S1321, a first target search model and a second target search model of the current image frame at the first target search position and the second target search position are respectively established.
Similarly, a first target searching model and a second target searching model are established according to the image characteristics of the current image frame at the first target searching position and the second target searching position.
In step S1322, the similarity between the first target search model and the initial target model and the similarity between the second target search model and the target model of the current image frame are calculated, respectively.
The similarity is the Bhattacharyya coefficient, calculated as shown in equation (6): $\rho(p, q) = \sum_{u=1}^{m} \sqrt{p_u\, q_u}$, where $p$ and $q$ are the two models being compared.
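The Bhattacharyya coefficient between two normalized histogram models, which the text uses as the similarity of equation (6), can be evaluated directly; the function name below is illustrative.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms.

    Returns 1.0 for identical distributions and 0.0 for distributions
    with disjoint support."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(np.sqrt(p * q)))
```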
Optionally, the target model of the current image frame may include a fast-changing model and a slow-changing model; denote the initial target model, the fast-changing model, and the slow-changing model by $q_u$, $q_u^{\mathrm{fast}}$, and $q_u^{\mathrm{slow}}$, respectively.
at this time, the second target search position will include two: a second target search position a, a second target search position B.
The similarity $\rho_1$ between the first target search model and the initial target model, the similarity $\rho_2$ between the second target search model at position A and the fast-changing model of the current image frame, and the similarity $\rho_3$ between the second target search model at position B and the slow-changing model of the current image frame are each calculated according to equation (6).
in step S1323, the position weights of the first target search position and the second target search position are determined according to the similarity.
In an exemplary embodiment, the position weights $\alpha_1$, $\alpha_2$, $\alpha_3$ of the first target search position and the second target search positions A and B are calculated from the similarity between the first target search model and the initial target model and from the similarities between the second target search models and the fast-changing and slow-changing models of the current image frame, for example by normalizing the three similarities so that $\alpha_1 + \alpha_2 + \alpha_3 = 1$.
In step S1324, the target position in the current image frame is determined by the first target search position, the second target search position, and the position weight thereof.
Let $y_{11}$, $y_{12}$, $y_{13}$ be the center coordinates of the first target search position, the second target search position A, and the second target search position B; then the target position in the current image frame is $y_f = \alpha_1 y_{11} + \alpha_2 y_{12} + \alpha_3 y_{13}$.
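The weighted combination of the candidate centers can be sketched as below. Deriving the weights by normalizing the similarities is one plausible choice, not specified by the embodiment, and the function name is illustrative.

```python
import numpy as np

def fuse_positions(centers, similarities):
    """Fuse candidate target centers using weights derived from similarity.

    centers: list of (x, y) centers for the first and second target
    search positions; similarities: the matching similarities (e.g.
    Bhattacharyya coefficients) for those positions. Weights are the
    normalized similarities, so they sum to 1."""
    centers = np.asarray(centers, dtype=float)
    rho = np.asarray(similarities, dtype=float)
    alpha = rho / rho.sum()       # alpha_1 + alpha_2 + alpha_3 = 1
    return alpha @ centers        # y_f = sum_i alpha_i * y_1i
```

A position with twice the similarity of the others contributes twice the weight to the fused center.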
With the method described above, the first and second target search positions in the current image frame are determined, a search model is established at each target search position, the position weight of each target search position is determined from the similarity between its search model and the target model it is matched against, and the target position in the current image frame is finally determined by weighting the target search positions accordingly. Because the contribution of each target search position to the target position is determined by its similarity, adaptability to changes in scene illumination intensity across image frames is enhanced, further ensuring the stability of accurate target tracking.
Fig. 8 is a schematic diagram illustrating a process of tracking a target in different image frames by using a classical mean shift method under an experimental scenario where the illumination intensity varies according to an exemplary embodiment. As can be seen from fig. 8, when the illumination intensity of the scene changes, the target cannot be effectively tracked by using the classical mean shift method.
Fig. 9 is a schematic diagram illustrating a process of tracking a target in different image frames according to the present solution in an experimental scenario where the illumination intensity changes according to an exemplary embodiment. As can be seen from fig. 9, when the scene illumination intensity changes, the target can be effectively tracked by using the scheme.
The effectiveness of the present scheme for target tracking is compared quantitatively using the effective frame rate. Assuming that, in a certain image frame, the manually framed moving target area is M and the target area obtained after tracking is N, the overlap ratio is:

$$\mathrm{overlap} = \frac{|M \cap N|}{|M \cup N|}$$
if overlap is greater than 1/2, object tracking for that image frame is valid. The ratio of the number of image frames in which the target tracking is effective to the total number of image frames in each image frame sequence is called the effective frame rate. When the scene illumination changes, the effective frame rate for tracking the target by adopting the classic mean shift method and the scheme is shown in the following table.
Analysis of the above table shows that, when tracking the target in the current image frame, the present scheme uses the initial-frame model: when the target position is determined in the current image frame through the target model of the previous image frame, the initial-frame model is used to trim the target model of the current image frame. This avoids the situation in which deviations between the previous image frame and the actual target, caused by factors such as scene illumination intensity and camera viewing-angle changes, accumulate across adjacent image frames until the tracking position drifts from the actual target and the target is finally lost. The target tracking result is effectively improved and the effective frame rate is greatly increased. The experimental results fully demonstrate the stability of accurate target tracking based on the present scheme.
The following is an embodiment of a system of the present invention, which may be used to implement the above-described embodiment of the target tracking method. For details that are not disclosed in the embodiments of the system of the present invention, refer to the embodiments of the target tracking method of the present invention.
FIG. 10 is a block diagram illustrating a target tracking apparatus according to an exemplary embodiment. The apparatus includes, but is not limited to: an initial search position determination module 110, a current target model calculation module 120, and a current target position determination module 130.
A starting search position determination module 110 for determining a starting search position in a current image frame from a target position in a previous image frame of the current image frame;
a current target model calculation module 120, configured to calculate a target model of the current image frame according to an initial search model of the current image frame at the initial search position and a target model of the previous image frame;
a current target position determining module 130, configured to determine a target position in the current image frame by using the initial search position as a starting point according to an initial target model of an initial image frame and a target model of the current image frame.
The implementation processes of the functions and actions of each module in the device are specifically described in the implementation processes of the corresponding steps in the target tracking method, and are not described herein again.
Optionally, as shown in fig. 11, the target tracking apparatus shown in fig. 10 further includes, but is not limited to: an initial object position determination module 210 and an initial object model building module 220.
An initial target position determination module 210 for determining an initial target position in the initial image frame from the position selections in the initial image frame;
an initial object model building module 220, configured to build an initial object model according to image features of the initial image frame at the initial object position.
Optionally, as shown in fig. 12, the current object model calculation module 120 shown in fig. 10 includes but is not limited to: a fast-varying model calculation unit 121 and a slow-varying model calculation unit 122.
A fast-changing model calculating unit 121, configured to control, according to a preset first model weight, an operation between the fast-changing model of the previous image frame and the initial search model, to obtain a fast-changing model of the current image frame;
and the slowly varying model calculating unit 122 is configured to control, according to a preset second model weight, an operation between the initial target model of the initial image frame and the rapidly varying model of the current image frame, so as to obtain the slowly varying model of the current image frame.
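The operations performed by the fast-changing and slow-changing model calculating units can be sketched as a two-timescale blend. The specific weight values and the linear-blend form are illustrative assumptions, since the embodiment only states that the model weights are preset.

```python
import numpy as np

def update_models(fast_prev, search_model, initial_model, beta=0.9, gamma=0.5):
    """Two-timescale model update sketched from the fast/slow description.

    fast model: blend of the previous frame's fast-changing model and the
                current initial search model (first model weight beta);
    slow model: blend of the initial-frame target model and the current
                fast-changing model (second model weight gamma).
    beta and gamma are illustrative preset weights."""
    fast = beta * np.asarray(fast_prev) + (1 - beta) * np.asarray(search_model)
    slow = gamma * np.asarray(initial_model) + (1 - gamma) * fast
    return fast, slow
```

The slow model stays anchored to the initial-frame model, which is the trimming effect the experimental section attributes to the scheme.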
Optionally, as shown in fig. 13, the current target position determination module 130 shown in fig. 10 includes but is not limited to: a matching operation unit 131 and a front target position determination unit 132.
A matching operation unit 131, configured to select different search positions from the current image frame with the initial search position as a starting point, perform matching operation on a search model of the current image frame at the search position, an initial target model of an initial image frame, and a target model of the current image frame, respectively, and determine a first target search position and a second target search position in the current image frame;
a current target position determining unit 132, configured to determine a target position in the current image frame according to the first target search position and the second target search position.
Optionally, as shown in fig. 14, the current target position determining unit 132 shown in fig. 13 further includes but is not limited to: a target search model building subunit 1321, a similarity degree operator unit 1322, a position weight determination subunit 1323, and a current target position determination subunit 1324.
A target search model establishing subunit 1321, configured to respectively establish a first target search model and a second target search model of the current image frame at the first target search position and the second target search position;
a similarity operator unit 1322 for calculating similarities between the first target search model and the initial target model, and between the second target search model and the target model of the current image frame, respectively;
a position weight determining subunit 1323, configured to determine, according to the similarity, a position weight of the first target search position and the second target search position;
a current target position determining subunit 1324, configured to determine a target position in the current image frame according to the first target search position, the second target search position, and the position weight thereof.
Optionally, the present invention further provides a terminal, which executes all or part of the steps of the target tracking method shown in any of the above exemplary embodiments. The terminal includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to any one of the above exemplary embodiments.
The specific manner in which the processor in the terminal performs the operation in this embodiment has been described in detail in the embodiment related to the target tracking method, and will not be elaborated here.
In an exemplary embodiment, a storage medium is also provided, which is a computer-readable storage medium, for example a transitory or non-transitory computer-readable storage medium including instructions. The storage medium includes, for example, a memory of instructions executable by a processor of the terminal to perform the target tracking method described above.
It is to be understood that the invention is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be effected therein by one skilled in the art without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A method of target tracking, the method comprising:
determining a starting search position in a current image frame from a target position in a previous image frame of the current image frame;
calculating to obtain a target model of the current image frame through an initial search model of the current image frame at the initial search position and a target model of the previous image frame;
and determining a target position in the current image frame by taking the initial searching position as a starting point according to an initial target model of an initial image frame and a target model of the current image frame.
2. The method of claim 1, wherein the step of determining a starting search position in a current image frame from a target position in a previous image frame of the current image frame is preceded by the method further comprising:
determining an initial target location in the initial image frame from the location selections in the initial image frame;
and establishing an initial target model according to the image characteristics of the initial image frame at the initial target position.
3. The method of claim 1, wherein the step of calculating the target model of the current image frame from the starting search model of the current image frame at the starting search position and the target model of the previous image frame comprises:
and controlling the operation between the initial searching model of the current image frame at the initial searching position and the target model of the previous image frame through a preset model weight to obtain the target model of the current image frame.
4. The method according to claim 3, wherein the preset model weights comprise a preset first model weight and a preset second model weight, the target model of the current image frame comprises a fast-varying model and a slow-varying model, and the step of controlling the operation of the current image frame between the initial search model of the initial search position and the target model of the previous image frame by the preset model weights comprises:
controlling the operation between the fast-changing model of the previous image frame and the initial searching model through a preset first model weight to obtain the fast-changing model of the current image frame;
and controlling the operation between the initial target model of the initial image frame and the fast-changing model of the current image frame through a preset second model weight to obtain the slow-changing model of the current image frame.
5. The method of claim 1, wherein the step of determining a target location in the current image frame starting from the starting search location from an initial target model of an initial image frame and a target model of the current image frame comprises:
selecting different search positions from the current image frame by taking the initial search position as a starting point, respectively performing matching operation on a search model of the current image frame at the search position, an initial target model of the initial image frame and a target model of the current image frame, and determining a first target search position and a second target search position in the current image frame;
and determining the target position in the current image frame according to the first target searching position and the second target searching position.
6. The method of claim 5, wherein the step of determining a target location in the current image frame based on the first target search location and the second target search location comprises:
respectively establishing a first target searching model and a second target searching model of the current image frame at the first target searching position and the second target searching position;
respectively calculating the similarity between the first target searching model and the initial target model and the similarity between the second target searching model and the target model of the current image frame;
determining the position weight of the first target searching position and the second target searching position according to the similarity;
and determining the target position in the current image frame according to the first target searching position, the second target searching position and the position weight value thereof.
7. An object tracking apparatus, characterized in that the apparatus comprises:
a starting search position determining module for determining a starting search position in a current image frame from a target position in a previous image frame of the current image frame;
the current target model calculation module is used for calculating a target model of the current image frame according to an initial search model of the current image frame at the initial search position and a target model of the previous image frame;
and the current target position determining module is used for determining a target position in the current image frame by taking the initial searching position as a starting point according to an initial target model of an initial image frame and a target model of the current image frame.
8. The apparatus of claim 7, further comprising:
an initial target position determination module for determining an initial target position in the initial image frame from a position selection in the initial image frame;
and the initial target model establishing module is used for establishing an initial target model according to the image characteristics of the initial image frame at the initial target position.
9. A terminal, characterized in that the terminal comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
10. A computer-readable storage medium storing a program, characterized in that the program, when executed, causes a terminal to perform the method according to any one of claims 1-6.
CN201810203671.6A 2018-03-13 2018-03-13 Method for tracking target and device, terminal, computer readable storage medium Withdrawn CN108416800A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810203671.6A CN108416800A (en) 2018-03-13 2018-03-13 Method for tracking target and device, terminal, computer readable storage medium

Publications (1)

Publication Number Publication Date
CN108416800A true CN108416800A (en) 2018-08-17

Family

ID=63131225


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109523573A (en) * 2018-11-23 2019-03-26 上海新世纪机器人有限公司 The tracking and device of target object
CN110929093A (en) * 2019-11-20 2020-03-27 百度在线网络技术(北京)有限公司 Method, apparatus, device and medium for search control
CN111010590A (en) * 2018-10-08 2020-04-14 传线网络科技(上海)有限公司 Video clipping method and device
CN112635073A (en) * 2020-12-18 2021-04-09 四川省疾病预防控制中心 Method and device for checking close contact person, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107256561A (en) * 2017-04-28 2017-10-17 纳恩博(北京)科技有限公司 Method for tracking target and device
CN107424175A (en) * 2017-07-20 2017-12-01 西安电子科技大学 A kind of method for tracking target of combination spatio-temporal context information


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈万龙 (Chen Wanlong): "Active Vision System Based on Mean Shift and Fuzzy Control", China Excellent Master's Theses, Information Science and Technology Series *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180817