CN101986348A - Visual target identification and tracking method - Google Patents


Info

Publication number
CN101986348A
Authority
CN
China
Prior art keywords
image
visual target
frame
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201010537843
Other languages
Chinese (zh)
Inventor
熊玉梅
宁建红
闫俊英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN 201010537843 priority Critical patent/CN101986348A/en
Publication of CN101986348A publication Critical patent/CN101986348A/en
Pending legal-status Critical Current


Abstract

The invention provides a visual target identification method comprising the following steps: converting a collected original image into a binary image; calculating the bounding box of a visual target on the basis of the binary image; and searching for feature points within the bounding box. The invention also discloses a visual target tracking method, which comprises: presetting the search window of the zeroth frame to the same size as the image; identifying the first frame image to obtain the bounding box; and predicting the search window. The invention uses image processing methods to calculate the bounding box and the feature points within it. On the basis of target tracking technology, the invention further provides a method based on a predictive search window, which predicts and tracks the motion of the target, thereby markedly reducing the search range and improving the real-time performance of the target tracking method.

Description

Visual target recognition and tracking method
Technical field
The present invention relates to the field of image analysis technology, and in particular to a visual target recognition and tracking method.
Background technology
Visual target recognition and tracking identifies the position of a visual target in a sequence of images, computes the state of the target of interest, and adopts different tracking strategies according to the target's properties, degrees of freedom, and tracking conditions. The technology has promising applications in fields such as fingerprint recognition for identity verification, face recognition, image processing, intelligent traffic management, and robot simulation.
At present, visual target recognition and tracking has become a focus of information processing research both in China and abroad. Existing visual target recognition mainly includes the following methods:
(1) Classical statistical pattern recognition. This method exploits the statistical distribution of target features and relies on extensive training of the recognition system and on feature-matching classification based on distance metrics in a model space, achieving effective recognition within a narrowly defined scene domain. As an early approach, however, it works well only in very narrow scene domains where the target image and its surrounding background change little, and it struggles with problems such as pose variation, blurring of the target, and partial occlusion.
(2) Knowledge-based target recognition. In the late 1970s, artificial intelligence expert systems began to be applied to target recognition research, giving rise to knowledge-based (Knowledge Based, KB) recognition technology. Knowledge-based recognition algorithms overcome, to some extent, the limitations and defects of classical statistical pattern recognition. Their main remaining problems are that identifying usable knowledge sources and verifying that knowledge can be very difficult, and that organizing knowledge to adapt effectively to new scenes is equally hard.
(3) Model-based automatic target recognition. The model-based (Model Based, MB) method first models the sample space of a complex recognition task; the models provide a simple way to describe the important modes of variation of the sample space. A typical model-based system extracts certain target features and uses them, together with auxiliary knowledge, to register the model parameters of the target, thereby selecting initial hypotheses and predicting target features. The final goal of a model-based system is to match the actual features against the predicted ones: if the labeling is accurate, matching can be successful and effective. However, model-based automatic target recognition is still at the research stage, and practical application remains some way off.
(4) Target recognition based on multi-sensor information fusion. In complex environments with optical and electrical interference, a single-sensor seeker suffers reduced target-search capability, interference resistance, and operational reliability. Target recognition based on multi-sensor information fusion (Multi-sensor Information Fusion Based, MIFB), which arose in the 1980s, overcomes the defects of single-sensor systems: each sensor feeds its data into its own signal processor, which first performs target detection to produce a target/no-target decision and the target's position or trajectory; this information is then sent to a data fusion unit, where positions or trajectories are associated and a further joint decision is made. However, in such fusion-based methods the feature-index weights are chosen subjectively, introducing considerable arbitrariness.
(5) Target recognition based on artificial neural networks and expert systems. An expert system is an artificial intelligence approach that simulates human reasoning through logical inference. An artificial neural network (ANN) is based on neuron connection structures and simulates the non-logical, non-linguistic imagery of human thinking by modeling the structure of the human brain. Neural networks can solve many difficulties in pattern recognition that traditional methods cannot overcome, and have achieved high recognition accuracy even for partially occluded targets. However, neural network implementations still face a real-time performance bottleneck in engineering applications.
In view of the problems of the prior art, the inventors, drawing on many years of experience in the field, have actively researched improvements, resulting in the visual target recognition and tracking method of the present invention.
Summary of the invention
In view of the low accuracy and poor real-time performance of existing visual target identification and tracking in the prior art, the present invention provides a visual target recognition method.
Another object of the present invention is, in view of the same defects of the prior art, to provide a method for tracking a visual target identified by said visual target recognition method.
To solve the above problems, the invention provides a visual target recognition method comprising: converting the collected original image into a binary image; calculating the bounding box of the visual target on the basis of the binary image; and searching for feature points within the bounding box.
Optionally, the number of said feature points is 8.
The conversion of the original image into a binary image further comprises: converting the original image into a 256-level grayscale image; and segmenting the grayscale image into a binary image by thresholding.
The conversion of the original image into a grayscale image comprises: acquiring a color image of the original image in RGB format through a Matrox capture card, in which each pixel is stored in three bytes, R, G and B; converting the image from the RGB color system to the YUV color system, in which the Y signal represents the brightness of a pixel; and directly taking the Y signal of each pixel to obtain a 256-level grayscale image. The conversion formula from the RGB color system to the YUV color system is
\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.148 & -0.289 & 0.437 \\ 0.615 & -0.515 & -0.100 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}
The segmentation of the grayscale image into a binary image by thresholding further comprises: obtaining the minimum gray value Z_min and the maximum gray value Z_max of the grayscale image, and setting the threshold T_0 = (Z_min + Z_max)/2; dividing the grayscale image into two parts, target image and background image, according to the threshold T_k, and obtaining the average gray values Z_low and Z_high of the two parts:

Z_{low} = \frac{\sum_{z(i,j) < T_k} z(i,j) \, N(i,j)}{\sum_{z(i,j) < T_k} N(i,j)}, \qquad Z_{high} = \frac{\sum_{z(i,j) > T_k} z(i,j) \, N(i,j)}{\sum_{z(i,j) > T_k} N(i,j)}

where z(i,j) and N(i,j) are respectively the gray value and weight of point (i,j) of the grayscale image; in the calculation, N(i,j) = 1;
obtaining a new threshold T_{k+1} = (Z_low + Z_high)/2; if T_k = T_{k+1}, the computation ends; if T_k ≠ T_{k+1}, setting k to k+1 and repeating the above steps.
The calculation of the bounding box of the visual target further comprises: detecting the outer frame of the visual target; and searching for feature points within the region of the outer frame.
The detection of the outer frame of the visual target further comprises: denoising the binary image by image erosion; extracting the frame of the visual target; and calculating the coordinates of the quadrilateral frame.
The calculation of the feature points further comprises: scanning the regions of the n candidate feature points of the candidate set, computing their areas, and grouping areas with small differences into classes; if an area class occurs fewer than 8 times, deleting its points from the candidate feature point set, after which the set S contains n' elements; taking the centroid coordinates of all feature-point elements of S, performing a Hough transform, fitting the two most probable straight lines, and computing their intersection point A; finding the point A' of S closest to A and searching from A' along the two line directions; if 4 and 3 feature points respectively can be found, the search succeeds, S is cleared, and these seven feature points together with A' are put into S; otherwise the search fails and S is not cleared.
To achieve the other object of the invention, the invention provides a method for tracking a visual target identified by said visual target recognition method, comprising: by default, treating the search window of the 0th frame as the same size as the entire image; for the first frame, with the search window still the size of the entire image, calling the visual target recognition algorithm on the image within the search window for static recognition, adjusting repeatedly to ensure that the first frame is successfully recognized against a simple background, and obtaining a suitable bounding box for the set of feature points of the marker; and, from the second frame on, examining the four vertices of the search windows of the previous two frames, computing from the spatial motion of these two sets of vertices the possible directions of motion of the four vertices of the next search window, and correcting the size and region of the search window along these four directions as a strategy for enlarging, shrinking or moving it. If the number of feature points detected in the previous frame is less than the actual number, the search window is enlarged by a certain distance in all four directions; otherwise it is shrunk. The distance of enlargement or shrinkage is the average side length of the bounding box of the feature points.
In summary, the present invention uses image processing methods to calculate the bounding box and the feature points within it. At the same time, in target tracking, a method based on a predictive search window is proposed, which performs motion prediction and tracking of the marker, markedly reducing the search range and improving the real-time performance of the tracking method.
Description of drawings
Fig. 1 is a flowchart of the visual target recognition method of the visual target recognition and tracking method of the present invention;
Fig. 2 is a schematic diagram of the conversion of the original image into a binary image in the visual target recognition process;
Fig. 3 is a schematic diagram of noise removal in the visual target recognition process;
Fig. 4 is a schematic diagram of the image before and after the visual target frame extraction;
Fig. 5 is a flowchart of the visual target tracking method;
Fig. 6 is a schematic diagram of the search window prediction method.
Embodiment
The technical content, structural features, objects and effects of the invention are described in detail below with reference to embodiments and the accompanying drawings.
Referring to Fig. 1, which shows the flowchart of the visual target recognition method. In visual target recognition, the available information is the edge information, geometric information and color information of the visual target. The visual target recognition method 1 comprises the following steps:
Step S11: converting the collected original image 10 into a binary image 20, the original image 10 being a color image in RGB format and the binary image 20 being a black-and-white image;
Step S12: calculating the bounding box of the visual target on the basis of the binary image 20;
Step S13: searching for feature points within the bounding box.
To improve the real-time performance of the algorithm, the present invention converts the original image 10 into a binary image 20. In terms of information content, each pixel of the original image 10 carries 24 bits of information, whereas each pixel of the binary image 20 carries only 1 bit, so processing the binary image 20 greatly accelerates image processing. The conversion of the original image 10 into the binary image 20 further comprises the following steps:
Step S111: converting the original image 10 into a 256-level grayscale image 30;
Step S112: segmenting the grayscale image 30 into the binary image 20 using a threshold segmentation algorithm from image processing.
Step S111 specifically comprises:
Step S1111: acquiring a color image of the original image 10 in RGB format through a Matrox capture card, each pixel being stored in three bytes, R, G and B;
Step S1112: converting the image from the RGB color system to the YUV color system, the Y signal representing the brightness of a pixel;
Step S1113: directly taking the Y signal of each pixel to obtain a 256-level grayscale image 30. The conversion formula from the RGB color system to the YUV color system is as follows:
\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.148 & -0.289 & 0.437 \\ 0.615 & -0.515 & -0.100 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}
In this application, only Y is computed for each pixel; U and V need not be calculated.
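The grayscale conversion of steps S1111–S1113 can be sketched as follows. This is a minimal illustration in Python with NumPy, replacing the Matrox capture card with an in-memory array; the function name and toy image are illustrative, not part of the patent.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 RGB image (uint8) to a 256-level grayscale
    image using only the Y (luminance) row of the YUV transform, as in
    step S1113: Y = 0.299 R + 0.587 G + 0.114 B."""
    rgb = rgb.astype(np.float64)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(np.round(y), 0, 255).astype(np.uint8)

# A pure-red, pure-green, and pure-blue pixel:
img = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
print(rgb_to_gray(img))  # [[ 76 150  29]]
```

Since U and V are never used downstream, computing only this one weighted sum per pixel keeps the conversion cheap, as the description notes.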
Step S112 specifically comprises:
Step S1121: obtaining the minimum gray value Z_min and the maximum gray value Z_max of the grayscale image 30, and setting the threshold T_0 = (Z_min + Z_max)/2;
Step S1122: dividing the grayscale image 30 into two parts, target image and background image, according to the threshold T_k, and obtaining the average gray values Z_low and Z_high of the two parts:
Z_{low} = \frac{\sum_{z(i,j) < T_k} z(i,j) \, N(i,j)}{\sum_{z(i,j) < T_k} N(i,j)}, \qquad Z_{high} = \frac{\sum_{z(i,j) > T_k} z(i,j) \, N(i,j)}{\sum_{z(i,j) > T_k} N(i,j)}
where z(i,j) and N(i,j) are respectively the gray value and weight of point (i,j) of the grayscale image 30; in the calculation, N(i,j) = 1;
Step S1123: obtaining a new threshold T_{k+1} = (Z_low + Z_high)/2;
Step S1124: if T_k = T_{k+1}, the computation ends; if T_k ≠ T_{k+1}, setting k to k+1 and repeating from step S1122.
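Steps S1121–S1124 amount to the classic iterative threshold-selection loop. The sketch below assumes integer gray levels and uses a small tolerance in place of exact threshold equality; all names and the toy image are illustrative.

```python
import numpy as np

def iterative_threshold(gray, max_iter=100):
    """Iterative threshold selection (steps S1121-S1124): start at
    T0 = (Zmin + Zmax)/2, split pixels into two groups around T, and
    set the next threshold to the mean of the two group averages,
    stopping once the threshold effectively no longer changes."""
    g = gray.astype(np.float64)
    t = (g.min() + g.max()) / 2.0
    for _ in range(max_iter):
        low, high = g[g < t], g[g >= t]
        if low.size == 0 or high.size == 0:
            break
        t_new = (low.mean() + high.mean()) / 2.0
        if abs(t_new - t) < 0.5:  # converged (integer gray levels)
            t = t_new
            break
        t = t_new
    return t

def binarize(gray, t):
    # Bright pixels become foreground here; the polarity is a choice.
    return (gray.astype(np.float64) >= t).astype(np.uint8)

# Bimodal toy image: dark background (~10) and bright target (~200).
img = np.array([[10, 12, 11, 200],
                [9, 10, 198, 202],
                [10, 11, 199, 201]], dtype=np.uint8)
t = iterative_threshold(img)
print(binarize(img, t))
```

On this toy image the threshold settles near 105, cleanly separating the two gray-level clusters.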
Referring to Fig. 2, which shows the conversion of the original image 10 into the binary image 20. The original image 10 is converted into the 256-level grayscale image 30 by the grayscale step, and the grayscale image 30 is converted into the binary image 20 by threshold segmentation.
Once the original image 10 has been converted into the binary image 20, the image information of the visual target can be computed quickly. In the present invention, the coordinates of 8 feature points of the visual target are required. Since image interference may be large, the feature-point search region must be restricted, so the calculation of the bounding box of the visual target further comprises the following steps:
Step S121: detecting the outer frame of the visual target;
Step S122: searching for feature points within the region of the outer frame.
Step S121 specifically comprises:
Step S1211: noise removal, using image erosion to denoise the binary image 20. Erosion shrinks the outlines of connected regions. Referring to Fig. 3, when the binary image 20 is eroded, all black connected regions 40 shrink appropriately, and some small black connected regions 40 are eliminated, showing good denoising performance; this completes the denoising of the binary image 20. Denoising performance depends strongly on the template: the larger the template, the stronger the denoising, but also the greater the damage to the visual target. In the present invention, the preferred denoising template is a 4 × 4 sliding block.
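Step S1211's erosion-based denoising can be sketched as below. For brevity the structuring element is anchored at the top-left corner rather than centered (which shifts the surviving region but still removes small blobs), and the window size is a parameter; the patent prefers a 4 × 4 template, while the demo uses 2 × 2 on a tiny image.

```python
import numpy as np

def erode(binary, size=2):
    """Minimal binary erosion: a pixel survives only if every pixel
    under the size x size window is foreground, so noise blobs smaller
    than the window are eliminated (step S1211)."""
    h, w = binary.shape
    out = np.zeros_like(binary)
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            if binary[i:i + size, j:j + size].all():
                out[i, j] = 1
    return out

# A 3x3 block plus one isolated noise pixel in the corner:
img = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 1],
], dtype=np.uint8)
print(erode(img))  # the block shrinks; the isolated pixel disappears
```

This illustrates the trade-off noted above: a larger window removes more noise but also eats further into the target region.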
Step S1212: extracting the frame of the visual target. In the present invention, a contour extraction technique is used to remove interior points. Referring to Fig. 4, which shows the binary image 20 before and after frame extraction following erosion: after the frame extraction operation, the solid connected regions 40 disappear, and only a few black points 50 remain in the binary image 20.
Step S1213: calculating the coordinates of the quadrilateral frame. In the present invention, line fitting is used: all points in the image are analyzed, candidate straight lines that may fit these points are constructed first, then the four lines nearest the frame are found under the quadrilateral geometric constraints, and the bounding box of the visual target is obtained from the intersections of these four lines. The Hough transform can fit the multiple lines of the line-fitting step through repeated runs. The Hough transform method further comprises:
Step S12131: initializing the array of the (r, θ) transform domain, where the number of quantization steps in the r direction equals the number of pixels along the image diagonal, the number of quantization steps in the θ direction is 90, and the angle ranges over 0–180°;
Step S12132: scanning all black points 50 in the image in order, and for each black point 50 adding 1 to each corresponding point of the transform;
Step S12133: finding and recording the maximum value in the transform domain;
Step S12134: clearing the maximum point and its neighborhood;
Step S12135: obtaining and storing the straight line corresponding to this maximum; if six lines have been accumulated, proceeding to step S12136, otherwise returning to step S12133;
Step S12136: using the quadrilateral geometric constraints, finding among the six candidate lines the four closest to the projection of the visual target, taken as the outer frame of the visual target;
Step S12137: computing the four vertices of the visual target from these four lines. The quadrilateral geometric constraint means that, among lines with similar slopes, at most the two farthest apart are chosen; among lines with very different slopes, the choice is freer. It is thereby ensured that the four lines finally selected form essentially two groups of parallel lines, with a large angle between the two groups and similar separations within each group.
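Steps S12131–S12135 describe a standard Hough transform with iterated peak extraction. The sketch below uses a coarse angular quantization of 12 bins over [0°, 180°) so that a tiny toy example produces clean peaks, whereas the patent quantizes θ into 90 steps; the function name and point set are illustrative.

```python
import numpy as np

def hough_lines(points, shape, n_lines=2, theta_steps=12, clear_radius=2):
    """Hough transform sketch of steps S12131-S12135: accumulate votes
    in (r, theta) space for each foreground point, then repeatedly take
    the accumulator maximum as a detected line and clear its
    neighborhood before searching for the next maximum."""
    diag = int(np.ceil(np.hypot(*shape)))        # r quantized to pixels
    acc = np.zeros((2 * diag + 1, theta_steps), dtype=int)
    thetas_deg = np.arange(theta_steps) * (180.0 / theta_steps)
    thetas = np.deg2rad(thetas_deg)
    for (x, y) in points:
        # r = x cos(theta) + y sin(theta), offset by diag so indices >= 0
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        for t_idx, r_idx in enumerate(r):
            acc[r_idx, t_idx] += 1
    lines = []
    for _ in range(n_lines):
        r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
        lines.append((int(r_idx) - diag, float(thetas_deg[t_idx])))
        # Zero out the maximum and its neighbors (step S12134).
        acc[max(0, r_idx - clear_radius):r_idx + clear_radius + 1,
            max(0, t_idx - clear_radius):t_idx + clear_radius + 1] = 0
    return lines

# Points on the horizontal line y = 3 and the vertical line x = 5:
pts = [(x, 3) for x in range(10)] + [(5, y) for y in range(10)]
print(hough_lines(pts, (10, 10)))  # → [(3, 90.0), (5, 0.0)]
```

The two returned (r, θ) pairs are exactly the normal-form parameters of the two lines, matching the peak-then-clear loop of steps S12133–S12135.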
The purpose of recognizing the visual target is to provide the projection coordinates of the spatial feature points at different angles for visual target tracking; these feature points can also be used in the classification of visual targets, so recognition ends by determining the feature points of the visual target. In the present invention, 8 feature points are detected as required for recognition and tracking, and the set of n candidate elements is taken as the candidate feature point set. The method of choosing 8 feature points from the candidate set S further comprises:
Step S131: scanning the regions of the n candidate feature points of S, computing their areas, and grouping areas with small differences into classes; if an area class occurs fewer than 8 times, its points are deleted from S, after which S contains n' elements;
Step S132: taking the centroid coordinates of all feature-point elements of S, performing a Hough transform, fitting the two most probable straight lines, and computing their intersection point A;
Step S133: finding the point A' of S closest to A and searching from A' along the two line directions; if 4 and 3 feature points respectively can be found, the search succeeds, S is cleared, and these seven feature points together with A' are put into S; otherwise the search fails and S is not cleared.
At this point the whole visual target recognition method is complete: the collected original image 10 has been converted into the binary image 20, and the bounding box and feature points of the visual target have been calculated, in preparation for subsequent target tracking.
In visual target tracking, a rectangular frame, defined as the search window, is placed around the visual target, and only the image within the search window is processed each time. The search window moves continuously with the visual target, ensuring that the target always falls within it. The search-window coordinate system has a fixed offset from the image coordinate system, and the computed structure must be passed to the 3D reconstruction module through a translation, but the time cost of this is negligible for a computer. The tracking of the visual target therefore reduces the complexity of image processing without affecting stereo reconstruction.
Referring to Fig. 5, which shows the flowchart of the visual target tracking method 2. In the ideal case, when the visual marker can be detected every time, the search-window-based tracking method comprises:
Step S21: by default, treating the search window of the 0th frame as the same size as the entire image;
Step S22: for the first frame, with the search window still the size of the entire image, calling the visual target recognition algorithm on the image within the search window for static recognition, adjusting repeatedly to ensure that the first frame is successfully recognized against a simple background, and obtaining a suitable bounding box for the set of feature points of the marker;
Step S23: from the second frame on, examining the four vertices of the search windows of the previous two frames, computing from the spatial motion of these two sets of vertices the possible directions of motion of the four vertices of the next search window, and correcting the size and region of the search window along these four directions as a strategy for enlarging, shrinking or moving it;
When conditions are not ideal and the visual target cannot always be detected, the tracking method must also consider: if the number of feature points detected in the previous frame is less than the actual number, the search window is enlarged by a certain distance in all four directions; otherwise it is shrunk. In practice, the distance of enlargement or shrinkage is the average side length of the bounding box of the feature points.
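Steps S21–S23, together with the enlarge/shrink rule, can be sketched as a constant-velocity ("object inertia") extrapolation of the window corners. The box representation, function name, and expansion parameter below are illustrative assumptions, not the patent's notation.

```python
def predict_window(prev2, prev1, image_size, expand=0):
    """Predict the next search window (x0, y0, x1, y1) by linearly
    extrapolating each coordinate from the previous two frames'
    windows, then optionally expand the window on every side when the
    previous frame found fewer feature points than expected.
    image_size is (width, height)."""
    w, h = image_size
    # Constant-velocity extrapolation: next = prev1 + (prev1 - prev2).
    x0, y0, x1, y1 = (2 * c1 - c2 for c2, c1 in zip(prev2, prev1))
    # Apply the enlarge/shrink margin and clamp to the image bounds.
    return (max(0, x0 - expand), max(0, y0 - expand),
            min(w, x1 + expand), min(h, y1 + expand))

# Window moved 10 px right between frames N-1 and N; predict frame N+1.
print(predict_window((100, 50, 200, 150), (110, 50, 210, 150), (640, 480)))
# → (120, 50, 220, 150)
```

Passing, say, `expand=5` widens the predicted window by 5 px on every side, modeling the recovery behavior when the previous frame detected too few feature points.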
Referring to Fig. 6, which shows the search window prediction method. On the basis of the search results of search window 60 at frame N-1 and search window 70 at frame N, the object-inertia rule is used to predict search window 80 for frame N+1, and this region is suitably enlarged. Search-window-based tracking reduces the complexity of image processing and improves processing speed, making real-time 3D visual tracking and registration possible.
In summary, the present invention uses image processing methods to calculate the bounding box and the feature points within it. At the same time, in target tracking, a method based on a predictive search window is proposed, which performs motion prediction and tracking of the marker, markedly reducing the search range and improving the real-time performance of the tracking method.
Those skilled in the art will appreciate that various modifications and variations can be made to the present invention without departing from its spirit or scope. Accordingly, if such modifications and variations fall within the scope of the appended claims and their equivalents, the present invention is intended to cover them.

Claims (12)

1. A visual target recognition method, characterized in that said visual target recognition method comprises:
converting the collected original image into a binary image;
calculating the bounding box of the visual target on the basis of the binary image;
searching for feature points within the bounding box.
2. The visual target recognition method of claim 1, characterized in that the conversion of the original image into a binary image further comprises:
converting the original image into a 256-level grayscale image;
segmenting the grayscale image into a binary image by thresholding.
3. The visual target recognition method of claim 2, characterized in that the conversion of the original image into a grayscale image comprises:
acquiring a color image of the original image in RGB format through a Matrox capture card, in which each pixel is stored in three bytes, R, G and B;
converting the image from the RGB color system to the YUV color system, in which the Y signal represents the brightness of a pixel;
directly taking the Y signal of each pixel to obtain a 256-level grayscale image.
4. The visual target recognition method of claim 3, characterized in that the conversion formula from the RGB color system to the YUV color system is

\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.148 & -0.289 & 0.437 \\ 0.615 & -0.515 & -0.100 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}
5. The visual target recognition method of claim 2, characterized in that the segmentation of the grayscale image into a binary image by thresholding further comprises:
obtaining the minimum gray value Z_min and the maximum gray value Z_max of the grayscale image, and setting the threshold T_0 = (Z_min + Z_max)/2;
dividing the grayscale image into two parts, target image and background image, according to the threshold T_k, and obtaining the average gray values Z_low and Z_high of the two parts:

Z_{low} = \frac{\sum_{z(i,j) < T_k} z(i,j) \, N(i,j)}{\sum_{z(i,j) < T_k} N(i,j)}, \qquad Z_{high} = \frac{\sum_{z(i,j) > T_k} z(i,j) \, N(i,j)}{\sum_{z(i,j) > T_k} N(i,j)}

where z(i,j) and N(i,j) are respectively the gray value and weight of point (i,j) of the grayscale image; in the calculation, N(i,j) = 1;
obtaining a new threshold T_{k+1} = (Z_low + Z_high)/2;
if T_k = T_{k+1}, the computation ends; if T_k ≠ T_{k+1}, setting k to k+1 and repeating the above steps.
6. The visual target recognition method of claim 1, characterized in that the calculation of the bounding box of the visual target further comprises:
detecting the outer frame of the visual target;
searching for feature points within the region of the outer frame.
7. The visual target recognition method of claim 6, characterized in that the detection of the outer frame of the visual target further comprises:
denoising the binary image by image erosion;
extracting the frame of the visual target;
calculating the coordinates of the quadrilateral frame.
8. The visual target recognition method of claim 1, characterized in that the number of said feature points is 8.
9. The visual target recognition method of claim 1, wherein computing the feature points further comprises:
scanning the regions of the n candidate feature points in the candidate feature point set, counting their areas, and grouping points whose areas differ little into one class; if the area value of a class occurs fewer than 8 times, deleting its points from the candidate feature point set; the candidate feature point set S contains n' elements after deletion;
taking the centroid coordinates of all n' elements of the candidate feature point set S, performing a Hough transform, fitting the two most probable straight lines, and computing their intersection point A;
taking the point A' of the candidate feature point set S closest to A and searching from A' along the directions of the two straight lines; if 4 and 3 feature points can be found respectively, the search succeeds: S is cleared and these seven feature points together with the point A' are placed into the set S; otherwise the search fails and S is not cleared.
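Once the Hough transform has fitted the two most probable lines (claim 9), the intersection point A follows from the normal form ρ = x·cosθ + y·sinθ used by the standard Hough parameterization; a small sketch (the function name is my own):

```python
import numpy as np

def hough_intersection(rho1, theta1, rho2, theta2):
    """Intersection point A of two lines given in Hough normal form
    rho = x*cos(theta) + y*sin(theta), by solving the 2x2 linear system."""
    a = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2])
    return np.linalg.solve(a, b)  # [x, y]

# Example: vertical line x = 3 (theta = 0) and horizontal line y = 4 (theta = pi/2)
point = hough_intersection(3.0, 0.0, 4.0, np.pi / 2)
```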
10. A method for tracking a visual target recognized by the visual target recognition method of claim 1, wherein the visual target tracking method comprises:
in the initial state, presetting the search window of frame 0 to the same size as the entire image;
for the first frame, with the search window the same size as the entire image, invoking the visual target recognition algorithm on the image within the search window for static recognition, and adjusting repeatedly to ensure that the first frame is successfully recognized under a simple background, yielding an identified feature point set and a suitable bounding box;
from the second frame onward, reviewing the four vertices of the search windows of the previous two frames each time, computing the four possible motion directions of the vertices of the next search window from the spatial motion relation of these two vertex groups, and correcting the size and region of the search window along these four directions as a strategy for expanding, shrinking, or moving it.
11. The visual target tracking method of claim 10, further comprising: if the number of feature points detected in the previous frame is less than the actual number, expanding the search window by a certain distance in each of the four directions; otherwise shrinking it.
12. The visual target tracking method of claim 11, wherein the distance of expansion and shrinkage is the average side length of the bounding box circumscribing the feature points.
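The window prediction and expand/shrink rules of claims 10–12 reduce to simple coordinate arithmetic; a sketch assuming linear extrapolation of the vertices as the "spatial motion relation" (that reading, and all names, are my own):

```python
def predict_window(prev2, prev1):
    """Extrapolate the next search window from the windows of the previous
    two frames by linear motion of each coordinate (claim 10, sketch).
    Windows are (left, top, right, bottom) tuples."""
    return tuple(2 * b - a for a, b in zip(prev2, prev1))

def adjust_window(window, found, expected, avg_side):
    """Expand the window in all four directions when fewer feature points
    than expected were found, otherwise shrink it (claims 11-12); the step
    is the average side length of the feature points' bounding box."""
    d = avg_side if found < expected else -avg_side
    left, top, right, bottom = window
    return (left - d, top - d, right + d, bottom + d)
```

In use, the tracker would call `predict_window` each frame and then `adjust_window` based on how many of the 8 feature points the recognizer actually detected inside the predicted window.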
CN 201010537843 2010-11-09 2010-11-09 Visual target identification and tracking method Pending CN101986348A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010537843 CN101986348A (en) 2010-11-09 2010-11-09 Visual target identification and tracking method


Publications (1)

Publication Number Publication Date
CN101986348A true CN101986348A (en) 2011-03-16

Family

ID=43710694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010537843 Pending CN101986348A (en) 2010-11-09 2010-11-09 Visual target identification and tracking method

Country Status (1)

Country Link
CN (1) CN101986348A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231755A (en) * 2007-01-25 2008-07-30 上海遥薇实业有限公司 Moving target tracking and quantity statistics method
CN101464948A (en) * 2009-01-14 2009-06-24 北京航空航天大学 Object identification method for affine constant moment based on key point
CN101770568A (en) * 2008-12-31 2010-07-07 南京理工大学 Target automatically recognizing and tracking method based on affine invariant point and optical flow calculation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yangbin Chen et al., "Multi-Stereo Vision Tracking for AR System", Proceedings of the 2006 IEEE International Conference on Information Acquisition, 2006-08-23, full text, relevant to claims 1-12 *
Yumei Xiong et al., "Study of Visual Object Tracking Technique", Proceedings of the Second Symposium International Computer Science and Computational Technology, 2009-12-28, pp. 406-408, relevant to claims 1-12 *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9256956B2 (en) 2010-05-28 2016-02-09 Qualcomm Incorporated Dataset creation for tracking targets with dynamically changing portions
CN103003843A (en) * 2010-05-28 2013-03-27 高通股份有限公司 Dataset creation for tracking targets with dynamically changing portions
US9785836B2 (en) 2010-05-28 2017-10-10 Qualcomm Incorporated Dataset creation for tracking targets with dynamically changing portions
CN103003843B (en) * 2010-05-28 2016-08-03 高通股份有限公司 Create for following the tracks of the data set of the target with dynamic changing unit
CN102982307A (en) * 2011-06-13 2013-03-20 索尼公司 Recognizing apparatus and method, program, and recording medium
CN102663359A (en) * 2012-03-30 2012-09-12 博康智能网络科技股份有限公司 Method and system for pedestrian retrieval based on internet of things
CN102663359B (en) * 2012-03-30 2014-04-09 博康智能网络科技股份有限公司 Method and system for pedestrian retrieval based on internet of things
CN104280036A (en) * 2013-07-05 2015-01-14 北京四维图新科技股份有限公司 Traffic information detection and positioning method, device and electronic equipment
WO2015004501A1 (en) 2013-07-09 2015-01-15 Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi Method for updating target tracking window size
CN103679635B (en) * 2013-12-16 2017-07-18 中国人民解放军63791部队 Rapid image interpolation process method based on ripple door detection
CN103679635A (en) * 2013-12-16 2014-03-26 中国人民解放军63791部队 Quick image interpolation processing method based on port door detection
CN105095905A (en) * 2014-04-18 2015-11-25 株式会社理光 Target recognition method and target recognition device
CN105095905B (en) * 2014-04-18 2018-06-22 株式会社理光 Target identification method and Target Identification Unit
CN104182993B (en) * 2014-09-10 2017-02-15 四川九洲电器集团有限责任公司 Target tracking method
CN104182993A (en) * 2014-09-10 2014-12-03 四川九洲电器集团有限责任公司 Target tracking method
CN108230357A (en) * 2017-10-25 2018-06-29 北京市商汤科技开发有限公司 Critical point detection method, apparatus, storage medium, computer program and electronic equipment
CN108230357B (en) * 2017-10-25 2021-06-18 北京市商汤科技开发有限公司 Key point detection method and device, storage medium and electronic equipment
CN108062510A (en) * 2017-11-17 2018-05-22 维库(厦门)信息技术有限公司 Dynamic display method and computer equipment during a kind of multiple target tracking fructufy
CN108230366A (en) * 2017-12-28 2018-06-29 厦门市美亚柏科信息股份有限公司 A kind of method for tracing of object
CN108364301A (en) * 2018-02-12 2018-08-03 中国科学院自动化研究所 Based on across when Duplication Vision Tracking stability assessment method and device
CN108596955A (en) * 2018-04-25 2018-09-28 Oppo广东移动通信有限公司 A kind of image detecting method, image detection device and mobile terminal
CN108596955B (en) * 2018-04-25 2020-08-28 Oppo广东移动通信有限公司 Image detection method, image detection device and mobile terminal
CN108846481A (en) * 2018-06-25 2018-11-20 山东大学 A kind of context information uncertainty elimination system and its working method based on QoX adaptive management
CN108846481B (en) * 2018-06-25 2021-08-27 山东大学 Situation information uncertainty elimination system based on QoX self-adaptive management and working method thereof
WO2020019353A1 (en) * 2018-07-27 2020-01-30 深圳市大疆创新科技有限公司 Tracking control method, apparatus, and computer-readable storage medium
CN109377512A (en) * 2018-09-07 2019-02-22 深圳市易成自动驾驶技术有限公司 The method, apparatus and storage medium of target following
CN110059578A (en) * 2019-03-27 2019-07-26 东软睿驰汽车技术(沈阳)有限公司 A kind of method and device of vehicle tracking
CN110355765A (en) * 2019-05-27 2019-10-22 西安交通大学 A kind of identification of view-based access control model follows barrier-avoiding method and robot automatically
CN111008305A (en) * 2019-11-29 2020-04-14 百度在线网络技术(北京)有限公司 Visual search method and device and electronic equipment
US11704813B2 (en) 2019-11-29 2023-07-18 Baidu Online Network Technology (Beijing) Co., Ltd. Visual search method, visual search device and electrical device
CN111157757A (en) * 2019-12-27 2020-05-15 苏州博田自动化技术有限公司 Vision-based crawler speed detection device and method
CN111680685A (en) * 2020-04-14 2020-09-18 上海高仙自动化科技发展有限公司 Image-based positioning method and device, electronic equipment and storage medium
CN111680685B (en) * 2020-04-14 2023-06-06 上海高仙自动化科技发展有限公司 Positioning method and device based on image, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN101986348A (en) Visual target identification and tracking method
Ke et al. Multi-dimensional traffic congestion detection based on fusion of visual features and convolutional neural network
CN102402680B (en) Hand and indication point positioning method and gesture confirming method in man-machine interactive system
CN109344717B (en) Multi-threshold dynamic statistical deep sea target online detection and identification method
KR100612858B1 (en) Method and apparatus for tracking human using robot
CN108304798A (en) The event video detecting method of order in the street based on deep learning and Movement consistency
CN112633231B (en) Fire disaster identification method and device
CN110298297A (en) Flame identification method and device
CN111582092B (en) Pedestrian abnormal behavior detection method based on human skeleton
CN105404894A (en) Target tracking method used for unmanned aerial vehicle and device thereof
CN114049356B (en) Method, device and system for detecting structure apparent crack
Yadav Vision-based detection, tracking, and classification of vehicles
CN112036381B (en) Visual tracking method, video monitoring method and terminal equipment
CN109376736A (en) A kind of small video target detection method based on depth convolutional neural networks
CN112560580A (en) Obstacle recognition method, device, system, storage medium and electronic equipment
Ding et al. Efficient vanishing point detection method in complex urban road environments
CN104915642A (en) Method and apparatus for measurement of distance to vehicle ahead
Zhang Detection and tracking of human motion targets in video images based on camshift algorithms
Wang et al. Pointer meter recognition in UAV inspection of overhead transmission lines
CN115620090A (en) Model training method, low-illumination target re-recognition method and device and terminal equipment
Qu et al. Scale self-adaption tracking method of Defog-PSA-Kcf defogging and dimensionality reduction of foreign matter intrusion along railway lines
CN113240829B (en) Intelligent gate passing detection method based on machine vision
CN108765463A (en) A kind of moving target detecting method calmodulin binding domain CaM extraction and improve textural characteristics
CN112613668A (en) Scenic spot dangerous area management and control method based on artificial intelligence
Li et al. Target segmentation of industrial smoke image based on LBP Silhouettes coefficient variant (LBPSCV) algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110316