CN103345762B - Bayes's visual tracking method based on manifold learning - Google Patents
- Publication number
- CN103345762B CN103345762B CN201310244062.2A CN201310244062A CN103345762B CN 103345762 B CN103345762 B CN 103345762B CN 201310244062 A CN201310244062 A CN 201310244062A CN 103345762 B CN103345762 B CN 103345762B
- Authority
- CN
- China
- Prior art keywords
- manifold
- particle
- bayes
- frame
- space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a Bayesian visual tracking method based on manifold learning, comprising the following steps: S1, a new manifold learning algorithm is proposed to obtain the intrinsic manifold, putting the image observation data set X=[x1,x2,…,xn] in correspondence with the point set Y=[y1,y2,…,yn] on the low-dimensional manifold, where each point on the low-dimensional manifold surface can be represented by yi=[x,y,z]T=f(μ,ν), with i=1,2,…,n; S2, inverse-mapping learning is carried out to obtain the mapping function g from the low-dimensional manifold space to the high-dimensional image observation space together with its coefficient matrix B; S3, the results of steps S1 and S2 are combined in a Bayesian tracking process, which finally outputs the tracking result. The invention is mainly intended to solve the problem of tracking the human body in dynamic environments. It is a novel Bayesian tracking algorithm built on an intrinsic-variable-preserving manifold, which achieves accurate target tracking and has strong robustness.
Description
Technical field
The present invention relates to a pedestrian tracking algorithm, and in particular to a Bayesian visual tracking method based on manifold learning.
Background technology
In the field of computer vision, tracking the human body in images or video and recognizing its pose is a challenging problem. The tracked target is usually an image region represented by high-dimensional data, typically the gray values of the pixels in that region.
Traditional tracking methods attempt to extract distinctive features that separate the target region from the non-target region. A typical algorithm of this kind requires an object model represented by physical features such as color, shape or texture, and the tracking problem is then solved by finding, among the visual observations, the candidate with the smallest error with respect to the object model and taking it as the new target. However, the performance of such algorithms is often degraded by drastic changes in ambient illumination or by target motion, because the features used are not stable enough under these extreme conditions.
Another class of algorithms has been proposed in recent years, which learns from the high-dimensional data and embeds it in a low-dimensional manifold. Although the tracked target, such as a human body or head, is represented by high-dimensional images, its behavior and pose often lie on an intrinsically low-dimensional manifold. Based on this idea, much research has attempted to solve target tracking and pose recognition in a low-dimensional subspace rather than in the original high-dimensional space.
Document " Tracking People on a Torus " (A.Elgammal and C.S.Lee, IEEE
Trans.Pattern Anal.Mach.Intell., vol 31, no.3, pp.520-538,2009.) think the body posture of people
Manifold is one and has two essential dimensions: body posture and horizontal view angle, but is in the torus in three-dimensional theorem in Euclid space, and
And achieve human body tracking based on this manifold.Document " Learning an intrinsic variable preserving
manifold for dynamic visual tracking”(H.Qiao,P.Zhang,B.Zhang,and S.W.Zheng,
IEEE.Trans.Syst.Man.Cybern.Part B, vol.40, no.3, pp.868-880,2010) in, it is proposed that Yi Zhongben
The manifold learning arithmetic (IVPML) that qualitative change amount keeps, it is possible to be effectively maintained the essence of training sample while dimensionality reduction learns
Variable, and be successfully applied in dynamic vision tracking, but this Vision Tracking is real by the way of neighborhood search
Existing, this method is not likely to be the most stable in actual applications.
Summary of the invention
The present invention addresses the problems of the above prior art: building on the two papers above, it combines a Bayesian tracking framework to construct a more effective tracking algorithm and applies it to real tracking tasks. In the tracking algorithm of the present invention, not only is the mapping function from the observation space to the manifold space trained by a new manifold learning method, but an inverse mapping that recovers image observation data from the manifold space is also learned. The position and pose of the target are predicted in the manifold space and verified in the observation space.
In order to solve the above technical problem, the invention provides the following technical scheme:
A Bayesian visual tracking method based on manifold learning comprises the following steps:
S1, a new manifold learning algorithm is proposed to obtain the intrinsic manifold, putting the image observation data set X=[x1,x2,…,xn] in correspondence with the point set Y=[y1,y2,…,yn] on the low-dimensional manifold, where each point on the low-dimensional manifold surface can be represented by yi=[x,y,z]T=f(μ,ν), with i=1,2,…,n;
S2, inverse-mapping learning is carried out to obtain the mapping function g from the low-dimensional manifold space to the high-dimensional image observation space and its coefficient matrix B;
S3, the results of steps S1 and S2 are combined in a Bayesian tracking process, which finally gives the tracking result.
In step S1, the new manifold learning algorithm can not only embed the human body training data into a space of intrinsic dimension, but also preserve both the neighborhood relations and the global topology of the training data set. In steps S1 and S2, in order to link the points embedded in the manifold accurately with their corresponding high-dimensional observation data, a flexible bidirectional mapping is learned based on dimensionality-reduction learning and kernel regression. In step S3, based on Bayesian theory, the particles on the manifold and the observation data in the image are mutually verified, so that the target can be tracked accurately. At the same time, during the continuous updating of the particles on the manifold, the state of the target can also be estimated.
Further, step S1 comprises the following steps:
S11, build the adjacency graph and its geometric structure, denoted G, where x1,x2,…,xn denote the training point set;
S12, select weights: the weight matrix of graph G is represented by a matrix W, and different weight values are chosen for the entries of the weight matrix according to different situations;
S13, feature mapping: let X=[x1,x2,…,xn] denote the training data matrix; the low-dimensional representation is obtained through YT=ETX, where E is a mapping matrix.
Further, in step S2, the coefficient matrix B is the coefficient matrix of the inverse mapping function. Let Z=[z1,z2,…,zn] denote the recovered observation space and Y=[y1,y2,…,yn] its corresponding low-dimensional point set in the intrinsic manifold space, where zi∈Rh and yi∈Rl with l≪h. The nonlinear inverse mapping function g: Rl→Rh has the following form:
zi=g(yi):=Bk(yi) (1)
where B=[b1,b2,…,bn] is an h×n coefficient matrix, and
k(yi)=[k1(yi,y1),k2(yi,y2),…,kn(yi,yn)]T (2)
is a feature vector of yi, where ki(·,·) is a kernel function.
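For concreteness, equations (1) and (2) can be written out as a short sketch. This is a minimal illustration only: the Gaussian kernel, its bandwidth sigma, and the array names are assumptions for illustration, not details fixed by the patent.

```python
import numpy as np

def kernel_vector(y, Y_train, sigma=1.0):
    """k(y) = [k1(y, y1), ..., kn(y, yn)]^T, eq. (2), with an assumed Gaussian kernel."""
    d2 = np.sum((Y_train - y) ** 2, axis=1)       # squared distance to each training point yi (Y_train is n x l)
    return np.exp(-d2 / (2.0 * sigma ** 2))       # shape (n,)

def inverse_map(y, B, Y_train, sigma=1.0):
    """z = g(y) := B k(y), eq. (1): map a low-dimensional manifold point back to image space."""
    return B @ kernel_vector(y, Y_train, sigma)   # B is h x n, result is an h-dimensional image vector
```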
Further, step S3 comprises the following steps:
S31, initialization: select the initial target x1 in the video; by comparing x1 with each training sample in the training set X, select the corresponding point y1=f(μ1,ν1) on the intrinsic manifold, and initialize the particle set by sampling around the point y1;
S32, obtain candidates: at frame t, sample in the image around the target location xt-1 of the previous frame to obtain the candidate data set; the sampling is carried out with x- and y-coordinate step sizes determined at different scales of the image;
S33, update particles: at frame t, resample the particle set of frame t-1 according to the rule that particles with larger weights are selected with higher probability; add white Gaussian noise to the resulting new particle set to obtain the particle set of frame t;
S34, determine the new target: compute, via the bidirectional mapping, the similarities between the candidates and the particles in the observation space and in the manifold space respectively; the candidates are mapped into the manifold space through the dimensionality-reduction function y=ETx, and the particles recover observed images through formula (1); the optimal candidate xt is then found and taken as the target of frame t;
S35, update particle weights: compute the similarity between each particle and the new target xt as the weight of the particle set at frame t, and normalize these weights;
S36, return to step S32 to process a new frame and thus continue the tracking process.
Further, the bidirectional mapping described in step S34 consists of the feature mapping described in S13 and the inverse mapping described in S2: S13 reduces the high-dimensional observed image into the manifold space, where similarity with the particles is computed, and S2 reconstructs the low-dimensional particles into images, where similarity with the observed image is computed.
Compared with traditional tracking methods, the Bayesian visual tracking method based on manifold learning proposed by the present invention is not affected by drastic changes in ambient illumination or by target motion, and is sufficiently stable. Compared with the newer visual tracking algorithms based on neighborhood search, the algorithm of the invention not only trains the mapping function from the observation space to the manifold space by a new manifold learning method, but also learns the inverse mapping that recovers image observation data from the manifold space; the position and pose of the target are predicted in the manifold space and verified in the observation space. In addition, in practical applications the human body tracking results are quite stable.
Description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the description; together with the embodiments of the invention they serve to explain the invention and do not limit it. In the drawings:
Fig. 1 is the flow chart of the method of a preferred embodiment of the present invention;
Fig. 2 is a schematic diagram of the human body tracking results of the algorithm proposed in the embodiment of the present invention applied in a dynamic environment.
Detailed description of the invention
As shown in Fig. 1, the present invention discloses a Bayesian visual tracking method based on manifold learning, comprising the following steps:
First step: a new manifold learning algorithm is proposed to obtain the intrinsic manifold, putting the image observation data set X=[x1,x2,…,xn] in correspondence with the point set Y=[y1,y2,…,yn] on the low-dimensional manifold, where each point on the low-dimensional manifold surface can be represented by yi=[x,y,z]T=f(μ,ν), with i=1,2,…,n;
Second step: inverse-mapping learning is carried out to obtain the mapping function g from the low-dimensional manifold space to the high-dimensional image observation space and its coefficient matrix B;
Third step: the results of the first and second steps are combined in a Bayesian tracking process, which finally gives the tracking result.
The first step proposes a new manifold learning algorithm; the concrete method is as follows:
1) Build the adjacency graph and its geometric structure, denoted G, where x1,x2,…,xn denote the training point set;
2) Select weights: the weight matrix of graph G is represented by a matrix W, and the entries of the weight matrix are defined as follows:
i) if xi and xj are adjacent on coordinate μ or ν, set the weight Wij=cμ or cν, where cμ and cν are manually set constants;
ii) if two points are not neighbors but are connected by a path in the graph, compute the shortest path between the two points as the weight;
iii) if two points are not connected in the graph, assign a very large value as the weight between them.
3) Feature mapping: let X=[x1,x2,…,xn] denote the training data matrix; the low-dimensional representation is then obtained through YT=ETX, where E is a mapping matrix. The manifold learning algorithm obtains this mapping matrix by solving a certain optimization problem.
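To make steps 1)-3) concrete, the sketch below builds the weight matrix W from the μ/ν adjacency rules and shortest-path distances, then obtains the mapping matrix E from a Laplacian-style generalized eigenproblem. The constants c_mu, c_nu and LARGE, the heat-kernel conversion of path costs into similarities, and the particular eigenproblem are assumptions for illustration; the patent only states that E is obtained by solving an optimization problem.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.linalg import eigh

def build_weight_matrix(adj_mu, adj_nu, c_mu=1.0, c_nu=1.0, LARGE=1e6):
    """Weight rules i)-iii); adj_mu / adj_nu are boolean n x n masks marking neighbors along mu / nu."""
    n = adj_mu.shape[0]
    A = np.zeros((n, n))
    A[adj_mu] = c_mu                                   # rule i): adjacent along coordinate mu
    A[adj_nu] = c_nu                                   # rule i): adjacent along coordinate nu
    W = shortest_path(A, method="D", directed=False)   # rule ii): shortest-path cost between non-neighbors
    W[np.isinf(W)] = LARGE                             # rule iii): disconnected pairs get a very large weight
    np.fill_diagonal(W, 0.0)
    return W

def feature_mapping(X, W, l=3):
    """Feature mapping Y^T = E^T X: E from a Laplacian-style eigenproblem (illustrative only)."""
    S = np.exp(-W)                                     # convert costs to similarities (assumed heat kernel)
    D = np.diag(S.sum(axis=1))
    L = D - S                                          # graph Laplacian
    vals, vecs = eigh(X @ L @ X.T, X @ D @ X.T)        # X is h x n, columns are training images
    E = vecs[:, :l]                                    # eigenvectors of the l smallest eigenvalues
    return E, E.T @ X                                  # mapping matrix E (h x l) and embedding Y (l x n)
```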
The second step carries out inverse-mapping learning to obtain the mapping function g from the low-dimensional manifold space to the high-dimensional image observation space and its coefficient matrix B; the concrete method is as follows:
Let Z=[z1,z2,…,zn] denote the recovered observation space and Y=[y1,y2,…,yn] its corresponding low-dimensional point set in the intrinsic manifold space, where zi∈Rh and yi∈Rl with l≪h. Assume that the nonlinear inverse mapping function g: Rl→Rh has the following form:
zi=g(yi):=Bk(yi) (1)
where B=[b1,b2,…,bn] is an h×n coefficient matrix, and
k(yi)=[k1(yi,y1),k2(yi,y2),…,kn(yi,yn)]T (2)
is a feature vector of yi. ki(·,·) is a kernel function, and a Gaussian kernel is usually chosen. The coefficient matrix B is obtained by solving a minimization problem: in the training stage, B is computed from the image observation data and the corresponding training data points on the manifold.
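The minimization that yields B is not spelled out above; one standard choice, assumed here, is regularized least squares (kernel ridge regression), which gives B in closed form from the training pairs.

```python
import numpy as np

def learn_inverse_mapping(Z, Y, sigma=1.0, lam=1e-3):
    """Fit B so that z_i is approximately B k(y_i) over all training pairs (regularized least squares, assumed).

    Z: h x n matrix of training images (columns); Y: l x n matrix of their manifold points (columns).
    """
    n = Y.shape[1]
    d2 = np.sum((Y[:, :, None] - Y[:, None, :]) ** 2, axis=0)   # pairwise squared distances on the manifold
    K = np.exp(-d2 / (2.0 * sigma ** 2))                        # Gaussian Gram matrix, K[i, j] = k(y_i, y_j)
    # closed form of  min_B ||Z - B K||_F^2 + lam ||B||_F^2
    return Z @ K.T @ np.linalg.inv(K @ K.T + lam * np.eye(n))   # h x n coefficient matrix B
```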
Combining the results of the first and second steps, the concrete method of the Bayesian tracking algorithm of the third step is as follows:
1) Initialization: select the initial target x1 in the video. By comparing x1 with each training sample in the training set X, select the corresponding point y1=f(μ1,ν1) on the intrinsic manifold, and initialize the particle set by sampling around the point y1.
2) Obtain candidates: at frame t, sample in the image around the target location xt-1 of the previous frame to obtain the candidate data set. The sampling is carried out with x- and y-coordinate step sizes determined at different scales of the image.
3) Update particles: at frame t, resample the particle set of frame t-1 according to the rule that particles with larger weights are selected with higher probability. This importance resampling can be performed by drawing from a uniform distribution on [0,1] and comparing against the cumulative weights of the particles. Note that some points of the previous particle set may be reselected several times while other points are discarded. White Gaussian noise is added to the resulting new particle set, which yields the particle set of frame t.
4) Determine the new target: compute, via the bidirectional mapping, the similarities between the candidates and the particles in the observation space and in the manifold space respectively. The candidates are mapped into the manifold space through the dimensionality-reduction function y=ETx, and the particles recover observed images through formula (1). In this way the optimal candidate xt is found and taken as the target of frame t.
5) Update particle weights: compute the similarity between each particle and the new target xt as the weight of the particle set at frame t, and normalize these weights.
6) Return to step 2) to process a new frame and thus continue the tracking process.
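Putting steps 1)-6) together, one frame of the tracking loop might look like the sketch below. It reuses kernel_vector / inverse_map and the mapping matrix E from the earlier sketches; the similarity measure, the noise scale, and the way the two similarity scores are combined are assumptions for illustration, not prescriptions of the patent.

```python
import numpy as np

def similarity(a, b):
    """Assumed similarity for both spaces: exponential of negative squared distance."""
    return np.exp(-np.sum((a - b) ** 2))

def track_one_frame(candidates, particles, weights, E, B, Y_train, noise=0.05, rng=None):
    """One pass of steps 2)-5): resample particles, perturb, verify candidates, pick target, reweight.

    candidates: list of h-dimensional image patches sampled around the previous target x_{t-1};
    particles:  N x l array of manifold-space particles; weights: their normalized weights.
    inverse_map, E, B, Y_train are taken from the earlier sketches.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(particles)
    # step 3): importance resampling by weight, then white Gaussian noise
    particles = particles[rng.choice(N, size=N, p=weights)]
    particles = particles + noise * rng.standard_normal(particles.shape)
    # step 4): verify candidates against particles in both spaces via the bidirectional mapping
    scores = []
    for x in candidates:
        y = E.T @ x                                                       # map candidate down: y = E^T x
        s_manifold = max(similarity(y, p) for p in particles)             # compare in manifold space
        s_image = max(similarity(x, inverse_map(p, B, Y_train)) for p in particles)  # compare in image space
        scores.append(s_manifold * s_image)                               # combined score (assumed product)
    x_t = candidates[int(np.argmax(scores))]                              # new target of frame t
    # step 5): particle weights = similarity to the new target, then normalize
    y_t = E.T @ x_t
    weights = np.array([similarity(y_t, p) for p in particles])
    weights = weights / weights.sum()
    return x_t, particles, weights
```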
The training samples of this method are cut from video collected of the same person, captured and sampled at 36 horizontal viewing angles and 42 walking poses. The test video was collected by an intelligent vehicle in an outdoor environment; the video resolution is 640×480, the frame rate is 30 frames per second, and the video data stream is processed on a portable computer system.
As shown in Fig. 2, according to the above specific embodiment, pedestrian tracking at a street corner is processed. In the field of view of the camera the target moves quickly from the left side to the right side, and the algorithm tracks the target accurately, which shows that the method provided by the present invention can track pedestrians effectively. Although in some frames the target window is somewhat larger than the human body, in the following frames it zooms back to fit the human region, which is sufficient to demonstrate the robustness of the tracking algorithm of the present invention in dynamic environments.
In summary, compared with traditional tracking methods, the Bayesian visual tracking method based on manifold learning proposed by the present invention is not affected by drastic changes in ambient illumination or by target motion, and is sufficiently stable. Compared with the newer visual tracking algorithms based on neighborhood search, the algorithm of the invention not only trains the mapping function from the observation space to the manifold space by a new manifold learning method, but also learns the inverse mapping that recovers image observation data from the manifold space; the position and pose of the target are predicted in the manifold space and verified in the observation space. In addition, in practical applications the human body tracking results are quite stable.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or replace some of their technical features with equivalents. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (4)
1. A Bayesian visual tracking method based on manifold learning, characterised in that it comprises the following steps:
S1, a new manifold learning algorithm is proposed to obtain the intrinsic manifold, putting the image observation data set X=[x1,x2,...,xn] in correspondence with the point set Y=[y1,y2,...,yn] on the low-dimensional manifold, where each point on the low-dimensional manifold surface can be represented by yi=[x,y,z]T=f(μ,ν), with i=1,2,...,n, and μ and ν are coordinates;
S2, inverse-mapping learning is carried out to obtain the mapping function g from the low-dimensional manifold space to the high-dimensional image observation space and its coefficient matrix B;
S3, the results of steps S1 and S2 are combined in a Bayesian tracking process, which finally gives the tracking result;
wherein, in step S2, the coefficient matrix B is the coefficient matrix of the inverse mapping function; let Z=[z1,z2,...,zn] denote the recovered observation space and Y=[y1,y2,...,yn] its corresponding low-dimensional point set in the intrinsic manifold space, where zi∈Rh and yi∈Rl with l≪h; the nonlinear inverse mapping function g: Rl→Rh has the following form:
zi=g(yi):=Bk(yi) (1)
where B=[b1,b2,...,bn] is an h×n coefficient matrix, and
k(yi)=[k1(yi,y1),k2(yi,y2),...,kn(yi,yn)]T (2)
is a feature vector of yi, where ki(·,·) is a kernel function.
2. The Bayesian visual tracking method based on manifold learning as claimed in claim 1, characterised in that step S1 further comprises the following steps:
S11, build the adjacency graph and its geometric structure, denoted G, where x1,x2,…,xn denote the training point set;
S12, select weights: the weight matrix of graph G is represented by a matrix W, and different weight values are chosen for the entries of the weight matrix according to different situations;
S13, feature mapping: let X=[x1,x2,...,xn] denote the training data matrix; the low-dimensional representation is obtained through YT=ETX, where E is a mapping matrix.
3. The Bayesian visual tracking method based on manifold learning as claimed in claim 1, characterised in that step S3 further comprises the following steps:
S31, initialization: select the initial target x1 in the video; by comparing x1 with each training sample in the training set X, select the corresponding point y1=f(μ1,ν1) on the intrinsic manifold, and initialize the particle set by sampling around the point y1;
S32, obtain candidates: at frame t, sample in the image around the target location xt-1 of the previous frame to obtain the candidate data set; the sampling is carried out with x- and y-coordinate step sizes determined at different scales of the image;
S33, update particles: at frame t, resample the particle set of frame t-1 according to the rule that particles with larger weights are selected with higher probability; add white Gaussian noise to the resulting new particle set to obtain the particle set of frame t;
S34, determine the new target: compute, via the bidirectional mapping, the similarities between the candidates and the particles in the observation space and in the manifold space respectively; the candidates are mapped into the manifold space through the dimensionality-reduction function y=ETx, and the particles recover observed images through formula (1); the optimal candidate xt is then found and taken as the target of frame t;
S35, update particle weights: compute the similarity between each particle and the new target xt as the weight of the particle set at frame t, and normalize these weights;
S36, return to step S32 to process a new frame and thus continue the tracking process.
4. The Bayesian visual tracking method based on manifold learning as claimed in claim 3, characterised in that the bidirectional mapping described in step S34 consists of the feature mapping described in S13 and the inverse mapping described in S2: S13 reduces the high-dimensional observed image into the manifold space, where similarity with the particles is computed, and S2 reconstructs the low-dimensional particles into images, where similarity with the observed image is computed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310244062.2A CN103345762B (en) | 2013-06-19 | 2013-06-19 | Bayes's visual tracking method based on manifold learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310244062.2A CN103345762B (en) | 2013-06-19 | 2013-06-19 | Bayes's visual tracking method based on manifold learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103345762A CN103345762A (en) | 2013-10-09 |
CN103345762B true CN103345762B (en) | 2016-08-17 |
Family
ID=49280555
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310244062.2A Expired - Fee Related CN103345762B (en) | 2013-06-19 | 2013-06-19 | Bayes's visual tracking method based on manifold learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103345762B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110866936B (en) * | 2018-08-07 | 2023-05-23 | 创新先进技术有限公司 | Video labeling method, tracking device, computer equipment and storage medium |
CN110675424A (en) * | 2019-09-29 | 2020-01-10 | 中科智感科技(湖南)有限公司 | Method, system and related device for tracking target object in image |
CN112085765B (en) * | 2020-09-15 | 2024-05-31 | 浙江理工大学 | Video target tracking method combining particle filtering and metric learning |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1828630A (en) * | 2006-04-06 | 2006-09-06 | 上海交通大学 | Manifold learning based human face posture identification method |
-
2013
- 2013-06-19 CN CN201310244062.2A patent/CN103345762B/en not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1828630A (en) * | 2006-04-06 | 2006-09-06 | 上海交通大学 | Manifold learning based human face posture identification method |
Non-Patent Citations (3)
Title |
---|
Learning an Intrinsic-Variable Preserving Manifold for Dynamic Visual Tracking; Hong Qiao, Peng Zhang, Bo Zhang, Suiwu Zheng; IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics; 20100630; vol. 40, no. 3; 868-872 *
Research on the application of manifold learning in traffic sign recognition; Li Fucai; China Master's Theses Full-text Database, Information Science; 20120315 (no. 3); 16, 19, 30 *
Research on issues related to spectral methods of manifold learning; Zeng Xianhua; Wanfang Dissertation Database; 20100319; 3 *
Also Published As
Publication number | Publication date |
---|---|
CN103345762A (en) | 2013-10-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108154118B (en) | A kind of target detection system and method based on adaptive combined filter and multistage detection | |
CN111311666B (en) | Monocular vision odometer method integrating edge features and deep learning | |
CN105930868B (en) | A kind of low resolution airport target detection method based on stratification enhancing study | |
CN107103613B (en) | A kind of three-dimension gesture Attitude estimation method | |
CN109949341B (en) | Pedestrian target tracking method based on human skeleton structural features | |
CN110599537A (en) | Mask R-CNN-based unmanned aerial vehicle image building area calculation method and system | |
Sukanya et al. | A survey on object recognition methods | |
CN108805906A (en) | A kind of moving obstacle detection and localization method based on depth map | |
CN109544592B (en) | Moving object detection algorithm for camera movement | |
CN104794737B (en) | A kind of depth information Auxiliary Particle Filter tracking | |
CN112395977B (en) | Mammalian gesture recognition method based on body contour and leg joint skeleton | |
CN104392228A (en) | Unmanned aerial vehicle image target class detection method based on conditional random field model | |
CN104200494A (en) | Real-time visual target tracking method based on light streams | |
CN107609571B (en) | Adaptive target tracking method based on LARK features | |
CN108734200B (en) | Human target visual detection method and device based on BING (building information network) features | |
CN110245587B (en) | Optical remote sensing image target detection method based on Bayesian transfer learning | |
CN112949440A (en) | Method for extracting gait features of pedestrian, gait recognition method and system | |
CN106778767B (en) | Visual image feature extraction and matching method based on ORB and active vision | |
CN117949942B (en) | Target tracking method and system based on fusion of radar data and video data | |
Tawab et al. | Efficient multi-feature PSO for fast gray level object-tracking | |
CN113111857A (en) | Human body posture estimation method based on multi-mode information fusion | |
CN112184767A (en) | Method, device, equipment and storage medium for tracking moving object track | |
Ali et al. | Vehicle detection and tracking in UAV imagery via YOLOv3 and Kalman filter | |
Hao et al. | Recognition of basketball players’ action detection based on visual image and Harris corner extraction algorithm | |
CN103345762B (en) | Bayes's visual tracking method based on manifold learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20210308 Address after: 214500 Wuliqiao, east suburb, Jingjiang City, Taizhou City, Jiangsu Province Patentee after: JIANGSU SANLI HYDRAULIC MACHINERY Co.,Ltd. Address before: 214046 Room 101, building C, information industry science and Technology Park, No. 21, Changjiang Road, New District, Wuxi City, Jiangsu Province Patentee before: WUXI YINYU INTELLIGENT ROBOT Co.,Ltd. |
TR01 | Transfer of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20160817 |
CF01 | Termination of patent right due to non-payment of annual fee |