CN108875655A - Real-time multi-feature target video tracking method and system - Google Patents

Real-time multi-feature target video tracking method and system

Info

Publication number
CN108875655A
CN108875655A (application CN201810662349.XA)
Authority
CN
China
Prior art keywords: video, sub-video, sub-image, sample, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810662349.XA
Other languages
Chinese (zh)
Inventor
曲海平
刘显林
岳峻
寇光杰
贾世祥
张志旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ludong University
Original Assignee
Ludong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ludong University
Priority to CN201810662349.XA priority Critical patent/CN108875655A/en
Publication of CN108875655A publication Critical patent/CN108875655A/en
Priority to CN201910142073.7A priority patent/CN109685045B/en
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items


Abstract

The present invention provides a real-time multi-feature target video tracking method and system. The method includes: dividing a video to be tracked chronologically into several sub-videos according to a preset rule, where the number of sub-videos is at least one; for any sub-video, obtaining the target sub-image in its first frame according to the preset classifier corresponding to that sub-video; and, for any frame of the sub-video after the first frame, selecting multiple sample sub-images from the frame, computing the distance between each sample sub-image and the target sub-image of the previous frame, determining a posterior probability for each sample sub-image from these distances, and taking the sample sub-image with the maximum posterior probability as the target sub-image of the frame. The method and system adapt to changes in the target's appearance during tracking and effectively handle occlusion, which improves the accuracy of the tracking result.

Description

Real-time multi-feature target video tracking method and system
Technical field
The present invention relates to the technical field of target tracking, and in particular to a real-time multi-feature target video tracking method and system.
Background technique
Target tracking is a research hotspot in computer vision. With the spread of cameras, video tracking has a wide range of applications and plays an important role in fields such as human-computer interaction, intelligent monitoring, and target recognition.
Although many tracking algorithms have emerged in recent years, most of them represent the target with only a single, independent feature. Target deformation and occlusion therefore degrade their performance considerably. A single kind of feature cannot cope with the variations encountered across different scene conditions, and no single-feature algorithm can handle the tracking problem under all scenes.
In addition, existing tracking algorithms use a single classifier throughout the entire tracking process to separate target from background. Because the target may deform during tracking, a single classifier struggles to separate it from the background accurately, so the tracking result lacks precision.
In view of this, a real-time tracking method and system that can adapt to changes in target appearance and better handle occlusion is urgently needed.
Summary of the invention
To overcome the inability of prior-art tracking algorithms to adapt to changes in target appearance and to handle occlusion, which lowers the accuracy of tracking results, the present invention provides a real-time multi-feature target video tracking method and system.
In one aspect, the present invention provides a real-time multi-feature target video tracking method, including:
dividing a video to be tracked chronologically into several sub-videos according to a preset rule, where the number of sub-videos is at least one;
for any sub-video, obtaining the target sub-image in the first frame of the sub-video according to the preset classifier corresponding to that sub-video;
for any frame of the sub-video after the first frame, selecting multiple sample sub-images from the frame, computing the distance between each sample sub-image and the target sub-image of the previous frame, determining a posterior probability for each sample sub-image from the distances, and taking the sample sub-image with the maximum posterior probability as the target sub-image of the frame.
Preferably, selecting multiple sample sub-images from the frame specifically comprises:
obtaining the weight of each sample sub-image in the previous frame of the frame; and
selecting multiple sample sub-images from the frame according to the weights of the sample sub-images in the previous frame.
Preferably, before obtaining the target sub-image in the first frame of the sub-video according to the corresponding preset classifier, the method further includes:
obtaining labeled training samples corresponding to the sub-video, and training a classifier with the training samples to obtain the sub-video's preset classifier.
Preferably, training the classifier with the training samples to obtain the sub-video's preset classifier specifically comprises:
for any training sample, extracting the sample's HOG, SILTP, and Haar-like features and cascading them to obtain the sample's feature vector; and
training the classifier on the feature vectors of all training samples to obtain the sub-video's preset classifier.
Preferably, after obtaining a training sample's feature vector, the method further includes:
reducing the dimensionality of the sample's feature vector to obtain the sample's reduced feature vector.
Correspondingly, training the classifier on the feature vectors of all training samples specifically comprises:
training the classifier on the reduced feature vectors of all training samples.
Preferably, reducing the dimensionality of a training sample's feature vector specifically comprises:
obtaining the within-class or between-class difference between every two features in the sample's feature vector, obtaining the within-class covariance matrix from all within-class differences, and obtaining the between-class covariance matrix from all between-class differences; and
computing the mapping matrix for the feature vector from the within-class and between-class covariance matrices, and reducing the feature vector's dimensionality with the mapping matrix.
Preferably, after obtaining the within-class covariance matrix from the within-class differences, the method further includes:
adding a regularization parameter to the within-class covariance matrix to obtain an updated within-class covariance matrix.
Correspondingly, computing the mapping matrix from the within-class and between-class covariance matrices specifically comprises:
computing the mapping matrix for the feature vector from the updated within-class covariance matrix and the between-class covariance matrix.
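The covariance construction in these preferred steps resembles Fisher-style discriminant learning. As an illustration only, the sketch below builds the within-class and between-class covariance matrices from pairwise feature differences, adds the regularization parameter, and takes the mapping matrix from the leading eigenvectors of Sw⁻¹Sb; the eigenvector-based choice of mapping matrix and the parameter values are assumptions, since the claims do not fix them:

```python
import numpy as np

def mapping_matrix(X, y, k=2, reg=1e-3):
    """Build within-class (Sw) and between-class (Sb) covariance from pairwise
    feature differences, regularize Sw, and take the top-k eigenvectors of
    inv(Sw) @ Sb as the mapping matrix (a hypothetical concrete reading of
    the claims)."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            diff = (X[i] - X[j])[:, None]
            if y[i] == y[j]:
                Sw += diff @ diff.T     # within-class difference
            else:
                Sb += diff @ diff.T     # between-class difference
    Sw += reg * np.eye(d)               # regularization parameter
    vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:k]].real

# Projecting a feature vector x onto the reduced space: x_reduced = x @ W
```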
In another aspect, the present invention provides a real-time multi-feature target video tracking system, including:
a video division module for dividing a video to be tracked chronologically into several sub-videos according to a preset rule, where the number of sub-videos is at least one;
a target initialization module for obtaining, for any sub-video, the target sub-image in the first frame of the sub-video according to the sub-video's preset classifier; and
a target tracking module for, for any frame of a sub-video after the first frame, selecting multiple sample sub-images from the frame, computing the distance between each sample sub-image and the target sub-image of the previous frame, determining a posterior probability for each sample sub-image from the distances, and taking the sample sub-image with the maximum posterior probability as the target sub-image of the frame.
In another aspect, the present invention provides a device for the real-time multi-feature target video tracking method, including:
at least one processor; and
at least one memory communicatively connected to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform any of the methods described above.
In another aspect, the present invention provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform any of the methods described above.
The real-time multi-feature target video tracking method and system provided by the invention divide the video to be tracked into multiple sub-videos, train a separate classifier for each sub-video, initialize the target position in the first frame of each sub-video with the corresponding classifier, and finally track the target through the remaining frames of each sub-video with a target tracking algorithm. By assigning different classifiers to different sub-videos during tracking, the method and system adapt to changes in the target's appearance and effectively handle occlusion, improving the accuracy of the tracking result.
Detailed description of the invention
Fig. 1 is an overall flow diagram of a real-time multi-feature target video tracking method according to an embodiment of the present invention;
Fig. 2 is an overall structure diagram of a real-time multi-feature target video tracking system according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of a device for the real-time multi-feature target video tracking method according to an embodiment of the present invention.
Specific embodiments
Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following examples illustrate the invention and are not intended to limit its scope.
Fig. 1 is an overall flow diagram of a real-time multi-feature target video tracking method according to an embodiment of the present invention. As shown in Fig. 1, the present invention provides a real-time multi-feature target video tracking method, including:
S1: divide the video to be tracked chronologically into several sub-videos according to a preset rule, where the number of sub-videos is at least one;
Specifically, the video to be tracked is captured by an image acquisition device. The video is divided chronologically into several sub-videos according to a preset rule, where the chronological order is the order in which the frames were captured. In this embodiment, the preset rule may be to make a division every fixed number of frames; the specific interval can be configured as needed and is not limited here. Given the preset rule, the number of sub-videos is determined by the number of frames in the video to be tracked, and is at least one.
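Step S1 reduces, at its simplest, to a chronological split of the frame sequence; in the sketch below the fixed `interval` stands in for the preset rule and is an assumed parameter:

```python
def split_into_subvideos(frames, interval):
    """Split a frame sequence into sub-videos of at most `interval` frames,
    preserving temporal order (a sketch of step S1)."""
    if interval <= 0:
        raise ValueError("interval must be positive")
    return [frames[i:i + interval] for i in range(0, len(frames), interval)]
```

The last sub-video simply keeps whatever frames remain, so the split always covers the whole video.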
S2: for any sub-video, obtain the target sub-image in the first frame of the sub-video according to the preset classifier corresponding to that sub-video;
Specifically, once the video to be tracked has been divided into sub-videos, a number of target images and background images can be chosen in advance for each sub-video as positive and negative samples, and a classifier trained on them to obtain each sub-video's preset classifier. On this basis, for any sub-video, the target and background contained in its first frame are classified by the sub-video's preset classifier, yielding the target sub-image of the first frame. Thus, across the whole video to be tracked, a new classifier is applied after every fixed interval of frames to separate target from background, which effectively adapts to changes in the target's appearance.
S3: for any frame of the sub-video after the first frame, select multiple sample sub-images from the frame, compute the distance between each sample sub-image and the target sub-image of the previous frame, determine a posterior probability for each sample sub-image from the distances, and take the sample sub-image with the maximum posterior probability as the target sub-image of the frame.
Specifically, once the target sub-image of the sub-video's first frame has been obtained, a preset target tracking algorithm is applied to every frame of the sub-video after the first. For any such frame, multiple sample sub-images are selected from it; in general the selected sample sub-images have the same size as the target sub-image of the sub-video's first frame, while their number and locations can be configured as needed and are not limited here.
On this basis, the distance between each sample sub-image and the target sub-image of the previous frame is computed. The distance measure can be chosen as needed, for example the Bhattacharyya distance, and is not limited here. The distance computed for each sample sub-image measures its similarity to the target sub-image. Finally, using the idea of a normal distribution, a posterior probability is determined for each sample sub-image from its computed distance; the posterior probabilities are compared, and the sample sub-image with the maximum posterior probability is taken as the target sub-image of the frame.
In this embodiment, for the frames of a sub-video after the first frame, a particle filter is used as the target localization algorithm, and a dynamic model based on the Rayleigh distribution replaces the Gaussian distribution of conventional particle filtering so as to adapt to fast target motion. The two-dimensional Rayleigh distribution is defined as

R2(x, y) = (√(x² + y²) / μ²) · exp(−(x² + y²) / (2μ²))

where x is the position along the x-axis, y the position along the y-axis, and μ the model parameter. Under this dynamic model the particles are densest on the circle of radius μ, so the size of μ must be controlled to distribute most particles around the true target; the invention sizes μ according to the speed of the target.
The tracking algorithm proceeds in the following five steps:
1) Define the two-dimensional Rayleigh distribution given above and resample the particles, generating n particles {γj : j = 1, 2, 3, …, n} that obey the R2(x, y) distribution;
2) Obtain a new particle set through the particle state transfer equation, whose two state components are the horizontal coordinate x_t^j and the vertical coordinate y_t^j of the j-th particle in frame t; centered on each particle, acquire an image patch according to the particle's state parameters, giving the sample sub-images.
3) On this basis, compute each particle's observation probability, i.e. the distance between the particle's sample image patch and the target image:

D = √(1 − Σᵢ √(h_t(i) · h_b(i)))

where h_t is the feature histogram of the target image, h_b the feature histogram of the particle's sample image patch, and D the Bhattacharyya distance between them.
4) From each particle's observation probability, combined with the Bhattacharyya coefficient (i.e. the similarity measure) and a normal distribution function, estimate the target's maximum a posteriori probability via Bayes' rule:

P(h | D) = P(D | h) · P(h) / P(D)

where P(h) is the prior probability of hypothesis h in the absence of training data, P(D) the prior probability of the training data D, and P(D | h) the probability of observing D given that h holds.
5) The target sub-image in each frame is then determined from the maximum a posteriori probability.
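Steps 3) to 5) can be sketched as follows, assuming normalized feature histograms; the Gaussian bandwidth `sigma` used to turn distances into posterior-like scores is an assumption, since the filing does not state it:

```python
import numpy as np

def bhattacharyya_distance(h_t, h_b):
    """Bhattacharyya distance between two normalized feature histograms."""
    bc = np.sum(np.sqrt(h_t * h_b))          # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))

def select_target(h_target, sample_hists, sigma=0.2):
    """Score each sample sub-image histogram against the previous target
    histogram and return the index of the maximum-posterior sample.
    The Gaussian weighting is an assumed concrete form of the
    normal-distribution weighting described in the text."""
    dists = np.array([bhattacharyya_distance(h_target, h) for h in sample_hists])
    post = np.exp(-dists ** 2 / (2 * sigma ** 2))   # unnormalized posterior
    return int(np.argmax(post))
```

Because the posterior is a monotone decreasing function of the distance, the selected sample is simply the one closest to the previous frame's target in histogram space.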
The real-time multi-feature target video tracking method provided by the invention divides the video to be tracked into multiple sub-videos, trains a separate classifier for each sub-video, initializes the target position in the first frame of each sub-video with the corresponding classifier, and finally tracks the target through the remaining frames of each sub-video with the target tracking algorithm. By assigning different classifiers to different sub-videos during tracking, the method adapts to changes in the target's appearance and effectively handles occlusion, improving the accuracy of the tracking result.
Based on any of the above embodiments, a real-time multi-feature target video tracking method is provided in which selecting multiple sample sub-images from a frame specifically comprises: obtaining the weight of each sample sub-image in the previous frame of the frame, and selecting multiple sample sub-images from the frame according to those weights.
Specifically, for any frame of a sub-video after the first frame, tracking in the previous frame has already completed by the time sample sub-images are selected from the current frame: the posterior probability of each sample sub-image in the previous frame has been determined, and the sample sub-image with the maximum posterior probability has been taken as the previous frame's target sub-image. Moreover, computing those posterior probabilities already required the distance between each of the previous frame's sample sub-images and the target sub-image of the frame before it. On this basis, the weight of each sample sub-image can be determined from that distance; specifically, a sample sub-image's weight can be taken as the ratio of its distance to the sum of the distances of all sample sub-images.
On this basis, the positions of the previous frame's sample sub-images are mapped into the current frame, the previous frame's weights are used as the resampling weights for the current frame, and multiple new sample sub-images are selected around the high-weight ones, yielding the current frame's sample sub-images.
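The weight-driven resampling described above might look like the following sketch, where the jitter radius around each selected position is an assumption:

```python
import random

def resample_positions(positions, weights, n):
    """Draw n new sample positions, favouring high-weight sub-image positions
    from the previous frame and adding small jitter around each draw
    (a sketch; the +/-2 pixel jitter range is an assumed value)."""
    total = sum(weights)
    probs = [w / total for w in weights]
    picks = random.choices(positions, weights=probs, k=n)
    return [(x + random.randint(-2, 2), y + random.randint(-2, 2))
            for x, y in picks]
```

High-weight positions are drawn more often, so the new sample sub-images concentrate around them.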
In the real-time multi-feature target video tracking method provided by the invention, the weight of each sample sub-image in the previous frame is obtained, and multiple sample sub-images are selected from the current frame according to those weights. Using the previous frame's weights as the resampling weights of the current frame helps the selected sample sub-images converge toward the target position, further improving the accuracy of the tracking result.
Based on any of the above embodiments, a real-time multi-feature target video tracking method is provided in which, before the target sub-image in a sub-video's first frame is obtained with the sub-video's preset classifier, labeled training samples corresponding to the sub-video are obtained and a classifier is trained on them to produce that preset classifier.
Specifically, for any sub-video, before the target sub-image of its first frame is obtained with the corresponding preset classifier, the labeled training samples for the sub-video must be obtained and a classifier trained on them to produce the sub-video's preset classifier; that classifier is then used to obtain the target sub-image of the sub-video's first frame.
In this embodiment, for the first sub-video of the video to be tracked, training samples are obtained from its first two frames, and each training sample is given a label, denoted 1 or -1, where 1 represents target and -1 represents background. This yields the labeled training samples. The classifier is then trained on these samples to obtain the first sub-video's preset classifier; the type of classifier can be configured as needed and is not limited here.
On this basis, for the other sub-videos of the video to be tracked, the training samples are successively updated per sub-video with a subspace learning method: training samples are reselected for each sub-video and the classifier retrained on them, thereby updating the classifier, which better handles the adverse effects of factors such as occlusion.
In this embodiment, the chosen classifier is the Extreme Learning Machine (ELM). ELM is an algorithm for single-hidden-layer feedforward neural networks that can randomly initialize the input weights and biases and then solve for the corresponding output weights. Its most distinctive property is that, while guaranteeing learning accuracy, it learns much faster than traditional neural networks, in particular single-hidden-layer feedforward networks (SLFNs). The goal of ELM is to find the function f(x_j) with minimum error over all training data:

f(x_j) = Σ_{i=1}^{L} β_i · g(w_i · x_j + b_i)

where w_i is the weight vector connecting the i-th hidden node to the input layer and β_i the output weight connecting the i-th hidden node to the output layer; b_i is the threshold of the i-th hidden node; L is the number of hidden nodes; and g(x) is the activation function, chosen so that the error over the N₀ training samples approaches 0, i.e. Σ_{j=1}^{N₀} ‖f(x_j) − t_j‖ = 0, with t_j the desired output of the j-th sample.
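A minimal ELM of this form can be sketched with random input weights and a least-squares (pseudo-inverse) solution for the output weights; the sigmoid activation and hidden-layer size below are assumptions, as the filing does not specify them:

```python
import numpy as np

def train_elm(X, T, L=64, seed=0):
    """Minimal ELM: random input weights W and biases b, sigmoid hidden layer,
    output weights beta solved by least squares via the pseudo-inverse.
    X: (N, d) feature matrix; T: (N,) labels in {-1, 1}."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))
    b = rng.standard_normal(L)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output g(w.x + b)
    beta = np.linalg.pinv(H) @ T             # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.sign(H @ beta)
```

Because only `beta` is solved for (in closed form), training cost is a single pseudo-inverse, which is what makes ELM fast relative to iteratively trained SLFNs.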
In the real-time multi-feature target video tracking method provided by the invention, before the target sub-image of a sub-video's first frame is obtained with the sub-video's preset classifier, the sub-video's labeled training samples are obtained and a classifier is trained on them to produce that preset classifier. Assigning different classifiers to different sub-videos in this way lets the classifiers adapt to changes in the target's appearance during tracking and effectively handles occlusion, improving the accuracy of the tracking result.
Based on any of the above embodiments, a real-time multi-feature target video tracking method is provided in which training the classifier with the training samples to obtain the sub-video's preset classifier specifically comprises: for any training sample, extracting the sample's HOG, SILTP, and Haar-like features and cascading them to obtain the sample's feature vector; then training the classifier on the feature vectors of all training samples to obtain the sub-video's preset classifier.
Specifically, for any sub-video, after its training samples are obtained, the classifier is trained on them to produce the sub-video's preset classifier. For any training sample, the HOG, SILTP, and Haar-like features are extracted and cascaded into a unified feature vector, yielding the sample's feature vector. The feature vectors of all training samples are then fed to the classifier for training, yielding the sub-video's preset classifier.
In this embodiment, a cascade of strong classifiers combines each training sample's HOG, SILTP, and Haar-like features into the unified feature vector. The cascade strategy arranges several strong classifiers from simple to complex; each is trained to have a high detection rate, which lowers the misclassification rate. Features are added incrementally: the first classifier uses only a few features, and each subsequent classifier adds features on top of the previous one until that stage's requirement is met.
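The per-sample cascading of the three descriptors reduces, at its simplest, to concatenation into one vector; the ordering HOG, SILTP, Haar-like below is an assumption:

```python
import numpy as np

def cascade_features(hog_vec, siltp_vec, haar_vec):
    """Concatenate the three per-sample descriptors into one unified
    feature vector, as the text describes."""
    return np.concatenate([np.ravel(hog_vec),
                           np.ravel(siltp_vec),
                           np.ravel(haar_vec)])
```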
In this embodiment, the HOG feature of each training sample is extracted as follows:
1) Standardize the gamma space and color space
First, the whole image is standardized (normalized). Because local surface exposure contributes a large share of the image's texture intensity, this compression effectively reduces the influence of local shadows and illumination changes. The gamma compression formula is as follows (gamma = 1/2 may be taken):
I(x, y) = I(x, y)^gamma
2) Compute the gradient magnitude and orientation
The original image is first convolved with the [-1, 0, 1] gradient operator to obtain the gradient component gradscalx in the x direction, then with the [-1, 0, 1]^T gradient operator to obtain the gradient component gradscaly in the y direction. The gradient magnitude and orientation of each pixel are then computed as:

G(x, y) = √(gradscalx² + gradscaly²),  θ(x, y) = arctan(gradscaly / gradscalx)
3) Build a gradient orientation histogram for each cell
The image is divided into several cells; each pixel in a cell casts a weighted vote into the histogram along its gradient direction. The 360° of gradient directions are divided into 9 direction bins, and the count of the bin containing a pixel's gradient direction is incremented by the projection weight (the gradient magnitude), yielding the cell's gradient orientation histogram.
4) Collecting HOG features
Cells are first grouped into larger blocks, and the feature vectors of all cells in a block are concatenated to obtain the block's HOG feature. Finally, the HOG features of all overlapping blocks in the detection window are collected and combined into the final feature vector.
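The four steps above can be sketched in a short NumPy implementation (a minimal illustration, not the exact implementation of the embodiment; the 128 × 64 window, 8 × 8 cell size, 2 × 2 block size, and L2 block normalization are illustrative assumptions):

```python
import numpy as np

def hog_features(img, cell=8, bins=9, block=2):
    """Minimal HOG sketch: [-1, 0, 1] gradients, per-cell orientation
    histograms weighted by magnitude, 2x2-cell blocks with L2 norm."""
    img = img.astype(np.float64)
    # Step 2): convolve with [-1, 0, 1] in x and its transpose in y.
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 360.0   # 0..360 as in step 3)
    # Step 3): per-cell histograms, magnitude-weighted, 9 direction bins.
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hists = np.zeros((ch, cw, bins))
    bin_idx = (ang // (360.0 / bins)).astype(int) % bins
    for i in range(ch):
        for j in range(cw):
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            hists[i, j] = np.bincount(b, weights=m, minlength=bins)
    # Step 4): group cells into overlapping blocks, L2-normalize, concatenate.
    feats = []
    for i in range(ch - block + 1):
        for j in range(cw - block + 1):
            v = hists[i:i+block, j:j+block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
f = hog_features(rng.random((128, 64)))
print(f.shape)  # (3780,) = 15*7 blocks * 2*2 cells * 9 bins
```

For a 128 × 64 window this yields the classic 3780-dimensional descriptor.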
In the present embodiment, the SILTP feature of each training sample is extracted as follows:
1) A three-level pyramid is built over the image after 2 × 2 local mean pooling;
2) Overlapping rectangular blocks are obtained with a 10 × 10 sliding window at a step of 5 pixels;
3) Within each rectangular block, local histograms of the SILTP texture features at two scales are computed;
4) The features of all levels are concatenated to form the final SILTP feature.
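As a minimal sketch of the SILTP code itself (using 4 neighbours and a single scale; the pyramid, pooling, and sliding-window steps above are omitted, and the threshold tau = 0.3 and radius r = 1 are illustrative choices):

```python
import numpy as np

def siltp_codes(img, tau=0.3, r=1):
    """Minimal SILTP sketch (4 neighbours, radius r): each neighbour
    contributes 2 bits -- 01 if brighter than (1+tau)*center, 10 if
    darker than (1-tau)*center, 00 otherwise."""
    img = img.astype(np.float64)
    c = img[r:-r, r:-r]                        # interior (center) pixels
    upper, lower = (1 + tau) * c, (1 - tau) * c
    neigh = [img[r:-r, 2*r:],                  # right neighbour
             img[:-2*r, r:-r],                 # upper neighbour
             img[r:-r, :-2*r],                 # left neighbour
             img[2*r:, r:-r]]                  # lower neighbour
    codes = np.zeros(c.shape, dtype=np.int64)
    for k, n in enumerate(neigh):
        bits = np.where(n > upper, 1, np.where(n < lower, 2, 0))
        codes += bits * (4 ** k)               # two bits per neighbour
    return codes                               # values in [0, 4**4)

def siltp_hist(img, tau=0.3, r=1):
    """Histogram of SILTP codes over the whole patch."""
    return np.bincount(siltp_codes(img, tau, r).ravel(), minlength=4 ** 4)

rng = np.random.default_rng(1)
h = siltp_hist(rng.random((10, 10)))
print(h.sum())  # 64 interior pixels in a 10x10 patch with r = 1
```

The scale invariance comes from comparing each neighbour against a multiplicative band around the center intensity rather than a fixed offset.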
In addition, in the present embodiment the Haar-like feature of each training sample is extracted using the integral-image method. In other embodiments, the HOG feature, SILTP feature, and Haar-like feature of each training sample may be obtained in other ways, configured according to actual needs; no specific limitation is imposed here.
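The integral-image method allows any rectangular pixel sum, and hence any Haar-like feature, to be evaluated in constant time. A minimal sketch (the two-rectangle "edge" feature shown is one illustrative Haar-like type, not the embodiment's full feature set):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row / left column, so that the
    sum over img[y0:y1, x0:x1] is ii[y1,x1] - ii[y0,x1] - ii[y1,x0] + ii[y0,x0]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Pixel sum over the rectangle [y0:y1, x0:x1] in O(1)."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def haar_two_rect_horizontal(ii, y, x, h, w):
    """Edge-type Haar-like feature: left half-rectangle minus right half."""
    return rect_sum(ii, y, x, y + h, x + w // 2) - \
           rect_sum(ii, y, x + w // 2, y + h, x + w)

img = np.ones((8, 8))
img[:, 4:] = 0.0          # bright left half, dark right half
ii = integral_image(img)
val = haar_two_rect_horizontal(ii, 0, 0, 8, 8)
print(val)  # 32.0 (left half sums to 32, right half to 0)
```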
In the real-time target video tracking method based on multiple features provided by the present invention, for any one training sample, the HOG feature, SILTP feature, and Haar-like feature of the training sample are extracted and cascaded to obtain the feature vector of the training sample; the classifier is then trained according to the feature vectors of all training samples to obtain the preset classifier corresponding to the sub-video. By combining multiple features of the training samples to train the classifier, the method copes with target tracking under a variety of scene conditions, which helps improve the accuracy of the tracking result.
Based on any of the above embodiments, a real-time target video tracking method based on multiple features is provided. After obtaining the feature vector of the training sample, the method further includes: reducing the dimensionality of the feature vector of the training sample to obtain the reduced feature vector of the training sample. Accordingly, training the classifier according to the feature vectors of all training samples specifically means: training the classifier according to the reduced feature vectors of all training samples.
Specifically, because the feature vector of each training sample is obtained by cascading its HOG feature, SILTP feature, and Haar-like feature, its dimensionality is high. In view of this, in the present embodiment, after the feature vector of any training sample is obtained, dimensionality reduction is applied to it to obtain the reduced feature vector of the training sample. The reduction method can be configured according to actual needs and is not specifically limited here. On this basis, the reduced feature vectors of all training samples are input to the classifier, and the classifier is trained on them. This effectively reduces the information redundancy of the classifier during training and helps improve training efficiency.
In the real-time target video tracking method based on multiple features provided by the present invention, the feature vectors of the training samples are reduced in dimensionality, and the classifier is trained according to the reduced feature vectors of all training samples; this effectively reduces the information redundancy of the classifier during training and helps improve its training efficiency.
Based on any of the above embodiments, a real-time target video tracking method based on multiple features is provided, in which reducing the dimensionality of the feature vector of the training sample specifically means: obtaining the intra-class or extra-class difference between every two features in the feature vector of the training sample, obtaining the intra-class covariance matrix from all intra-class differences and the extra-class covariance matrix from all extra-class differences, calculating the mapping matrix of the feature vector from the intra-class and extra-class covariance matrices, and reducing the feature vector with the mapping matrix.
Specifically, in the present embodiment, for any one training sample, cross-view quadratic discriminant analysis (XQDA) is used to reduce the dimensionality of the feature vector of the training sample. The algorithm learns a feature subspace from cross-view data and simultaneously learns a distance function for similarity measurement in the new feature subspace. For original features x_i, x_j ∈ R^d, XQDA learns a mapping matrix W ∈ R^(d×r) (r < d) that maps the original features to a low-dimensional subspace, with the distance function:

d(x_i, x_j) = (x_i - x_j)^T W (Σ'_I^(-1) - Σ'_E^(-1)) W^T (x_i - x_j)
Here, when the sample labels of features x_i and x_j are the same, the difference between them is called an intra-class difference Ω_I; when the labels differ, the difference is called an extra-class difference Ω_E. In the above formula, Σ'_I is the covariance matrix of the intra-class differences, called the intra-class covariance matrix, and Σ'_E is the covariance matrix of the extra-class differences, called the extra-class covariance matrix. Since W appears inside two inverse matrices, directly optimizing the above formula is difficult. Assume that Ω_I and Ω_E follow zero-mean Gaussian distributions with variances σ_I and σ_E; it can then be inferred that the samples mapped from Ω_I and Ω_E also follow zero-mean Gaussian distributions, with variances σ_I(W) and σ_E(W). As is well known, linear discriminant analysis (LDA) is a supervised classification method that maps features from a high-dimensional space to a low-dimensional subspace. When the two class means coincide, LDA is no longer applicable, but σ_I(W) and σ_E(W) after projection can still be used for classification. Based on this, the projection direction W is optimized so that σ_E(W)/σ_I(W) is maximized, corresponding to the generalized Rayleigh quotient:

J(W) = σ_E(W) / σ_I(W)
Since σ_E(W) = W^T Σ_E W and σ_I(W) = W^T Σ_I W, the following formula is obtained:

J(W) = (W^T Σ_E W) / (W^T Σ_I W)
To ensure that J(W) is maximized, the above formula is differentiated with respect to W and the derivative is set to 0, which gives:

Σ_E W = J(W) Σ_I W
Treating J(W) as a generalized eigenvalue problem and solving it, the above formula can be equivalently converted to:

Σ_I^(-1) Σ_E W = J(W) W

Letting J(W) = α gives Σ_I^(-1) Σ_E w = α w. Performing an eigendecomposition of Σ_I^(-1) Σ_E, the largest eigenvalue obtained is the maximum of J(W), and its corresponding eigenvector is the first column w_1 of W; the mapping matrix W is then composed of the eigenvectors corresponding to the r largest eigenvalues. Finally, the mapping matrix is used to reduce the feature vector, yielding the reduced feature vector.
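The eigendecomposition step can be sketched as follows (a toy illustration with random features and labels; the pairwise-difference construction, 10 classes, and the choice r = 4 are assumptions for the example, not the embodiment's actual data):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, r = 20, 200, 4

# Toy features with labels; same-label pairs give intra-class differences.
X = rng.normal(size=(n, d))
y = rng.integers(0, 10, size=n)

# Pairwise differences split into intra-class (Omega_I) and
# extra-class (Omega_E) sets, then their covariance matrices.
diffs = X[:, None, :] - X[None, :, :]
same = y[:, None] == y[None, :]
d_I = diffs[same & ~np.eye(n, dtype=bool)]
d_E = diffs[~same]
S_I = d_I.T @ d_I / len(d_I)       # intra-class covariance Sigma_I
S_E = d_E.T @ d_E / len(d_E)       # extra-class covariance Sigma_E

# Eigendecomposition of inv(Sigma_I) @ Sigma_E: the eigenvectors of the
# r largest eigenvalues form the columns of the mapping matrix W.
vals, vecs = np.linalg.eig(np.linalg.inv(S_I) @ S_E)
order = np.argsort(-vals.real)
W = vecs[:, order[:r]].real

Z = X @ W          # features projected to the r-dimensional subspace
print(Z.shape)     # (200, 4)
```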
In the real-time target video tracking method based on multiple features provided by the present invention, the feature vectors of the training samples are reduced in dimensionality, and the classifier is trained according to the reduced feature vectors of all training samples; this effectively reduces the information redundancy of the classifier during training and helps improve its training efficiency.
Based on any of the above embodiments, a real-time target video tracking method based on multiple features is provided. After obtaining the intra-class covariance matrix from all intra-class differences, the method further includes: adding a regularization parameter to the intra-class covariance matrix to obtain an updated intra-class covariance matrix. Correspondingly, calculating the mapping matrix of the feature vector from the intra-class and extra-class covariance matrices specifically means: calculating the mapping matrix of the feature vector from the updated intra-class covariance matrix and the extra-class covariance matrix.
Specifically, on the basis of the above technical solution, it can be seen from the eigenmatrix Σ_I^(-1) Σ_E that the intra-class covariance matrix may be singular, in which case no solution exists. Therefore, a regularization parameter is added to the intra-class covariance matrix, i.e. Σ_I = Σ_I + λI, where the regularization parameter λ is obtained by the following method so that it adaptively matches different data sets.
Assume there are L classes C_1, ..., C_L. In the data space, let x_i^(k) denote the i-th sample with class label C_k, and N_k the number of samples in class C_k. The class empirical mean m_k of each class and the overall sample mean m are defined, respectively, as:

m_k = (1/N_k) · Σ_{i=1..N_k} x_i^(k)
m = (1/N) · Σ_{k=1..L} Σ_{i=1..N_k} x_i^(k)
where N = Σ_{k=1..L} N_k is the total number of samples. The optimal mapping matrix is found according to the Fisher linear discriminant criterion:

W* = argmax_W tr(W^T S_b W) / tr(W^T S_w W)
where S_b and S_w are, respectively, the between-class and within-class covariance matrices, defined as:

S_b = (1/N) · Σ_{k=1..L} N_k (m_k - m)(m_k - m)^T
S_w = (1/N) · Σ_{k=1..L} Σ_{i=1..N_k} (x_i^(k) - m_k)(x_i^(k) - m_k)^T
Assume the sample data of each class follows a Gaussian distribution. Given an input (x, y), where x ∈ X and the class label y ∈ {C_1, ..., C_L}, define the disturbance ξ_x = x - E_{x'|y}[x'] as the difference between the sample x and the expectation E_{x'|y}[x'] of samples with label y; a random mean is then simulated to describe E_{x'|y}[x'], where x' represents the deviation of a real sample from the expected sample. ξ_x follows a zero-mean Gaussian distribution with covariance matrix Ω_y, i.e. ξ_x ~ N(0, Ω_y), where Ω_y is the covariance matrix of ξ_x. The disturbance model is thus x = E_{x'|y}[x'] + ξ_x.
Then, for a specific sample x_i^(k) in x, its disturbance random vector obeys ξ_{x_i^(k)} ~ N(0, Ω_k), and its perturbation model is x_i^(k) = E_{x'|C_k}[x'] + ξ_{x_i^(k)}. Accordingly, the expectation of the samples of a specific class C_k can be expressed through the class empirical mean m_k.
Through derivation, new within-class and between-class covariance matrices are obtained in which the disturbance random vectors ξ are incorporated; the corresponding disturbance terms are denoted the within-class disturbance covariance matrix and the between-class disturbance covariance matrix, respectively.
When the data within each class follows a Gaussian distribution, the disturbance random vector ξ_x is determined by finding the optimal linear mapping matrix under the Fisher criterion above. When the sample distributions of all L classes follow Gaussian distributions, the disturbance covariance matrices are replaced by their mean, so the final disturbance covariance matrix is:

Ω = (1/L) · Σ_{k=1..L} Ω_k
Therefore, for any given input sample x = (x_1, ..., x_n)^T ∈ X, ξ_x ~ N(0, Ω), and Ω can be written as:

Ω = diag(σ_1², ..., σ_n²)

where σ_i² denotes the variance of the i-th feature dimension in sample x. In the algorithm, the average of all variances, σ² = (1/n) · Σ_{i=1..n} σ_i², replaces each single variance σ_i², so that:

Ω = σ² I,  σ ≠ 0
The above formula is the regularized estimate of the disturbance covariance Ω, i.e. it corresponds to the regularization parameter λ described above.
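The regularization described above, with λ taken as the average per-dimension variance σ², can be sketched as follows (an illustrative implementation; the random data and matrix sizes are arbitrary):

```python
import numpy as np

def regularized_within_cov(S_I, X):
    """Sketch of the adaptive regularization described above: lambda is the
    mean of the per-dimension sample variances, so Omega ≈ sigma^2 * I, and
    the intra-class covariance is replaced by S_I + lambda * I."""
    sigma2 = X.var(axis=0).mean()          # average single-dimension variance
    return S_I + sigma2 * np.eye(S_I.shape[0]), sigma2

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10))             # toy feature matrix
S_I = np.cov(X, rowvar=False)              # stand-in intra-class covariance
S_reg, lam = regularized_within_cov(S_I, X)

# Adding lam * I shrinks the condition number, guarding against singularity.
better = np.linalg.cond(S_reg) < np.linalg.cond(S_I)
print(better)  # True
```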
In the real-time target video tracking method based on multiple features provided by the present invention, a regularization parameter is added to the intra-class covariance matrix to obtain an updated intra-class covariance matrix, and the mapping matrix of the feature vector is then calculated from the updated intra-class covariance matrix and the extra-class covariance matrix. Adding the regularization parameter to the intra-class covariance matrix enables it to adaptively match different data sets, which facilitates the dimensionality reduction of different feature vectors.
Fig. 2 is an overall structural diagram of a real-time target video tracking system based on multiple features according to an embodiment of the present invention. As shown in Fig. 2, based on any of the above embodiments, a real-time target video tracking system based on multiple features is provided, including:
a video division module 1, configured to divide a video to be tracked chronologically into several sub-videos according to a preset rule, wherein the number of the sub-videos is at least one;
a target initialization module 2, configured to, for any one sub-video, obtain the target sub-image in the first frame image of the sub-video according to the preset classifier corresponding to the sub-video;
a target tracking module 3, configured to, for any one frame image in the sub-video after the first frame image, select multiple sample sub-images from the image, calculate the distance between each sample sub-image and the target sub-image in the previous frame image, determine the posterior probability corresponding to each sample sub-image according to each distance, and determine the sample sub-image corresponding to the maximum posterior probability as the target sub-image in the image.
Specifically, the present embodiment provides a real-time target video tracking system based on multiple features, including the video division module 1, the target initialization module 2, and the target tracking module 3. The modules implement the method of any of the above method embodiments; for the specific implementation process, reference may be made to the above method embodiments, and details are not repeated here.
In the real-time target video tracking system based on multiple features provided by the present invention, the video to be tracked is divided into multiple sub-videos, a separate classifier is trained for each sub-video, the target position in the first frame image of each sub-video is initialized with the corresponding classifier, and the target in the other frame images of each sub-video is then tracked with the target tracking algorithm. By setting different classifiers for different sub-videos in the tracking process, the system can adapt to appearance changes of the target and effectively handle occlusion, which helps improve the accuracy of the tracking result.
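The update step performed by the target tracking module can be sketched as follows (a minimal illustration; the 2-D feature vectors and the Gaussian likelihood used to turn distances into posterior probabilities are assumptions for the example, not mandated by the method):

```python
import numpy as np

def select_target(candidates, prev_target, sigma=1.0):
    """Score each candidate sample sub-image feature by its distance to the
    previous frame's target sub-image, convert distances to a posterior
    (Gaussian likelihood, normalized), and pick the maximum-posterior one."""
    dists = np.linalg.norm(candidates - prev_target, axis=1)
    post = np.exp(-dists**2 / (2 * sigma**2))
    post /= post.sum()                     # posterior over the candidates
    return int(np.argmax(post)), post

prev = np.array([1.0, 1.0])                # target feature from previous frame
cands = np.array([[3.0, 0.0],              # candidate sample sub-image features
                  [1.1, 0.9],
                  [0.0, 2.5]])
best, post = select_target(cands, prev)
print(best)  # 1 -- the candidate closest to the previous target
```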
Fig. 3 shows a structural block diagram of a device for the real-time target video tracking method based on multiple features according to an embodiment of the present application. Referring to Fig. 3, the device includes a processor 31, a memory 32, and a bus 33, wherein the processor 31 and the memory 32 communicate with each other through the bus 33. The processor 31 calls the program instructions in the memory 32 to execute the method provided by the above method embodiments, for example including: dividing a video to be tracked chronologically into several sub-videos according to a preset rule, wherein the number of the sub-videos is at least one; for any one sub-video, obtaining the target sub-image in the first frame image of the sub-video according to the preset classifier corresponding to the sub-video; for any one frame image in the sub-video after the first frame image, selecting multiple sample sub-images from the image, calculating the distance between each sample sub-image and the target sub-image in the previous frame image, determining the posterior probability corresponding to each sample sub-image according to each distance, and determining the sample sub-image corresponding to the maximum posterior probability as the target sub-image in the image.
The present embodiment discloses a computer program product. The computer program product includes a computer program stored on a non-transitory computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, enable the computer to perform the method provided by the above method embodiments, for example including: dividing a video to be tracked chronologically into several sub-videos according to a preset rule, wherein the number of the sub-videos is at least one; for any one sub-video, obtaining the target sub-image in the first frame image of the sub-video according to the preset classifier corresponding to the sub-video; for any one frame image in the sub-video after the first frame image, selecting multiple sample sub-images from the image, calculating the distance between each sample sub-image and the target sub-image in the previous frame image, determining the posterior probability corresponding to each sample sub-image according to each distance, and determining the sample sub-image corresponding to the maximum posterior probability as the target sub-image in the image.
The present embodiment provides a non-transitory computer-readable storage medium storing computer instructions that cause the computer to execute the method provided by the above method embodiments, for example including: dividing a video to be tracked chronologically into several sub-videos according to a preset rule, wherein the number of the sub-videos is at least one; for any one sub-video, obtaining the target sub-image in the first frame image of the sub-video according to the preset classifier corresponding to the sub-video; for any one frame image in the sub-video after the first frame image, selecting multiple sample sub-images from the image, calculating the distance between each sample sub-image and the target sub-image in the previous frame image, determining the posterior probability corresponding to each sample sub-image according to each distance, and determining the sample sub-image corresponding to the maximum posterior probability as the target sub-image in the image.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware associated with program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
The above-described embodiments, such as the device for the real-time target video tracking method based on multiple features, are merely illustrative. Units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement it without creative labor.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be realized by means of software plus a necessary general hardware platform, or of course by hardware. Based on this understanding, the above technical solution, or the part of it that contributes to the prior art, can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in the embodiments or in certain parts of the embodiments.
Finally, the above methods are only preferred embodiments and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A real-time target video tracking method based on multiple features, characterized by comprising:
dividing a video to be tracked chronologically into several sub-videos according to a preset rule, wherein the number of the sub-videos is at least one;
for any one sub-video, obtaining the target sub-image in the first frame image of the sub-video according to the preset classifier corresponding to the sub-video;
for any one frame image in the sub-video after the first frame image, selecting multiple sample sub-images from the image, calculating the distance between each sample sub-image and the target sub-image in the previous frame image, determining the posterior probability corresponding to each sample sub-image according to each distance, and determining the sample sub-image corresponding to the maximum posterior probability as the target sub-image in the image.
2. The method according to claim 1, wherein selecting multiple sample sub-images from the image specifically comprises:
obtaining the weight of each sample sub-image in the previous frame image of the image;
selecting multiple sample sub-images from the image according to the weights of the sample sub-images in the previous frame image.
3. The method according to claim 1, wherein before obtaining the target sub-image in the first frame image of the sub-video according to the preset classifier corresponding to the sub-video, the method further comprises:
obtaining labeled training samples corresponding to the sub-video, and training a classifier with the training samples to obtain the preset classifier corresponding to the sub-video.
4. The method according to claim 3, wherein training a classifier with the training samples to obtain the preset classifier corresponding to the sub-video specifically comprises:
for any one training sample, extracting the HOG feature, SILTP feature, and Haar-like feature of the training sample, and cascading the HOG feature, SILTP feature, and Haar-like feature of the training sample to obtain the feature vector of the training sample;
training the classifier according to the feature vectors of all training samples to obtain the preset classifier corresponding to the sub-video.
5. The method according to claim 4, wherein after obtaining the feature vector of the training sample, the method further comprises:
reducing the dimensionality of the feature vector of the training sample to obtain the reduced feature vector of the training sample;
accordingly, training the classifier according to the feature vectors of all training samples specifically comprises:
training the classifier according to the reduced feature vectors of all training samples.
6. The method according to claim 5, wherein reducing the dimensionality of the feature vector of the training sample specifically comprises:
obtaining the intra-class or extra-class difference between every two features in the feature vector of the training sample, obtaining the intra-class covariance matrix according to all intra-class differences, and obtaining the extra-class covariance matrix according to all extra-class differences;
calculating the mapping matrix of the feature vector according to the intra-class covariance matrix and the extra-class covariance matrix, and reducing the feature vector with the mapping matrix.
7. The method according to claim 6, wherein after obtaining the intra-class covariance matrix according to all intra-class differences, the method further comprises:
adding a regularization parameter to the intra-class covariance matrix to obtain an updated intra-class covariance matrix;
correspondingly, calculating the mapping matrix of the feature vector according to the intra-class covariance matrix and the extra-class covariance matrix specifically comprises:
calculating the mapping matrix of the feature vector according to the updated intra-class covariance matrix and the extra-class covariance matrix.
8. A real-time target video tracking system based on multiple features, characterized by comprising:
a video division module, configured to divide a video to be tracked chronologically into several sub-videos according to a preset rule, wherein the number of the sub-videos is at least one;
a target initialization module, configured to, for any one sub-video, obtain the target sub-image in the first frame image of the sub-video according to the preset classifier corresponding to the sub-video;
a target tracking module, configured to, for any one frame image in the sub-video after the first frame image, select multiple sample sub-images from the image, calculate the distance between each sample sub-image and the target sub-image in the previous frame image, determine the posterior probability corresponding to each sample sub-image according to each distance, and determine the sample sub-image corresponding to the maximum posterior probability as the target sub-image in the image.
9. A device for a real-time target video tracking method based on multiple features, characterized by comprising:
at least one processor; and
at least one memory communicatively connected to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor invokes the program instructions to execute the method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium stores computer instructions that cause the computer to execute the method according to any one of claims 1 to 7.
CN201810662349.XA 2018-06-25 2018-06-25 A kind of real-time target video tracing method and system based on multiple features Pending CN108875655A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810662349.XA CN108875655A (en) 2018-06-25 2018-06-25 A kind of real-time target video tracing method and system based on multiple features
CN201910142073.7A CN109685045B (en) 2018-06-25 2019-02-26 Moving target video tracking method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810662349.XA CN108875655A (en) 2018-06-25 2018-06-25 A kind of real-time target video tracing method and system based on multiple features

Publications (1)

Publication Number Publication Date
CN108875655A true CN108875655A (en) 2018-11-23

Family

ID=64295579

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201810662349.XA Pending CN108875655A (en) 2018-06-25 2018-06-25 A kind of real-time target video tracing method and system based on multiple features
CN201910142073.7A Active CN109685045B (en) 2018-06-25 2019-02-26 Moving target video tracking method and system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910142073.7A Active CN109685045B (en) 2018-06-25 2019-02-26 Moving target video tracking method and system

Country Status (1)

Country Link
CN (2) CN108875655A (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221689B (en) * 2021-04-27 2022-07-29 苏州工业职业技术学院 Video multi-target emotion degree prediction method
CN113256685B (en) * 2021-06-25 2021-09-24 南昌工程学院 Target tracking method and system based on convolutional neural network dictionary pair learning
CN113743252B (en) * 2021-08-17 2024-05-31 北京佳服信息科技有限公司 Target tracking method, device, equipment and readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101546556B (en) * 2008-03-28 2011-03-23 展讯通信(上海)有限公司 Classification system for identifying audio content
CN101783012B (en) * 2010-04-06 2012-05-30 中南大学 Automatic image defogging method based on dark primary colour
US8989442B2 (en) * 2013-04-12 2015-03-24 Toyota Motor Engineering & Manufacturing North America, Inc. Robust feature fusion for multi-view object tracking
CN108875655A (en) * 2018-06-25 2018-11-23 鲁东大学 A kind of real-time target video tracing method and system based on multiple features

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685045A (en) * 2018-06-25 2019-04-26 鲁东大学 A kind of Moving Targets Based on Video Streams tracking and system
CN109685045B (en) * 2018-06-25 2020-09-29 鲁东大学 Moving target video tracking method and system
CN109743617A (en) * 2018-12-03 2019-05-10 清华大学 A kind of video playing jumps air navigation aid and equipment
CN109743617B (en) * 2018-12-03 2020-11-24 清华大学 Skip navigation method and device for video playing
CN109919043A (en) * 2019-02-18 2019-06-21 北京奇艺世纪科技有限公司 A kind of pedestrian tracting method, device and equipment
CN109919043B (en) * 2019-02-18 2021-06-04 北京奇艺世纪科技有限公司 Pedestrian tracking method, device and equipment
CN110288633A (en) * 2019-06-04 2019-09-27 东软集团股份有限公司 Target tracking method, device, readable storage medium storing program for executing and electronic equipment
CN110288633B (en) * 2019-06-04 2021-07-23 东软集团股份有限公司 Target tracking method and device, readable storage medium and electronic equipment
CN110489592A (en) * 2019-07-18 2019-11-22 平安科技(深圳)有限公司 Video classification methods, device, computer equipment and storage medium
CN110489592B (en) * 2019-07-18 2024-05-03 平安科技(深圳)有限公司 Video classification method, apparatus, computer device and storage medium

Also Published As

Publication number Publication date
CN109685045B (en) 2020-09-29
CN109685045A (en) 2019-04-26

Similar Documents

Publication Publication Date Title
CN108875655A (en) A kind of real-time target video tracing method and system based on multiple features
Gai et al. A detection algorithm for cherry fruits based on the improved YOLO-v4 model
CN109800689B (en) Target tracking method based on space-time feature fusion learning
Li et al. Fast and accurate green pepper detection in complex backgrounds via an improved Yolov4-tiny model
Song et al. Tracking body and hands for gesture recognition: Natops aircraft handling signals database
CN106951870B (en) Intelligent detection and early warning method for active visual attention of significant events of surveillance video
CN110929578A (en) Anti-occlusion pedestrian detection method based on attention mechanism
CN108171133B (en) Dynamic gesture recognition method based on characteristic covariance matrix
CN109299716A (en) Neural network training method, image segmentation method, device, equipment and medium
Hu Design and implementation of abnormal behavior detection based on deep intelligent analysis algorithms in massive video surveillance
Yu et al. An object-based visual attention model for robotic applications
CN104537689B (en) Target tracking method based on local contrast saliency joint features
CN105701467A (en) Multi-person abnormal behavior identification method based on human body shape characteristics
CN111582349B (en) Improved target tracking algorithm based on YOLOv3 and kernel correlation filtering
Wang et al. Precision detection of dense plums in orchards using the improved YOLOv4 model
Bhatt et al. Comparison of CNN models for application in crop health assessment with participatory sensing
CN106815576B (en) Target tracking method based on continuous space-time confidence map and semi-supervised extreme learning machine
CN108038515A (en) Unsupervised multi-target detection tracking and its storage device and camera device
CN106529441B (en) Depth motion map human action recognition method based on fuzzy boundary fragments
CN111199245A (en) Rapeseed pest identification method
CN109685830A (en) Target tracking method, device and equipment, and computer storage medium
Li et al. Fast recognition of pig faces based on improved Yolov3
Yang et al. An improved algorithm for the detection of fastening targets based on machine vision
CN114492634A (en) Fine-grained equipment image classification and identification method and system
Su et al. A CNN-LSVM model for imbalanced images identification of wheat leaf

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (Application publication date: 20181123)