CN104156734A - Fully autonomous online learning method based on a random fern classifier - Google Patents

Fully autonomous online learning method based on a random fern classifier

Info

Publication number
CN104156734A
CN104156734A
Authority
CN
China
Prior art keywords
sample
positive
random fern
random
negative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410407669.2A
Other languages
Chinese (zh)
Other versions
CN104156734B (en)
Inventor
罗大鹏
韩家宝
魏龙生
王勇
马丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN201410407669.2A priority Critical patent/CN104156734B/en
Publication of CN104156734A publication Critical patent/CN104156734A/en
Application granted granted Critical
Publication of CN104156734B publication Critical patent/CN104156734B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

本发明公开了一种基于随机蕨分类器的全自主在线学习方法,该方法只需在视频帧中框选一次目标即可进行针对该目标类的分类器在线学习。步骤为:首先对框选的目标采用仿射变换得到初始的正样本集,在视频的非目标区域提取少量的负样本集训练初始随机蕨分类器;其次,使用该分类器在视频帧中进行目标检测。检测的过程中,采用最近邻分类器收集在线学习新样本,并自动判断样本类别;最后,将新样本用于随机蕨分类器的在线训练,更新随机蕨后验概率,逐渐提高分类器目标检测的精度,实现目标检测系统全自主在线学习。

The invention discloses a fully autonomous online learning method based on a random fern classifier. The method requires the target to be framed only once in a video frame to start online learning of a classifier for that target class. The steps are: first, apply affine transformations to the framed target to obtain the initial positive sample set, and extract a small number of negative samples from non-target regions of the video to train the initial random fern classifier; second, use this classifier to perform target detection in the video frames. During detection, a nearest neighbor classifier collects new samples for online learning and automatically determines their class; finally, the new samples are used for online training of the random fern classifier, updating the random fern posterior probabilities, gradually improving the detection accuracy of the classifier and achieving fully autonomous online learning of the target detection system.

Description

一种基于随机蕨分类器的全自主在线学习方法A Fully Autonomous Online Learning Method Based on Random Fern Classifier

技术领域technical field

本发明属于模式识别领域,具体涉及一种基于随机蕨分类器的全自主在线学习方法。The invention belongs to the field of pattern recognition, and in particular relates to a fully autonomous online learning method based on a random fern classifier.

背景技术Background Art

在线学习属于增量学习的研究范畴，在这一类方法中分类器对每个样本只学一次，而不是重复的学习，这样在线学习算法运行过程中不需要大量的存储空间来存储训练样本，分类器每获得一个样本，即对其进行在线学习，通过在线学习使分类器在使用过程中仍然能根据新样本自我更新和改进，进一步提高分类效果。Online learning falls within the scope of incremental learning. In this family of methods the classifier learns each sample only once rather than repeatedly, so the online learning algorithm does not need large amounts of storage to hold training samples while it runs. Each time the classifier obtains a sample it learns from it online; through online learning the classifier can keep updating and improving itself on new samples while in use, further improving classification performance.

早期的在线学习算法有Winnow算法，统一线性预测算法等，2001年学者Oza将这些算法与boosting算法进行结合，提出了在线boosting算法（该算法引自"Online bagging and boosting" N.Oza and S.Russell, In Proc. Artificial Intelligence and Statistics, 105-112, 2001），在Oza的方法中，每个特征对应一个弱分类器，而强分类器是一定数量的弱分类器的加权和，其中弱分类器都是从弱分类器集合中挑选出来的。在线学习时，每个训练样本逐一的更新弱分类器集合中的每个弱分类器，包括调整正负样本的分类阈值以及该分类器的权重，使好的弱分类器权重越来越高，而较差的弱分类器权重越来越低，从而每次在线学习一个样本就可以挑选出一个当前权重最高的弱分类器加入强分类器中使最终训练出来的分类器有较强的分类能力。Early online learning algorithms include the Winnow algorithm, unified linear prediction algorithms, and so on. In 2001 Oza combined these algorithms with boosting and proposed the online boosting algorithm (cited from "Online bagging and boosting", N. Oza and S. Russell, In Proc. Artificial Intelligence and Statistics, 105-112, 2001). In Oza's method each feature corresponds to one weak classifier, and the strong classifier is a weighted sum of a certain number of weak classifiers selected from a weak classifier pool. During online learning, each training sample updates every weak classifier in the pool one by one, adjusting the positive/negative classification threshold and the weight of each classifier, so that good weak classifiers receive ever higher weights while poor ones receive ever lower weights; thus, for every sample learned online, the weak classifier with the currently highest weight can be selected and added to the strong classifier, giving the finally trained classifier strong classification ability.

但是，在线boosting算法的弱分类器集合中每个弱分类器都要对新样本进行在线学习，当弱分类器个数较多时，在线学习速度必然会变慢。Grabner对在线boosting算法进行了改进，使其也象Adaboost算法一样可以进行特征选择，并且这种特征选择以及对分类器的更新都是在线进行的，称为在线Adaboost（该算法引自"On-line boosting and vision" H.Grabner and H.Bischof, In Proc. CVPR, (1):260-267, 2006）。但是在线Adaboost用特征选择算子代替一般的弱分类器合成强分类器，特征选择算子数以及特征选择算子对应的弱分类器数都是固定的，相应的在线学习分类器结构比较僵化。当发现其分类能力无法满足检测性能的要求时，即使持续的在线学习下去也无法提高检测精度。However, every weak classifier in the online boosting pool has to learn each new sample online, so when the number of weak classifiers is large the online learning speed inevitably drops. Grabner improved the online boosting algorithm so that it can also perform feature selection, as AdaBoost does, with both the feature selection and the classifier update carried out online; the result is called online AdaBoost (cited from "On-line boosting and vision", H. Grabner and H. Bischof, In Proc. CVPR, (1):260-267, 2006). However, online AdaBoost replaces ordinary weak classifiers with feature selectors when composing the strong classifier, and both the number of selectors and the number of weak classifiers per selector are fixed, so the structure of the online-learned classifier is rather rigid. When its classification ability is found to be insufficient for the required detection performance, continued online learning still cannot improve the detection accuracy.

Ozuysal不再使用弱分类器组成强分类器进行样本分类，而是从样本特征集合随机抽取多个特征构成一个随机蕨，通过随机蕨统计训练样本后验概率分布，再由多个随机蕨的后验概率分布进行样本分类，即为随机蕨分类器算法（该算法引自"Fast keypoint recognition using random ferns" In Pattern Analysis Machine Intelligence, IEEE Transaction on 32(3), 448-461, 2010）。Ozuysal no longer composes a strong classifier from weak classifiers for sample classification; instead, multiple features are drawn at random from the sample feature set to form a random fern, the posterior probability distribution of the training samples is estimated on each fern, and samples are classified from the posterior distributions of multiple random ferns. This is the random fern classifier algorithm (cited from "Fast keypoint recognition using random ferns", IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(3), 448-461, 2010).

发明内容Summary of the Invention

本发明要解决的技术问题是:提供一种基于随机蕨分类器的全自主在线学习方法,用于分类器的自主学习以提高分类性能。The technical problem to be solved by the present invention is to provide a fully autonomous online learning method based on a random fern classifier, which is used for autonomous learning of classifiers to improve classification performance.

本发明为解决上述技术问题所采取的技术方案为：一种基于随机蕨分类器的全自主在线学习方法，其特征在于：它包括如下步骤：The technical solution adopted by the present invention to solve the above technical problem is a fully autonomous online learning method based on a random fern classifier, characterized in that it comprises the following steps:

1)准备初始训练分类器的样本集:1) Prepare the sample set for the initial training classifier:

针对待检测的视频帧，在视频帧中框选出一个目标图片，对该目标图片进行仿射变换得到的图片作为正样本；以不含有目标的背景图像区域作为负样本；如此随机的获取一定数量的正样本和负样本作为初始训练分类器的样本集；正、负样本为大小相同的图像块；For the video frames to be detected, a target picture is framed in a video frame, and pictures obtained by applying affine transformations to this target picture are used as positive samples; background image regions that do not contain the target are used as negative samples; a certain number of positive and negative samples obtained at random in this way form the sample set for initially training the classifier; the positive and negative samples are image blocks of the same size;
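
As an illustration of step 1), the following minimal sketch generates affine-warped positive patches from the single framed target and samples non-overlapping background patches as negatives. It assumes OpenCV and NumPy; the warp ranges (±10° rotation, ±10% scale, ±2 px shift) and the sample counts are illustrative assumptions, while the 15×15 patch size follows the value given later in the embodiment.

```python
import cv2
import numpy as np

PATCH = (15, 15)  # positive/negative samples are image blocks of the same size

def warp_positives(frame, box, n=20, rng=np.random.default_rng(0)):
    """box = (x, y, w, h) framed by the user in a video frame."""
    x, y, w, h = box
    target = frame[y:y + h, x:x + w]
    positives = [cv2.resize(target, PATCH)]
    for _ in range(n):
        angle = rng.uniform(-10, 10)           # degrees (assumed range)
        scale = rng.uniform(0.9, 1.1)          # assumed range
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
        M[:, 2] += rng.uniform(-2, 2, size=2)  # small random shift
        warped = cv2.warpAffine(target, M, (w, h), borderMode=cv2.BORDER_REFLECT)
        positives.append(cv2.resize(warped, PATCH))
    return positives

def sample_negatives(frame, box, n=50, rng=np.random.default_rng(1)):
    """Crop background patches that do not overlap the framed target."""
    H, W = frame.shape[:2]
    x, y, w, h = box
    negatives = []
    while len(negatives) < n:
        nx, ny = rng.integers(0, W - w), rng.integers(0, H - h)
        if nx + w < x or nx > x + w or ny + h < y or ny > y + h:  # disjoint boxes
            negatives.append(cv2.resize(frame[ny:ny + h, nx:nx + w], PATCH))
    return negatives
```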

2)随机蕨分类器初始训练:2) Random fern classifier initial training:

使用准备好的初始训练分类器的样本集对随机蕨分类器进行初始训练，统计正负样本在每个随机蕨上的后验概率分布；Perform initial training of the random fern classifier with the prepared sample set, and estimate the posterior probability distribution of the positive and negative samples on each random fern;

3)将初始训练好的随机蕨分类器作为当前目标检测器遍历待检测的视频帧进行目标检测，得到目标模块，并计算每个目标模块的置信度；3) Use the initially trained random fern classifier as the current target detector to traverse the video frames to be detected and perform target detection, obtaining target modules and computing the confidence of each target module;

4)构建正负样本模板集:4) Construct positive and negative sample template sets:

将以下三种样本作为正样本模板添加到正样本模板集M+,其余添加到负样本模板集M-Add the following three samples as positive sample templates to the positive sample template set M + , and add the rest to the negative sample template set M :

A、步骤1)中得到的正样本;A, the positive sample obtained in step 1);

B、对步骤3)中得到的置信度超过置信度预设值的目标模块，采用光流法对其所在视频帧进行跟踪得到跟踪模块，若跟踪模块与该目标模块有重合区域，且重合率超过预设重合率，则认为该跟踪模块是真实目标，作为正样本模板添加到M+中；B. For a target module obtained in step 3) whose confidence exceeds the preset confidence value, track it into its video frame with the optical flow method to obtain a tracking module; if the tracking module overlaps the target module and the coincidence rate exceeds the preset coincidence rate, the tracking module is considered a real target and is added to M+ as a positive sample template;

C、对步骤3)中得到的置信度超过置信度预设值的目标模块，采用光流法对其所在视频帧进行跟踪得到跟踪模块，若跟踪模块与该目标模块有重合区域，且重合率未超过预设重合率，则通过保守相似度Sc判断该跟踪模块能否加入正样本模板集：C. For a target module obtained in step 3) whose confidence exceeds the preset confidence value, track it into its video frame with the optical flow method to obtain a tracking module; if the tracking module overlaps the target module but the coincidence rate does not exceed the preset coincidence rate, the conservative similarity S_c is used to decide whether the tracking module can be added to the positive sample template set:

S_c = S+_50% / (S+_50% + S-)

其中：where:

如果Sc大于预设的保守相似度阈值，则该跟踪模块作为正样本模板加入M+；其中S+_50%为待分类样本与当前正样本模板集的前半部分模板的相似度，S+、S-分别为待分类样本与正、负样本模板集的相似度，S(pi, pj)为两个图像块的相似度，p+、p-分别为正样本和负样本，p为待分类样本，本步骤中待分类样本为跟踪模块；If S_c is greater than the preset conservative similarity threshold, the tracking module is added to M+ as a positive sample template, where S+_50% is the similarity between the sample to be classified and the first half of the current positive sample template set, S+ and S- are the similarities between the sample to be classified and the positive and negative sample template sets respectively, S(pi, pj) is the similarity between two image patches, p+ and p- are a positive sample and a negative sample respectively, and p is the sample to be classified; in this step the sample to be classified is the tracking module;

每加入一个正样本模板，则取同一视频帧中其周围四个相同大小的图像块判断是否为负样本，若是作为负样本模板加入负样本模板集M-；Each time a positive sample template is added, four image blocks of the same size around it in the same video frame are taken and checked for being negative samples; if so, they are added to the negative sample template set M- as negative sample templates;
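
A minimal sketch of the template-collection logic of step 4). The "coincidence rate" is assumed here to be intersection-over-union and sim() stands for some patch-similarity function (the text fixes neither); the 0.6 thresholds follow the values suggested later in the embodiment.

```python
def iou(a, b):
    """a, b = (x, y, w, h); 'coincidence rate' assumed to be IoU."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[0] + a[2], b[0] + b[2]), min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / float(a[2] * a[3] + b[2] * b[3] - inter) if inter else 0.0

def conservative_similarity(patch, pos_templates, neg_templates, sim):
    """S_c uses only the first (earlier) half of the positive template set."""
    first_half = pos_templates[:max(1, len(pos_templates) // 2)]
    s50 = max((sim(patch, t) for t in first_half), default=0.0)
    s_neg = max((sim(patch, t) for t in neg_templates), default=0.0)
    return s50 / (s50 + s_neg) if (s50 + s_neg) > 0 else 0.0

def maybe_add_positive(det_box, det_conf, trk_box, trk_patch,
                       M_pos, M_neg, sim, conf_th=0.6, overlap_th=0.6, sc_th=0.6):
    if det_conf <= conf_th or iou(det_box, trk_box) == 0.0:
        return
    if iou(det_box, trk_box) > overlap_th:
        M_pos.append(trk_patch)              # case B: accept as real target
    elif conservative_similarity(trk_patch, M_pos, M_neg, sim) > sc_th:
        M_pos.append(trk_patch)              # case C: accept via S_c
```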

5)使用最近邻分类器,获得在线学习的正负样本:5) Use the nearest neighbor classifier to obtain positive and negative samples for online learning:

最近邻分类器的设置如下:对于每个待分类样本p,分别计算其与正负样本模板集的相似度S+(p,M+)及S-(p,M-):The setting of the nearest neighbor classifier is as follows: For each sample p to be classified, calculate its similarity S + (p, M + ) and S - (p, M - ) with the template set of positive and negative samples respectively:

S+(p, M+) = max_{pi+ ∈ M+} S(p, pi+)

S-(p, M-) = max_{pi- ∈ M-} S(p, pi-)

相应的可得相似度Sr：The corresponding similarity S_r can then be obtained:

S_r = S+ / (S+ + S-)

若相似度Sr大于阈值θNN，则判断该待分类样本为真实目标，作为在线学习的正样本；否则为虚警，作为在线学习的负样本；If the similarity S_r is greater than the threshold θ_NN, the sample to be classified is judged to be a real target and serves as a positive sample for online learning; otherwise it is a false alarm and serves as a negative sample for online learning;

本步骤中待分类样本为步骤3)得到的目标模块和步骤4)得到的正负样本模板集；In this step the samples to be classified are the target modules obtained in step 3) and the positive and negative sample templates obtained in step 4);
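
A minimal sketch of the nearest neighbor classifier of step 5). The patch similarity S(pi, pj) is not defined in the text; normalized cross-correlation mapped to [0, 1] is assumed here, and the threshold value theta_nn = 0.6 is an assumption as well (the text only names θ_NN).

```python
import numpy as np

def patch_similarity(p1, p2):
    """Assumed S(pi, pj): NCC of the two patches rescaled to [0, 1]."""
    a = (p1 - p1.mean()).ravel()
    b = (p2 - p2.mean()).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    ncc = float(a @ b) / denom if denom > 0 else 0.0
    return 0.5 * (ncc + 1.0)

def relative_similarity(patch, M_pos, M_neg):
    # S+ / S- are the maximum similarities to the positive / negative template sets
    s_pos = max((patch_similarity(patch, t) for t in M_pos), default=0.0)
    s_neg = max((patch_similarity(patch, t) for t in M_neg), default=0.0)
    return s_pos / (s_pos + s_neg) if (s_pos + s_neg) > 0 else 0.0

def label_for_online_learning(patch, M_pos, M_neg, theta_nn=0.6):
    """True => online-learning positive (real target), False => negative (false alarm)."""
    return relative_similarity(patch, M_pos, M_neg) > theta_nn
```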

(6)随机蕨分类器的在线训练:(6) Online training of random fern classifier:

使用步骤5)获得的在线学习的正负样本，对随机蕨分类器进行在线学习，逐渐提高其分类精度；Using the positive and negative online-learning samples obtained in step 5), perform online learning of the random fern classifier to gradually improve its classification accuracy;

将在线学习的随机蕨分类器作为可持续更新的检测系统进行目标检测。The online-learned random fern classifier is used as a continuously updated detection system for target detection.

按上述方案，所述的步骤2)的具体方法如下：According to the above scheme, the specific method of step 2) is as follows:

2.1)构造随机蕨:2.1) Construct a random fern:

对初始训练分类器的样本集中的单个样本上随机取s对特征点作为一组随机蕨，每个样本取特征点的位置相同，每对特征点进行像素值的比较，每对特征点中前一个特征点像素值大则取特征值为1，反之则取特征值为0，s对特征点比较后得到的s个特征值按照随机的顺序构成一个s位的二进制数，即为该组随机蕨的随机蕨数值，每个样本的随机蕨中特征值的顺序一致；On a single sample of the initial training sample set, s pairs of feature points are selected at random as one random fern; the feature point positions are the same for every sample. The pixel values of each pair of feature points are compared: if the first feature point of the pair has the larger pixel value the feature value is 1, otherwise it is 0. The s feature values obtained from the s comparisons are arranged in a (fixed) random order to form an s-bit binary number, which is the fern value of this random fern; the order of the feature values within the fern is the same for every sample;
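
The fern construction of step 2.1) can be sketched as follows: frozen random pixel-pair positions inside the 15×15 patch, with the s comparison bits packed into an s-bit integer. s = 5 pairs per fern (as in the embodiment) and 10 ferns are illustrative choices; NumPy is assumed.

```python
import numpy as np

S_PAIRS, N_FERNS, PATCH = 5, 10, (15, 15)
rng = np.random.default_rng(42)
# ferns[l] is an (s, 2, 2) array of (row, col) coordinates for the s point pairs
# of fern l; the same coordinates are reused for every sample.
ferns = rng.integers(0, PATCH[0], size=(N_FERNS, S_PAIRS, 2, 2))

def fern_value(patch, fern):
    """Return the s-bit fern value of one grayscale patch for one fern."""
    value = 0
    for (r1, c1), (r2, c2) in fern:
        bit = 1 if patch[r1, c1] > patch[r2, c2] else 0  # pairwise pixel test
        value = (value << 1) | bit
    return value  # integer in [0, 2**S_PAIRS)
```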

2.2)计算随机蕨数值在正负样本类上的后验概率:2.2) Calculate the posterior probability of random fern values on positive and negative sample classes:

随机蕨中，有一部分为正样本得到的，其它为负样本得到的；随机蕨数值的取值种类有2^s个；Among the fern values, some are produced by positive samples and the others by negative samples; a fern value can take 2^s possible values;

统计每种随机蕨数值的取值的正样本个数，从而获得随机蕨数值在正样本类C1上的后验概率分布P(Fl|C1)；同理获得随机蕨数值在负样本类C0上的后验概率分布P(Fl|C0)；联合所有随机蕨对初始训练分类器的样本集进行分类，即为随机蕨分类器；Count, for each possible fern value, the number of positive samples that take that value, obtaining the posterior probability distribution P(Fl|C1) of the fern values on the positive sample class C1; obtain in the same way the posterior probability distribution P(Fl|C0) of the fern values on the negative sample class C0; classifying the initial training sample set jointly with all random ferns constitutes the random fern classifier;
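
A minimal sketch of step 2.2): per-fern count tables turned into posteriors P(Fl|C1) and P(Fl|C0), with classification by combining all ferns. The text only says the ferns are "combined"; multiplying the per-fern posteriors (the semi-naive-Bayes combination used in Ozuysal's random ferns) and the +1 Laplace smoothing are assumptions of this sketch.

```python
import numpy as np

class FernClassifier:
    def __init__(self, n_ferns, s):
        self.n_vals = 2 ** s
        self.pos = np.zeros((n_ferns, self.n_vals))  # positive counts per fern value
        self.neg = np.zeros((n_ferns, self.n_vals))  # negative counts per fern value

    def train(self, fern_values, is_positive):
        """fern_values: one fern value per fern for a single training patch."""
        table = self.pos if is_positive else self.neg
        for l, v in enumerate(fern_values):
            table[l, v] += 1

    def posteriors(self, l):
        # Laplace-smoothed posteriors of fern l (smoothing is an assumption)
        p1 = (self.pos[l] + 1) / (self.pos[l].sum() + self.n_vals)  # P(F_l|C1)
        p0 = (self.neg[l] + 1) / (self.neg[l].sum() + self.n_vals)  # P(F_l|C0)
        return p1, p0

    def classify(self, fern_values):
        # Combine ferns by summing log posterior ratios (class priors ignored)
        log_ratio = 0.0
        for l, v in enumerate(fern_values):
            p1, p0 = self.posteriors(l)
            log_ratio += np.log(p1[v]) - np.log(p0[v])
        return log_ratio > 0  # True => positive (target) class
```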

所述的步骤3)采用上述随机蕨分类器在每帧视频图像中进行目标检测：In step 3), the above random fern classifier is used to perform target detection in every video frame:

遍历待检测的每帧视频图像，在每帧视频图像中提取相同大小的图像块作为待测样本，待测样本的大小与步骤1)中正样本的大小相等，计算每个待测样本的随机蕨数值，从而得到相应的后验概率，最后由随机蕨分类器计算其类别；Traverse every video frame to be detected, extract image blocks of the same size from each frame as samples to be tested (the size of a sample to be tested equals the size of the positive samples in step 1)), compute the fern values of each sample to be tested to obtain the corresponding posterior probabilities, and finally compute its class with the random fern classifier;

对于类别为正样本的图像块，则作为目标被检测出来。Image blocks classified as positive are detected as targets.
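
A minimal sliding-window sketch of this detection step, building on the fern_value and FernClassifier sketches above. The stride and the use of the average per-fern positive posterior as the confidence score are assumptions; the text only requires a confidence per detected target module.

```python
import numpy as np

def detect(frame_gray, clf, ferns, patch=(15, 15), stride=3):
    H, W = frame_gray.shape
    detections = []  # (x, y, confidence) of windows classified as positive
    for y in range(0, H - patch[1] + 1, stride):
        for x in range(0, W - patch[0] + 1, stride):
            window = frame_gray[y:y + patch[1], x:x + patch[0]]
            values = [fern_value(window, f) for f in ferns]
            if clf.classify(values):
                conf = np.mean([clf.posteriors(l)[0][v]
                                for l, v in enumerate(values)])
                detections.append((x, y, float(conf)))
    return detections
```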

按上述方案，所述的步骤4)每加入一个正样本模板，则取同一视频帧中其周围四个相同大小的图像块判断是否为负样本时，引入高斯背景建模，若图像块内前景像素小于前景像素阈值，则判断它为负样本。According to the above scheme, in step 4), each time a positive sample template is added, when the four surrounding image blocks of the same size in the same video frame are checked for being negative samples, Gaussian background modeling is introduced: if the foreground pixels within an image block are fewer than the foreground pixel threshold, it is judged to be a negative sample.

按上述方案，所述的步骤4)还包括模板集消减机制：待分类样本与正负模板集的相似度等于待分类样本与正负模板集中单个正负样本模板之间相似度的最大值；实时统计各个正负样本模板获得该最大值的次数，若某正负样本模板获得的该最大值的次数小于最大值次数预设值，则去除对应的正样本模板或负样本模板。According to the above scheme, step 4) also includes a template set reduction mechanism: the similarity between a sample to be classified and the positive/negative template set equals the maximum of the similarities between the sample and the individual templates in that set; the number of times each template yields this maximum is counted in real time, and if the count of a template is smaller than the preset minimum, the corresponding positive or negative sample template is removed.
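
A minimal sketch of this reduction mechanism: each template counts how often it is the nearest (maximum-similarity) template, and templates whose hit count stays below a preset minimum are dropped. The pruning schedule (every 500 queries) and the minimum hit count are illustrative assumptions; the text only names a "preset value of the maximum number of times".

```python
class TemplateSet:
    def __init__(self, min_hits=2, check_every=500):
        self.templates, self.hits = [], []
        self.min_hits, self.check_every, self.queries = min_hits, check_every, 0

    def add(self, patch):
        self.templates.append(patch)
        self.hits.append(0)

    def max_similarity(self, patch, sim):
        """Similarity to the set = max similarity to any single template."""
        if not self.templates:
            return 0.0
        scores = [sim(patch, t) for t in self.templates]
        best = max(range(len(scores)), key=scores.__getitem__)
        self.hits[best] += 1                  # record which template "won"
        self.queries += 1
        if self.queries % self.check_every == 0:
            self._prune()
        return scores[best]

    def _prune(self):
        keep = [i for i, h in enumerate(self.hits) if h >= self.min_hits]
        self.templates = [self.templates[i] for i in keep]
        self.hits = [0] * len(self.templates)
```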

按上述方案，所述的步骤6)随机蕨分类器的在线学习通过更新后验概率分布实现。According to the above scheme, in step 6) the online learning of the random fern classifier is realized by updating the posterior probability distributions.

按上述方案，所述的步骤6)具体方法如下：According to the above scheme, the specific method of step 6) is as follows:

6.1)将步骤5)获得的正负样本作为在线学习样本；设一个在线学习样本为(fnew,ck)，其中fnew为随机蕨s位的二进制数，ck为样本类别，计算该在线学习样本的随机蕨数值；6.1) Use the positive and negative samples obtained in step 5) as online-learning samples; let one online-learning sample be (fnew, ck), where fnew is the s-bit binary fern value and ck is the sample class, and compute the fern values of this online-learning sample;

6.2)对步骤2.1)样本集中类别为ck的样本总数加1，类别为ck的与该在线学习样本的随机蕨数值相同的样本数加1；其它随机蕨数值的样本数不变；6.2) For the sample set of step 2.1), add 1 to the total number of samples of class ck, and add 1 to the number of class-ck samples whose fern value equals that of the online-learning sample; the sample counts of the other fern values remain unchanged;

6.3)根据更新后的样本数,重新计算随机蕨数值在该样本类上的后验概率分布;6.3) According to the updated number of samples, recalculate the posterior probability distribution of the random fern value on the sample class;

6.4)每新增一个在线学习样本,便重复6.1)至6.3)对后验概率分布进行更新一次。6.4) For each new online learning sample, repeat 6.1) to 6.3) to update the posterior probability distribution once.
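
A minimal sketch of this online update, building on the FernClassifier sketch above: an online-learning sample (fnew, ck) only increments the count of its own class and its own fern value, and the affected posteriors are recomputed from the counts. Note that the FernClassifier sketch applies Laplace smoothing when reading the counts back; whether the patent's update uses smoothing is not specified in the text.

```python
def online_update(clf, fern_values, is_positive):
    """fern_values: the per-fern values of the new online-learning sample."""
    table = clf.pos if is_positive else clf.neg
    for l, v in enumerate(fern_values):
        table[l, v] += 1   # sample count for this class and fern value +1
        # The class total on fern l is table[l].sum(); the posterior
        # P(F_l = v | c_k) used by classify() is recomputed from these counts,
        # so no separate normalization step is needed here.
```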

本发明的有益效果为:The beneficial effects of the present invention are:

1、只需在视频帧中框选一次目标即可进行针对该目标类的分类器在线学习，即：首先对框选的目标采用仿射变换得到初始的正样本集，在视频的非目标区域提取少量的负样本集训练初始随机蕨分类器；其次，使用该分类器在视频帧中进行目标检测；检测的过程中，采用最近邻分类器收集在线学习新样本，并自动判断样本类别；最后，将在线学习新样本用于随机蕨分类器的在线训练，更新随机蕨后验概率，逐渐提高随机蕨分类器目标检测的精度，实现目标检测系统全自主在线学习。1. The target only needs to be framed once in a video frame for online learning of a classifier for that target class to proceed: first, affine transformations are applied to the framed target to obtain the initial positive sample set, and a small number of negative samples are extracted from non-target regions of the video to train the initial random fern classifier; second, this classifier is used to detect targets in the video frames; during detection, the nearest neighbor classifier collects new online-learning samples and automatically determines their class; finally, the new online-learning samples are used for online training of the random fern classifier, the random fern posterior probabilities are updated, the target detection accuracy of the random fern classifier is gradually improved, and fully autonomous online learning of the target detection system is achieved.

2、本专利引入模板集消减机制，可避免模板集中正负样本模板较多可能造成的系统运行速度下降的缺点。2. This patent introduces a template set reduction mechanism, which avoids the drop in system running speed that could be caused by accumulating too many positive and negative sample templates in the template set.

附图说明Description of drawings

图1为本发明方法的流程图;Fig. 1 is the flowchart of the inventive method;

图2为本发明一实施例中随机蕨分类器结构图;Fig. 2 is a structural diagram of a random fern classifier in an embodiment of the present invention;

图3为本发明一实施例在线学习的随机蕨分类器前后检测性能的对比图，其中图3(a)-3(d)是在线学习前的检测结果，图3(i)-3(l)是在线学习后的检测结果；Figure 3 compares the detection performance of the random fern classifier of an embodiment of the invention before and after online learning, where Figures 3(a)-3(d) are the detection results before online learning and Figures 3(i)-3(l) are the detection results after online learning;

图4为夜晚光照条件下的分类器自主学习过程图;Figure 4 is a diagram of the autonomous learning process of the classifier under night light conditions;

图5为行人检测的分类器自主学习过程图;Fig. 5 is a self-learning process diagram of a classifier for pedestrian detection;

图6为本发明一实施例与其它经典在线学习过程的ROC曲线比较图。Fig. 6 is a comparison chart of ROC curves between an embodiment of the present invention and other classic online learning processes.

具体实施方式Detailed Description of Embodiments

下面结合具体实例和附图对本发明做进一步说明。The present invention will be further described below in conjunction with specific examples and accompanying drawings.

本发明公开了基于目标检测系统研究的全自主在线学习过程中的最近邻分类器训练方法，该方法只需在视频帧中框选一次目标即可进行针对该目标类的分类器在线学习。步骤为：首先对框选的目标采用仿射变换得到初始的正样本集，在视频的非目标区域提取少量的负样本集训练初始随机蕨分类器；其次，使用该分类器在视频帧中进行目标检测。检测的过程中，采用最近邻分类器收集在线学习新样本，并自动判断样本类别；最后，将新样本用于随机蕨分类器的在线训练，更新随机蕨后验概率，逐渐提高分类器目标检测的精度，实现目标检测系统全自主在线学习。The invention discloses a nearest neighbor classifier training method used in the fully autonomous online learning of a target detection system. The method requires the target to be framed only once in a video frame to start online learning of a classifier for that target class. The steps are: first, apply affine transformations to the framed target to obtain the initial positive sample set, and extract a small number of negative samples from non-target regions of the video to train the initial random fern classifier; second, use this classifier to perform target detection in the video frames. During detection, a nearest neighbor classifier collects new samples for online learning and automatically determines their class; finally, the new samples are used for online training of the random fern classifier, updating the random fern posterior probabilities, gradually improving the detection accuracy of the classifier and achieving fully autonomous online learning of the target detection system.

本发明提供一种基于随机蕨分类器的全自主在线学习方法如图1所示,包括如下步骤:The present invention provides a fully autonomous online learning method based on a random fern classifier as shown in Figure 1, comprising the following steps:

1)准备初始训练分类器的样本集:1) Prepare the sample set for the initial training classifier:

针对待检测的视频帧，在视频图像的第一帧中框选出一个目标，对该目标图片进行仿射变换得到的图片作为正样本；以不含有目标的背景图像区域作为负样本；如此随机的获取一定数量的正样本和负样本作为初始训练分类器的样本集。For the video frames to be detected, a target is framed in the first frame of the video, and pictures obtained by applying affine transformations to the target picture are used as positive samples; background image regions that do not contain the target are used as negative samples; a certain number of positive and negative samples obtained at random in this way form the sample set for initially training the classifier.

所述初始训练分类器的样本集中的样本在本实施例中就是相同大小的图像块，一般尺寸为15×15(像素)，若图像块中含有待检测的目标则该样本为正样本，没有则为负样本。In this embodiment, the samples in the initial training sample set are image blocks of the same size, typically 15×15 pixels; if an image block contains the target to be detected it is a positive sample, otherwise it is a negative sample.

2)随机蕨分类器初始训练:2) Random fern classifier initial training:

使用准备好的初始训练分类器的样本集对随机蕨分类器进行初始训练,统计正负样本在每个随机蕨上的后验概率分布,如图2所示。Use the sample set of the prepared initial training classifier to perform initial training on the random fern classifier, and count the posterior probability distribution of positive and negative samples on each random fern, as shown in Figure 2.

具体方法如下:The specific method is as follows:

2.1)构造随机蕨:2.1) Construct a random fern:

对样本集中的单个样本上随机取s对特征点作为一组随机蕨(本实施例选5对)，每个样本取特征点的位置相同，每对特征点进行像素值的比较，每对特征点中前一个特征点像素值大则取特征值为1，反之则取特征值为0，s对特征点比较后得到的s个特征值按照随机的顺序构成一个s位的二进制数，即为该组随机蕨的随机蕨数值，每个样本的随机蕨中特征值的顺序一致；On a single sample of the sample set, s pairs of feature points are selected at random as one random fern (5 pairs in this embodiment); the feature point positions are the same for every sample. The pixel values of each pair of feature points are compared: if the first feature point of the pair has the larger pixel value the feature value is 1, otherwise it is 0. The s feature values obtained from the s comparisons are arranged in a (fixed) random order to form an s-bit binary number, which is the fern value of this random fern; the order of the feature values within the fern is the same for every sample;

2.2)计算随机蕨数值在正负样本类上的后验概率:2.2) Calculate the posterior probability of random fern values on positive and negative sample classes:

随机蕨中，有一部分为正样本得到的，其它为负样本得到的；每个样本的随机蕨Fl包含的特征可联合在一起形成一个十进制数，由于该十进制数通过s位二进制码获得，因此随机蕨数值的取值种类有2^s个，即有2^s种可能(本实施例中为2^5=32种可能)；Among the fern values, some are produced by positive samples and the others by negative samples; the features contained in the random fern Fl of each sample can be combined into a decimal number, and since this decimal number is obtained from an s-bit binary code, a fern value can take 2^s possible values (2^5 = 32 possibilities in this embodiment);

统计每种随机蕨数值的取值的正样本个数，从而获得随机蕨数值在正样本类C1上的后验概率分布P(Fl|C1)；同理获得随机蕨数值在负样本类C0上的后验概率分布P(Fl|C0)；联合所有随机蕨对初始训练分类器的样本集进行分类，即为随机蕨分类器。Count, for each possible fern value, the number of positive samples that take that value, obtaining the posterior probability distribution P(Fl|C1) of the fern values on the positive sample class C1; obtain in the same way the posterior probability distribution P(Fl|C0) of the fern values on the negative sample class C0; classifying the initial training sample set jointly with all random ferns constitutes the random fern classifier.

3)将初始训练好的随机蕨分类器作为当前目标检测器遍历待检测的视频帧进行目标检测，得到目标模块，并计算每个目标模块的置信度，具体为：遍历待检测的视频帧，在视频帧中提取相同大小的图像块作为待测样本，待测样本的大小与步骤1)中正样本的大小相等，计算每个待测样本的随机蕨数值，从而得到相应的后验概率，最后由随机蕨分类器计算其类别；3) Use the initially trained random fern classifier as the current target detector to traverse the video frames to be detected and perform target detection, obtaining target modules and computing the confidence of each target module; specifically: traverse the video frames to be detected, extract image blocks of the same size from each frame as samples to be tested (the size of a sample to be tested equals the size of the positive samples in step 1)), compute the fern values of each sample to be tested to obtain the corresponding posterior probabilities, and finally compute its class with the random fern classifier;

对于类别为正样本的图像块，则作为目标被检测出来，成为目标模块。Image blocks classified as positive are detected as targets and become target modules.

4)构建正负样本模板集:4) Construct positive and negative sample template sets:

将以下三种样本作为正样本模板添加到正样本模板集M+,其余添加到负样本模板集M-Add the following three samples as positive sample templates to the positive sample template set M + , and add the rest to the negative sample template set M :

A、步骤1)中得到的正样本;A, the positive sample obtained in step 1);

B、对步骤3)中得到的置信度超过置信度预设值(可取0.6)的目标模块，采用光流法对其所在视频帧进行跟踪得到跟踪模块，若跟踪模块与该目标模块有重合区域，且重合率超过预设重合率(预设重合率通常取60%)，则认为该跟踪模块是真实目标，作为正样本模板添加到M+中；B. For a target module obtained in step 3) whose confidence exceeds the preset confidence value (0.6 may be used), track it into its video frame with the optical flow method to obtain a tracking module; if the tracking module overlaps the target module and the coincidence rate exceeds the preset coincidence rate (usually 60%), the tracking module is considered a real target and is added to M+ as a positive sample template;

C、对步骤3)中得到的置信度超过置信度预设值(可取0.6)的目标模块，采用光流法对其所在视频帧进行跟踪得到跟踪模块，若跟踪模块与该目标模块有重合区域，且重合率未超过预设重合率，则通过保守相似度Sc判断该跟踪模块能否加入正样本模板集：C. For a target module obtained in step 3) whose confidence exceeds the preset confidence value (0.6 may be used), track it into its video frame with the optical flow method to obtain a tracking module; if the tracking module overlaps the target module but the coincidence rate does not exceed the preset coincidence rate, the conservative similarity S_c is used to decide whether the tracking module can be added to the positive sample template set:

S_c = S+_50% / (S+_50% + S-)

其中：where:

如果Sc大于预设的保守相似度阈值(可取0.6)，则该跟踪模块作为正样本模板加入M+；其中S+_50%为待分类样本与当前正样本模板集的前半部分模板的相似度，S+、S-分别为待分类样本与正、负样本模板集的相似度，S(pi, pj)为两个图像块的相似度，p+、p-分别为正样本和负样本，p为待分类样本，本步骤中待分类样本为跟踪模块；If S_c is greater than the preset conservative similarity threshold (0.6 may be used), the tracking module is added to M+ as a positive sample template, where S+_50% is the similarity between the sample to be classified and the first half of the current positive sample template set, S+ and S- are the similarities between the sample to be classified and the positive and negative sample template sets respectively, S(pi, pj) is the similarity between two image patches, p+ and p- are a positive sample and a negative sample respectively, and p is the sample to be classified; in this step the sample to be classified is the tracking module;

每加入一个正样本模板，则取同一视频帧中其周围四个相同大小的图像块判断是否为负样本，若是作为负样本模板加入负样本模板集M-。在判断时，引入高斯背景建模，若图像块内前景像素小于前景像素阈值(可取小于30%)，则判断它为负样本。Each time a positive sample template is added, four image blocks of the same size around it in the same video frame are taken and checked for being negative samples; if so, they are added to the negative sample template set M- as negative sample templates. For this check, Gaussian background modeling is introduced: if the proportion of foreground pixels within an image block is below the foreground pixel threshold (less than 30% may be used), it is judged to be a negative sample.
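
A minimal sketch of this Gaussian-background check on the four surrounding candidate patches, assuming OpenCV's MOG2 Gaussian-mixture background subtractor (the patent does not name a specific implementation). The subtractor must be fed every frame so its background model stays current; the 30% foreground threshold follows the value suggested in the text.

```python
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2()

def foreground_mask(frame):
    """Update the background model with the current frame and return its mask."""
    return bg_model.apply(frame)

def is_negative_candidate(fg_mask, box, max_fg_ratio=0.30):
    x, y, w, h = box
    region = fg_mask[y:y + h, x:x + w]
    # Non-zero mask values (including shadow pixels) are counted as foreground here.
    fg_ratio = np.count_nonzero(region) / float(max(1, region.size))
    return fg_ratio < max_fg_ratio
```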

步骤4)还包括模板集消减机制：待分类样本与正负模板集的相似度等于待分类样本与正负模板集中单个正负样本模板之间相似度的最大值；实时统计各个正负样本模板获得该最大值的次数，若某正负样本模板获得的该最大值的次数小于最大值次数预设值，则去除对应的正样本模板或负样本模板。Step 4) also includes a template set reduction mechanism: the similarity between a sample to be classified and the positive/negative template set equals the maximum of the similarities between the sample and the individual templates in that set; the number of times each template yields this maximum is counted in real time, and if the count of a template is smaller than the preset minimum, the corresponding positive or negative sample template is removed.

5)使用最近邻分类器,获得在线学习的正负样本:5) Use the nearest neighbor classifier to obtain positive and negative samples for online learning:

最近邻分类器的设置如下:对于每个待分类样本p,分别计算其与正负样本模板集的相似度S+(p,M+)及S-(p,M-):The setting of the nearest neighbor classifier is as follows: For each sample p to be classified, calculate its similarity S + (p, M + ) and S - (p, M - ) with the template set of positive and negative samples respectively:

S+(p, M+) = max_{pi+ ∈ M+} S(p, pi+)

S-(p, M-) = max_{pi- ∈ M-} S(p, pi-)

相应的可得相似度Sr：The corresponding similarity S_r can then be obtained:

S_r = S+ / (S+ + S-)

若相似度Sr大于阈值θNN，则判断该待分类样本为真实目标，作为在线学习的正样本；否则为虚警，作为在线学习的负样本；If the similarity S_r is greater than the threshold θ_NN, the sample to be classified is judged to be a real target and serves as a positive sample for online learning; otherwise it is a false alarm and serves as a negative sample for online learning;

本步骤中待分类样本为步骤3)得到的目标模块和步骤4)得到的正负样本模板集。The samples to be classified in this step are the target modules obtained in step 3) and the positive and negative sample template sets obtained in step 4).

(6)随机蕨分类器的在线训练:(6) Online training of random fern classifier:

使用步骤5)获得的在线学习的正负样本,对随机蕨分类器进行在线学习,逐渐提高其分类精度;将在线学习的随机蕨分类器作为可持续更新的检测系统进行目标检测。Use the positive and negative samples of online learning obtained in step 5) to conduct online learning on the random fern classifier, and gradually improve its classification accuracy; use the online learning random fern classifier as a continuously updated detection system for target detection.

随机蕨分类器的在线学习通过更新后验概率分布实现,具体方法如下:The online learning of the random fern classifier is realized by updating the posterior probability distribution, and the specific method is as follows:

6.1)将步骤5)获得的正负样本作为在线学习样本；设一个在线学习样本为(fnew,ck)，其中fnew为随机蕨s位的二进制数(本实施例中fnew为00101，即十进制数5)，ck为样本类别，计算该在线学习样本的随机蕨数值；6.1) Use the positive and negative samples obtained in step 5) as online-learning samples; let one online-learning sample be (fnew, ck), where fnew is the s-bit binary fern value (in this embodiment fnew is 00101, i.e. the decimal number 5) and ck is the sample class, and compute the fern values of this online-learning sample;

6.2)如图2所示，对步骤2.1)样本集中类别为ck的样本总数加1，类别为ck的与该在线学习样本的随机蕨数值相同的样本数加1；其它随机蕨数值的样本数不变(本实施例中，类别为ck的样本总数M加1，随机蕨Fl的数值为5的样本数N加1，其它数值的样本数Nother不变)；6.2) As shown in Figure 2, for the sample set of step 2.1), add 1 to the total number of samples of class ck, and add 1 to the number of class-ck samples whose fern value equals that of the online-learning sample; the sample counts of the other fern values remain unchanged (in this embodiment, the total number M of samples of class ck increases by 1, the number N of samples whose random fern Fl has value 5 increases by 1, and the sample counts Nother of the other values remain unchanged);

6.3)根据更新后的样本数，重新计算随机蕨数值在该样本类上的后验概率分布(本实施例中，随机蕨Fl的数值为5的后验概率以及其它数值的后验概率值均按更新后的样本数重新计算)；6.3) From the updated sample counts, recompute the posterior probability distribution of the fern values on this sample class (in this embodiment, the posterior probability of fern value 5 of random fern Fl and the posterior probabilities of the other values are recomputed from the updated counts);

6.4)每新增一个在线学习样本,便重复6.1)至6.3)对后验概率分布进行更新一次。6.4) For each new online learning sample, repeat 6.1) to 6.3) to update the posterior probability distribution once.

通过在交通领域进行试验，如图3所示(实际目标检测过程中，我们使用几种不同尺度在视频图像中进行目标检测，不同尺度对应的图像框大小不同，因此可以检测到即框选出不同大小的图像块)，其中图3a-3d是在线学习前的检测结果(即仅通过初始训练的检测结果)，图3e-3h是在线学习后的检测结果，从图中可以发现初始训练分类器对目标检测的效果较低，经过训练之后对目标检测的效果高了很多。Experiments were carried out in a traffic scene, as shown in Figure 3 (in the actual target detection process, several different scales are used for target detection in the video images; the image frames corresponding to different scales have different sizes, so image blocks of different sizes can be detected and framed). Figures 3a-3d show the detection results before online learning (i.e. with only the initial training), and Figures 3e-3h show the detection results after online learning. It can be seen that the initially trained classifier detects targets rather poorly, while after online training the detection performance is much better.

图4为夜晚光照条件下的分类器自主学习过程图，其中图4(a)-4(d)为视频的开始阶段，可见漏检较多，这是由于全自主在线训练正样本较少造成的。随着在线训练样本的增多，检测率增加，虚警也逐步增多，如图4(e)-4(h)所示。当分类器进一步在线学习后，其每个随机蕨的后验概率趋于稳定，检测到的车辆目标也趋于准确，如图4(i)-4(l)所示。Figure 4 shows the autonomous learning process of the classifier under night-time lighting. Figures 4(a)-4(d) are from the beginning of the video, where many targets are missed because the fully autonomous online training has seen few positive samples. As the number of online training samples increases, the detection rate rises and false alarms also gradually increase, as shown in Figures 4(e)-4(h). After further online learning, the posterior probabilities of each random fern stabilize and the detected vehicle targets become accurate, as shown in Figures 4(i)-4(l).

图5为行人检测的分类器自主学习过程图，其中图5(a)-5(d)为全自主在线学习初期的检测情况，图5(e)-5(h)为系统自主学习了200帧后的目标检测情况，从图中可以发现全自主在线学习方法能逐渐提高目标检测性能。Figure 5 shows the autonomous learning process of the classifier for pedestrian detection. Figures 5(a)-5(d) show the detection results at the initial stage of fully autonomous online learning, and Figures 5(e)-5(h) show the detection results after the system has learned autonomously for 200 frames; the figures show that the fully autonomous online learning method gradually improves the target detection performance.

图6为本发明一实施例与其它经典在线学习过程的ROC曲线比较图,从图中可以发现全自主在线学习方法有较好的检测效果。FIG. 6 is a comparison diagram of ROC curves between an embodiment of the present invention and other classic online learning processes. From the figure, it can be found that the fully autonomous online learning method has a better detection effect.

Claims (6)

1. A fully autonomous online learning method based on a random fern classifier, characterized in that it comprises the following steps:
1) Preparing the sample set for initially training the classifier:
For the video frames to be detected, a target picture is framed in a video frame, and pictures obtained by applying affine transformations to this target picture are used as positive samples; background image regions that do not contain the target are used as negative samples; a certain number of positive and negative samples obtained at random in this way form the sample set for initially training the classifier; the positive and negative samples are image blocks of the same size;
2) Initial training of the random fern classifier:
Perform initial training of the random fern classifier with the prepared sample set, and estimate the posterior probability distribution of the positive and negative samples on each random fern;
3) Use the initially trained random fern classifier as the current target detector to traverse the video frames to be detected and perform target detection, obtaining target modules and computing the confidence of each target module;
4) Constructing the positive and negative sample template sets:
Add the following three kinds of samples to the positive sample template set M+ as positive sample templates, and add the rest to the negative sample template set M-:
A. The positive samples obtained in step 1);
B. For a target module obtained in step 3) whose confidence exceeds the preset confidence value, track it into its video frame with the optical flow method to obtain a tracking module; if the tracking module overlaps the target module and the coincidence rate exceeds the preset coincidence rate, the tracking module is considered a real target and is added to M+ as a positive sample template;
C. For a target module obtained in step 3) whose confidence exceeds 0.6, track it into its video frame with the optical flow method to obtain a tracking module; if the tracking module overlaps the target module but the coincidence rate does not exceed the preset coincidence rate, the conservative similarity S_c is used to decide whether the tracking module can be added to the positive sample template set:
S_c = S+_50% / (S+_50% + S-)
where:
If S_c is greater than the preset conservative similarity threshold, the tracking module is added to M+ as a positive sample template; S+_50% is the similarity between the sample to be classified and the first half of the current positive sample template set, S+ and S- are the similarities between the sample to be classified and the positive and negative sample template sets respectively, S(pi, pj) is the similarity between two image patches, p+ and p- are a positive sample and a negative sample respectively, and p is the sample to be classified; in this step the sample to be classified is the tracking module;
Each time a positive sample template is added, four image blocks of the same size around it in the same video frame are taken and checked for being negative samples; if so, they are added to the negative sample template set M- as negative sample templates;
5) Use the nearest neighbor classifier to obtain the positive and negative samples for online learning:
The nearest neighbor classifier is set up as follows: for each sample p to be classified, compute its similarities to the positive and negative sample template sets, S+(p, M+) and S-(p, M-):
S+(p, M+) = max_{pi+ ∈ M+} S(p, pi+)
S-(p, M-) = max_{pi- ∈ M-} S(p, pi-)
The corresponding similarity S_r is then obtained:
S_r = S+ / (S+ + S-)
If the similarity S_r is greater than the threshold θ_NN, the sample to be classified is judged to be a real target and serves as a positive sample for online learning; otherwise it is a false alarm and serves as a negative sample for online learning;
In this step the samples to be classified are the target modules obtained in step 3) and the positive and negative sample templates obtained in step 4);
(6) Online training of the random fern classifier:
Using the positive and negative online-learning samples obtained in step 5), perform online learning of the random fern classifier to gradually improve its classification accuracy;
The online-learned random fern classifier is used as a continuously updated detection system for target detection.
2. The fully autonomous online learning method based on a random fern classifier according to claim 1, characterized in that the specific method of step 2) is as follows:
2.1) Constructing the random ferns:
On a single sample of the initial training sample set, s pairs of feature points are selected at random as one random fern; the feature point positions are the same for every sample. The pixel values of each pair of feature points are compared: if the first feature point of the pair has the larger pixel value the feature value is 1, otherwise it is 0. The s feature values obtained from the s comparisons are arranged in a random order to form an s-bit binary number, which is the fern value of this random fern; the order of the feature values within the fern is the same for every sample;
2.2) Computing the posterior probabilities of the fern values on the positive and negative sample classes:
Among the fern values, some are produced by positive samples and the others by negative samples; a fern value can take 2^s possible values;
Count, for each possible fern value, the number of positive samples that take that value, obtaining the posterior probability distribution P(Fl|C1) of the fern values on the positive sample class C1; obtain in the same way the posterior probability distribution P(Fl|C0) of the fern values on the negative sample class C0; classifying the initial training sample set jointly with all random ferns constitutes the random fern classifier;
In step 3), the above random fern classifier is used to perform target detection in every video frame:
Traverse every video frame to be detected, extract image blocks of the same size from each frame as samples to be tested (the size of a sample to be tested equals the size of the positive samples in step 1)), compute the fern values of each sample to be tested to obtain the corresponding posterior probabilities, and finally compute its class with the random fern classifier;
Image blocks classified as positive are detected as targets.
3. The fully autonomous online learning method based on a random fern classifier according to claim 1, characterized in that in step 4), each time a positive sample template is added, when the four surrounding image blocks of the same size in the same video frame are checked for being negative samples, Gaussian background modeling is introduced: if the foreground pixels within an image block are fewer than the foreground pixel threshold, it is judged to be a negative sample.
4. The fully autonomous online learning method based on a random fern classifier according to claim 1 or 3, characterized in that step 4) also includes a template set reduction mechanism: the similarity between a sample to be classified and the positive/negative template set equals the maximum of the similarities between the sample and the individual templates in that set; the number of times each template yields this maximum is counted in real time, and if the count of a template is smaller than the preset minimum, the corresponding positive or negative sample template is removed.
5. The fully autonomous online learning method based on a random fern classifier according to claim 2, characterized in that in step 6) the online learning of the random fern classifier is realized by updating the posterior probability distributions.
6. The fully autonomous online learning method based on a random fern classifier according to claim 5, characterized in that the specific method of step 6) is as follows:
6.1) Use the positive and negative samples obtained in step 5) as online-learning samples; let one online-learning sample be (fnew, ck), where fnew is the s-bit binary fern value and ck is the sample class, and compute the fern values of this online-learning sample;
6.2) For the sample set of step 2.1), add 1 to the total number of samples of class ck, and add 1 to the number of class-ck samples whose fern value equals that of the online-learning sample; the sample counts of the other fern values remain unchanged;
6.3) From the updated sample counts, recompute the posterior probability distribution of the fern values on this sample class;
6.4) Each time a new online-learning sample arrives, repeat 6.1) to 6.3) to update the posterior probability distributions once.
CN201410407669.2A 2014-08-19 2014-08-19 Fully autonomous online learning method based on a random fern classifier Expired - Fee Related CN104156734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410407669.2A CN104156734B (en) 2014-08-19 2014-08-19 Fully autonomous online learning method based on a random fern classifier

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410407669.2A CN104156734B (en) 2014-08-19 2014-08-19 Fully autonomous online learning method based on a random fern classifier

Publications (2)

Publication Number Publication Date
CN104156734A true CN104156734A (en) 2014-11-19
CN104156734B CN104156734B (en) 2017-06-13

Family

ID=51882231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410407669.2A Expired - Fee Related CN104156734B (en) 2014-08-19 2014-08-19 Fully autonomous online learning method based on a random fern classifier

Country Status (1)

Country Link
CN (1) CN104156734B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825233A (en) * 2016-03-16 2016-08-03 中国地质大学(武汉) Pedestrian detection method based on random fern classifier of online learning
CN106845387A (en) * 2017-01-18 2017-06-13 合肥师范学院 Pedestrian detection method based on self study
CN106846362A (en) * 2016-12-26 2017-06-13 歌尔科技有限公司 A kind of target detection tracking method and device
CN107092878A (en) * 2017-04-13 2017-08-25 中国地质大学(武汉) It is a kind of based on hybrid classifer can autonomous learning multi-target detection method
CN107292918A (en) * 2016-10-31 2017-10-24 清华大学深圳研究生院 Tracking and device based on video on-line study
CN107423702A (en) * 2017-07-20 2017-12-01 西安电子科技大学 Video target tracking method based on TLD tracking systems
CN108038515A (en) * 2017-12-27 2018-05-15 中国地质大学(武汉) Unsupervised multi-target detection tracking and its storage device and camera device
CN108932857A (en) * 2017-05-27 2018-12-04 西门子(中国)有限公司 A kind of method and apparatus controlling traffic lights
CN109325966A (en) * 2018-09-05 2019-02-12 华侨大学 A method for visual tracking through spatiotemporal context
CN110135456A (en) * 2019-04-08 2019-08-16 图麟信息科技(上海)有限公司 A kind of training method and device of target detection model
CN110211153A (en) * 2019-05-28 2019-09-06 浙江大华技术股份有限公司 Method for tracking target, target tracker and computer storage medium
CN110717556A (en) * 2019-09-25 2020-01-21 南京旷云科技有限公司 Posterior probability adjusting method and device for target recognition
CN110889747A (en) * 2019-12-02 2020-03-17 腾讯科技(深圳)有限公司 Commodity recommendation method, commodity recommendation device, commodity recommendation system, computer equipment and storage medium
CN111861966A (en) * 2019-04-18 2020-10-30 杭州海康威视数字技术股份有限公司 Model training method and device and defect detection method and device
CN112257738A (en) * 2020-07-31 2021-01-22 北京京东尚科信息技术有限公司 Training method and device for machine learning model, and image classification method and device
CN112347968A (en) * 2020-11-18 2021-02-09 合肥湛达智能科技有限公司 Target detection method based on autonomous online learning
CN113066101A (en) * 2019-12-30 2021-07-02 阿里巴巴集团控股有限公司 Data processing method and device, image processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982340A (en) * 2012-10-31 2013-03-20 中国科学院长春光学精密机械与物理研究所 Target tracking method based on semi-supervised learning and random fern classifier
CN103208190A (en) * 2013-03-29 2013-07-17 西南交通大学 Traffic flow detection method based on object detection
KR101332630B1 (en) * 2012-06-18 2013-11-25 한국과학기술원 Weight lightened random ferns and image expression method using the same
CN103699908A (en) * 2014-01-14 2014-04-02 上海交通大学 Joint reasoning-based video multi-target tracking method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101332630B1 (en) * 2012-06-18 2013-11-25 한국과학기술원 Weight lightened random ferns and image expression method using the same
CN102982340A (en) * 2012-10-31 2013-03-20 中国科学院长春光学精密机械与物理研究所 Target tracking method based on semi-supervised learning and random fern classifier
CN103208190A (en) * 2013-03-29 2013-07-17 西南交通大学 Traffic flow detection method based on object detection
CN103699908A (en) * 2014-01-14 2014-04-02 上海交通大学 Joint reasoning-based video multi-target tracking method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
徐诚 (XU Cheng): "基于自主学习的复杂目标跟踪算法研究" [Research on Complex Target Tracking Algorithms Based on Autonomous Learning], 《中国优秀硕士学位论文全文数据库 信息科技辑》 [China Master's Theses Full-text Database, Information Science and Technology] *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825233B (en) * 2016-03-16 2019-03-01 中国地质大学(武汉) A kind of pedestrian detection method based on on-line study random fern classifier
CN105825233A (en) * 2016-03-16 2016-08-03 中国地质大学(武汉) Pedestrian detection method based on random fern classifier of online learning
CN107292918A (en) * 2016-10-31 2017-10-24 清华大学深圳研究生院 Tracking and device based on video on-line study
CN106846362B (en) * 2016-12-26 2020-07-24 歌尔科技有限公司 Target detection tracking method and device
CN106846362A (en) * 2016-12-26 2017-06-13 歌尔科技有限公司 A kind of target detection tracking method and device
CN106845387A (en) * 2017-01-18 2017-06-13 合肥师范学院 Pedestrian detection method based on self study
CN106845387B (en) * 2017-01-18 2020-04-24 合肥师范学院 Pedestrian detection method based on self-learning
CN107092878A (en) * 2017-04-13 2017-08-25 中国地质大学(武汉) It is a kind of based on hybrid classifer can autonomous learning multi-target detection method
CN108932857B (en) * 2017-05-27 2021-07-27 西门子(中国)有限公司 Method and device for controlling traffic signal lamp
CN108932857A (en) * 2017-05-27 2018-12-04 西门子(中国)有限公司 A kind of method and apparatus controlling traffic lights
CN107423702B (en) * 2017-07-20 2020-06-23 西安电子科技大学 Video target tracking method based on TLD tracking system
CN107423702A (en) * 2017-07-20 2017-12-01 西安电子科技大学 Video target tracking method based on TLD tracking systems
CN108038515A (en) * 2017-12-27 2018-05-15 中国地质大学(武汉) Unsupervised multi-target detection tracking and its storage device and camera device
CN109325966A (en) * 2018-09-05 2019-02-12 华侨大学 A method for visual tracking through spatiotemporal context
CN109325966B (en) * 2018-09-05 2022-06-03 华侨大学 A method for visual tracking through spatiotemporal context
CN110135456A (en) * 2019-04-08 2019-08-16 图麟信息科技(上海)有限公司 A kind of training method and device of target detection model
CN111861966B (en) * 2019-04-18 2023-10-27 杭州海康威视数字技术股份有限公司 Model training method and device and defect detection method and device
CN111861966A (en) * 2019-04-18 2020-10-30 杭州海康威视数字技术股份有限公司 Model training method and device and defect detection method and device
CN110211153A (en) * 2019-05-28 2019-09-06 浙江大华技术股份有限公司 Method for tracking target, target tracker and computer storage medium
CN110717556A (en) * 2019-09-25 2020-01-21 南京旷云科技有限公司 Posterior probability adjusting method and device for target recognition
CN110889747B (en) * 2019-12-02 2023-05-09 腾讯科技(深圳)有限公司 Commodity recommendation method, device, system, computer equipment and storage medium
CN110889747A (en) * 2019-12-02 2020-03-17 腾讯科技(深圳)有限公司 Commodity recommendation method, commodity recommendation device, commodity recommendation system, computer equipment and storage medium
CN113066101A (en) * 2019-12-30 2021-07-02 阿里巴巴集团控股有限公司 Data processing method and device, image processing method and device
CN112257738A (en) * 2020-07-31 2021-01-22 北京京东尚科信息技术有限公司 Training method and device for machine learning model, and image classification method and device
CN112347968A (en) * 2020-11-18 2021-02-09 合肥湛达智能科技有限公司 Target detection method based on autonomous online learning

Also Published As

Publication number Publication date
CN104156734B (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN104156734B (en) Fully autonomous online learning method based on a random fern classifier
CN108830188B (en) Vehicle detection method based on deep learning
CN109977812B (en) A vehicle video object detection method based on deep learning
CN110188807B (en) Tunnel pedestrian target detection method based on cascaded super-resolution network and improved Faster R-CNN
CN105512640B (en) A kind of people flow rate statistical method based on video sequence
CN107133974B (en) Vehicle Classification Method Combining Gaussian Background Modeling and Recurrent Neural Network
CN104063713B (en) A kind of semi-autonomous on-line study method based on random fern grader
US10198657B2 (en) All-weather thermal-image pedestrian detection method
CN107944369A (en) A kind of pedestrian detection method based on tandem zones generation network and enhancing random forest
CN107194346A (en) A kind of fatigue drive of car Forecasting Methodology
CN109460704B (en) Fatigue detection method and system based on deep learning and computer equipment
CN106897738A (en) A kind of pedestrian detection method based on semi-supervised learning
CN104504395A (en) Method and system for achieving classification of pedestrians and vehicles based on neural network
CN106384345B (en) A kind of image detection and flow statistical method based on RCNN
CN110956158A (en) Pedestrian shielding re-identification method based on teacher and student learning frame
CN104850865A (en) Real-time compression tracking method of multi-characteristic transfer learning
CN104615986A (en) Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change
CN112232240A (en) Road sprinkled object detection and identification method based on optimized intersection-to-parallel ratio function
CN111339950B (en) Remote sensing image target detection method
CN106874825A (en) The training method of Face datection, detection method and device
CN104599291A (en) Structural similarity and significance analysis based infrared motion target detection method
CN106447045B (en) The assessment method of ADAS system based on machine learning
CN110046601B (en) Pedestrian detection method for crossroad scene
Sumi et al. Frame level difference (FLD) features to detect partially occluded pedestrian for ADAS
Rao et al. Convolutional Neural Network Model for Traffic Sign Recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170613

Termination date: 20180819

CF01 Termination of patent right due to non-payment of annual fee