CN106295668A - Robust gun detection method - Google Patents

Robust gun detection method

Info

Publication number: CN106295668A
Application number: CN201510285393.XA
Authority: CN (China)
Prior art keywords: gun, view, feature, guns, field
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 李新, 肖曦
Current Assignee: Sinocloud Wisdom Beijing Technology Co Ltd
Original Assignee: Sinocloud Wisdom Beijing Technology Co Ltd
Application filed by Sinocloud Wisdom Beijing Technology Co Ltd
Priority to CN201510285393.XA
Publication of CN106295668A

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a robust gun detection method. The method applies effective preprocessing and target saliency detection, locating candidate regions of the target object through color segmentation and screening by characteristic size features. A gun classifier is designed around the risk of misclassifying guns and non-guns; multiple multilayer deep-learning neural networks are then cascaded, and the final judgment is made by accumulating gun-likelihood statistics over the candidate regions, yielding a complete gun detector. The gun detection method provided by the invention offers good interference resistance, strong real-time performance, and high recognition accuracy.

Description

A Robust Gun Detection Method

Technical Field

The invention belongs to the technical fields of computer vision, image processing, and pattern recognition, and in particular relates to a robust gun detection method.

Background

As terrorist attacks, drug trafficking, smuggling, and other criminal activities grow increasingly rampant worldwide, governments have continually strengthened security checks at public places such as airports, railway stations, shipping terminals, convention centers, government offices, large stadiums, and border-inspection ports. Existing security-inspection systems pass X-rays through luggage to obtain an X-ray image, display the image on a screen, and rely on staff to judge from experience whether the luggage contains dangerous items. This approach demands considerable image-reading experience, imposes a heavy workload, and inevitably produces misjudgments; for highly dangerous items such as guns, the consequences of a missed detection are immeasurable. With the maturation of computer vision, image processing, and pattern recognition technology, security-inspection systems have begun to adopt automatic image recognition to determine whether luggage contains dangerous items.

Summary of the Invention

In view of the problems in the prior art, the object of the present invention is to propose a gun detection method with low missed-detection and false-positive rates.

To achieve the above object, the present invention proposes a robust gun detection method. The method performs effective preprocessing and target saliency detection, locating candidate regions of the target object through color segmentation and screening by characteristic size features; it designs a gun classifier based on the risk of misclassifying guns and non-guns, then cascades multiple multilayer deep-learning neural networks, and finally makes the judgment by accumulating gun-likelihood statistics over the candidate regions, ultimately forming a complete gun detector;

In a system composed of an X-ray imaging device and a computer, the detection method comprises a training phase and a detection phase;

1. The training phase comprises the following steps:

1.1 Sample collection:

Pass luggage through an X-ray machine, manually annotate and crop the guns from the collected images, and randomly crop non-gun images from X-ray images that contain no guns, building a gun sample database;

1.2 Normalization:

This comprises linear normalization of sample illumination and size: the gun and non-gun images obtained in step 1.1 are normalized to a specified size, and the size-normalized images are then converted to grayscale;

1.3 Extraction of the sample feature library:

1.3.1 Compute the integral image of each sample

1.3.2 Extraction of the microstructure feature library

Three microstructure features based on Haar features are used: the up-down type, the left-right type, and the oblique-symmetric type. An edge description value is added as a fourth microstructure feature and a mean value as a fifth, giving five microstructure feature templates in total for extracting high-dimensional microstructure features from gun samples. The five types of microstructure feature vectors are defined as follows:

The up-down Haar feature: within an N*N view, subtract the total grayscale value of the lower half of the view from that of the upper half, then divide by the number of pixels in the view, N, yielding one microstructure feature. The upper and lower halves are vertically symmetric and equal in area; w denotes the width of each part, h its height, and N is a natural number greater than zero;

The left-right Haar feature: within an N*N view, subtract the total grayscale value of the right half of the view from that of the left half, then divide by the number of pixels in the view, N, yielding one microstructure feature. The left and right halves are horizontally symmetric and equal in area; w and h are defined as for the up-down Haar feature;

The oblique-symmetric Haar feature: within an N*N view, add the total grayscale value of the upper-left half to that of the lower-right half to obtain the diagonal total, and add the total of the upper-right part to that of the lower-left part to obtain the anti-diagonal total; subtract the anti-diagonal total from the diagonal total, then divide by the number of pixels in the view, N, yielding one microstructure feature. The upper-left, lower-left, upper-right, and lower-right parts are equal in area; w and h are defined as for the up-down Haar feature;

The maximum of the absolute values of the above three microstructure features is taken as the edge description value; this value also serves as a microstructure feature;

The mean feature: within an N*N view, add all grayscale values in the view, then divide by the number of pixels in the view, N, yielding one microstructure feature;

1.3.3 Extraction procedure for the microstructure features

For a 72*72-pixel sample image, an 8*8 view is read every 4 pixels horizontally, giving 17 views across; likewise, 17 views are taken vertically. A 72*72 sample therefore yields 289 views in total, and the 5 microstructure features computed for each view give 1445 features describing one sample;

1.4 Classifier design

Using the features designed above and the DBN algorithm from deep learning, train multiple gun classifiers and combine them into a complete gun detector through a layered cascade, comprising the following steps:

1.4.1 Initialize i = 1. Define the training target of each cascade layer: first train a classifier with a missed-detection rate below 1% on the gun training set and a false-positive rate below 20% on the non-gun training set; then train a classifier with a missed-detection rate below 20% on the gun training set and a false-positive rate below 1% on the non-gun training set; merge these two classifiers into one classifier serving as that layer's classifier. Define the target of the whole gun detector: a missed-detection rate below 5% on the gun training set and a false-positive rate below 1% on the non-gun training set. Each DBN classifier uses two hidden layers: the input layer has 1445 neurons, the first hidden layer 578 neurons, the second hidden layer 300 neurons, and the output layer performs 2-class classification; all layers of the classifier are fully connected;

1.4.2 Train the i-th layer classifier;

1.4.3 Detect the sample set with the trained first i layers of classifiers and compute the missed-detection and false-positive rates;

where the missed-detection rate = number of gun samples classified as non-gun / total number of gun samples * 100%,

and the false-positive rate = number of non-gun samples classified as gun / total number of non-gun samples * 100%;

1.4.4 If the missed-detection rate and false-positive rate have not reached the targets set in step 1.4.1, return to step 1.4.2 and continue training; otherwise stop training;

In the detection phase, the following steps determine whether an input image contains a gun:

2.1 Load the trained parameters and initialize the classifier;

2.2 Input the image to be detected into the gun detector obtained in step 1.4;

2.3 Preprocess the image to be detected, applying white balance and color equalization;

2.4 Scale the input image;

2.5 Gun target saliency detection: screen candidate regions using color segmentation and connected-domain size constraints;

2.6 Compute the integral image;

2.7 Extract candidate-region features from the integral image:

Slide a 72*72 window over each candidate region and compute each window's features from the integral image; the features of one window serve as the features of one sample to be predicted. The number of windows in a candidate region is determined by the target saliency detection;

2.8 Predict the likelihood that a candidate region contains a gun with the trained classifier:

Specifically, in each candidate region the classifier evaluates whether each sliding window contains a target such as a gun;

2.9 Accumulate the gun likelihood over each candidate region:

In each candidate region, if the number of windows judged to contain a gun exceeds a threshold, the region is judged to contain a gun. The threshold is the number of windows in the candidate region multiplied by a coefficient greater than 0.3 and less than 0.9.

The robust gun detection method of the present invention can also be used to detect whether an image contains a battery.

Description of the Drawings

Figure 1 is a flowchart of the training phase of the gun detection method in a specific embodiment of the present invention;

Figure 2 is a flowchart of the detection phase of the gun detection method in a specific embodiment of the present invention.

Detailed Description

The robust gun detection method proposed by the present invention is further described below with reference to the accompanying drawings and embodiments. The following embodiments serve only to illustrate the present invention and do not limit its scope.

As shown in Figures 1 and 2, the robust gun detection method proposed by the present invention performs effective preprocessing and target saliency detection, locating candidate regions of the target object through color segmentation and screening by characteristic size features; it designs a gun classifier based on the risk of misclassifying guns and non-guns, then cascades multiple multilayer deep-learning neural networks, and finally makes the judgment by accumulating gun-likelihood statistics over the candidate regions, ultimately forming a complete gun detector;

In a system composed of an X-ray imaging device and a computer, the detection method comprises a training phase and a detection phase;

1. The training phase comprises the following steps:

1.1 Sample collection:

Pass luggage through an X-ray machine, manually annotate and crop the guns from the collected images to build a gun sample database, and randomly crop non-gun images from X-ray images that contain no guns. After screening, 17,800 gun samples and 90,000 non-gun samples were obtained as the training sample set;

1.2 Normalization:

This comprises linear normalization of sample illumination and size: the gun and non-gun images obtained in step 1.1 are normalized to a specified size, and the size-normalized images are then converted to grayscale;

In the sample library, the original target is 180 pixels wide and 180 pixels high; after scaling, the target is 72 pixels wide and 72 pixels high, and is then converted to grayscale;
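To make this step concrete, here is a minimal sketch of the 180*180-to-72*72 resize plus grayscale conversion. Nearest-neighbour sampling and the standard luminance weights are assumptions, since the patent does not specify the resampling method or the grayscale formula; the function name is illustrative.

```python
import numpy as np

def normalize_sample(img_rgb, size=72):
    """Resize an RGB crop to size*size by nearest-neighbour sampling,
    then convert it to grayscale (step 1.2). Illustrative only: the
    patent does not fix the interpolation or the grayscale weights."""
    h, w, _ = img_rgb.shape
    ys = np.arange(size) * h // size   # source row for each output row
    xs = np.arange(size) * w // size   # source column for each output column
    small = img_rgb[ys][:, xs]
    return small @ np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luminance

crop = np.random.default_rng(1).integers(0, 256, (180, 180, 3))
print(normalize_sample(crop).shape)  # (72, 72)
```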

1.3 Extraction of the sample feature library:

1.3.1 Compute the integral image of each sample

By definition, the integral image ii of each sample is computed as ii(x, y) = Σ_{x'≤x, y'≤y} i(x', y'), where i(x', y') is the grayscale value at (x', y'); it is built in one pass with the recurrences s(x, y) = s(x, y − 1) + i(x, y) and ii(x, y) = ii(x − 1, y) + s(x, y), with s(x, −1) = 0 and ii(−1, y) = 0;
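The recurrences of step 1.3.1 can be implemented directly. This is a minimal sketch of the standard integral-image construction plus the four-lookup window sum it enables; the function names are illustrative, not from the patent.

```python
import numpy as np

def integral_image(img):
    """ii(y, x) = sum of all pixels at or above-left of (y, x), built
    with the cumulative-column-sum recurrences of step 1.3.1."""
    img = np.asarray(img, dtype=np.int64)
    ii = np.zeros_like(img)
    s = np.zeros_like(img)  # cumulative column sum s(y, x)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            s[y, x] = (s[y - 1, x] if y > 0 else 0) + img[y, x]
            ii[y, x] = (ii[y, x - 1] if x > 0 else 0) + s[y, x]
    return ii

def window_sum(ii, top, left, h, w):
    """Sum over any h*w window from four integral-image lookups."""
    a = ii[top - 1, left - 1] if top > 0 and left > 0 else 0
    b = ii[top - 1, left + w - 1] if top > 0 else 0
    c = ii[top + h - 1, left - 1] if left > 0 else 0
    return ii[top + h - 1, left + w - 1] - b - c + a

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(window_sum(ii, 1, 1, 2, 2))  # 30, equal to img[1:3, 1:3].sum()
```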

1.3.2 Extraction of the microstructure feature library

Three microstructure features based on Haar features are used: the up-down type, the left-right type, and the oblique-symmetric type. An edge description value is added as a fourth microstructure feature and a mean value as a fifth, giving five microstructure feature templates in total for extracting high-dimensional microstructure features from gun samples; the five types of microstructure feature vectors are defined as follows:

The up-down Haar feature: within an N*N view, subtract the total grayscale value of the lower half of the view from that of the upper half, then divide by the number of pixels in the view, N, yielding one microstructure feature. The upper and lower halves are vertically symmetric and equal in area; w denotes the width of each part, h its height, and N is a natural number greater than zero;

f1 = (S_upper − S_lower) / N, where S_upper and S_lower are the grayscale totals of the upper and lower halves of the view and N is the number of pixels in the view;

The left-right Haar feature: within an N*N view, subtract the total grayscale value of the right half of the view from that of the left half, then divide by the number of pixels in the view, N, yielding one microstructure feature. The left and right halves are horizontally symmetric and equal in area; w and h are defined as for the up-down Haar feature;

f2 = (S_left − S_right) / N, where S_left and S_right are the grayscale totals of the left and right halves of the view;

The oblique-symmetric Haar feature: within an N*N view, add the total grayscale value of the upper-left half to that of the lower-right half to obtain the diagonal total, and add the total of the upper-right part to that of the lower-left part to obtain the anti-diagonal total; subtract the anti-diagonal total from the diagonal total, then divide by the number of pixels in the view, N, yielding one microstructure feature. The upper-left, lower-left, upper-right, and lower-right parts are equal in area; w and h are defined as for the up-down Haar feature;

f3 = ((S_ul + S_lr) − (S_ur + S_ll)) / N, where S_ul, S_lr, S_ur, and S_ll are the grayscale totals of the upper-left, lower-right, upper-right, and lower-left parts of the view;

The maximum of the absolute values of the above three microstructure features is taken as the edge description value; this value also serves as a microstructure feature;

The mean feature: within an N*N view, add all grayscale values in the view, then divide by the number of pixels in the view, N, yielding one microstructure feature;

f5 = S_view / N, where S_view is the grayscale total of the whole view;

1.3.3 Extraction procedure for the microstructure features

For a 72*72-pixel sample image, an 8*8 view is read every 4 pixels horizontally, giving 17 views across; likewise, 17 views are taken vertically. A 72*72 sample therefore yields 289 views in total, and the 5 microstructure features computed for each view give 1445 features describing one sample;
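The extraction procedure above can be sketched as follows. The five per-view features follow the verbal definitions in step 1.3.2 (each normalised by the pixel count of the view, the patent's N), and sliding an 8*8 view with stride 4 over a 72*72 sample reproduces the 17 * 17 * 5 = 1445 dimension count. All function names are illustrative, and plain slice sums stand in for integral-image lookups.

```python
import numpy as np

def five_features(win):
    """The five microstructure features on one square view."""
    n = win.size                 # pixel count of the view (the patent's N)
    h = win.shape[0] // 2
    ud = (win[:h].sum() - win[h:].sum()) / n              # up-down
    lr = (win[:, :h].sum() - win[:, h:].sum()) / n        # left-right
    diag = ((win[:h, :h].sum() + win[h:, h:].sum())
            - (win[:h, h:].sum() + win[h:, :h].sum())) / n  # oblique-symmetric
    edge = max(abs(ud), abs(lr), abs(diag))               # edge description value
    mean = win.sum() / n                                  # mean feature
    return [ud, lr, diag, edge, mean]

def extract_features(sample, view=8, stride=4):
    """Slide an 8*8 view with stride 4 over a 72*72 sample:
    17 positions per axis -> 289 views * 5 features = 1445 dims."""
    feats = []
    for y in range(0, sample.shape[0] - view + 1, stride):
        for x in range(0, sample.shape[1] - view + 1, stride):
            feats.extend(five_features(sample[y:y + view, x:x + view]))
    return np.array(feats)

sample = np.random.default_rng(0).integers(0, 256, (72, 72))
print(extract_features(sample).shape)  # (1445,)
```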

1.4 Classifier design

Using the features designed above and the DBN algorithm from deep learning, train multiple gun classifiers and combine them into a complete gun detector through a layered cascade, comprising the following steps:

1.4.1 Initialize i = 1. Define the training target of each cascade layer: first train a classifier with a missed-detection rate below 1% on the gun training set and a false-positive rate below 20% on the non-gun training set; then train a classifier with a missed-detection rate below 20% on the gun training set and a false-positive rate below 1% on the non-gun training set; merge these two classifiers into one classifier serving as that layer's classifier. Define the target of the whole gun detector: a missed-detection rate below 5% on the gun training set and a false-positive rate below 1% on the non-gun training set. Each DBN classifier uses two hidden layers: the input layer has 1445 neurons, the first hidden layer 578 neurons, the second hidden layer 300 neurons, and the output layer performs 2-class classification; all layers of the classifier are fully connected.
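To make the layer sizes concrete, the sketch below shows only the feed-forward shape of one 1445-578-300-2 fully connected classifier. A real DBN would pre-train each layer as an RBM before supervised fine-tuning; here random weights stand in for trained parameters, so the outputs are meaningless scores and only the architecture matches the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from step 1.4.1: 1445 -> 578 -> 300 -> 2, fully connected.
sizes = [1445, 578, 300, 2]
weights = [rng.standard_normal((a, b)) * 0.01 for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Feed-forward pass through all fully connected layers."""
    for w, b in zip(weights, biases):
        x = sigmoid(x @ w + b)
    return x  # two outputs: gun / non-gun scores

scores = forward(rng.standard_normal(1445))
print(scores.shape)  # (2,)
```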

1.4.2 Train the i-th layer classifier;

1.4.3 Detect the sample set with the trained first i layers of classifiers and compute the missed-detection and false-positive rates;

where the missed-detection rate = number of gun samples classified as non-gun / total number of gun samples * 100%,

and the false-positive rate = number of non-gun samples classified as gun / total number of non-gun samples * 100%;
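The two error rates that drive training reduce to simple ratios (missed detections are guns classified as non-guns; false positives are non-guns flagged as guns). The counts below are hypothetical and serve only to illustrate the per-layer targets of step 1.4.1 against the 17,800-gun / 90,000-non-gun training set.

```python
def missed_detection_rate(guns_classified_as_non_gun, total_gun_samples):
    """Share of gun samples the detector fails to flag, in percent."""
    return guns_classified_as_non_gun / total_gun_samples * 100

def false_positive_rate(non_guns_classified_as_gun, total_non_gun_samples):
    """Share of non-gun samples wrongly flagged as guns, in percent."""
    return non_guns_classified_as_gun / total_non_gun_samples * 100

# Hypothetical counts for one cascade layer:
print(missed_detection_rate(89, 17800))   # 0.5 -> meets the <1% per-layer target
print(false_positive_rate(900, 90000))    # 1.0
```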

1.4.4 If the missed-detection rate and false-positive rate have not reached the targets set in step 1.4.1, return to step 1.4.2 and continue training; otherwise stop training;

In the detection phase, the following steps determine whether an input image contains a gun:

2.1 Load the trained parameters and initialize the classifier;

2.2 Input the image to be detected into the gun detector obtained in step 1.4;

2.3 Preprocess the image to be detected, mainly white balance and color equalization;

2.4 Scale the input image:

After scaling, the target is 72 pixels wide and 72 pixels high, and is then converted to grayscale;

2.5 Gun target saliency detection:

The specific steps are as follows:

Step 1: Color conversion

Convert the image from the RGB color space to the HSV color space;

Step 2: Color analysis

Segment according to the color type in which the target object appears. When the specified target is inorganic, appearing blue or green, segment on the H channel of HSV: retain the parts of the image with H > 140 and H < 281 and replace the rest with pure white. When the specified target is organic, appearing yellow or green, segment on the H channel of HSV: retain the parts with H > 0 and H < 180 and replace the rest with pure white;
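A minimal sketch of this hue-gated segmentation, operating on a plain HSV array with hue on the 0-360 degree scale the patent's thresholds imply (note that libraries such as OpenCV store hue as 0-179 in 8-bit images, so the thresholds would need halving there). The function name and the white-replacement convention are illustrative.

```python
import numpy as np

def segment_by_hue(hsv, h_low, h_high):
    """Keep pixels whose hue lies inside (h_low, h_high); everything
    else becomes pure white, mirroring step 2 of the saliency detection."""
    keep = (hsv[..., 0] > h_low) & (hsv[..., 0] < h_high)
    out = hsv.copy()
    out[~keep] = [0, 0, 255]  # pure white in HSV: zero saturation, max value
    return out

# Inorganic materials (blue/green on X-ray images): 140 < H < 281
hsv = np.array([[[200, 180, 120], [30, 180, 120]]], dtype=np.uint8)
seg = segment_by_hue(hsv, 140, 281)
print(seg[0, 0].tolist(), seg[0, 1].tolist())  # first pixel kept, second whitened
```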

Step 3: Brightness analysis

Step 3.1: Histogram-based segmentation

Compute a brightness histogram of the color-analyzed image; mark the regions falling in the lowest 0-10% of brightness as the black mask region and the rest as white;

Step 3.2: Density analysis

On the histogram-segmented image, compute the integral image and use it to retain 25*25 regions in which black pixels exceed 70%; replace the remaining regions once again with white;
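Step 3.2 can be sketched with an integral image over the binary black mask, so each 25*25 window count costs four lookups. The function name, the boolean-mask representation, and the strict comparison are assumptions.

```python
import numpy as np

def density_mask(black, win=25, frac=0.70):
    """Keep only win*win regions whose share of black pixels exceeds frac.
    `black` is a boolean mask; the per-window count comes from a
    zero-padded integral image built with two cumulative sums."""
    ii = np.pad(black.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    keep = np.zeros_like(black)
    need = frac * win * win
    for y in range(black.shape[0] - win + 1):
        for x in range(black.shape[1] - win + 1):
            cnt = ii[y + win, x + win] - ii[y, x + win] - ii[y + win, x] + ii[y, x]
            if cnt > need:
                keep[y:y + win, x:x + win] = True
    return keep

block = np.ones((25, 25), dtype=bool)   # one fully black 25*25 window
print(density_mask(block).sum())        # 625: the window is dense enough to keep
```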

Step 4: Saturation analysis

Step 4.1: Histogram-based segmentation

Compute a saturation histogram of the color-analyzed image; mark the regions falling in the lowest 0-10% of saturation as the black mask region and the rest as white;

Step 4.2: Density analysis

On the histogram-segmented image, compute the integral image and use it to retain 25*25 regions in which black pixels exceed 70%; replace the remaining regions once again with white;

Step 5: Color density analysis

Step 5.1: Colored segmentation

Set all non-pure-white regions of the color-analyzed image to black;

Step 5.2: Density analysis

Compute the integral image of the color-segmented image and use it to retain regions whose colored-pixel density exceeds 40%; replace the remaining regions with white;

Step 6: Obtain the mask region

Merge the results of the brightness, saturation, and color-density analyses to obtain the darker regions;

Step 7: Connected-domain analysis

Perform connected-domain analysis on the color-density-analyzed image and select components of appropriate area: replace connected domains whose area is larger than twice the target size or smaller than half the target size with background white;
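A minimal sketch of the area filter in step 7, using a breadth-first flood fill to label components and dropping those outside the allowed area range; the patent does not specify the connectivity, so 4-connectivity is an assumption, and the function name is illustrative.

```python
import numpy as np
from collections import deque

def filter_components(mask, min_area, max_area):
    """Keep only connected domains whose area lies in [min_area, max_area]
    (half to twice the target size in the patent); others become background."""
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if min_area <= len(comp) <= max_area:
                    for y, x in comp:
                        out[y, x] = True
    return out

mask = np.zeros((10, 10), dtype=bool)
mask[0:2, 0:2] = True   # area 4 -> kept with bounds [3, 8]
mask[5, 5] = True       # area 1 -> removed (too small)
print(filter_components(mask, 3, 8).sum())  # 4
```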

Step 8: Obtain the final result

Merge the results of steps 6 and 7;

2.6 Compute the integral image of the input image:

The integral image is computed as in step 1.3.1: ii(x, y) = Σ_{x'≤x, y'≤y} i(x', y'), using the recurrences s(x, y) = s(x, y − 1) + i(x, y) and ii(x, y) = ii(x − 1, y) + s(x, y);

This preprocessing generally yields 1 to 30 candidate regions, most commonly 3 to 5;

2.7 Extract candidate-region features from the integral image:

Slide a 72*72 window over each candidate region and compute each window's features from the integral image; the features of one window serve as the features of one sample to be predicted. The number of windows in a candidate region is determined by the target saliency detection;

2.8 Predict the likelihood that a candidate region contains a gun with the trained classifier:

Specifically, in each candidate region the classifier evaluates whether each sliding window contains a target such as a gun;

2.9 Accumulate the gun likelihood over each candidate region:

In each candidate region, if the number of windows judged to contain a gun exceeds a threshold, the region is judged to contain a gun. The threshold is the number of windows in the candidate region multiplied by a coefficient greater than 0.3 and less than 0.9.
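The voting rule in step 2.9 reduces to a couple of lines; the coefficient value 0.5 below is an illustrative choice inside the patent's (0.3, 0.9) range, and the function name is hypothetical.

```python
def region_contains_gun(window_flags, coeff=0.5):
    """Vote over the sliding windows of one candidate region (step 2.9):
    the region is judged to contain a gun when the number of gun-flagged
    windows exceeds coeff * total windows."""
    return sum(window_flags) > coeff * len(window_flags)

# 7 of 10 windows flagged as gun -> region judged to contain a gun
print(region_contains_gun([1, 1, 1, 1, 1, 1, 1, 0, 0, 0]))  # True
```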

The gun detection method of the present invention can also be used to detect whether an image contains a battery.

The above embodiments serve only to illustrate the present invention and do not limit it. Those of ordinary skill in the relevant art may make various changes and modifications without departing from the spirit and scope of the present invention; therefore, all equivalent technical solutions also fall within the protection scope of the present invention.

Claims (3)

1. A robust gun detection method, characterized in that the method performs effective preprocessing and target saliency detection, locating candidate regions of the target object through color segmentation and screening by characteristic size features; the method designs a gun classifier based on the risk of misclassifying guns and non-guns, then cascades multiple multilayer deep-learning neural networks, and finally makes the judgment by accumulating gun-likelihood statistics over the candidate regions, ultimately forming a complete gun detector;

In a system composed of an X-ray imaging device and a computer, the detection method comprises a training phase and a detection phase;

1. The training phase comprises the following steps:

1.1 Sample collection: pass luggage through an X-ray machine, manually annotate and crop the guns from the collected images, and randomly crop non-gun images from X-ray images that contain no guns, building a gun sample database;

1.2 Normalization: linear normalization of sample illumination and size, i.e., the gun and non-gun images obtained in step 1.1 are normalized to a specified size, and the size-normalized images are then converted to grayscale;

1.3 Extraction of the sample feature library:

1.3.1 Compute the integral image of each sample;

1.3.2 Extraction of the microstructure feature library:
Extraction of microstructure feature library 使用harr特征的三种微结构特征:上下类型、左右类型、斜对称类型,加上一个边缘描述值作为微结构特征,再加上一个均值作为微结构特征,采用五种微结构特征模板来提取枪支样本的高维微结构特征,对于所述的五种类型微结构特征向量,分别表示如下: Three microstructural features using harr features: up-down type, left-right type, oblique symmetry type, plus an edge description value as a microstructural feature, plus a mean value as a microstructural feature, using five microstructural feature templates to extract The high-dimensional microstructural features of gun samples, for the five types of microstructural feature vectors, are expressed as follows: 上下型Harr特征,是在N*N的视野中,将视野中的上半部分的灰度总值与下半部分的灰度总值相减,然后再除以视野中的像素个数N ,得到第一微结构特征; The up-and-down Harr feature is to subtract the total gray value of the upper half of the field of view from the total gray value of the lower half in the field of view of N*N, and then divide it by the number of pixels in the field of view N , get the first microstructure feature; 左右型Harr特征,是在N*N的视野中,将视野中的左半部分的灰度总值与右半部分的灰度总值相减,然后再除以视野中的像素个数N,得到第二微结构特征; The left-right Harr feature is to subtract the total gray value of the left half of the field of view from the total gray value of the right half of the field of view in the N*N field of view, and then divide it by the number of pixels in the field of view N , to obtain the second microstructure feature; 斜对称型Harr特征,是在N*N的视野中,将视野中的左上半部分的灰度总值与右下半部分的灰度总值相加得到斜方向的灰度总值,将视野中的右上部分的灰度总值与左下部分的灰度总值相加得到反斜方向的灰度总值,接着将斜方向的灰度总值与反斜方向的灰度总值相减,然后再除以视野中的像素个数N,得到第三微结构特征; The oblique symmetric Harr feature is that in the N*N field of view, the total gray value of the upper left half of the field of view and the total gray value of the lower right half of the field of view are added to obtain the total gray value of the oblique direction, and the field of view Add the total gray value of the upper right part and the total gray value of the lower left part to obtain the total gray value in the reverse oblique direction, and then subtract the total gray value in the oblique direction from the total gray 
value in the reverse oblique direction, Then divide by the number of pixels N in the field of view , to obtain the third microstructure feature; 将第一微结构特征、第二微结构特征和第三微结构特征的绝对数的最大值作为边缘描述值,这个值作为第四微结构特征; The maximum value of the absolute numbers of the first microstructural feature, the second microstructural feature and the third microstructural feature is used as an edge description value, and this value is used as a fourth microstructural feature; 均值特征,是在N*N的视野中,将所有视野中的灰度值相加,然后再除以视野中的像素个数N,得到第五微结构特征; The mean feature is to add the gray values in all the fields of view in the N*N field of view, and then divide it by the number of pixels in the field of view N , to obtain the fifth microstructure feature; 1.3.3、微结构特征的提取方式 1.3.3. Extraction method of microstructure features 对于一个72*72像素的样本图,横向每隔4个像素读取一个8*8分辨率的视野,横向可有17个视野,同理纵向也取17个视野,因此分辨率为72*72的样本,一共有289个视野,每个视野分别计算步骤1.3.2中所述的五种微结构特征,由此构成1445个特征来描述一个样本; For a sample image of 72*72 pixels, an 8*8 resolution field of view is read every 4 pixels in the horizontal direction, there can be 17 fields of view in the horizontal direction, and 17 fields of view in the vertical direction, so the resolution is 72*72 There are 289 visual fields in total, and each visual field calculates the five microstructural features described in step 1.3.2, thus forming 1445 features to describe a sample; 1.4、分类器设计 1.4, classifier design 用以上设计的微结构特征以及深度学习中的DBN算法,训练多个枪支分类器,并将这多个分类器分层级联组合成一个完整的枪支检测器,包括以下步骤: Using the microstructure features designed above and the DBN algorithm in deep learning, train multiple gun classifiers, and combine these multiple classifiers into a complete gun detector in a hierarchical cascade, including the following steps: 1.4.1、初始化i=1;初始化定义每一层分类器的训练目标,该目标是分别训练出一个在枪支训练集上的漏检率小于1%,并且在非枪支训练集上误报率小于20%的分类器,再训练出一个在枪支训练集上漏检率小于20%,并且在非枪支训练集上误报率小于1%的分类器,然后将这两个分类器合并成一个分类器作为该层的分类器;定义整个枪支检测器的目标,在枪支训练集上的漏报率小于5%,在非枪支训练集上的误报率小于1%; 1.4.1. 
Initialize i=1; initialize and define the training target of each layer classifier, the target is to train one with a missed detection rate of less than 1% on the gun training set and a false positive rate on the non-gun training set Less than 20% of the classifiers, and then train a classifier with a missed detection rate of less than 20% on the gun training set and a false positive rate of less than 1% on the non-gun training set, and then combine these two classifiers into one The classifier is used as the classifier of this layer; define the target of the entire gun detector, the false positive rate on the gun training set is less than 5%, and the false positive rate on the non-gun training set is less than 1%; 1.4.2、训练第i层分类器; 1.4.2. Train the i-th layer classifier; 1.4.3、用训练得到的前i层分类器对样本集进行检测,并计算漏检率、误报率; 1.4.3. Use the trained i-level classifier to detect the sample set, and calculate the missed detection rate and false positive rate; 其中,漏检率=被判别为枪支的非枪支样本个数/非枪支样本总数*100%, Among them, the missed detection rate = the number of non-gun samples identified as guns / the total number of non-gun samples * 100%, 误报率=被判别为非枪支的枪支样本个数/枪支样本总数*100%; False positive rate = number of gun samples identified as non-guns/total number of gun samples*100%; 1.4.4、如果漏检率、误报率未达到步骤1.4.1设定的预定值,则,返回步骤1.4.2继续进行训练,否则停止训练; 1.4.4. If the missed detection rate and false alarm rate do not reach the predetermined value set in step 1.4.1, then , return to step 1.4.2 to continue training, otherwise stop training; 在检测阶段,采用以下步骤来判断输入图像是否含有枪支: In the detection phase, the following steps are taken to determine whether an input image contains a gun: 2.1、载入已训练的参数,并初始化分类器; 2.1. Load the trained parameters and initialize the classifier; 2.2、将待检测图像输入到步骤1.4所得到的枪支检测器中; 2.2. Input the image to be detected into the firearm detector obtained in step 1.4; 2.3、对待检测图像进行归一化处理,包括对待检测图像进行预处理和将待检测图像归一化为指定尺寸,然后将尺寸归一化后的图像灰度化; 2.3. 
Perform normalization processing on the image to be detected, including preprocessing the image to be detected and normalizing the image to be detected to a specified size, and then grayscale the normalized image; 2.5、枪支目标性检测,使用颜色分割、以及连通域尺寸范围限定,进行候选区筛选; 2.5. For gun target detection, use color segmentation and limit the size of connected domains to screen candidate areas; 2.6、计算待检测图像的积分图; 2.6. Calculate the integral map of the image to be detected; 2.7、通过积分图,提取候选区特征, 2.7. Through the integral map, extract the features of the candidate area, 在候选区使用72*72的窗口滑动,在窗口内使用积分图来计算滑动窗口的特征,一个窗口的特征作为一个待预测样本的特征,候选区的窗口个数由目标显著性检测决定; Use a 72*72 window sliding in the candidate area, use the integral map in the window to calculate the characteristics of the sliding window, the feature of a window is used as the feature of a sample to be predicted, and the number of windows in the candidate area is determined by the target saliency detection; 2.8、通过训练后的分类器来预测候选区有枪支的可能性, 2.8. Use the trained classifier to predict the possibility of guns in the candidate area, 具体是在每一个候选区中,使用分类器计算每一个滑动窗口是否包含枪支等目标; Specifically, in each candidate area, use a classifier to calculate whether each sliding window contains targets such as guns; 2.9、累计候选区域有枪支的可能性, 2.9. The possibility of accumulating guns in the candidate area, 在每一个候选区中,如果判定含有枪支的窗口个数超过了一定阈值则判定为有枪,这个阈值是候选区的窗口个数乘以一个系数,该系数范围是大于0.3小于0.9。 In each candidate area, if the number of windows containing guns exceeds a certain threshold, it is determined that there are guns. This threshold is the number of windows in the candidate area multiplied by a coefficient, and the coefficient range is greater than 0.3 and less than 0.9. 2.如权利要求1所述的鲁棒的枪支检测方法,其特征在于,所述步骤1.4.1中,每个DBN分类器,采用两个隐层,输入层为1445个神经元,第一个隐层为578个神经元,第二个隐层为300个神经元,输出层为2分类,整个分类器各层均采用全链接。 2. 
the robust firearm detection method as claimed in claim 1, is characterized in that, in described step 1.4.1, each DBN classifier adopts two hidden layers, and the input layer is 1445 neurons, the first The first hidden layer is 578 neurons, the second hidden layer is 300 neurons, the output layer is 2 classifications, and all layers of the classifier are fully connected. 3.如权利要求1至2所述的鲁棒的枪支检测方法,其特征在于,所述方法也可以用于检测图像中是否含有电池的检测。 3. The robust firearm detection method according to claims 1 to 2, wherein the method can also be used to detect whether an image contains a battery.
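The feature pipeline of steps 1.3.1–1.3.3 can be sketched as follows: build a summed-area (integral) table, then for each 8*8 field of view on a 4-pixel grid compute the five Haar-type values, yielding a 17*17*5 = 1445-dimensional descriptor per 72*72 sample. This is a minimal illustration, not the patented implementation; all function and variable names are our own, and we follow the claim's wording of dividing each value by "the number of pixels in the field of view, N".

```python
import numpy as np

def integral_image(gray):
    # Summed-area table padded with a zero row/column so that
    # box_sum needs no boundary checks (step 1.3.1).
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)
    return ii

def box_sum(ii, r, c, h, w):
    # Total gray value of the h*w box whose top-left corner is (r, c).
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def microstructure_features(ii, r, c, n=8):
    # Five features of one n*n field of view (step 1.3.2).
    half = n // 2
    f1 = (box_sum(ii, r, c, half, n) - box_sum(ii, r + half, c, half, n)) / n  # up-down
    f2 = (box_sum(ii, r, c, n, half) - box_sum(ii, r, c + half, n, half)) / n  # left-right
    diag = box_sum(ii, r, c, half, half) + box_sum(ii, r + half, c + half, half, half)
    anti = box_sum(ii, r, c + half, half, half) + box_sum(ii, r + half, c, half, half)
    f3 = (diag - anti) / n                                  # diagonally symmetric
    f4 = max(abs(f1), abs(f2), abs(f3))                     # edge description value
    f5 = box_sum(ii, r, c, n, n) / n                        # mean-type feature
    return [f1, f2, f3, f4, f5]

def sample_feature_vector(gray, n=8, stride=4):
    # 72*72 sample -> 17*17 fields of view -> 1445 features (step 1.3.3).
    ii = integral_image(gray)
    feats = []
    for r in range(0, gray.shape[0] - n + 1, stride):
        for c in range(0, gray.shape[1] - n + 1, stride):
            feats.extend(microstructure_features(ii, r, c, n))
    return np.array(feats)

sample = np.random.default_rng(0).integers(0, 256, size=(72, 72))
vec = sample_feature_vector(sample)
print(vec.shape)  # (1445,)
```

With the integral image, each box sum costs four lookups, so the full 1445-dimensional vector is computed in constant time per field of view regardless of window size — the reason the claim computes the integral image before feature extraction.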
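The candidate-region vote of step 2.9 reduces to a simple count against a proportional threshold. A hedged sketch (the function name and the example coefficient of 0.5 are our own; the claim only constrains the coefficient to the open interval between 0.3 and 0.9):

```python
def region_contains_gun(window_predictions, coefficient=0.5):
    # Step 2.9: a candidate region is judged to contain a gun when the
    # number of windows classified as "gun" (prediction == 1) exceeds
    # the region's window count multiplied by the coefficient.
    assert 0.3 < coefficient < 0.9
    threshold = len(window_predictions) * coefficient
    return sum(window_predictions) > threshold

print(region_contains_gun([1, 1, 1, 0, 0]))  # 3 > 2.5 -> True
print(region_contains_gun([1, 0, 0, 0, 0]))  # 1 > 2.5 -> False
```

A higher coefficient trades missed detections for fewer false alarms, which is why the claim leaves it as a tunable parameter rather than a fixed value.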
CN201510285393.XA 2015-05-29 2015-05-29 Robust gun detection method Pending CN106295668A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510285393.XA CN106295668A (en) 2015-05-29 2015-05-29 Robust gun detection method


Publications (1)

Publication Number Publication Date
CN106295668A true CN106295668A (en) 2017-01-04

Family

ID=57635913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510285393.XA Pending CN106295668A (en) 2015-05-29 2015-05-29 Robust gun detection method

Country Status (1)

Country Link
CN (1) CN106295668A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731417A (en) * 2005-08-19 2006-02-08 清华大学 Method of robust human face detection in complicated background image
CN1731418A (en) * 2005-08-19 2006-02-08 清华大学 Method of robust accurate eye positioning in complicated background image
CN101398893A (en) * 2008-10-10 2009-04-01 北京科技大学 Adaboost arithmetic improved robust human ear detection method
CN102449661A (en) * 2009-06-01 2012-05-09 惠普发展公司,有限责任合伙企业 Determining detection certainty in a cascade classifier
US8437556B1 (en) * 2008-02-26 2013-05-07 Hrl Laboratories, Llc Shape-based object detection and localization system
CN103366190A (en) * 2013-07-26 2013-10-23 中国科学院自动化研究所 Method for identifying traffic sign
CN103744120A (en) * 2013-12-30 2014-04-23 中云智慧(北京)科技有限公司 Method and device for assisting identification of contraband


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108303747A (en) * 2017-01-12 2018-07-20 清华大学 The method for checking equipment and detecting gun
CN108303748A (en) * 2017-01-12 2018-07-20 同方威视技术股份有限公司 The method for checking equipment and detecting the gun in luggage and articles
CN108303747B (en) * 2017-01-12 2023-03-07 清华大学 Inspection apparatus and method of detecting a gun
CN109784125A (en) * 2017-11-10 2019-05-21 福州瑞芯微电子股份有限公司 Deep learning network processing device, method and image processing unit
CN109001833A (en) * 2018-06-22 2018-12-14 天和防务技术(北京)有限公司 A kind of Terahertz hazardous material detection method based on deep learning
CN109829542A (en) * 2019-01-29 2019-05-31 武汉星巡智能科技有限公司 Method and device for reconstruction of multivariate deep network model based on multi-core processor
CN109829542B (en) * 2019-01-29 2021-04-16 武汉星巡智能科技有限公司 Method and device for reconstruction of multivariate deep network model based on multi-core processor
CN109977877A (en) * 2019-03-28 2019-07-05 北京邮电大学 A kind of safety check is intelligent to be assisted sentencing drawing method, system and system control method
CN110472544A (en) * 2019-08-05 2019-11-19 上海英迈吉东影图像设备有限公司 A kind of training method and system of article identification model

Similar Documents

Publication Publication Date Title
CN109165577B (en) An Early Forest Fire Detection Method Based on Video Image
CN106295668A (en) Robust gun detection method
CN105868689B (en) A kind of face occlusion detection method based on concatenated convolutional neural network
Frizzi et al. Convolutional neural network for video fire and smoke detection
CN113963301B (en) A video fire smoke detection method and system based on spatiotemporal feature fusion
CN102982313B (en) The method of Smoke Detection
CN103761529B (en) A kind of naked light detection method and system based on multicolour model and rectangular characteristic
CN102831618B (en) Hough forest-based video target tracking method
CN106446926A (en) Transformer station worker helmet wear detection method based on video analysis
CN107316036B (en) Insect pest identification method based on cascade classifier
CN104732220B (en) A kind of particular color human body detecting method towards monitor video
CN109918971B (en) Method and device for detecting people in surveillance video
WO2019140767A1 (en) Recognition system for security check and control method thereof
CN106934386B (en) A method and system for text detection in natural scenes based on self-heuristic strategy
WO2017190574A1 (en) Fast pedestrian detection method based on aggregation channel features
TWI715457B (en) Unsupervised malicious flow detection system and method
CN111046827A (en) Video smoke detection method based on convolutional neural network
CN108229524A (en) A kind of chimney and condensing tower detection method based on remote sensing images
CN113221667B (en) Deep learning-based face mask attribute classification method and system
CN108288279A (en) Article discrimination method based on X-ray image foreground target extraction
CN104766338A (en) Method for detecting significance of complex X-ray pseudo-color image
CN111951250A (en) An image-based fire detection method
CN110992324B (en) Intelligent dangerous goods detection method and system based on X-ray image
Lai et al. Robust little flame detection on real-time video surveillance system
CN114494040A (en) Image data processing method and device based on multi-target detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170104