CN113269074A - Target detection method and security inspection robot

Target detection method and security inspection robot

Info

Publication number
CN113269074A
Authority
CN
China
Prior art keywords: region, target, image, detected, layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110543846.XA
Other languages
Chinese (zh)
Inventor
汤伟
王丹彤
王锦韫
黄璜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi University of Science and Technology
Original Assignee
Shaanxi University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi University of Science and Technology filed Critical Shaanxi University of Science and Technology
Priority to CN202110543846.XA
Publication of CN113269074A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 - Manipulators not otherwise provided for
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Abstract

The invention discloses a target detection method and a security inspection robot. The method comprises the following steps: acquiring an image to be detected; and detecting a target in the image to be detected with an improved Faster-RCNN network and outputting the category and position of the target. The method places only a low demand on the number of training samples and can obtain good classification results even when few training samples are available; combined with the computational efficiency of the Faster-RCNN network, this greatly improves the accuracy of the target detection results.

Description

Target detection method and security inspection robot
Technical Field
The invention relates to the technical field of image processing, in particular to a target detection method and a security inspection robot.
Background
In recent years, rising human-resource costs have steadily eroded the efficiency of staffing posts with human guards in special environments, and human-based security control systems face serious challenges. By combining robots with security equipment, a robot can effectively take over high-risk tasks from security personnel and can extend a fixed security system into an all-weather, seamless monitoring system, achieving 24-hour real-time early warning with no blind spots. At night and in bad weather in particular, a security inspection robot performs better than a person on duty.
During security inspection, the robot needs to recognise a target in order to determine its state, and target recognition presupposes target detection, i.e. determining the position of the target in the acquired image. For target detection, most robots use a convolutional neural network (CNN); the most efficient CNN for this task is the Faster-RCNN, in which the region proposal network (RPN) uses a softmax classifier to decide whether the content of each candidate box is foreground or background. However, the softmax classifier fits this kind of two-class problem poorly, which leads to poor target classification results and reduces the accuracy of target detection.
Disclosure of Invention
The embodiments of the invention provide a target detection method and a security inspection robot, to solve the problem in the prior art that a Faster-RCNN network using a softmax classifier performs poorly on the two-class (foreground/background) problem, making the target detection accuracy unsatisfactory.
In one aspect, an embodiment of the present invention provides a target detection method, including:
acquiring an image to be detected;
detecting a target in an image to be detected by using an improved Faster-RCNN network, and outputting the category and the position of the target;
wherein the improved Faster-RCNN network comprises a convolutional layer, a region proposal network (RPN), a region-of-interest pooling layer and a classification layer;
the convolutional layer is used for extracting a feature map of the image to be detected;
the region proposal network is used for determining, with a support vector machine (SVM) classifier, whether each anchor region in the feature map belongs to the foreground or the background, and for correcting the anchor regions belonging to the foreground to obtain candidate regions;
the region-of-interest pooling layer is used for combining the feature map with the candidate regions and extracting the corresponding candidate features;
and the classification layer is used for determining from each candidate feature whether it belongs to a target and, after the nature of all candidate features has been determined, outputting the category and the position of the target.
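Purely as an illustration of how these four modules compose (this sketch and every name in it are hypothetical and do not appear in the patent), the forward pass of the improved network can be outlined in Python as follows:

# Minimal sketch of the improved Faster-RCNN forward pass described above.
# backbone, rpn, roi_pool and head are hypothetical callables standing in for the
# convolutional layer, the SVM-based region proposal network, the region-of-interest
# pooling layer and the classification layer, respectively.
def detect(image, backbone, rpn, roi_pool, head):
    """Return (category, position) pairs for the targets found in image."""
    feature_map = backbone(image)                         # shared feature map
    candidate_regions = rpn(feature_map)                  # SVM foreground/background + box correction
    candidate_features = roi_pool(feature_map, candidate_regions)  # fixed-size feature per region
    categories, positions = head(candidate_features)      # final category and refined position
    return categories, positions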
In another aspect, an embodiment of the invention further provides a security inspection robot, comprising: a sensor module and a controller module;
the sensor module is used for acquiring an image to be detected;
the controller module is used for detecting a target with the target detection method described above.
In one possible implementation, the sensor module comprises a binocular camera, which is used for measuring the distance between the robot and an obstacle, for use by the controller module in obstacle avoidance.
In one possible implementation, the sensor module comprises an infrared thermal imager, which is used for acquiring the image to be detected.
The target detection method and the security inspection robot have the following advantages:
the demand on the number of training samples is low and a good classification result can be obtained even with few training samples; combined with the computational efficiency of the Faster-RCNN network, this greatly improves the accuracy of the target detection result.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a target detection method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a security inspection robot according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
In the prior art, the security inspection robot is the branch of the robot industry responsible for the security field: inspection robots deal with potential safety hazards, carry out patrol monitoring and disaster warning, and can effectively reduce safety accidents. During robot inspection, CNN networks are widely used for target detection. CNN-based detectors include RCNN, Fast-RCNN and Faster-RCNN. Faster-RCNN was developed from Fast-RCNN; it integrates the four steps of the traditional RCNN (candidate region generation, feature extraction, classification and position refinement) into a region proposal network (RPN), so that all computation is completed in a single pass on the graphics processing unit (GPU) with no repeated computation, which greatly increases the computation speed. In the Faster-RCNN networks currently in use, the RPN employs a softmax classifier to decide whether the content of each candidate box is foreground or background, so as to support the subsequent object classification. However, because the CNN labels the training data relatively loosely during training, it is prone to over-fitting, and a large number of training samples is needed to obtain a good fit. When training samples are few, the performance of the softmax classifier is unsatisfactory.
To solve the problems in the prior art, the invention provides a target detection method and a security inspection robot in which the softmax classifier in the Faster-RCNN network is replaced by an SVM classifier; by exploiting the strong adaptability of the SVM classifier when only a small number of training samples is available, the accuracy of target detection is greatly improved.
Fig. 1 is a schematic flowchart of a target detection method according to an embodiment of the present invention. The invention provides a target detection method, which comprises the following steps:
acquiring an image to be detected;
detecting a target in an image to be detected by using an improved Faster-RCNN network, and outputting the category and the position of the target;
the improved Faster-RCNN network comprises a convolutional layer, a region proposal network (RPN), a region-of-interest pooling layer and a classification layer;
the convolutional layer is used for extracting a feature map of the image to be detected;
the region proposal network is used for determining, with an SVM classifier, whether each anchor region in the feature map belongs to the foreground or the background, and for correcting the anchor regions belonging to the foreground to obtain candidate regions;
the region-of-interest pooling layer is used for combining the feature map with the candidate regions and extracting the corresponding candidate features (a short illustrative sketch of this step follows this list);
and the classification layer is used for determining from each candidate feature whether it belongs to a target and, after the nature of all candidate features has been determined, outputting the category and the position of the target.
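The region-of-interest pooling step just described can be sketched as follows; torchvision's roi_pool and all shapes and numbers used here are assumptions made for illustration and are not specified by the patent:

import torch
from torchvision.ops import roi_pool

# Hypothetical sizes: one image, a 512-channel feature map, three candidate regions.
feature_map = torch.randn(1, 512, 37, 50)             # output of the convolutional layer
# Candidate regions from the RPN, as (batch_index, x1, y1, x2, y2) in image coordinates.
candidate_regions = torch.tensor([[0.,  10.,  20., 200., 180.],
                                  [0., 300.,  40., 520., 260.],
                                  [0.,  60., 100., 400., 300.]])
# spatial_scale maps image coordinates onto the 16x-downsampled feature map.
candidate_features = roi_pool(feature_map, candidate_regions,
                              output_size=(7, 7), spatial_scale=1.0 / 16)
print(candidate_features.shape)                        # torch.Size([3, 512, 7, 7])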
Illustratively, the convolutional layer uses basic conv + relu + pooling layers to extract the feature map of the image to be detected. In general, the 5-layer ZF network or the 16-layer VGG-16 is used as the convolutional layer, and the extracted feature map is shared by the region proposal network (RPN) and the region-of-interest (ROI) pooling layer.
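A minimal sketch of such a shared feature extractor, assuming torchvision's VGG-16 convolutional stack; the framework choice, the layer slicing and the input size are illustrative assumptions, not details from the patent:

import torch
import torchvision

# VGG-16's conv + relu + pooling stack as the shared feature extractor; the fully
# connected head is discarded and the last max-pool is dropped so the feature map
# is 1/16 of the input size. Pre-trained weights would normally be loaded here.
backbone = torchvision.models.vgg16(weights=None).features[:-1]

image = torch.randn(1, 3, 600, 800)        # stand-in for the image to be detected
with torch.no_grad():
    feature_map = backbone(image)
print(feature_map.shape)                   # torch.Size([1, 512, 37, 50])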
The region proposal network (RPN) is the core of Faster-RCNN: an SVM classifier judges whether each anchor region in the feature map belongs to the foreground or the background, and bounding-box regression is then used to correct the anchor regions and obtain accurate candidate regions.
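A simplified sketch of this foreground/background decision, with scikit-learn's LinearSVC standing in for the SVM classifier and random toy features in place of real anchor features; the whole fragment is an illustrative assumption rather than the patent's implementation:

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy training set: one 512-dimensional feature vector per anchor region, labelled
# 1 (foreground, i.e. sufficient overlap with a ground-truth box) or 0 (background).
anchor_features = rng.normal(size=(200, 512))
labels = rng.integers(0, 2, size=200)

svm = LinearSVC(C=1.0)            # margin-based classifier, well suited to small sample sets
svm.fit(anchor_features, labels)

# At inference time, keep only the anchors the SVM scores as foreground ...
test_features = rng.normal(size=(10, 512))
is_foreground = svm.decision_function(test_features) > 0

# ... then correct the surviving anchors with the bounding-box regression deltas.
def apply_deltas(anchor, deltas):
    """anchor = (cx, cy, w, h) in centre form; deltas = (dx, dy, dw, dh)."""
    cx, cy, w, h = anchor
    dx, dy, dw, dh = deltas
    return (cx + dx * w, cy + dy * h, w * np.exp(dw), h * np.exp(dh))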
The present invention also provides a security inspection robot. As shown in fig. 2, the robot comprises: a sensor module and a controller module;
the sensor module is used for acquiring an image to be detected;
the controller module is used for detecting a target with the target detection method described above.
Illustratively, the robot is battery-powered so that it can work around the clock without interruption. After a target is detected, a matching processing algorithm can determine whether the state of the target is normal; the state can then be transmitted over a wireless local area network to a data centre or to electronic equipment used by the user, and, depending on the target state, the robot can also send early-warning or alarm information to the data centre or to the user's electronic equipment.
In one possible embodiment, the sensor module comprises a binocular camera 100, which is used for measuring the distance between the robot and an obstacle, for use by the controller module in obstacle avoidance.
Illustratively, the binocular camera 100 comprises two cameras, one serving as the reference camera and the other as the comparison camera, so that two images can be acquired simultaneously: a reference image collected by the reference camera and a comparison image collected by the comparison camera. The two images collected by the binocular camera 100 are compared, and the distance to the obstacle is obtained from the feature data that the target object presents in the two images. Once the controller module has this distance, it can control the actuators and drive the robot to change its motion state so as to avoid colliding with the obstacle.
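The distance follows the standard stereo relation Z = f * B / d (focal length times baseline divided by disparity). A toy numeric sketch, in which the camera parameters and the disparity value are assumptions chosen only for illustration:

# Toy stereo-ranging example; none of these numbers come from the patent.
focal_length_px = 800.0   # focal length of each camera, in pixels
baseline_m = 0.12         # distance between the two camera centres, in metres
disparity_px = 24.0       # horizontal shift of the obstacle between the two images

distance_m = focal_length_px * baseline_m / disparity_px
print(f"Obstacle distance: {distance_m:.2f} m")   # Obstacle distance: 4.00 m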
In one possible embodiment, the sensor module comprises an infrared thermal imager 200, which is used for acquiring the image to be detected.
Illustratively, the infrared thermal imager 200 is used at night to detect targets by their temperature. In nature, every object radiates infrared energy, so the infrared thermal imager 200 can measure the infrared difference between the target and the background and form an image from the differing thermal radiation. In the daytime, a camera receiving visible light can also be used to acquire the image to be detected.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (4)

1. A target detection method, comprising:
acquiring an image to be detected;
detecting a target in the image to be detected by using an improved Faster-RCNN network, and outputting the category and the position of the target;
wherein the improved Faster-RCNN network comprises a convolutional layer, a region proposal network, a region-of-interest pooling layer and a classification layer;
the convolutional layer is used for extracting a feature map of the image to be detected;
the region proposal network is used for determining, with an SVM (support vector machine) classifier, whether each anchor region in the feature map belongs to the foreground or the background, and for correcting the anchor regions belonging to the foreground to obtain candidate regions;
the region-of-interest pooling layer is used for combining the feature map with the candidate regions and extracting the corresponding candidate features;
and the classification layer is used for determining from each candidate feature whether it belongs to a target and, after the nature of all candidate features has been determined, outputting the category and the position of the target.
2. A security inspection robot, characterized by comprising: a sensor module and a controller module;
the sensor module is used for acquiring an image to be detected;
the controller module is configured to detect an object by using the object detection method according to claim 1.
3. The security inspection robot according to claim 2, wherein the sensor module comprises: a binocular camera used for measuring the distance between the robot and an obstacle, for use by the controller module in obstacle avoidance.
4. The security inspection robot according to claim 2, wherein the sensor module comprises: an infrared thermal imager used for acquiring the image to be detected.
CN202110543846.XA 2021-05-19 2021-05-19 Target detection method and security inspection robot Pending CN113269074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110543846.XA CN113269074A (en) 2021-05-19 2021-05-19 Target detection method and security inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110543846.XA CN113269074A (en) 2021-05-19 2021-05-19 Target detection method and security inspection robot

Publications (1)

Publication Number Publication Date
CN113269074A true CN113269074A (en) 2021-08-17

Family

ID=77231661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110543846.XA Pending CN113269074A (en) 2021-05-19 2021-05-19 Target detection method and security inspection robot

Country Status (1)

Country Link
CN (1) CN113269074A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101970442B1 (en) * 2018-12-04 2019-04-19 주식회사 넥스파시스템 Illegal parking enforcement system Using Fast R-CNN based on Vehicle detection
CN110211097A (en) * 2019-05-14 2019-09-06 河海大学 A kind of crack image detecting method based on the migration of Faster R-CNN parameter
CN110334661A (en) * 2019-07-09 2019-10-15 国网江苏省电力有限公司扬州供电分公司 Infrared power transmission and transformation abnormal heating point target detecting method based on deep learning
CN110942000A (en) * 2019-11-13 2020-03-31 南京理工大学 Unmanned vehicle target detection method based on deep learning
CN111402214A (en) * 2020-03-07 2020-07-10 西南交通大学 Neural network-based automatic detection method for breakage defect of catenary dropper current-carrying ring
CN111860439A (en) * 2020-07-31 2020-10-30 广东电网有限责任公司 Unmanned aerial vehicle inspection image defect detection method, system and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈宇鹏 (Chen Yupeng): "Research on monocular vision target recognition technology for autonomous driving based on deep learning", China Excellent Doctoral Dissertations Full-text Database, Engineering Science & Technology II, pages 12-74 *

Similar Documents

Publication Publication Date Title
CN110850723B (en) Fault diagnosis and positioning method based on transformer substation inspection robot system
CN103065323B (en) Subsection space aligning method based on homography transformational matrix
CN105059190B (en) The automobile door opening collision warning device and method of view-based access control model
CN111045000A (en) Monitoring system and method
CN114241298A (en) Tower crane environment target detection method and system based on laser radar and image fusion
CN113850102B (en) Vehicle-mounted vision detection method and system based on millimeter wave radar assistance
CN104751119A (en) Rapid detecting and tracking method for pedestrians based on information fusion
CN103714321A (en) Driver face locating system based on distance image and strength image
CN111008994A (en) Moving target real-time detection and tracking system and method based on MPSoC
CN115909092A (en) Light-weight power transmission channel hidden danger distance measuring method and hidden danger early warning device
CN113256731A (en) Target detection method and device based on monocular vision
CN107767366B (en) A kind of transmission line of electricity approximating method and device
Liu et al. Research on security of key algorithms in intelligent driving system
CN113269074A (en) Target detection method and security inspection robot
CN115857040A (en) Dynamic visual detection device and method for foreign matters on locomotive roof
Li et al. Mobile robot map building based on laser ranging and kinect
Li et al. Real time obstacle estimation based on dense stereo vision for robotic lawn mowers
CN114049580A (en) Airport apron aircraft positioning system
Xu et al. Multiview Fusion 3D Target Information Perception Model in Nighttime Unmanned Intelligent Vehicles
Zhang et al. Rc6d: An rfid and cv fusion system for real-time 6d object pose estimation
Shi et al. Cobev: Elevating roadside 3d object detection with depth and height complementarity
Wang et al. A system of automated training sample generation for visual-based car detection
CN113836975A (en) Binocular vision unmanned aerial vehicle obstacle avoidance method based on YOLOV3
Wang et al. Research on appearance defect detection of power equipment based on improved faster-rcnn
Bi et al. Ship Collision Avoidance Navigation Signal Recognition via Vision Sensing and Machine Forecasting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination