CN114529811A - Rapid and automatic identification and positioning method for foreign matters in subway tunnel - Google Patents
- Publication number
- CN114529811A CN114529811A CN202011214199.XA CN202011214199A CN114529811A CN 114529811 A CN114529811 A CN 114529811A CN 202011214199 A CN202011214199 A CN 202011214199A CN 114529811 A CN114529811 A CN 114529811A
- Authority
- CN
- China
- Prior art keywords
- foreign
- foreign body
- camera
- image
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 29
- 238000001514 detection method Methods 0.000 claims abstract description 35
- 238000012544 monitoring process Methods 0.000 claims abstract description 13
- 230000008569 process Effects 0.000 claims abstract description 10
- 238000012545 processing Methods 0.000 claims abstract description 7
- 238000003384 imaging method Methods 0.000 claims description 10
- 238000004364 calculation method Methods 0.000 claims description 9
- 238000012549 training Methods 0.000 claims description 9
- 238000013528 artificial neural network Methods 0.000 claims description 7
- 238000003062 neural network model Methods 0.000 claims description 7
- 238000001914 filtration Methods 0.000 claims description 5
- 230000004807 localization Effects 0.000 claims description 3
- 238000011897 real-time detection Methods 0.000 abstract description 7
- 230000008901 benefit Effects 0.000 abstract description 4
- 238000007689 inspection Methods 0.000 description 13
- 238000005516 engineering process Methods 0.000 description 9
- 230000006870 function Effects 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 5
- 238000012360 testing method Methods 0.000 description 4
- 238000013459 approach Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 239000000463 material Substances 0.000 description 3
- 230000005540 biological transmission Effects 0.000 description 2
- 230000000875 corresponding effect Effects 0.000 description 2
- XLYOFNOQVPJJNP-UHFFFAOYSA-N water Substances O XLYOFNOQVPJJNP-UHFFFAOYSA-N 0.000 description 2
- 206010039203 Road traffic accident Diseases 0.000 description 1
- 230000002159 abnormal effect Effects 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 239000003814 drug Substances 0.000 description 1
- 230000009977 dual effect Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 229910052500 inorganic mineral Inorganic materials 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 239000011707 mineral Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000004806 packaging method and process Methods 0.000 description 1
- 108091008695 photoreceptors Proteins 0.000 description 1
- 230000008439 repair process Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000012827 research and development Methods 0.000 description 1
- 230000005477 standard model Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The invention belongs to the field of foreign object detection in subway tunnels and, more particularly, relates to a method and system for detecting foreign objects in subway tunnels.
Background Art
During normal subway operation, foreign objects occasionally intrude into the train's safe running clearance and collide with or rub against passing trains. Because such clearance intrusions are sudden, unpredictable, and highly hazardous, a collision between a train and a foreign object can at best interrupt operation and at worst endanger running safety, damage vehicles and equipment, and cause casualties, seriously threatening the safe operation of subway trains. Therefore, detecting clearance intrusions in time while trains are running, issuing warnings and alarms before a train arrives, and taking effective measures promptly to prevent the train from colliding with the intruding object is a very important part of ensuring normal subway operation.
At present, the detection of foreign objects in subway tunnels relies mainly on manual patrol inspection, which consumes considerable manpower and material resources and is inefficient. With the rapid development of subway rail transit, rising train speeds, and ever longer track lines, it is difficult for workers to complete the inspections within the limited time available.
For tunnel foreign-object detection with a robot as carrier, Chinese invention patent specification CN 109131444 A discloses an intelligent detection device for foreign objects in subway track sections: a monorail trolley travels on the track and uses image recognition to judge whether a foreign object has intruded into the tunnel, reporting any finding to the monitoring center. However, this approach is costly to operate and maintain; once the trolley fails, manpower and material resources are needed to inspect and repair the system. Chinese invention patent specification CN 108248635 A discloses an intelligent detection system for rail transit tunnels in which an intelligent inspection robot mounted on a guide rail monitors the tunnel interior in real time and transmits the detected data and information to the station's central control system, so that hidden safety hazards inside the tunnel can be discovered; although this method can monitor the tunnel state in real time, it cannot identify or grasp foreign objects. Chinese invention patent specification CN 109688388 A discloses a method for all-round real-time monitoring with a tunnel inspection robot: after receiving instructions, the robot monitors the tunnel in real time for illegal driving, traffic accidents, vehicle congestion, foreign objects on the road surface, and similar situations, so that abnormal and illegal conditions can be discovered and handled promptly; this method can identify foreign objects, but it cannot combine identification with automatic grasping, so objects cannot be identified and picked up at any time. Chinese invention patent specification CN 108657223 A discloses an automatic inspection system for urban rail transit and a tunnel deformation detection method: by comparing the three-dimensional tunnel model obtained from real-time detection with a standard model, a deformation map of the corresponding position is obtained; the system measures parameters including the temperature of tunnel equipment and cables, tunnel deformation, and track damage, and can automatically identify and clear foreign objects on the track, greatly saving labor costs. That invention integrates the individual system modules into an automatic inspection robot, but does not address the specific technique of automatically identifying and positioning foreign objects.
Regarding existing railway clearance-intrusion detection technology, Chinese invention patent specification CN 108549087 A discloses an online detection method based on lidar, which monitors the detection range with lidar to judge whether a foreign object is present; however, this method cannot identify the type of the foreign object or grasp it promptly. Chinese utility model patent specification CN 207712053 U discloses a robot for rail transit that can rapidly inspect the line, remove foreign objects from the track, and achieve clearer inspection through a specially structured inspection device; however, that patent contains no identification system and only describes the robot's working principle and manner of operation. Chinese patent specification CN 206115276 U discloses an underground foreign-object inspection robot system comprising a robot body, a robot controller, a control terminal, and a power supply; the robot performs inspections and transmits the results to a central control system so that staff can keep abreast of the situation, but the system cannot identify or promptly grasp foreign objects and still requires considerable manpower and material resources. Chinese invention patent specification CN 108197610 A discloses a deep-learning-based track foreign-object detection system comprising a vehicle-mounted image acquisition device, an image transmission unit, a foreign-object detection device, a machine learning device, and image data, which can detect foreign objects on the track in real time; however, the system cannot determine the specific location of the foreign object or pick it up in time.
In summary, domestic research on foreign objects in subway tunnels has achieved certain results, but techniques for identifying the type and distance of a foreign object in real time and picking it up in real time with an existing robot as carrier still need to be explored.
Summary of the Invention
In view of this, the present invention provides a technique for the rapid automatic identification and positioning of foreign objects in subway tunnels, which can quickly and effectively identify and locate the type and position of a foreign object automatically and, in combination with a robot, return the foreign-object data in real time so that a manipulator can grasp the object.
The present invention adopts the following technical scheme: a method for the rapid automatic identification and positioning of foreign objects in a subway tunnel, comprising the following steps:
S1: mounting two identical cameras on a robot;
S2: acquiring subway tunnel images of the monitoring areas of the two cameras;
S3: processing the acquired images and judging whether a foreign object is present in the image;
S4: when a foreign object is detected, judging its type and calculating its position and its distance from the cameras;
S5: the robot grasping the foreign object according to its type, position, and distance.
Step S3 comprises the following steps:
capturing an image with either of the two cameras and performing image enhancement and filtering;
judging and predicting the type and position of the foreign object in the image with a neural network.
Training the neural network comprises the following steps:
(1) acquiring image data of foreign objects in the subway tunnel with the two cameras in advance and, for the images collected by each camera, annotating the foreign objects, using a label box enclosing the object to indicate its position and a label name to indicate its type;
(2) building a YOLO neural network model for foreign-object detection and training it on the images collected and annotated for each camera to obtain a trained YOLO neural network model, which is used to identify foreign objects in the monitoring images captured in real time by either camera.
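By way of illustration of steps (1) and (2), the following is a minimal Python sketch of how one annotated foreign object (label box for position, label name for type) can be converted into the normalized text format commonly used for YOLO training; the class list and the numeric values are assumptions made for the example and are not prescribed by the invention.

```python
# Minimal sketch: convert one annotation (pixel-space label box + label name)
# into a YOLO-style training line "class_id cx cy w h" with values normalized
# to [0, 1]. The class names and the example numbers are illustrative assumptions.

CLASS_NAMES = ["backpack", "plastic_bag", "bottle", "cup"]  # hypothetical classes

def to_yolo_line(label_name, box, img_w, img_h):
    """box = (xmin, ymin, xmax, ymax) in pixels; returns one YOLO txt line."""
    xmin, ymin, xmax, ymax = box
    cx = (xmin + xmax) / 2.0 / img_w          # normalized box centre x
    cy = (ymin + ymax) / 2.0 / img_h          # normalized box centre y
    w = (xmax - xmin) / img_w                 # normalized box width
    h = (ymax - ymin) / img_h                 # normalized box height
    class_id = CLASS_NAMES.index(label_name)
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Example: a bottle annotated at (520, 300)-(580, 420) in a 1280x720 frame
print(to_yolo_line("bottle", (520, 300, 580, 420), 1280, 720))
```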
The distance between the foreign object and the cameras is calculated by binocular ranging, comprising the following steps:
obtaining, from the camera parameters, the parallax of the same foreign object between the two cameras, and from it the depth information of the foreign object in the image, i.e., the distance between the foreign object and the cameras.
The calculation of the distance between the foreign object and the cameras by binocular ranging comprises the following steps:
calibrating the binocular camera pair in advance to obtain the camera parameters of both cameras; rectifying the images acquired by the two cameras so that the two rectified images lie in the same plane and are parallel to each other; computing the depth of each pixel from the matching result of the two rectified images to obtain a depth map; and obtaining the distance between the foreign object and the cameras from the depth map.
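The following is a minimal Python/OpenCV sketch of this rectify, match, and depth chain, assuming the binocular pair has already been calibrated offline; the calibration parameters, the matcher settings, and the input format (a grayscale image pair) are assumptions for the example rather than part of the claimed method.

```python
# Sketch of rectification -> pixel matching -> depth map with OpenCV, assuming
# the calibration parameters K1, D1, K2, D2, R, T were obtained beforehand
# (e.g. from cv2.stereoCalibrate); they are placeholders here, not real values.
import cv2
import numpy as np

def depth_map(img_left, img_right, K1, D1, K2, D2, R, T):
    """img_left / img_right: rectified-size grayscale images from the two cameras."""
    h, w = img_left.shape[:2]
    # Rectify so that the two image planes are coplanar and row-aligned
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, T)
    map_l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
    rect_l = cv2.remap(img_left, map_l[0], map_l[1], cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_right, map_r[0], map_r[1], cv2.INTER_LINEAR)
    # Pixel matching with semi-global block matching -> disparity in pixels
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(rect_l, rect_r).astype(np.float32) / 16.0
    # Reproject the disparity to 3D; the Z channel is the per-pixel depth map
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    return points_3d[:, :, 2]
```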
The distance between the foreign object and the cameras is obtained by the following formula:
Z = c·b / d
where c is the camera focal length, b is the baseline length between the two cameras, d is the parallax, and Z is the distance from the foreign object to the midpoint of the line connecting the two cameras.
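As a worked example of the formula (the numbers are purely illustrative and not taken from the invention), with the focal length expressed in pixels and the baseline in metres the distance follows directly:

```python
# Illustrative numbers only: focal length in pixels, baseline in metres,
# parallax in pixels; the distance Z then comes out in metres.
c = 1400.0   # camera focal length, pixels
b = 0.12     # baseline between the two cameras, metres
d = 56.0     # measured parallax, pixels
Z = c * b / d
print(f"distance Z = {Z:.2f} m")   # -> 3.00 m
```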
A system for the rapid automatic identification and positioning of foreign objects in a subway tunnel, comprising:
an image processing module, which processes the subway tunnel images acquired by the binocular cameras and judges whether a foreign object is present in the image; and
an image detection module, which, when a foreign object is detected, judges its type and calculates its position and its distance from the cameras, so that the robot can grasp the foreign object.
In general, compared with the prior art, the above technical scheme of the present invention has the following main technical advantages:
1. The YOLO v3 object recognition algorithm is used to identify foreign objects. A deep network model trained with the YOLO object detection framework performs real-time foreign-object detection; it is more accurate than traditional detection algorithms, and the recognition process is very fast, so the goal of real-time detection can be achieved.
2. Binocular ranging is used to calculate the distance to the foreign object. Compared with monocular ranging, binocular ranging computes distance directly from parallax with higher accuracy, yielding fairly accurate position and distance information from the images and meeting the recognition and calculation requirements well.
3. The technique takes a robot as its carrier. Besides quickly and accurately identifying and locating foreign objects, it works together with the robot to achieve real-time detection and real-time grasping, keeping the subway tunnel free of foreign objects at all times, ensuring the safe operation of the subway, and protecting the health and safety of drivers and passengers.
Description of Drawings
Fig. 1 is a schematic flow chart of the technique of the present invention;
Fig. 2 is a schematic diagram of the pinhole model;
Fig. 3 is a schematic diagram of the binocular imaging model.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below may be combined with one another as long as they do not conflict.
The present invention comprises the following steps:
S1: install the cameras as required, mounting two identical cameras at suitable positions on the robot;
S2: acquire images of the cameras' monitoring area;
S3: process the acquired images and judge whether a foreign object is present in the image;
S4: when a foreign object is detected, judge its type and calculate its position and distance relative to the cameras (robot);
S5: the robot grasps the foreign object according to its type, position, and distance data.
Preferably, for step S1, the system is combined with existing robot technology, and the grasping function of the robot's manipulator is used to find and grasp foreign objects promptly.
Preferably, for step S1, the robot chosen as the system carrier should have both a locomotion function and a grasping function. The locomotion function should allow the direction and speed of movement to be adjusted so that the robot can approach the foreign object correctly, and the grasping function should allow the opening, angle, and force of the manipulator to be adjusted so that the object can be grasped successfully.
Preferably, for step S1, the two cameras should be mounted at suitable positions on the robot so that the overall image of the tunnel can be monitored.
Preferably, for step S3, the image acquired by the camera is first subjected to image enhancement, filtering, and other processing so that it meets the recognition requirements.
Preferably, for step S3, the deep-neural-network-based object recognition and localization algorithm YOLO v3 is used to judge and predict the type and position of foreign objects in the image. YOLO v3 is characterized by its high running speed, which makes it suitable for real-time detection, together with a relatively low background false-detection rate and high recognition accuracy, meeting the technical requirements.
Implementing the YOLO recognition algorithm involves the following steps: (1) collect a large amount of image data in advance and annotate the foreign objects in the images, using a label box enclosing the object to indicate its position and a label name to indicate its type; (2) build a foreign-object detection and recognition model and train the YOLO neural network on the acquired tunnel foreign-object images to obtain a trained detection model; (3) apply the trained model to the monitoring images captured by the camera to test the detection model and identify foreign objects.
During robot inspection, the camera captures image data and passes it to the neural network as input for detection. If a foreign object is present, its specific position and category are marked; if not, the next frame is processed.
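The following is a minimal Python/OpenCV sketch of this per-frame inspection loop, assuming a YOLO v3 model trained as described and exported as darknet cfg/weights files; the file names, class list, camera index, and thresholds are assumptions made for the example.

```python
# Sketch of the inspection loop: read a frame, run YOLO v3 through OpenCV's dnn
# module, report class + bounding box if a foreign object is found, otherwise
# continue with the next frame. File names and classes are placeholders.
import cv2
import numpy as np

CLASSES = ["backpack", "plastic_bag", "bottle", "cup"]       # hypothetical classes
net = cv2.dnn.readNetFromDarknet("yolov3_tunnel.cfg", "yolov3_tunnel.weights")
out_names = net.getUnconnectedOutLayersNames()

def detect(frame, conf_thr=0.5, nms_thr=0.4):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, scores, class_ids = [], [], []
    for output in net.forward(out_names):
        for row in output:                        # row = [cx, cy, bw, bh, obj, cls...]
            cls_scores = row[5:]
            cid = int(np.argmax(cls_scores))
            conf = float(cls_scores[cid])
            if conf > conf_thr:
                cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(conf)
                class_ids.append(cid)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thr, nms_thr)
    return [(CLASSES[class_ids[i]], boxes[i]) for i in np.array(keep).flatten()]

cap = cv2.VideoCapture(0)                         # one of the two cameras
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    detections = detect(frame)
    if detections:                                # foreign object: report class + box
        print(detections)
    # otherwise simply move on to the next frame
```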
Preferably, for step S4, binocular ranging is used to calculate the distance between the foreign object and the cameras (robot). According to the camera imaging model, the same object observed from different positions appears at different positions in the image. The principle of binocular ranging is similar to that of the human eye: we can perceive how far away an object is because the images of the same object formed by the two eyes differ, a difference known as parallax. The farther the object, the smaller the parallax; the closer the object, the larger the parallax. By calculating the parallax of the same object between the two cameras, the depth information of the object in the camera image, i.e., the distance between the object and the cameras, can be obtained. Binocular ranging generally requires the following steps: (1) calibrate the binocular camera pair to obtain the intrinsic and extrinsic parameters and the homography matrices of the two cameras; (2) rectify the original images according to the calibration result so that the two rectified images lie in the same plane and are parallel to each other; (3) perform pixel matching on the two rectified images; (4) compute the depth of each pixel from the matching result to obtain a depth map.
Preferably, for step S5, the type, position, and distance data obtained in step S4 are returned to the robot, which adjusts its own position in real time according to this information, approaches the foreign object, and grasps it.
Fig. 1 shows the schematic framework of the rapid automatic identification and positioning technique for foreign objects in subway tunnels according to the present invention. The specific steps are as follows:
S1: install the cameras as required, mounting two identical cameras at suitable positions on the robot;
To meet the detection requirements, cameras are added to the robot so that foreign objects in the subway tunnel can be detected. In the present invention, monitoring images of the subway tunnel are acquired with two cameras. The image collected by a single camera is used to identify and detect foreign objects in the tunnel, while the camera pair exploits the fact that images of the same foreign object taken at the same moment by two cameras at different positions show the object at different image positions, producing parallax, which is used to calculate the distance between the foreign object and the cameras.
Meanwhile, in the present invention the foreign object is picked up by the manipulator. With the robot as carrier, once the system has computed the specific type and distance of the foreign object, the data are passed to the robot, which makes the corresponding movements, approaches the object, and picks it up, thereby achieving real-time detection and real-time grasping of foreign objects.
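As a sketch of the data that could be handed back to the robot, the centre of the label box and the measured depth Z can be turned into a 3D position in the camera frame with the pinhole relations X = (u - cx0)·Z/fx and Y = (v - cy0)·Z/fy; the report structure and the intrinsic values below are illustrative assumptions, not a defined robot interface.

```python
# Sketch of the report returned to the robot: the detected class, the label
# box, and the 3D position of the box centre recovered from the depth Z and
# the pinhole model. All numbers and the structure itself are assumptions.
from dataclasses import dataclass

@dataclass
class ForeignObjectReport:
    label: str                 # foreign-object type from the YOLO model
    box: tuple                 # (x, y, w, h) in pixels
    position_m: tuple          # (X, Y, Z) in the camera frame, metres

def make_report(label, box, Z, fx, fy, cx0, cy0):
    x, y, w, h = box
    u, v = x + w / 2.0, y + h / 2.0             # pixel centre of the label box
    X = (u - cx0) * Z / fx                      # lateral offset from the optical axis
    Y = (v - cy0) * Z / fy                      # vertical offset from the optical axis
    return ForeignObjectReport(label, box, (X, Y, Z))

# Example with made-up numbers: a bottle about 3 m ahead, slightly right of centre
print(make_report("bottle", (700, 320, 60, 120), 3.0, 1400.0, 1400.0, 640.0, 360.0))
```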
When adding the cameras to the robot, for cost and structural reasons the robot's structure should not be modified. Besides the structure, the mounting position and angle of the cameras must also be considered: the two cameras should be placed as symmetrically as possible about the robot's centerline and aligned as closely as possible on the same horizontal line, so as to obtain a wider field of view and simpler calculations.
S2: acquire images of the cameras' monitoring area;
During the experiments the camera parameters are adjusted continuously and are fixed before final operation, ensuring that the captured images are clear and accurate in preparation for the subsequent image recognition. The captured images are transmitted to the system through the image transmission unit for further processing.
S3: process the acquired images and judge whether a foreign object is present in the image;
S4: when a foreign object is detected, judge its type and calculate its position and distance relative to the cameras (robot);
In step S3, the captured images undergo image enhancement, filtering, and other processing. Since subway tunnels are relatively dark, image enhancement makes the captured images clearer and brighter and separates foreign objects from the background more distinctly, while filtering removes background noise that might interfere with detection, preparing the images for foreign-object recognition.
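A minimal Python/OpenCV sketch of this preprocessing step is given below, using local contrast enhancement (CLAHE) for the dark tunnel images followed by median filtering to suppress noise; the parameter values are assumptions chosen for the example.

```python
# Sketch of the preprocessing described above for dark tunnel images:
# brightness/contrast enhancement with CLAHE, then median filtering to remove
# background noise before the frame is passed to the recognition network.
import cv2

def preprocess(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)              # lift brightness/contrast locally
    denoised = cv2.medianBlur(enhanced, 5)    # suppress salt-and-pepper style noise
    return denoised
```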
Steps S3 and S4 perform the rapid detection and identification of foreign objects in the subway tunnel. The deep-neural-network-based object recognition and localization algorithm YOLO v3 is used to judge and predict the type and position of foreign objects in the image.
Detecting foreign objects with the YOLO algorithm requires the following steps: 1) Collect a large dataset by searching online or by photographing subway tunnels containing foreign objects on site; the objects include laptop bags, backpacks, handbags, paper carrier bags, plastic bags containing items, express packaging bags, water cups, medicine bottles, mineral water bottles, shell-shaped cylinders, and the like. These pictures serve as the training dataset. Label the collected pictures with an annotation tool such as labelImg: the label box encloses the object and marks its position and size, the label name records the type of foreign object, and the annotation file is saved in xml format. 2) Modify the Makefile so that the dataset is trained on a GPU, which greatly speeds up training; then train the network on the dataset using the network structure and training algorithm provided by the YOLO framework to obtain the required weights. 3) Run network prediction with the trained model; the test results return the bounding-box coordinates of the foreign object (the center coordinates and the width and height of the box) and its type. Adjust the training parameters and optimize the network structure according to the test results until the final model meets the practical requirements.
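By way of illustration of step 1), the following Python sketch reads one labelImg annotation file (Pascal VOC style xml) back into label-name and label-box pairs ready for conversion to the YOLO training format; the file name is a placeholder.

```python
# Sketch: parse one labelImg xml annotation into (label name, pixel box) pairs.
import xml.etree.ElementTree as ET

def read_labelimg_xml(xml_path):
    root = ET.parse(xml_path).getroot()
    size = root.find("size")
    img_w, img_h = int(size.find("width").text), int(size.find("height").text)
    objects = []
    for obj in root.findall("object"):
        name = obj.find("name").text                      # foreign-object type
        bb = obj.find("bndbox")
        box = tuple(int(float(bb.find(tag).text))         # label-box corners
                    for tag in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((name, box))
    return img_w, img_h, objects

print(read_labelimg_xml("tunnel_0001.xml"))               # placeholder file name
```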
The distance between the foreign object and the cameras (robot) mentioned in step S4 is calculated by binocular ranging.
Binocular ranging computes distance directly from parallax with higher accuracy than monocular ranging, yielding fairly accurate position and distance information from the images. It is not limited by a recognition rate, because in principle all obstacles are measured directly without first being recognized; nor does binocular vision need to maintain a sample database, because the concept of a sample does not apply to a binocular system.
The pinhole model is used to approximate the imaging mechanism of the camera, as shown in Fig. 2. M is an object point in the real scene, O is the optical center of the camera, O' is the projection of the optical center onto the image plane, OO' is the optical axis of the camera, and M' is the image point of M on the image plane P.
For simplicity, consider two cameras with identical parameters, separated by an appropriate distance and with parallel optical axes; this is the most idealized binocular model. The imaging of the same object point P in the two cameras is shown in Fig. 3.
As shown in Fig. 3, P is a point on the object to be measured, and O' and O'' are the optical centers of the left and right cameras. The image points of P on the two camera sensors are P' and P'', respectively (the imaging planes are drawn rotated to lie in front of the lenses). Let c be the camera focal length, b the baseline between the two camera centers, and Z the distance from point P to point O' in the depth direction. By the principle of similar triangles, x'/c = X/Z for the left camera and x''/c = (X - b)/Z for the right camera.
The X/Y coordinates of point P are therefore:
X = Z·x'/c, Y = Z·y'/c (y' being the vertical image coordinate of P')
In the depth direction:
(x' - x'')/c = b/Z
Therefore it can be deduced that:
Z = c·b/d (d is the parallax)    (3)
where x' and x'' are the X-direction coordinates of the image point of P on the left and right imaging planes; their difference is defined as the parallax, i.e., the difference in the imaging positions of the same point in the two cameras, denoted d.
Binocular measurement determines the target position through the parallax principle.
Binocular ranging generally requires the following steps: (1) calibrate the binocular camera pair to obtain the intrinsic and extrinsic parameters and the homography matrices of the two cameras; (2) rectify the original images according to the calibration result so that the two rectified images lie in the same plane and are parallel to each other; (3) perform pixel matching on the two rectified images; (4) compute the depth of each pixel from the matching result to obtain the depth map.
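A minimal Python/OpenCV sketch of step (1), offline calibration of the binocular pair with a printed chessboard, is given below; the board geometry, square size, and image handling are assumptions for the example.

```python
# Sketch of offline stereo calibration: detect chessboard corners in matched
# left/right grayscale image pairs, calibrate each camera, then estimate the
# relative pose of the pair. Board size and square size are assumed values.
import cv2
import numpy as np

PATTERN = (9, 6)                       # inner corners of the chessboard (assumed)
SQUARE = 0.025                         # square size in metres (assumed)

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def calibrate_stereo(left_images, right_images):
    obj_pts, left_pts, right_pts = [], [], []
    for gl, gr in zip(left_images, right_images):          # grayscale pairs
        okl, cl = cv2.findChessboardCorners(gl, PATTERN)
        okr, cr = cv2.findChessboardCorners(gr, PATTERN)
        if okl and okr:
            obj_pts.append(objp)
            left_pts.append(cl)
            right_pts.append(cr)
    size = left_images[0].shape[::-1]
    # Per-camera intrinsics first, then the relative rotation R and translation T
    _, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    _, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, D1, K2, D2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, D1, K2, D2, R, T
```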
S5: the robot grasps the foreign object according to its type, position, and distance data.
In summary, the present invention rapidly identifies the type and position of foreign objects with the YOLO v3 object recognition and localization algorithm, calculates the distance to the foreign object by binocular ranging, and, with a robot as carrier, integrates these functions to achieve real-time, rapid identification and positioning of foreign objects in subway tunnels and to pick them up promptly, thereby ensuring the running safety of subway trains.
Those skilled in the art will readily understand that the above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011214199.XA CN114529811A (en) | 2020-11-04 | 2020-11-04 | Rapid and automatic identification and positioning method for foreign matters in subway tunnel |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011214199.XA CN114529811A (en) | 2020-11-04 | 2020-11-04 | Rapid and automatic identification and positioning method for foreign matters in subway tunnel |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114529811A true CN114529811A (en) | 2022-05-24 |
Family
ID=81618622
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011214199.XA Pending CN114529811A (en) | 2020-11-04 | 2020-11-04 | Rapid and automatic identification and positioning method for foreign matters in subway tunnel |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114529811A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115016481A (en) * | 2022-06-16 | 2022-09-06 | 中路高科交通检测检验认证有限公司 | Obstacle avoidance system and obstacle avoidance method for tunnel lining internal detection radar |
CN115035060A (en) * | 2022-06-07 | 2022-09-09 | 贵州聚原数技术开发有限公司 | Tunnel wall deformation detection method based on computer image recognition |
CN115327520A (en) * | 2022-08-24 | 2022-11-11 | 深圳市巨龙创视科技有限公司 | Surrounding environment monitoring method, device, computer equipment and storage medium |
CN115423777A (en) * | 2022-09-05 | 2022-12-02 | 三一重型装备有限公司 | Method, device, readable storage medium and engineering equipment for roadway defect location |
CN115892131A (en) * | 2023-02-15 | 2023-04-04 | 深圳大学 | Intelligent monitoring method and system for subway tunnel |
CN119180731A (en) * | 2024-11-26 | 2024-12-24 | 北京久译科技有限公司 | Foreign matter detection method and device for subway work area steel support |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004059900A2 (en) * | 2002-12-17 | 2004-07-15 | Evolution Robotics, Inc. | Systems and methods for visual simultaneous localization and mapping |
CN108876855A (en) * | 2018-05-28 | 2018-11-23 | 哈尔滨工程大学 | A kind of sea cucumber detection and binocular visual positioning method based on deep learning |
CN208868062U (en) * | 2018-07-23 | 2019-05-17 | 中国安全生产科学研究院 | An automatic inspection system for urban rail transit |
CN110217271A (en) * | 2019-05-30 | 2019-09-10 | 成都希格玛光电科技有限公司 | Fast railway based on image vision invades limit identification monitoring system and method |
- 2020-11-04: Application CN202011214199.XA filed in China; publication CN114529811A (en); status: Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004059900A2 (en) * | 2002-12-17 | 2004-07-15 | Evolution Robotics, Inc. | Systems and methods for visual simultaneous localization and mapping |
CN108876855A (en) * | 2018-05-28 | 2018-11-23 | 哈尔滨工程大学 | A kind of sea cucumber detection and binocular visual positioning method based on deep learning |
CN208868062U (en) * | 2018-07-23 | 2019-05-17 | 中国安全生产科学研究院 | An automatic inspection system for urban rail transit |
CN110217271A (en) * | 2019-05-30 | 2019-09-10 | 成都希格玛光电科技有限公司 | Fast railway based on image vision invades limit identification monitoring system and method |
Non-Patent Citations (2)
Title |
---|
于晓英; 苏宏升; 姜泽; 董昱: "YOLO-based detection method for railway clearance-intruding foreign objects" (基于YOLO的铁路侵限异物检测方法), Journal of Lanzhou Jiaotong University (兰州交通大学学报), no. 02, 15 April 2020 (2020-04-15) *
任新新; 胡文韬; 吕海翔; 刘能; 樊绍胜: "Research and design of a cable duct inspection and cleaning robot" (电缆管道巡检清理机器人的研究与设计), Journal of Electric Power (电力学报), no. 02, 25 April 2016 (2016-04-25) *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115035060A (en) * | 2022-06-07 | 2022-09-09 | 贵州聚原数技术开发有限公司 | Tunnel wall deformation detection method based on computer image recognition |
CN115016481A (en) * | 2022-06-16 | 2022-09-06 | 中路高科交通检测检验认证有限公司 | Obstacle avoidance system and obstacle avoidance method for tunnel lining internal detection radar |
CN115327520A (en) * | 2022-08-24 | 2022-11-11 | 深圳市巨龙创视科技有限公司 | Surrounding environment monitoring method, device, computer equipment and storage medium |
CN115423777A (en) * | 2022-09-05 | 2022-12-02 | 三一重型装备有限公司 | Method, device, readable storage medium and engineering equipment for roadway defect location |
CN115892131A (en) * | 2023-02-15 | 2023-04-04 | 深圳大学 | Intelligent monitoring method and system for subway tunnel |
US12179814B2 (en) | 2023-02-15 | 2024-12-31 | Shenzhen University | Subway tunnel intelligent monitoring method and system |
CN119180731A (en) * | 2024-11-26 | 2024-12-24 | 北京久译科技有限公司 | Foreign matter detection method and device for subway work area steel support |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114529811A (en) | Rapid and automatic identification and positioning method for foreign matters in subway tunnel | |
CN112132896B (en) | Method and system for detecting states of trackside equipment | |
CN108877296B (en) | Anti-collision system based on Internet of things | |
CN108444390B (en) | Unmanned automobile obstacle identification method and device | |
CN113568002A (en) | Rail transit active obstacle detection device based on laser and image data fusion | |
CN108313088A (en) | A kind of contactless rail vehicle obstacle detection system | |
CN106808482A (en) | A kind of crusing robot multisensor syste and method for inspecting | |
CN107687953A (en) | A kind of lorry failure automatic checkout equipment | |
CN104777521A (en) | Binocular-vision-based detection system for foreign matter between train door and platform shield gate, as well as detection method for detection system | |
CN102914290A (en) | Metro gauge detecting system and detecting method thereof | |
CN112977541A (en) | Train protection early warning system based on multi-technology fusion | |
CN107796373A (en) | A kind of distance-finding method of the front vehicles monocular vision based on track plane geometry model-driven | |
CN109291063A (en) | A kind of electric operating site safety supervision machine people | |
CN117672007A (en) | Road construction area safety precaution system based on thunder fuses | |
CN115909092A (en) | Light-weight power transmission channel hidden danger distance measuring method and hidden danger early warning device | |
CN111554005A (en) | Intelligent inspection method for railway freight train | |
CN112489125A (en) | Automatic detection method and device for storage yard pedestrians | |
CN113334406A (en) | Track traffic vehicle side inspection robot system and detection method | |
CN102887155A (en) | Freight train transfinite computer vision inspection system | |
CN116476888A (en) | Subway tunnel defect identification detection device and method | |
Katsamenis et al. | Real time road defect monitoring from UAV visual data sources | |
CN116559898A (en) | Rail side signal equipment limit detection compensation system | |
CN208847836U (en) | Streetcar Collision Avoidance System | |
CN104501928A (en) | Truck scale weighing method and system on basis of vehicle accurate positioning on vehicle license plate | |
CN205754595U (en) | A kind of tunnel high definition holographic imaging apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |