CN114529811A - Rapid and automatic identification and positioning method for foreign matters in subway tunnel

Rapid and automatic identification and positioning method for foreign matters in subway tunnel

Info

Publication number
CN114529811A
CN114529811A (application CN202011214199.XA)
Authority
CN
China
Prior art keywords
foreign matters
foreign
camera
image
distance
Prior art date
Legal status
Pending
Application number
CN202011214199.XA
Other languages
Chinese (zh)
Inventor
王�忠
罗宇
王英杰
余文勇
姚辰
王挺
邵士亮
张凯
徐梁
刘敏杰
董文博
Current Assignee
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS
Priority to CN202011214199.XA
Publication of CN114529811A
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for rapid and automatic identification and positioning of foreign matter in a subway tunnel, which comprises the following steps: two identical cameras are mounted on a robot; subway tunnel images of the two cameras' monitoring areas are acquired; the acquired images are processed to judge whether foreign matter is present; when foreign matter is detected, its type is determined and its position and its distance from the cameras are calculated; according to the type, position and distance of the foreign matter, the robot grasps it. The foreign matter is identified with the YOLO v3 target recognition algorithm. A deep network model trained with the YOLO object detection framework performs real-time foreign matter detection; it is more accurate than traditional detection algorithms, and the recognition process is fast enough to achieve real-time detection.

Description

Rapid and automatic identification and positioning method for foreign matters in subway tunnel
Technical Field
The invention belongs to the field of subway tunnel foreign matter detection, and particularly relates to a method and system for detecting foreign matter in a subway tunnel.
Background
During normal subway operation, foreign matter occasionally intrudes into the safe running clearance of the train and collides or rubs against it. Because foreign matter intrusion events are sudden, uncertain and highly harmful, a collision with foreign matter interrupts operation at best and endangers driving safety at worst, causing damage to vehicle equipment and casualties, and seriously threatening the safe operation of subway trains. Therefore, detecting intruding foreign matter in time while a train is running, raising an early warning and alarm before the train arrives, and taking effective measures promptly to avoid a collision between the train and the intruding object are at present an important link in ensuring normal subway operation.
At present, detection of foreign matter in subway tunnels is mainly carried out by manual patrol inspection, which consumes a large amount of manpower and material resources and is inefficient; with the rapid development of subway rail transit, increasing train speeds and lengthening track lines, workers can hardly complete the inspection within the limited time available.
For tunnel foreign matter detection with a robot as the carrier, Chinese invention patent specification CN 109131444 A discloses an intelligent foreign matter detection device for a subway rail section, in which a monorail trolley runs on the rail and image recognition judges whether foreign matter has intruded into the subway tunnel; if foreign matter is found, it is reported to a monitoring center. However, this method consumes a large amount of resources for maintenance, and once the trolley fails, manpower and material resources are needed to inspect and repair the system. Chinese patent specification CN 108248635 A discloses an intelligent detection system for a rail transit tunnel, which monitors the inside of the tunnel in real time through an intelligent inspection robot mounted on a guide rail and transmits the detected data and information to the central control system of the platform, so that potential safety hazards inside the rail transit tunnel can be found. Although this method can detect the tunnel state in real time, it cannot recognize and pick up a foreign object. Chinese patent specification CN 109688388 A discloses a method for omnidirectional real-time monitoring with a tunnel inspection robot: after receiving an instruction, the robot monitors the interior of the tunnel in real time, including illegal driving, traffic accidents, vehicle congestion, road foreign matter and other conditions, so that tunnel abnormalities and violations can be discovered and handled promptly. Although this method can recognize foreign objects, it does not combine recognition with automatic grasping and cannot recognize and grasp a foreign object at any time. Chinese patent specification CN 108657223 A discloses an automatic inspection system for urban rail transit and a tunnel deformation detection method, in which a tunnel deformation diagram of the corresponding position is obtained by comparing a three-dimensional tunnel model reconstructed in real time with a standard model; various parameters including tunnel equipment and cable temperature, tunnel deformation and track damage are detected, and foreign matter on the track can be automatically identified and cleared, greatly saving labor costs. However, that invention focuses on an automatic inspection robot that integrates all system modules together and does not investigate a specific technical method for automatically identifying and positioning foreign matter.
Regarding existing railway track foreign matter intrusion detection technology, Chinese patent specification CN 108549087 A discloses an online detection method based on lidar, which judges whether foreign matter exists within the monitoring range through lidar monitoring; however, this method cannot recognize the type of the foreign matter or pick it up in time. Chinese utility model patent specification CN 207712053 U discloses a robot for rail transit that can rapidly inspect rail transit, remove foreign matter from the rail, and, with an inspection device of special structure, detect the rail more clearly; however, this patent does not identify the parts of the system and only describes the working principle and mode of the robot. Chinese patent specification CN 206115276 U discloses a robot system for inspecting underground foreign objects, comprising a robot body, a robot controller, a control terminal and a power supply; the robot performs the inspection and transmits the result to a central control system so that workers can learn the current situation in time, but the system cannot identify and promptly grasp foreign matter and still consumes considerable manpower and material resources. Chinese patent specification CN 108197610 A discloses a track foreign matter detection system based on deep learning, comprising a vehicle-mounted image acquisition device, an image transmission unit, a foreign matter detection device, a machine learning device and image data, which can realize real-time detection of foreign matter on the track; but the system cannot determine the specific position of the foreign matter or pick it up in time.
In summary, domestic research on foreign matter in subway tunnels has achieved certain results, but identifying the type, position and distance of foreign matter in real time and picking it up in real time with an existing robot as the carrier still need to be explored.
Disclosure of Invention
In view of the above, the invention provides a technique for rapid and automatic identification and positioning of foreign matter in a subway tunnel, which can quickly, effectively and automatically identify the type and position of the foreign matter and, combined with a robot, return the foreign matter data in real time so that a manipulator can grasp it.
The invention adopts the following technical scheme. A method for rapid and automatic identification and positioning of foreign matter in a subway tunnel comprises the following steps:
S1: two identical cameras are mounted on the robot;
S2: subway tunnel images of the two cameras' monitoring areas are acquired;
S3: the acquired images are processed to judge whether foreign matter is present in them;
S4: when foreign matter is detected, its type is determined and its position and its distance from the cameras are calculated;
S5: according to the type, position and distance of the foreign matter, the robot grasps it.
S3 includes the following steps:
acquiring an image with either of the two cameras and applying image enhancement and filtering;
judging and predicting the type and position of the foreign matter in the image with a neural network.
Training of the neural network comprises the following steps:
(1) image data of foreign matter in the subway tunnel are acquired in advance by the two cameras, and the foreign matter in the images acquired by each camera is labeled, its position represented by a bounding box containing the foreign matter and its type by the label name;
(2) a YOLO neural network model for foreign matter detection is built and trained on the labeled images acquired by each camera, yielding a trained YOLO neural network model that identifies foreign matter in monitoring images acquired by either camera in real time.
The distance between the foreign matter and the cameras is calculated with binocular ranging, which comprises the following step:
according to the camera parameters, the parallax of the same foreign object between the two cameras is obtained, from which the depth of the foreign object in the image, i.e. its distance from the cameras, is obtained.
Calculating the distance between the foreign matter and the cameras with binocular ranging further comprises the following steps:
calibrating a binocular camera in advance to obtain camera parameters of the two cameras, correcting images respectively obtained by the two cameras to enable the two corrected images to be located on the same plane and to be parallel to each other, calculating the depth of each pixel according to the matching result of the two corrected images to obtain a depth map, and obtaining the distance between the foreign matter and the cameras according to the depth map.
The distance between the foreign object and the camera is obtained by the following formula:
Z = c · b / d
wherein c is the focal length of the cameras, b is the baseline length, i.e. the distance between the optical centers of the two cameras, d is the parallax, and Z is the distance from the foreign object to the midpoint of the line connecting the two cameras.
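As a simple numerical illustration of this formula (not part of the patent text), the following Python sketch converts a measured pixel disparity into a metric distance; the focal length, baseline and disparity values are assumed placeholders.

```python
def disparity_to_distance(c_px: float, b_m: float, d_px: float) -> float:
    """Depth Z from the binocular ranging formula Z = c * b / d.

    c_px: focal length expressed in pixels
    b_m:  baseline length between the two optical centers, in meters
    d_px: disparity of the same point between left and right images, in pixels
    """
    if d_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return c_px * b_m / d_px

# Example with assumed values: 700 px focal length, 0.12 m baseline, 35 px disparity
print(disparity_to_distance(700.0, 0.12, 35.0))  # prints 2.4 (meters)
```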
A system for rapid and automatic identification and positioning of foreign matter in a subway tunnel includes:
an image processing module, which processes the subway tunnel images acquired by the binocular camera and judges whether foreign matter is present in the images;
an image detection module, which, when foreign matter is detected, judges its type and calculates its position and its distance from the cameras, so that the robot can grasp the foreign matter.
Generally, compared with the prior art, the above technical solution of the present invention mainly has the following technical advantages:
1. Foreign matter is identified with the YOLO v3 target recognition algorithm. A deep network model trained with the YOLO object detection framework performs real-time foreign matter detection; it is more accurate than traditional detection algorithms, and the recognition process is fast enough to achieve real-time detection.
2. The distance to the foreign matter is calculated with binocular ranging. Compared with monocular ranging, binocular ranging can compute distance directly from parallax, achieves higher precision than a single camera, and obtains more accurate position and distance information from the images, meeting the recognition and calculation requirements well.
3. This technique uses a robot as the carrier. While quickly and accurately identifying and locating foreign matter, it is combined with the robot to realize real-time detection and real-time grasping of the foreign matter, ensuring that the subway tunnel is free of foreign matter at all times, safeguarding safe subway operation and protecting the health and safety of drivers and passengers.
Drawings
FIG. 1 is a schematic flow chart of the present technique;
FIG. 2 is a schematic diagram of a pinhole model;
FIG. 3 is a schematic diagram of a binocular imaging model.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention comprises the following steps:
S1: install the cameras as required, mounting two identical cameras at suitable positions on the robot;
S2: acquire images of the camera monitoring areas;
S3: process the acquired images and judge whether foreign matter is present in them;
S4: when foreign matter is detected, judge its type and calculate its position and its distance from the camera (the robot);
S5: the robot grasps the foreign matter according to its type, position and distance data.
Preferably, for step S1, the system is combined with existing robot technology, and foreign matter is found and picked up in time using the grasping function of the robot's manipulator.
Preferably, for step S1, the robot chosen as the system carrier should have both a locomotion function and a grasping function. The locomotion function should be able to adjust the direction and speed of movement so as to approach the foreign object correctly, and the grasping function should be able to adjust the opening, angle and force of the manipulator so as to grasp the foreign object successfully.
Preferably, for step S1, the two cameras should be mounted at suitable positions on the robot so that the tunnel scene can be monitored globally.
Preferably, in step S3, the images obtained by the cameras are first subjected to image enhancement, filtering and other processing to obtain pictures that meet the recognition requirements.
Preferably, for step S3, the object recognition and positioning algorithm YOLO v3, based on a deep neural network, is used to judge and predict the type and position of the foreign matter in the image. YOLO v3 is characterized by a high running speed suitable for real-time detection, a low background false-detection rate and a high recognition accuracy, and it meets the technical requirements.
Implementing the YOLO recognition algorithm involves the following steps: (1) a large amount of image data is collected in advance, and the foreign matter in each image is labeled, its position represented by a bounding box containing the foreign matter and its type by the label name; (2) a foreign matter detection and recognition model is built, and a YOLO neural network model is trained with the collected tunnel foreign matter images to obtain a trained foreign matter detection model; (3) the trained foreign matter detection and recognition model is tested on monitoring images collected by the camera, and the foreign matter is identified.
During robot inspection, the camera captures image data and passes it as input to the neural network for detection; if foreign matter is present, its specific position and category are identified; if not, the next frame of the image stream is processed.
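As an illustrative sketch (not the patented implementation), the per-frame detection loop can be written with OpenCV's DNN module; the configuration and weight file names, the class list and the thresholds below are assumptions.

```python
import cv2
import numpy as np

# Assumed file names for a Darknet YOLO v3 model trained on the tunnel foreign-matter data set.
net = cv2.dnn.readNetFromDarknet("yolov3_tunnel.cfg", "yolov3_tunnel.weights")
CLASSES = ["backpack", "handbag", "plastic_bag", "bottle"]  # example label names

def detect_foreign_matter(frame, conf_thresh=0.5, nms_thresh=0.4):
    """Run YOLO v3 on one frame and return (class_name, confidence, [x, y, w, h]) tuples."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_thresh:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)
                class_ids.append(class_id)

    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_thresh, nms_thresh)
    return [(CLASSES[class_ids[i]], confidences[i], boxes[i]) for i in np.array(keep).flatten()]
```

If the returned list is empty, the frame contains no foreign matter and the next frame is processed.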
Preferably, for step S4, the distance between the foreign matter and the camera (robot) is calculated with binocular ranging. According to the camera imaging model, the same object observed from different positions appears at different positions in the image. The principle of binocular ranging is similar to that of human eyes: the eyes perceive the distance of an object because the images of the same object formed by the two eyes differ, and this difference is called parallax. The farther the object, the smaller the parallax; conversely, the closer the object, the larger the parallax. By calculating the parallax of the same object between the two cameras, the depth of the object in the camera image, i.e. its distance from the cameras, can be computed. Binocular ranging generally requires the following steps: (1) calibrate the binocular cameras to obtain the intrinsic and extrinsic parameters and the homography matrix of the two cameras; (2) rectify the original images according to the calibration result so that the two rectified images lie in the same plane and are parallel to each other; (3) match pixels between the two rectified images; (4) calculate the depth of each pixel from the matching result, thereby obtaining a depth map.
Preferably, for step S5, the type, position and distance data of the foreign matter obtained in step S4 are returned to the robot, which adjusts its own position in real time according to this information, approaches the foreign matter and grasps it.
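A high-level control cycle tying detection, ranging and grasping together could look like the sketch below. The Robot interface (move_towards, grasp) is entirely hypothetical and is introduced only to illustrate how the returned type, position and distance data drive the approach-and-grasp behavior; the detections would come from the YOLO detector and the locate callable from the binocular ranging described later.

```python
from typing import Callable, List, Tuple

Detection = Tuple[str, float, List[int]]  # (class name, confidence, [x, y, w, h] box)

class Robot:
    """Hypothetical robot interface, for illustration only."""
    def move_towards(self, x: float, y: float, z: float) -> None:
        print(f"moving towards ({x:.2f}, {y:.2f}, {z:.2f}) m")
    def grasp(self, object_type: str) -> bool:
        print(f"grasping {object_type}")
        return True

def inspection_step(robot: Robot,
                    detections: List[Detection],
                    locate: Callable[[List[int]], Tuple[float, float, float]],
                    reach_m: float = 0.3) -> str:
    """One cycle: pick the most confident detection, approach it, grasp it when within reach."""
    if not detections:
        return "no_foreign_matter"                      # nothing found, process the next frame
    obj_type, _, box = max(detections, key=lambda d: d[1])
    x, y, z = locate(box)                               # binocular ranging result (X, Y, Z)
    if z > reach_m:
        robot.move_towards(x, y, z)                     # not yet within reach: keep approaching
        return "approaching"
    return "grasped" if robot.grasp(obj_type) else "grasp_failed"
```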
FIG. 1 shows a schematic flow chart of the rapid automatic identification and positioning technique for foreign matter in a subway tunnel according to the present invention, which comprises the following steps:
S1: install the cameras as required, mounting two identical cameras at suitable positions on the robot;
To realize detection of foreign matter in the subway tunnel, cameras are added to the robot. In the invention, two cameras acquire the subway tunnel monitoring images: the image acquired by a single camera is used to recognize and detect foreign matter in the tunnel, while the real-time calculation of the distance between the foreign matter and the cameras uses the fact that, when the two cameras photograph the same foreign matter at the same moment from different positions, the foreign matter appears at different positions in the two images and thus produces a parallax.
Meanwhile, the foreign matter is picked up by the robot's manipulator. This technique uses the robot as the carrier: once the system has calculated the specific type and distance of the foreign matter, the data are transmitted to the robot, which performs the corresponding actions to approach and pick up the foreign matter, thereby realizing real-time detection and real-time grasping of the foreign matter.
When the cameras are added to the robot, the structure of the robot is not modified, for cost and structural reasons. Besides the structure, the mounting position and angle of the cameras are also considered: the two cameras are placed as symmetrically as possible about the robot's center line and oriented along the same horizontal line as far as possible, so as to obtain a longer field of view and simpler calculation.
S2: acquire images of the camera monitoring areas;
The camera parameters are adjusted repeatedly during the experiments, and the parameters of the two cameras are fixed before operation, so that the images collected by the cameras are clear and accurate, preparing for later image recognition. The collected images are transmitted to the system through the image transmission unit for further processing.
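A minimal sketch of acquiring a frame pair from the two cameras with OpenCV is given below; the device indices, resolution and exposure value are assumptions and would be replaced by the fixed parameters chosen during the experiments.

```python
import cv2

def open_camera(index: int, width: int = 1280, height: int = 720, exposure: float = -6.0):
    """Open one camera and fix its parameters before operation (values are examples)."""
    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0)   # intent: disable auto exposure (value is backend-dependent)
    cap.set(cv2.CAP_PROP_EXPOSURE, exposure)
    return cap

left_cam, right_cam = open_camera(0), open_camera(1)   # assumed device indices

ok_left, left_frame = left_cam.read()
ok_right, right_frame = right_cam.read()
if ok_left and ok_right:
    pass  # hand the frame pair to the image processing module
```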
S3: process the acquired images and judge whether foreign matter is present in them;
S4: when foreign matter is detected, judge its type and calculate its position and its distance from the camera (the robot);
In step S3, the acquired images undergo image enhancement, filtering and other processing. Because the subway tunnel is relatively dark, image enhancement makes the acquired image clearer and brighter, so that the foreign matter is separated more distinctly from the background; filtering removes background noise that could interfere with detection and prepares the image for foreign matter recognition.
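One common way to realize this preprocessing, given as an illustrative sketch rather than the patent's specific method, is contrast-limited adaptive histogram equalization (CLAHE) on the luminance channel followed by median filtering:

```python
import cv2

def preprocess(frame):
    """Brighten and denoise a dark tunnel image before recognition."""
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # enhance local contrast
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    return cv2.medianBlur(enhanced, 3)                           # suppress impulsive noise
```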
Steps S3 and S4 cover the rapid detection and recognition of foreign matter in the subway tunnel: the object recognition and positioning algorithm YOLO v3, based on a deep neural network, is used to judge and predict the type and position of the foreign matter in the image.
Detecting foreign matter with the YOLO algorithm requires the following steps. 1) Collect a large data set: subway tunnel pictures containing foreign matter are found on the internet or photographed on site; the foreign matter includes laptop bags, men's and women's backpacks, handbags, plastic bags filled with articles, express packaging bags, water cups, medicine bottles, mineral water bottles, cannon-shot cylinders and the like, and these pictures serve as the training data set. The collected pictures are annotated with a labeling tool such as labelImg: a bounding box is drawn around the object to mark the position and size of the foreign matter, the type of foreign matter is filled in as the label name, and the annotation files are saved in xml format. 2) Modify the Makefile so that training runs on the GPU, which greatly increases training speed; train the network on the data set using the network structure and training algorithm provided by the YOLO framework to obtain the required weights. 3) Run network prediction with the trained model; the test results return the bounding-box coordinates of the foreign matter (the box center coordinates and the box width and height) and its type. The training parameters are adjusted continuously according to the test results and the network structure is optimized so that the final model meets the practical requirements.
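Because labelImg saves annotations in Pascal VOC xml format while the Darknet/YOLO framework expects one normalized text line per box, a small conversion step is usually needed. The sketch below is illustrative and not part of the patent text; the class list and file paths are assumptions.

```python
import xml.etree.ElementTree as ET

CLASSES = ["backpack", "handbag", "plastic_bag", "bottle"]  # example label names

def voc_to_yolo(xml_path: str, txt_path: str) -> None:
    """Convert one labelImg xml annotation into YOLO 'class cx cy w h' lines (normalized)."""
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    with open(txt_path, "w") as out:
        for obj in root.findall("object"):
            cls = CLASSES.index(obj.find("name").text)
            box = obj.find("bndbox")
            xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
            xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
            cx, cy = (xmin + xmax) / 2 / img_w, (ymin + ymax) / 2 / img_h
            bw, bh = (xmax - xmin) / img_w, (ymax - ymin) / img_h
            out.write(f"{cls} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}\n")
```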
The calculation of the distance between the foreign object and the camera (robot) mentioned in step S4 is performed by binocular distance measurement.
Binocular ranging can compute distance directly from the parallax, with higher precision than a single camera, and obtains more accurate position and distance information from the images. It is not limited by a recognition rate: in principle nothing has to be recognized before it is measured, since all obstacles are measured directly. Binocular vision also does not require maintaining a sample database, because the binocular method involves no notion of samples.
The imaging mechanism of the camera is approximately described by a pinhole model, as shown in FIG. 2. M is an object point in the real scene, O is the optical center of the camera, O' is the projection of the optical center on the image plane, OO' is the optical axis of the camera, and M' is the image point of M on the image plane P.
For simplicity, consider two cameras with parallel optical axes, identical parameters and a suitable separation; this is the most idealized model of a pair of eyes. The imaging of the same object point P is shown in FIG. 3.
Referring to FIG. 3, P is a point on the object to be measured, and O' and O'' are the optical centers of the left and right cameras. The imaging points of P on the two camera sensors are p' and p'' respectively (the imaging planes are drawn rotated to lie in front of the lenses), c is the focal length of the cameras, b is the baseline, i.e. the distance between the two optical centers, and the distance from P to O' in the depth direction is denoted Z.
the X/Y coordinates of the point P are:
Figure BDA0002759810370000091
in the depth direction:
Figure BDA0002759810370000092
thus, it can be deduced that:
Figure BDA0002759810370000093
(d is parallax) (3)
Where X' and X "are the distances of the point P in the X direction of the image point shift on the imaging plane, respectively, and the difference between the two is defined as the parallax, i.e. the difference between the image positions of the image points in different cameras, and is denoted by d.
The binocular measurement technique determines the target position by the principle of parallax.
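The relations above translate directly into a small triangulation helper. This sketch assumes the focal length and image coordinates are expressed in pixels with the principal point already subtracted, and the baseline in meters; it is an illustration, not text from the patent.

```python
def triangulate(x_left: float, y_left: float, x_right: float,
                c_px: float, b_m: float):
    """Recover (X, Y, Z) of a point from its left/right image coordinates.

    x_left, y_left : image coordinates in the left camera, relative to the principal point (pixels)
    x_right        : horizontal image coordinate in the right camera (pixels)
    c_px           : focal length in pixels; b_m: baseline in meters
    """
    d = x_left - x_right                 # parallax, equation (3)
    if d <= 0:
        raise ValueError("parallax must be positive")
    Z = c_px * b_m / d                   # depth from Z = c * b / d
    X = x_left * Z / c_px                # back-projection with the pinhole model
    Y = y_left * Z / c_px
    return X, Y, Z
```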
Binocular ranging generally requires the following steps: (1) calibrate the binocular cameras to obtain the intrinsic and extrinsic parameters and the homography matrix of the two cameras; (2) rectify the original images according to the calibration result so that the two rectified images lie in the same plane and are parallel to each other; (3) match pixels between the two rectified images; (4) calculate the depth of each pixel from the matching result, thereby obtaining a depth map.
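These four steps correspond closely to OpenCV's stereo pipeline. The sketch below assumes that step (1) has already produced the intrinsic matrices, distortion coefficients and the rotation and translation between the cameras; parameter values such as the number of disparities are placeholders, not values taken from the patent.

```python
import cv2
import numpy as np

def depth_map(left_img, right_img, K1, D1, K2, D2, R, T):
    """Steps (2)-(4): rectify the image pair, match pixels, and compute a per-pixel depth map."""
    h, w = left_img.shape[:2]
    # Step (2): rectification, so that both images lie in the same plane and are row-aligned.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, T)
    map_l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
    rect_l = cv2.remap(left_img, map_l[0], map_l[1], cv2.INTER_LINEAR)
    rect_r = cv2.remap(right_img, map_r[0], map_r[1], cv2.INTER_LINEAR)

    # Step (3): pixel matching with semi-global block matching to obtain the disparity map.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(cv2.cvtColor(rect_l, cv2.COLOR_BGR2GRAY),
                                cv2.cvtColor(rect_r, cv2.COLOR_BGR2GRAY)).astype(np.float32) / 16.0

    # Step (4): per-pixel depth from the disparity; the Z channel is the depth map.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    return points_3d[..., 2]
```

The distance between the detected foreign matter and the cameras can then be read from the depth map at the pixels inside the YOLO bounding box.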
S5: the robot grabs the foreign matters according to the types, positions and distance data of the foreign matters.
In conclusion, the invention realizes rapid identification of the type and position of foreign matter through the YOLO v3 target recognition and positioning algorithm, calculates the distance of the foreign matter through binocular ranging, and integrates these functions with a robot as the carrier, realizing real-time, rapid identification and positioning of foreign matter in the subway tunnel, picking up the foreign matter promptly and ensuring the driving safety of subway trains.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A method for rapid and automatic identification and positioning of foreign matter in a subway tunnel, characterized by comprising the following steps:
S1: two identical cameras are mounted on the robot;
S2: subway tunnel images of the two cameras' monitoring areas are acquired;
S3: the acquired images are processed to judge whether foreign matter is present in them;
S4: when foreign matter is detected, its type is determined and its position and its distance from the cameras are calculated;
S5: according to the type, position and distance of the foreign matter, the robot grasps it.
2. The method for rapid and automatic identification and positioning of foreign matter in a subway tunnel according to claim 1, wherein said S3 comprises the steps of:
acquiring an image with either of the two cameras and applying image enhancement and filtering;
judging and predicting the type and position of the foreign matter in the image with a neural network.
3. The method for rapid and automatic identification and positioning of foreign matter in a subway tunnel according to claim 2, wherein the training of the neural network comprises the following steps:
(1) image data of foreign matter in the subway tunnel are acquired in advance by the two cameras, and the foreign matter in the images acquired by each camera is labeled, its position represented by a bounding box containing the foreign matter and its type by the label name;
(2) a YOLO neural network model for foreign matter detection is built and trained on the labeled images acquired by each camera, yielding a trained YOLO neural network model that identifies foreign matter in monitoring images acquired by either camera in real time.
4. The method for rapid and automatic identification and positioning of foreign matter in a subway tunnel according to claim 1, wherein calculating the distance between the foreign matter and the camera using binocular ranging comprises the following step:
according to the camera parameters, the parallax of the same foreign object between the two cameras is obtained, from which the depth of the foreign object in the image, i.e. its distance from the cameras, is obtained.
5. The method for rapid and automatic identification and positioning of foreign matter in a subway tunnel according to claim 1, wherein calculating the distance between the foreign matter and the camera using binocular ranging comprises the following steps:
calibrating a binocular camera in advance to obtain camera parameters of the two cameras, correcting images respectively obtained by the two cameras to enable the two corrected images to be located on the same plane and to be parallel to each other, calculating the depth of each pixel according to the matching result of the two corrected images to obtain a depth map, and obtaining the distance between the foreign matter and the cameras according to the depth map.
6. The method for rapid and automatic identification and positioning of foreign matter in a subway tunnel according to claim 4 or 5, wherein the distance between the foreign object and the camera is obtained by the following formula:
Z = c · b / d
wherein c is the focal length of the cameras, b is the baseline length, i.e. the distance between the optical centers of the two cameras, d is the parallax, and Z is the distance from the foreign object to the midpoint of the line connecting the two cameras.
7. A system for rapid and automatic identification and positioning of foreign matter in a subway tunnel, characterized by comprising:
an image processing module, which processes the subway tunnel images acquired by the binocular camera and judges whether foreign matter is present in the images;
an image detection module, which, when foreign matter is detected, judges its type and calculates its position and its distance from the cameras, so that the robot can grasp the foreign matter.
CN202011214199.XA 2020-11-04 2020-11-04 Rapid and automatic identification and positioning method for foreign matters in subway tunnel Pending CN114529811A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011214199.XA CN114529811A (en) 2020-11-04 2020-11-04 Rapid and automatic identification and positioning method for foreign matters in subway tunnel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011214199.XA CN114529811A (en) 2020-11-04 2020-11-04 Rapid and automatic identification and positioning method for foreign matters in subway tunnel

Publications (1)

Publication Number Publication Date
CN114529811A (en) 2022-05-24

Family

ID=81618622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011214199.XA Pending CN114529811A (en) 2020-11-04 2020-11-04 Rapid and automatic identification and positioning method for foreign matters in subway tunnel

Country Status (1)

Country Link
CN (1) CN114529811A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035060A (en) * 2022-06-07 2022-09-09 贵州聚原数技术开发有限公司 Tunnel wall deformation detection method based on computer image recognition
CN115423777A (en) * 2022-09-05 2022-12-02 三一重型装备有限公司 Roadway defect positioning method and device, readable storage medium and engineering equipment
CN115892131A (en) * 2023-02-15 2023-04-04 深圳大学 Intelligent monitoring method and system for subway tunnel

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004059900A2 (en) * 2002-12-17 2004-07-15 Evolution Robotics, Inc. Systems and methods for visual simultaneous localization and mapping
CN108876855A (en) * 2018-05-28 2018-11-23 哈尔滨工程大学 A kind of sea cucumber detection and binocular visual positioning method based on deep learning
CN208868062U (en) * 2018-07-23 2019-05-17 中国安全生产科学研究院 A kind of urban track traffic automatic tour inspection system
CN110217271A (en) * 2019-05-30 2019-09-10 成都希格玛光电科技有限公司 Fast railway based on image vision invades limit identification monitoring system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
于晓英; 苏宏升; 姜泽; 董昱: "Detection Method of Railway Intrusion Foreign Objects Based on YOLO", Journal of Lanzhou Jiaotong University (兰州交通大学学报), no. 02, 15 April 2020 (2020-04-15) *
任新新; 胡文韬; 吕海翔; 刘能; 樊绍胜: "Research and Design of a Cable Duct Inspection and Cleaning Robot", Journal of Electric Power (电力学报), no. 02, 25 April 2016 (2016-04-25) *

Similar Documents

Publication Publication Date Title
CN114529811A (en) Rapid and automatic identification and positioning method for foreign matters in subway tunnel
CN111958592B (en) Image semantic analysis system and method for transformer substation inspection robot
CN106808482B (en) A kind of crusing robot multisensor syste and method for inspecting
KR102065975B1 (en) Safety Management System Using a Lidar for Heavy Machine
CN109829908B (en) Binocular image-based method and device for detecting safety distance of ground object below power line
CN112235537B (en) Transformer substation field operation safety early warning method
CN106679567A (en) Contact net and strut geometric parameter detecting measuring system based on binocular stereoscopic vision
CN110246175A (en) Intelligent Mobile Robot image detecting system and method for the panorama camera in conjunction with holder camera
CN110602449A (en) Intelligent construction safety monitoring system method in large scene based on vision
CN102737236A (en) Method for automatically acquiring vehicle training sample based on multi-modal sensor data
CN112132896A (en) Trackside equipment state detection method and system
CN106162144A (en) A kind of visual pattern processing equipment, system and intelligent machine for overnight sight
CN114812403B (en) Large-span steel structure hoisting deformation monitoring method based on unmanned plane and machine vision
CN110136186B (en) Detection target matching method for mobile robot target ranging
CN111160220B (en) Deep learning-based parcel detection method and device and storage medium
CN102788572A (en) Method, device and system for measuring attitude of engineering machinery lifting hook
CN115752462A (en) Method, system, electronic equipment and medium for inspecting key inspection targets in building
CN213518003U (en) A patrol and examine robot and system of patrolling and examining for airport pavement
CN111444891A (en) Unmanned rolling machine operation scene perception system and method based on airborne vision
CN109986605A (en) A kind of intelligence automatically tracks robot system and method
Sutjaritvorakul et al. Data-driven worker detection from load-view crane camera
CN112000094A (en) Single-and-double-eye combined high-voltage transmission line hardware fitting online identification and positioning system and method
Liu et al. An approach for auto bridge inspection based on climbing robot
CN110702016A (en) Power transmission line icing measurement system and method
CN107339950A (en) A kind of track quick track switching operating operation sleeper bolt method for detecting position

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination