CN108491758B - Track detection method and robot - Google Patents

Track detection method and robot

Info

Publication number
CN108491758B
Authority
CN
China
Prior art keywords
robot
image
track
real
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810126117.2A
Other languages
Chinese (zh)
Other versions
CN108491758A
Inventor
冯平
卢思岑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ruiling Innovation Technology Development Co ltd
Original Assignee
Shenzhen Ruiling Innovation Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ruiling Innovation Technology Development Co ltd filed Critical Shenzhen Ruiling Innovation Technology Development Co ltd
Priority to CN201810126117.2A
Publication of CN108491758A
Application granted
Publication of CN108491758B
Status: Expired - Fee Related


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/40 - Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a track detection method and a robot. The track detection method includes: collecting, by the robot, a real-time video stream of the track; determining the position of the robot by visual positioning based on the real-time video stream; determining a travel scheme of the robot according to the position of the robot; controlling the robot to travel according to the travel scheme; detecting whether a target image exists among the frame images of the real-time video stream, where a target image is an image showing a fault area of the track; and, if a target image exists, uploading it to a server so that the server performs a secondary analysis on the target image and, based on the result of that analysis, matches the target image to the corresponding client for display. The scheme of the invention realizes automatic detection of the infrastructure and basic equipment of rail transit and ensures the safe operation of the rail transit system.

Description

Track detection method and robot
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a track detection method and a robot.
Background
Over the last decade, the development of rail transit has been concentrated in first-tier cities and provincial capitals, and it is now spreading to second- and third-tier cities. Safety is the first requirement of rail transit. To ensure it, the infrastructure and basic equipment of rail transit follow a regime of 48-hour, monthly, semi-annual, annual and five-year inspections, some of which, such as the 48-hour inspection, depend almost entirely on manual work. Traditional manual detection not only consumes a large amount of manpower and material resources, but also suffers from false detections, missed detections, and safety hazards for the inspection personnel.
Disclosure of Invention
In view of this, the invention provides a track detection method and a robot, which can realize automatic detection of the infrastructure and basic equipment of rail transit and ensure the safe operation of a rail transit system.
A first aspect of the present invention provides a track detection method, including:
the robot collects a real-time video stream of the track;
determining a position of the robot by visual positioning based on the real-time video stream;
determining a travel scheme of the robot according to the position of the robot;
controlling the robot to travel according to the travel scheme;
detecting whether a target image exists in each frame image of the real-time video stream, wherein the target image is an image displaying a fault area of the track;
if the target image exists in each frame image of the real-time video stream, uploading the target image to a server, so that the server performs secondary analysis on the target image, and matching the target image to a corresponding client for display based on the result of the secondary analysis.
A second aspect of the present invention provides a robot applied to the field of rail transit, the robot including:
the video acquisition module is used for acquiring a real-time video stream of the track;
a visual positioning module for determining the position of the robot by visual positioning based on the real-time video stream;
the scheme determining module is used for determining a traveling scheme of the robot according to the position of the robot;
a travel control module for controlling the robot to travel according to the travel scheme;
a fault detection module, configured to detect whether a target image exists in each frame image of the real-time video stream, where the target image is an image in which a fault area of the track is displayed;
and the information uploading module is used for uploading the target image to a server when the target image exists in each frame image of the real-time video stream, so that the server performs secondary analysis on the target image and matches the target image to the corresponding client for display based on the result of the secondary analysis.
As can be seen from the above, in the present invention, the robot collects a real-time video stream of the track, determines its position by visual positioning based on that stream, determines its travel scheme according to its position, and travels according to the travel scheme; meanwhile it detects whether a target image, that is, an image showing a fault area of the track, exists among the frames of the real-time video stream, and if so uploads the target image to a server, so that the server performs a secondary analysis on it and, based on the result, matches it to the corresponding client for display. The scheme of the invention hands the daily inspection of the track over to the robot, which rapidly identifies fault areas during inspection, thereby reducing the possibility of false and missed detections, realizing automatic detection of the infrastructure and basic equipment of rail transit, and ensuring the safe operation of the rail transit system.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic flow chart of an implementation of a track detection method provided in an embodiment of the present invention;
fig. 2(a) is a block diagram of a track detection system according to an embodiment of the present invention;
FIG. 2(b) is a flowchart illustrating an implementation of step 105 in the embodiment shown in FIG. 1;
FIG. 3 is a flowchart illustrating an implementation of step 102 in the embodiment shown in FIG. 1;
FIG. 4(a) is a schematic diagram of the coordinate system in step 301 in the embodiment shown in FIG. 3;
FIG. 4(b) is a schematic diagram of the central line of the image and the central line of the track in the image in step 303 in the embodiment shown in FIG. 3;
FIG. 5 is a schematic diagram of a specific implementation flow of step 103 in the embodiment shown in FIG. 1;
fig. 6 is a block diagram of a robot according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical solution of the present invention, the following description will be given by way of specific examples.
Example one
Fig. 1 shows an implementation flow of a track detection method provided in an embodiment of the present invention, which is detailed as follows:
In step 101, the robot collects a real-time video stream of the track;
In the embodiment of the invention, when starting a robot applied to the field of rail transit, a maintainer first places the robot at the track, for example by mounting a rail-mounted robot on a steel rail or flying a flying robot to the vicinity of the track. After the robot is started, the frame rate of the image acquisition device (such as a camera) is set according to the sampling theorem, and the real-time video stream of the track is then collected. Specifically, the robot carries a plurality of cameras, so track images at different angles can be acquired from the cameras' different mounting positions; depending on the camera type, real-time video streams of different video types can be acquired, where the video types include, but are not limited to, infrared video, gray-scale video and multi-channel ultra-high-definition video. Specifically, the multi-channel ultra-high-definition images are used to evaluate faults such as bridge cracks and foreign matter on the overhead contact network; the infrared images are used to evaluate faults such as water seepage in tunnels and bridges and contact network faults; that is, different types of video are used to evaluate different types of faults. Optionally, after the robot acquires the various real-time video streams of the track, it may check whether each frame reaches a preset image quality condition, and remove from the stream any ultra-high-definition and/or infrared images that fail the condition, so that poor-quality images do not affect the analysis performed by the robot and the server, reducing the possibility of erroneous analysis results.
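The patent does not fix a particular image quality condition; as one concrete possibility, a sharpness gate based on the variance of the Laplacian (a standard blur measure) could drop degraded frames before analysis. A minimal sketch, assuming OpenCV and BGR frames; the threshold value is an illustrative assumption:

```python
import cv2

BLUR_THRESHOLD = 100.0  # assumed value; the patent only requires a preset condition

def meets_quality_condition(frame):
    """Reject blurred or washed-out frames: a low variance of the Laplacian
    means few sharp edges, i.e. a blurry image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= BLUR_THRESHOLD

def filter_stream(frames):
    """Drop frames failing the quality gate before any further analysis."""
    return [f for f in frames if meets_quality_condition(f)]
```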
In step 102, determining the position of the robot by visual positioning based on the real-time video stream;
In the embodiment of the present invention, a real-time image may be acquired from the real-time video stream; specifically, the most recently captured frame may be taken as the real-time image. After the robot is started, its position can be determined by visual positioning based on the real-time image. Optionally, a GPS module or another positioning module may be mounted inside the robot: the robot is first coarsely positioned by that module and then accurately positioned by visual positioning on the real-time track images; that is, the coarse positioning narrows the search down to a small area within which precise positioning is performed, reducing the computational load of the precise positioning. Since the position of a track does not change once it is laid, an environment map of the track can be obtained in advance and the robot trained on it before deployment; after the robot is started and acquires real-time images of the track, its position can be determined through image processing operations such as feature point extraction on the track images. Of course, the position of the robot may also be determined in other ways according to the type of the robot, which is not limited herein.
In step 103, determining a travel plan of the robot according to the position of the robot;
in the embodiment of the present invention, the traveling scheme of the robot may be further determined according to the current position of the robot determined in step 102, specifically, the traveling route of the robot may be planned through the route of the track and the current position of the robot, and when the current position of the robot overlaps the route of the track, the robot is controlled to travel along the route of the track; when the current position of the robot deviates from the position of the track, the robot is controlled to advance to the position overlapped with the route of the track and then to follow the route of the track. Of course, the travel scheme of the robot may be determined in other ways, and is not limited herein.
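A minimal sketch of this decision, assuming the track route is available as a list of 2-D waypoints and the on-route tolerance is a configurable parameter (both are assumptions; the patent does not specify a route representation):

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_travel(robot_pos, route_points, tolerance_m=0.2):
    """If the robot already lies on the track route (within tolerance), follow
    the route; otherwise move to the nearest route point first, then follow."""
    nearest = min(route_points, key=lambda p: dist(robot_pos, p))
    if dist(robot_pos, nearest) <= tolerance_m:
        return [("follow", nearest)]
    return [("goto", nearest), ("follow", nearest)]
```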
In step 104, controlling the robot to travel according to the travel scheme;
in the embodiment of the present invention, the robot may continuously travel according to the travel scheme, and continuously update its own travel scheme through steps 101 to 103 during the travel.
In step 105, detecting whether a target image exists in each frame image of the real-time video stream;
In the embodiment of the present invention, image processing is performed on the real-time video stream by embedded hardware mounted on the robot to detect whether a target image exists among the frames of the real-time video stream, where a target image is an image showing a fault area of the track. In fact, since the track itself consists of two parallel rails and a series of equally spaced sleepers, images of a normal track are usually very similar; that is, in a real-time video stream of a normal track, the similarity between two adjacent frames should be high. A change of scene therefore very likely indicates a track fault, and on this basis it is possible to detect whether a target image exists among the frames. For example, if a foreign object appears at point A of the track and everything else is normal, then when point A is filmed, the frames containing point A will differ markedly from the frames that do not contain it; that is, the track images with the foreign object and those without it necessarily differ greatly, so the presence of a foreign object can be judged from the similarity between adjacent frames. Of course, other faults, such as large cracks, may likewise be determined from the similarity between adjacent frames, which is not limited herein.
In step 106, if the target image exists in each frame image of the real-time video stream, the target image is uploaded to a server.
In the embodiment of the present invention, a robot, a server, a personal computer (PC) and/or a mobile terminal may form a track detection system; fig. 2(a) shows a structural block diagram of this system. The server may further include a data management platform and a database, where the database stores the various information acquired by the robot and the data management platform provides an interactive interface for displaying it, so that service personnel can interact with the robot through the data management platform on the server side.

In this track detection system, once a target image is detected it is uploaded to the server over a network, so that the server performs a secondary analysis on it and notifies a maintainer based on the result. Specifically, the PC and the mobile terminal may run a client of the track detection system; after the client logs in, the target image can be matched, based on the result of the secondary analysis, to the corresponding client for display, for example a client on the PC and/or a client on the mobile terminal, which is not limited herein. Through the client, the data management platform and the database (that is, the server) can be accessed. The network may be a Wireless Fidelity (WiFi) network or a mobile data network, such as a General Packet Radio Service (GPRS) network, a third-generation mobile communication network (3G) or a fourth-generation mobile communication network (4G), which is not limited herein.

Because the robot may make errors while detecting target images in the real-time video stream, for example when strong light makes part of a captured image blurred or unclear, it may erroneously upload an image without a fault area as a target image; to improve the accuracy of fault detection, the server therefore performs a secondary analysis on the target image and stores it in the database. If the secondary analysis confirms that the target image does contain a fault area, that is, that the track shown in the image is faulty, information related to the target image may be displayed on the server's data management platform. Further, the server may decide, based on the result of the secondary analysis, whether a maintainer should be dispatched to repair the fault area, and if so, display the related information of the target image on the client of a pre-bound mobile terminal and/or PC. Specifically, if the track is a subway track, the subway line, fault type, fault degree, acquisition time and acquisition location indicated by the target image may be displayed on the client through the data management platform; of course, other related information may also be displayed, which is not limited herein. The acquisition location indicates the specific position on the track where the fault occurred.
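The patent leaves the transport unspecified beyond "a network"; as an illustration, the upload of a target image together with its acquisition time and place (cf. claim 5) might look as follows. The endpoint URL and field names are hypothetical:

```python
import requests

SERVER_URL = "http://track-server.example/api/target-images"  # hypothetical endpoint

def upload_target_image(image_path, captured_at, location):
    """Send a suspected-fault frame plus its acquisition time and place, so the
    server can run the secondary analysis and route the result to a client."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            SERVER_URL,
            files={"image": f},
            data={"captured_at": captured_at,  # e.g. "2018-02-07T09:30:00"
                  "location": location},       # e.g. "Line 1, K12+350"
        )
    resp.raise_for_status()  # surface transport errors to the caller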
Since several robots may be running on the sections of each track, the data management platform may need to display information for two or more target images within a given period. The related information of these target images may be sorted from earliest to latest by acquisition time, from most to least severe by fault degree, or from nearest to farthest by the distance of the acquisition location from the local machine (that is, the mobile terminal and/or PC displaying the information), and then displayed in that order.
Optionally, in order to improve the accuracy of detecting the target image, fig. 2(b) shows a specific implementation flow of step 105, which is detailed as follows:
In step 201, performing binarization processing on each frame image of the real-time video stream;
In the embodiment of the present invention, a binarization threshold used in the binarization processing is set first; specifically, it may be set by the bimodal method, the maximum inter-class variance (Otsu) method, the maximum entropy threshold method, or an iterative method, and each frame image of the real-time video stream is then binarized based on the set threshold.
In step 202, extracting a region of interest of each frame image after binarization processing;
In an embodiment of the present invention, the region of interest (ROI) is an image region selected from the image and is the focus of the subsequent image analysis. Specifically, the region of interest of each binarized frame may be extracted through a preset operator and a preset function.
In step 203, denoising the region of interest of each frame of image based on a morphological algorithm;
In the embodiment of the invention, the region of interest of each frame is denoised; specifically, a morphological noise filter can be formed by combining opening and closing operations, and a good denoising effect can be achieved simply by selecting appropriate structuring elements.
In step 204, for each frame of image after denoising, calculating image feature difference between the region of interest of the image and the region of interest of the adjacent frame of image;
In an embodiment of the present invention, the adjacent frame image is an image adjacent to the given image in the real-time video stream: for the first frame of the stream, the adjacent image is the subsequent frame; for the last frame, it is the previous frame; and for an intermediate frame, it is the previous or the subsequent frame. For example, the image feature difference between the region of interest of an intermediate frame and that of its previous frame may be calculated first, and if the image quality of the previous frame is poor, the difference with the subsequent frame may be calculated instead. The image features include, but are not limited to: color features, texture features, shape features, and spatial relationship features.
In step 205, if the image feature difference between the region of interest of the image and the region of interest of the adjacent frame image exceeds a preset image feature difference range, the image and/or the adjacent frame image is determined as a target image.
In the embodiment of the present invention, the preset image characteristic difference range may be adjusted according to an ambient lighting condition during shooting, which is not limited herein.
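Putting steps 201 to 205 together, the following is a minimal sketch of the detection loop, assuming OpenCV, a fixed rectangular region of interest, Otsu's method for the binarization threshold, and the fraction of differing ROI pixels as the image feature difference (the patent allows color, texture, shape and spatial-relationship features; the ROI bounds and thresholds here are illustrative assumptions):

```python
import cv2
import numpy as np

FEATURE_DIFF_MAX = 0.15  # assumed; the patent leaves the difference range configurable
ROI = (slice(200, 880), slice(320, 960))  # assumed fixed region of interest

def preprocess(frame):
    """Steps 201-203: binarize (Otsu), crop the ROI, denoise by open + close."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    region = binary[ROI]
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    opened = cv2.morphologyEx(region, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

def feature_difference(roi_a, roi_b):
    """Step 204, simplified: fraction of ROI pixels whose binary value differs."""
    return float(np.count_nonzero(cv2.absdiff(roi_a, roi_b))) / roi_a.size

def find_target_frames(frames):
    """Step 205: flag frames whose ROI differs too much from the previous one."""
    targets, prev = [], None
    for i, frame in enumerate(frames):
        cur = preprocess(frame)
        if prev is not None and feature_difference(prev, cur) > FEATURE_DIFF_MAX:
            targets.append(i)  # frame i (and/or frame i-1) is a target image
        prev = cur
    return targets
```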
Optionally, the track detection method further includes:
acquiring the distance between the robot and an obstacle in the environment where the robot is located based on a distance sensor carried by the robot in the process of moving the robot;
and if the distance is not greater than a preset distance threshold value, adjusting the traveling scheme to enable the robot to be far away from the obstacle, or suspending the robot to travel.
Depending on the type of the robot, the distance sensors may be mounted at different positions on it. In one application scenario, the robot is a flying robot, and distance sensors can be arranged in the up, down, front, back, left and right directions so as to avoid colliding with possible obstacles in the air while flying; in another application scenario, the robot is a rail-mounted robot, and distance sensors can be installed facing front and back to avoid colliding with possible obstacles in the area near the rail when moving forward or backward. Further, when the distance measured by any one of the distance sensors carried by the robot is not greater than a preset distance threshold, the travel scheme may be adjusted or the robot's travel suspended.
In one application scenario, the robot is a flying robot. When the distance measured by a distance sensor in any of the up, down, front, back, left or right directions is not greater than the preset distance threshold, the measured distance is transmitted to the robot's active safety control board, which controls the robot to travel in the direction opposite to the obstacle (the direction of the triggering sensor can be taken as the direction of the obstacle); once the distance measured by that sensor exceeds the preset distance threshold by a certain proportion (for example, 20%), the travel scheme is adjusted based on the position of the obstacle and the robot continues to travel. It should be noted that, while the robot is moving away from the obstacle, the values measured by the other sensors must also remain within their normal ranges. Furthermore, the active safety control board can be connected to an accelerometer, a barometer, an optical flow sensor and/or one or more ultrasonic sensors, so that it can both measure obstacles in the environment and obtain the robot's own flight state; combined with the flight controller, the active safety control board can thus realize intelligent obstacle avoidance.

When the active safety control board detects that the data measured by the various sensors are disordered, it must control the robot to brake in an emergency: the robot executes a descent instruction, and during the descent, once the distance sensor mounted below the robot reads less than one meter, the flying motors are switched off for the final drop. If, during the descent, the data from the downward distance sensor are still disordered, the flying motors are switched off immediately so that the robot falls freely and is protected by the passive safety protection cover mounted on its outside. The passive safety protection cover provides purely passive protection, preventing a flying robot that has gone out of control due to unexpected factors from injuring pedestrians or striking passing vehicles. Specifically, the cover was designed with the SolidWorks design tool and subjected to finite element analysis with the ANSYS analysis tool, optimizing it so that its mass (i.e., weight) is minimized while its structural strength is guaranteed.
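A minimal sketch of the active safety board's threshold-with-hysteresis avoidance logic described above, assuming per-direction distance readings in meters; the safe distance is an assumed value (the patent only requires a preset threshold), and the 20% resume margin follows the example in the text:

```python
SAFE_DISTANCE_M = 1.5  # assumed threshold; the patent leaves it configurable
RESUME_FACTOR = 1.2    # resume once the reading exceeds the threshold by ~20%

class ActiveSafetyBoard:
    """State machine: retreat from a too-close obstacle, replan once clear."""

    def __init__(self):
        self.avoiding = False

    def step(self, distances):
        """distances maps direction -> meters, e.g. {'front': 3.2, 'down': 5.0}.
        Returns an action string for the travel controller."""
        nearest_dir = min(distances, key=distances.get)
        nearest = distances[nearest_dir]
        if nearest <= SAFE_DISTANCE_M:
            self.avoiding = True
            return "retreat_from_" + nearest_dir  # move opposite the obstacle
        if self.avoiding and nearest >= SAFE_DISTANCE_M * RESUME_FACTOR:
            self.avoiding = False
            return "replan_and_resume"  # adjust the travel scheme, continue
        return "hold" if self.avoiding else "continue"
```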
The robot can also provide a remote monitoring function. Specifically, the robot sends the data collected by each of its sensors, together with the real-time track images it captures, to the server; the server fuses and organizes the sensor data and sends the result to a pre-bound mobile terminal for display. When the fused data show anomalies such as disordered readings, the mobile terminal can remind the maintainer by vibrating, displaying text and/or ringing, so that the maintainer pays attention to the anomaly; when the robot brakes in an emergency, the mobile terminal can likewise be alerted so that the maintainer goes to the braking site to recover and repair the robot. Of course, the maintainer may also actively consult the fused data received on the mobile terminal to judge whether the robot is flying stably; if the maintainer considers that there is a problem with the robot's current travel, a pause instruction can be sent to the robot through the client or the server, and after receiving it, the robot pauses its flight based on the instruction. The process of pausing the flight may follow the emergency braking process above.
In another application scenario, the robot is a rail-mounted robot. When the distance measured by the front distance sensor is not greater than the preset distance threshold (where "front" is relative to the robot's traveling direction), the measured distance is transmitted to the robot's active safety control board, which controls the robot to stop traveling; and, because the obstacle may be an animal, the robot may also be controlled to emit an alarm tone to drive animals away from its vicinity. Furthermore, the robot may be provided with a manipulator and an object-holding frame: combining the collected real-time track images, the robot can further judge whether the obstacle falls within the manipulator's grasping range, and if so, grasp the obstacle into the holding frame to clear it. Optionally, once the distance measured by the distance sensor becomes greater than the preset distance threshold, it may be determined that the obstacle is gone and the robot may be controlled to continue traveling. Furthermore, the active safety control board can be connected to an accelerometer, a barometer, an optical flow sensor and/or one or more ultrasonic sensors, so that it can both detect obstacles and obtain detailed information on the robot's own travel; intelligent obstacle avoidance can thus be realized through the active safety control board.
The robot can also provide a remote monitoring function here. Specifically, the robot sends the data collected by each of its sensors, together with the real-time track images it captures, to the server; the server fuses and organizes the sensor data and sends the result to a pre-bound mobile terminal for display. When the fused data show anomalies such as disordered readings, the mobile terminal can remind the maintainer by vibrating, displaying text and/or ringing, so that the maintainer pays attention to the anomaly. Because the fused data include the real-time track images, the maintainer can also independently review the current images to judge whether the robot is running stably; if the maintainer considers that there is a problem with the robot's current travel, a pause instruction can be sent to the robot through the client or the server, and after receiving it, the robot pauses its travel based on the instruction, so that the maintainer can promptly reach the robot's location to recover it.
Optionally, fig. 3 shows a specific implementation flow of step 102 when the robot is a flying robot, which is detailed as follows:
In step 301, a real-time image is obtained based on the real-time video stream;
In the embodiment of the present invention, the most recently captured frame may be taken as the real-time image.
In step 302, a coordinate system is initialized;
In an embodiment of the present invention, the x-axis of the coordinate system is the extending direction of the rails, the y-axis is the sleeper laying direction, and the z-axis is the direction perpendicular to the track laying surface. Fig. 4(a) shows a schematic diagram of this coordinate system, from which it can be seen that the x coordinate represents the distance the robot has traveled along the track, the y coordinate represents the robot's offset from the center line of the track, and the z coordinate represents the robot's flying height relative to the track laying surface.
In step 303, the robot receives a radio frequency identification tag carrying positioning information and sent by a radio frequency device, and performs x-axis positioning according to the radio frequency identification tag;
in the embodiment of the present invention, the radio frequency device is preset on the track, for example, a radio frequency device may be set every preset kilometer (e.g., three kilometers), and when the robot passes through a set point of the radio frequency device, the robot may position its x-axis coordinate according to a different tag of the radio frequency transmitted by the radio frequency device, that is, obtain a distance traveled by the robot along the track. Or, the sleepers in the track can be identified according to the real-time image collected by the camera installed below the robot, the number of the sleepers passed by the robot in the running process is calculated, and the distance traveled along the track can be obtained according to the number of the sleepers multiplied by the laying distance of the sleepers, wherein the laying distance of the sleepers is fixed. Optionally, the two calculation methods may be combined, the distance traveled is calculated according to the number of sleepers, and then the calculated distance traveled is corrected according to the radio frequency identification tag carrying the positioning information, so that the result of the x-axis positioning is more accurate.
In step 304, the robot performs y-axis positioning according to the relative position of the center line of the real-time image and the center line of the track in the real-time image;
In the embodiment of the present invention, the real-time image in this step is collected by the camera mounted under the robot. Theoretically, when the robot moves along the extending direction of the track, the center line of the track coincides with the center line of the image if the robot is directly above the track, as shown in fig. 4(b). Therefore, the pixel distance from the center line of the real-time image to the center line of the track in the image can be calculated, and multiplying this distance by the appropriate scale factor gives the robot's offset from the center line of the track.
In step 305, the robot acquires a flying height of the robot through a mounted distance sensor, and performs z-axis positioning according to the flying height;
in the embodiment of the invention, the distance from the robot to the track laying surface is acquired through the distance sensor carried at the lower position of the robot, the distance is the flight height, and the z-axis positioning is carried out according to the flight height;
In step 306, the coordinates of the robot in the coordinate system are determined based on the results of the x-axis positioning, the y-axis positioning and the z-axis positioning.
In the embodiment of the invention, according to the result of the x-axis positioning, the result of the y-axis positioning and the result of the z-axis positioning, the mileage of the track where the robot is located, the distance from the center line of the track and the flying height, that is, the coordinates where the robot is located in the coordinate system, can be known.
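Combining steps 303 to 306, a sketch of the coordinate computation for the flying robot; the sleeper spacing and the camera's ground scale are assumed constants (the patent states the spacing is fixed but gives no numbers):

```python
SLEEPER_SPACING_M = 0.6    # assumed fixed sleeper spacing
PIXELS_PER_METER = 450.0   # assumed ground scale of the downward camera

def locate(sleepers_since_rfid, rfid_position_m, image_mid_x, track_mid_x,
           height_sensor_m):
    """Return (x, y, z) in the track coordinate system of step 302.
    x: distance along the rail, from sleeper odometry re-anchored at the last
       RFID tag (step 303); y: lateral offset of the robot from the track
       center line (step 304); z: flying height (step 305)."""
    x = rfid_position_m + sleepers_since_rfid * SLEEPER_SPACING_M
    y = (track_mid_x - image_mid_x) / PIXELS_PER_METER
    z = height_sensor_m
    return (x, y, z)
```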
Optionally, fig. 5 shows a specific implementation flow of step 103 when the robot is a flying robot, which is detailed as follows:
In step 501, a real-time image is obtained based on the real-time video stream;
In the embodiment of the present invention, the most recently captured frame may be taken as the real-time image.
In step 502, performing binarization processing on the real-time image;
In the embodiment of the present invention, a binarization threshold used in the binarization processing is set first; specifically, it may be set by the bimodal method, the maximum inter-class variance (Otsu) method, the maximum entropy threshold method, or an iterative method, and the real-time image is then binarized based on the set threshold.
In step 503, extracting a region of interest of the real-time image after binarization processing;
In the embodiment of the invention, the region of interest of the binarized real-time image can be extracted through a preset operator and a preset function.
In step 504, extracting a longitudinal rectangular region in the region of interest based on a morphological algorithm;
In the embodiment of the invention, the extracted longitudinal rectangular area comprises the images of the two steel rails of the track.
In step 505, the positions of the two rails of the track are determined in the longitudinal rectangular area through line detection;
In the embodiment of the present invention, the Hough transform may be used to perform the line detection, or the LSD (Line Segment Detector) algorithm may be used instead, which is not limited herein.
In step 506, fitting the positions of the two steel rails to obtain center lines of the two steel rails;
in step 507, a travel plan of the robot is determined based on the center line of the two rails and the position of the robot.
In the embodiment of the invention, after the center lines of the two steel rails are obtained, the robot can firstly move to a certain point on the center lines of the two steel rails based on the position of the robot, and then the center lines of the two steel rails are used as a moving scheme of the robot; alternatively, a route parallel to the center line of the two rails may be determined directly from the position of the robot as the travel plan, which is not limited herein.
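A sketch of steps 504 to 506, assuming OpenCV's probabilistic Hough transform on the denoised binary region of interest; the Hough parameters are illustrative assumptions, and the "fit" is reduced to averaging the two rails' x-positions:

```python
import cv2
import numpy as np

def rail_centerline_x(binary_roi):
    """Detect the two rails as near-vertical line segments and return the
    x-coordinate of the center line between them, or None if not found."""
    lines = cv2.HoughLinesP(binary_roi, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=120, maxLineGap=20)
    if lines is None:
        return None
    # Rails run along the image's long axis, so keep near-vertical segments.
    rails = [l[0] for l in lines
             if abs(l[0][2] - l[0][0]) < abs(l[0][3] - l[0][1])]
    if len(rails) < 2:
        return None
    xs = [(x1 + x2) / 2.0 for x1, y1, x2, y2 in rails]
    mid = float(np.mean(xs))
    left = [x for x in xs if x < mid]
    right = [x for x in xs if x >= mid]
    if not left or not right:
        return None  # only one rail visible in this frame
    return (np.mean(left) + np.mean(right)) / 2.0
```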
Optionally, when the robot is a rail-mounted robot, since it travels on the rails themselves and both its y and z coordinates are fixed, the sleepers can be identified and counted directly in the real-time images acquired during travel, and the count multiplied by the sleeper spacing gives the distance the robot has traveled along the track. Optionally, the distance computed from the sleeper count may then be corrected using the RFID tags carrying positioning information, making the x-axis positioning result more accurate.
Optionally, when the robot is a rail-mounted robot, the track detection method further includes:
knocking the bolt of the rail by the manipulator in the advancing process of the robot;
collecting audio when the robot strikes the bolt of the track;
carrying out audio analysis on the audio, and judging whether the bolt is loosened or not based on the result of the audio analysis;
and if the judgment result is that the bolt is loosened, sending the judgment result to a mobile terminal and/or the server to inform a maintenance person to maintain the bolt.
The rail-mounted robot is further provided with a manipulator. When analysis of the real-time images during travel reveals a bolt mounted in the track, the robot can pause, locate the bolt, control the manipulator to knock it based on the positioning result, start its recording function and collect the audio of the knock. When a bolt is loose, the tone of the knocked bolt changes; that is, the amplitude ratios of the mid- and high-frequency harmonic components of the collected audio differ from those of a tight bolt. Whether the bolt is loose is therefore judged by comparing the collected audio with a preset audio waveform: if the similarity between them is high, the judgment result is that the bolt is not loose and the robot continues on its way; if the similarity is low, below a preset similarity threshold, the judgment result is that the bolt is loose, and the result can be sent to the mobile terminal and/or the server to notify a maintainer to service the bolt.
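One way to implement the waveform comparison is a cosine similarity of the magnitude spectra restricted to the mid/high harmonic band mentioned above. A sketch, where the band limits and similarity threshold are illustrative assumptions:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed; the patent only requires a preset threshold

def bolt_is_loose(recorded, reference, sample_rate=44100, band=(500.0, 8000.0)):
    """Compare the knock recording against a reference knock of a tight bolt.
    A loose bolt shifts the energy among the mid/high harmonics, lowering the
    spectral similarity. Inputs: 1-D float arrays at the same sample rate."""
    n = min(len(recorded), len(reference))
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    mask = (freqs >= band[0]) & (freqs <= band[1])  # mid/high harmonic band
    spec_a = np.abs(np.fft.rfft(recorded[:n]))[mask]
    spec_b = np.abs(np.fft.rfft(reference[:n]))[mask]
    sim = float(np.dot(spec_a, spec_b) /
                (np.linalg.norm(spec_a) * np.linalg.norm(spec_b)))
    return sim < SIMILARITY_THRESHOLD  # low similarity -> report loosened bolt
```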
Therefore, through the embodiment of the invention, the robot can realize intelligent advancing and intelligent detection operation based on the track according to the acquired real-time image of the track, can quickly find the track area suspected to have the fault, and reports the fault to the server, so that the server can perform secondary analysis, and timely informs the maintainer based on the result of the secondary analysis. In the process, the track is not required to be inspected manually, the daily inspection operation of the track is handed over to the robot to be executed, the fault area is rapidly identified in the inspection process through the robot, the possibility of wrong detection and missing detection is reduced, the automatic detection of infrastructure and basic equipment of the track traffic is realized, and the safe operation of a track traffic system is guaranteed.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example two
Fig. 6 shows a specific structural block diagram of a robot according to the second embodiment of the present invention; for convenience of description, only the parts related to this embodiment are shown. The robot 6 includes: a video acquisition module 61, a visual positioning module 62, a scheme determining module 63, a travel control module 64, a fault detection module 65 and an information uploading module 66.
The video acquisition module 61 is used for acquiring a real-time video stream of the track;
a visual positioning module 62, configured to determine a position of the robot through visual positioning based on the real-time video stream;
a scheme determining module 63, configured to determine a travel scheme of the robot according to the position of the robot;
a travel control module 64 for controlling the robot to travel according to the travel scheme;
a failure detection module 65, configured to detect whether a target image exists in each frame image of the real-time video stream, where the target image is an image in which a failure area of the track is displayed;
and an information uploading module 66, configured to upload the target image to a server when the target image exists in each frame image of the real-time video stream, so that the server performs secondary analysis on the target image, and matches the target image to a corresponding client for display based on a result of the secondary analysis.
Optionally, the fault detection module 65 includes:
the first binarization unit is used for carrying out binarization processing on each frame of image of the real-time video stream;
a first extraction unit, configured to extract an area of interest of each frame of image after binarization processing;
the denoising unit is used for denoising the interested region of each frame image based on a morphological algorithm;
a difference calculating unit, configured to calculate, for each frame of image after denoising, an image feature difference between an interested region of the image and an interested region of an adjacent frame of image, where the adjacent frame of image is an image adjacent to the image in the real-time video stream;
and the target image determining unit is used for determining the image and/or the adjacent frame image as the target image if the image characteristic difference between the interested area of the image and the interested area of the adjacent frame image exceeds a preset image characteristic difference range.
Optionally, the robot 6 further includes:
the obstacle detection module is used for acquiring the distance between the robot and an obstacle in the environment where the robot is located based on a distance sensor carried by the robot in the moving process of the robot;
and the scheme adjusting module is used for adjusting the traveling scheme or suspending the robot to travel if the distance is not greater than a preset distance threshold.
Optionally, the robot 6 further includes:
and an instruction receiving module, configured to pause the robot's travel based on a pause instruction if the robot receives one sent by the mobile terminal while traveling.
Optionally, the video acquisition module 61 includes:
the multi-channel ultrahigh-definition image acquisition unit is used for shooting multi-channel ultrahigh-definition images of the track in real time;
the infrared image acquisition unit is used for shooting the infrared image of the track in real time;
the image quality detection unit is used for detecting whether the multi-channel ultrahigh-definition images and the infrared images reach preset image quality conditions or not;
and the image removing unit is used for removing the multi-path ultrahigh-definition images and/or the infrared images which do not reach the image quality condition when the multi-path ultrahigh-definition images and/or the infrared images which do not reach the image quality condition exist.
Optionally, the information uploading module 66 includes:
the acquisition time acquisition unit is used for acquiring the acquisition time of the target image;
an acquisition location acquisition unit for acquiring an acquisition location of the target image;
and the image information uploading unit is used for uploading the target image, the acquisition time of the target image and the acquisition place of the target image to a server.
Optionally, when the robot is a flying robot, the vision positioning module 62 includes:
a real-time image obtaining unit, configured to obtain a real-time image based on the real-time video stream;
an initialization unit, configured to initialize a coordinate system, where the x-axis of the coordinate system is the extending direction of the track, the y-axis is the sleeper laying direction, and the z-axis is the direction perpendicular to the track laying surface;
an x-axis positioning unit, configured to receive a radio frequency identification tag carrying positioning information sent by a radio frequency device, and perform x-axis positioning according to the tag, where the radio frequency device is preset on the track;
the y-axis positioning unit is used for carrying out y-axis positioning according to the relative position of the central line of the real-time image and the central line of the track in the real-time image;
the z-axis positioning unit is used for acquiring the flying height of the robot through a carried distance sensor and carrying out z-axis positioning according to the flying height;
and a coordinate determining unit for determining the coordinate of the robot in the coordinate system according to the result of the x-axis positioning, the result of the y-axis positioning and the result of the z-axis positioning.
Optionally, when the robot is a flying robot, the scheme determining module 63 includes:
a real-time image obtaining unit, configured to obtain a real-time image based on the real-time video stream;
a second binarization unit, configured to perform binarization processing on the real-time image;
a second extraction unit, configured to extract an area of interest of the real-time image after binarization processing;
a third extraction unit, configured to extract a longitudinal rectangular region in the region of interest based on a morphological algorithm;
a line detection unit for determining the positions of the two rails of the track in the longitudinal rectangular region by line detection;
the fitting unit is used for fitting the positions of the two steel rails to obtain the center lines of the two steel rails;
and a travel plan determination unit for determining a travel plan of the robot based on the center line of the two rails and the position of the robot.
Optionally, when the robot is a rail-mounted robot, the robot is provided with a manipulator and further includes:
the knocking module is used for knocking the bolt of the track through the manipulator in the advancing process;
the audio acquisition module is used for acquiring audio when the robot strikes the bolt of the track;
the audio analysis module is used for carrying out audio analysis on the audio and judging whether the bolt is loosened or not based on the result of the audio analysis;
and the bolt loosening judging module is used for sending the judging result to the mobile terminal and/or the server to inform a maintenance person to maintain the bolt when the judging result is that the bolt is loosened.
Therefore, according to the embodiment of the invention, the robot can realize track-based intelligent advancing and intelligent detection operation according to the acquired real-time video stream of the track, quickly find the track area suspected to have the fault, report the fault to the server, enable the server to perform secondary analysis, match the target image to the corresponding client side to display based on the result of the secondary analysis, and timely notify the maintainer. In the process, the track is not required to be inspected manually, the daily inspection operation of the track is handed over to the robot to be executed, the fault area is rapidly identified in the inspection process through the robot, the possibility of wrong detection and missing detection is reduced, the automatic detection of infrastructure and basic equipment of the track traffic is realized, and the safe operation of a track traffic system is guaranteed.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing functional units and modules are merely illustrated in terms of division, and in practical applications, the above functions may be distributed by different functional units and modules as needed, that is, the internal structure of the robot is divided into different functional units or modules to complete all or part of the above described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit and module may exist alone physically, or two or more modules and units are integrated in one unit, and the integrated modules and units may be implemented in a form of hardware, or in a form of software functional modules and units. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed robot and track detection method may be implemented in other ways. For example, the above-described robot is merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other division ways in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (7)

1. A track detection method, comprising:
the robot collects a real-time video stream of the track; the robot is a flying robot;
determining the position of the robot by visual positioning based on the real-time video stream, comprising: acquiring a real-time image based on the real-time video stream; initializing a coordinate system, wherein the x axis of the coordinate system is the extending direction of a track, the y axis of the coordinate system is the laying direction of sleepers, and the z axis of the coordinate system is the direction vertical to the laying surface of the track; the robot receives a radio frequency identification tag which is sent by a radio frequency device and carries positioning information, and carries out x-axis positioning according to the radio frequency identification tag, wherein the radio frequency device is arranged on a track route in advance; the robot carries out y-axis positioning according to the relative position of the central line of the real-time image and the central line of the track in the real-time image; the robot acquires the flying height of the robot through a carried distance sensor and performs z-axis positioning according to the flying height; determining the coordinate of the robot in the coordinate system according to the result of the x-axis positioning, the result of the y-axis positioning and the result of the z-axis positioning;
determining a travel plan for the robot according to the position of the robot, comprising: planning a travel route for the robot from the route of the track and the current position of the robot; when the current position of the robot coincides with the route of the track, controlling the robot to travel along the route of the track; and when the current position of the robot deviates from the route of the track, controlling the robot to first travel to a position coinciding with the route of the track and then travel along the route of the track;
controlling the robot to travel according to the travel plan;
detecting whether a target image exists among the frame images of the real-time video stream, wherein the target image is an image showing a fault area of the track, comprising: performing binarization processing on each frame image of the real-time video stream; extracting a region of interest from each binarized frame image; denoising the region of interest of each frame image based on a morphological algorithm; for each denoised frame image, calculating the image feature difference between the region of interest of that image and the region of interest of the adjacent frame image, wherein the adjacent frame image is the image adjacent to that image in the real-time video stream; and if the image feature difference between the region of interest of the image and the region of interest of the adjacent frame image exceeds a preset image feature difference range, determining the image and/or the adjacent frame image as a target image (a minimal code sketch of this detection pipeline follows this claim), wherein the image features include color features, texture features, shape features, and spatial relationship features;
and if a target image exists among the frame images of the real-time video stream, uploading the target image to a server, so that the server performs a secondary analysis on the target image and, based on the result of the secondary analysis, matches the target image to a corresponding client for display.
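For illustration, the following Python sketch walks through the detection step of claim 1 with OpenCV. It is a minimal sketch, not the patented implementation: the fixed region of interest, the 5×5 morphological kernel, the use of a grayscale-histogram Bhattacharyya distance in place of the claimed color/texture/shape/spatial-relationship features, and the 0.3 difference threshold are all assumptions.

```python
import cv2

# Region of interest: an assumed fixed crop (row slice, column slice).
ROI = (slice(200, 880), slice(300, 980))

def preprocess(frame):
    """Binarize a frame, crop the region of interest, denoise morphologically."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    region = binary[ROI]
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    opened = cv2.morphologyEx(region, cv2.MORPH_OPEN, kernel)   # strip speckle noise
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)    # close small gaps

def feature_difference(roi_a, roi_b):
    """Grayscale-histogram distance; a stand-in for the claimed image features."""
    hist_a = cv2.calcHist([roi_a], [0], None, [32], [0, 256])
    hist_b = cv2.calcHist([roi_b], [0], None, [32], [0, 256])
    return cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_BHATTACHARYYA)

def find_target_frames(video_path, threshold=0.3):
    """Return indices of frames whose ROI differs from the previous frame."""
    capture = cv2.VideoCapture(video_path)
    targets, previous, index = [], None, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        current = preprocess(frame)
        if previous is not None and feature_difference(previous, current) > threshold:
            targets.append(index)   # candidate target image: possible fault area
        previous, index = current, index + 1
    capture.release()
    return targets
```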
2. The track detection method of claim 1, further comprising:
during the travel of the robot, acquiring the distance between the robot and an obstacle in the environment of the robot based on a distance sensor carried by the robot;
and if the distance is not greater than a preset distance threshold, adjusting the travel plan or stopping the travel of the robot (a brief sketch of this check follows this claim).
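A minimal sketch of the claim-2 obstacle check, assuming hypothetical robot-interface calls (read_distance, can_detour, adjust_travel_plan, stop_travel) and an assumed 2.0 m threshold; the patent specifies neither the interface nor the threshold value.

```python
SAFE_DISTANCE_M = 2.0  # assumed preset distance threshold

def obstacle_guard(robot):
    distance = robot.read_distance()      # distance sensor carried by the robot
    if distance <= SAFE_DISTANCE_M:
        if robot.can_detour():
            robot.adjust_travel_plan()    # replan around the obstacle
        else:
            robot.stop_travel()           # otherwise halt until the path is clear
```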
3. The track detection method of claim 1, further comprising:
and during the travel of the robot, if the robot receives a pause instruction sent by a mobile terminal, pausing the travel of the robot based on the pause instruction.
4. The track detection method of claim 1, wherein the robot collecting a real-time video stream of the track comprises:
the robot capturing, in real time, multiple channels of ultra-high-definition images of the track and infrared images of the track;
detecting whether the multiple channels of ultra-high-definition images and the infrared images meet a preset image quality condition;
and removing any ultra-high-definition images and/or infrared images that do not meet the image quality condition (an illustrative quality check is sketched after this claim).
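The claim leaves the image quality condition unspecified; the sketch below assumes a common sharpness test, the variance of the Laplacian, with an illustrative threshold. Any other preset quality condition would fit the claim equally well.

```python
import cv2

BLUR_THRESHOLD = 100.0  # assumed: below this the frame is treated as too blurry

def passes_quality(image) -> bool:
    """Sharpness gate: variance of the Laplacian of the grayscale image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= BLUR_THRESHOLD

def filter_frames(frames):
    """Drop ultra-high-definition or infrared frames failing the quality check."""
    return [f for f in frames if passes_quality(f)]
```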
5. The track detection method of claim 1, wherein uploading the target image to a server comprises:
acquiring the acquisition time and the acquisition place of the target image;
and uploading the target image, the acquisition time of the target image, and the acquisition place of the target image to the server (a brief upload sketch follows this claim).
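A hedged sketch of the claim-5 upload, assuming an HTTP endpoint (SERVER_URL is a placeholder) and JPEG transport via the requests library; the patent does not specify the transfer protocol or payload format.

```python
import cv2
import requests
from datetime import datetime, timezone

SERVER_URL = "https://example.invalid/api/target-images"  # placeholder endpoint

def upload_target_image(image, location):
    """Send a target image with its acquisition time and place to the server."""
    ok, jpeg = cv2.imencode(".jpg", image)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    response = requests.post(
        SERVER_URL,
        files={"image": ("target.jpg", jpeg.tobytes(), "image/jpeg")},
        data={
            "captured_at": datetime.now(timezone.utc).isoformat(),  # acquisition time
            "location": location,                                   # acquisition place
        },
        timeout=10,
    )
    response.raise_for_status()
```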
6. The track detection method of any one of claims 1 to 5, wherein determining the travel plan of the robot according to the position of the robot comprises:
acquiring a real-time image based on the real-time video stream;
performing binarization processing on the real-time image;
extracting a region of interest from the binarized real-time image;
extracting longitudinal rectangular regions from the region of interest based on a morphological algorithm;
determining the positions of the two rails of the track within the longitudinal rectangular regions by straight-line detection;
fitting the positions of the two rails to obtain the centerline of the two rails;
and determining the travel plan of the robot according to the centerline of the two rails and the position of the robot (a minimal sketch of this rail-detection step follows this claim).
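A minimal sketch of the claim-6 rail-detection step, assuming OpenCV: a probabilistic Hough transform stands in for the claimed straight-line detection, a tall, thin opening kernel stands in for the longitudinal-region extraction, and all thresholds are illustrative.

```python
import cv2
import numpy as np

def rail_centerline(binary_roi):
    """Detect near-vertical rail structures and return the x of their centerline."""
    # Keep longitudinal (track-direction) structures with a tall, thin kernel.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 25))
    longitudinal = cv2.morphologyEx(binary_roi, cv2.MORPH_OPEN, kernel)

    lines = cv2.HoughLinesP(longitudinal, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=20)
    if lines is None:
        return None

    # Split line segments into left and right rails by mean x position.
    xs = [(l[0][0] + l[0][2]) / 2.0 for l in lines]
    median_x = float(np.median(xs))
    left = [x for x in xs if x < median_x]
    right = [x for x in xs if x >= median_x]
    if not left or not right:
        return None

    # The centerline is taken midway between the two fitted rail positions.
    return (float(np.mean(left)) + float(np.mean(right))) / 2.0
```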
7. A robot, applied in the field of rail transit, the robot comprising:
a video acquisition module, configured to acquire a real-time video stream of a track, wherein the robot is a flying robot;
a visual positioning module, configured to determine the position of the robot by visual positioning based on the real-time video stream, wherein, when the robot is a flying robot, the visual positioning module comprises: a real-time image acquisition unit, configured to acquire a real-time image based on the real-time video stream; an initialization unit, configured to initialize a coordinate system, wherein the x-axis of the coordinate system is the extension direction of the track, the y-axis is the laying direction of the sleepers, and the z-axis is the direction perpendicular to the laying surface of the track; an x-axis positioning unit, configured to receive a radio frequency identification tag carrying positioning information sent by a radio frequency device, and to perform x-axis positioning according to the radio frequency identification tag, wherein the radio frequency device is arranged on the track in advance; a y-axis positioning unit, configured to perform y-axis positioning according to the relative position of the centerline of the real-time image and the centerline of the track within the real-time image; a z-axis positioning unit, configured to acquire the flight height of the robot through an on-board distance sensor and to perform z-axis positioning according to the flight height; and a coordinate determination unit, configured to determine the coordinates of the robot in the coordinate system according to the results of the x-axis positioning, the y-axis positioning, and the z-axis positioning (a coordinate sketch follows this claim);
a plan determination module, configured to determine a travel plan for the robot according to the position of the robot, by: planning a travel route for the robot from the route of the track and the current position of the robot; when the current position of the robot coincides with the route of the track, controlling the robot to travel along the route of the track; and when the current position of the robot deviates from the route of the track, controlling the robot to first travel to a position coinciding with the route of the track and then travel along the route of the track;
a travel control module, configured to control the robot to travel according to the travel plan;
a fault detection module, configured to detect whether a target image exists among the frame images of the real-time video stream, wherein the target image is an image showing a fault area of the track, the fault detection module comprising: a first binarization unit, configured to perform binarization processing on each frame image of the real-time video stream; a first extraction unit, configured to extract a region of interest from each binarized frame image; a denoising unit, configured to denoise the region of interest of each frame image based on a morphological algorithm; a difference calculation unit, configured to calculate, for each denoised frame image, the image feature difference between the region of interest of that image and the region of interest of the adjacent frame image, wherein the adjacent frame image is the image adjacent to that image in the real-time video stream; and a target image determination unit, configured to determine the image and/or the adjacent frame image as a target image if the image feature difference between the region of interest of the image and the region of interest of the adjacent frame image exceeds a preset image feature difference range, wherein the image features include color features, texture features, shape features, and spatial relationship features;
and an information uploading module, configured to upload the target image to a server when a target image exists among the frame images of the real-time video stream, so that the server performs a secondary analysis on the target image and, based on the result of the secondary analysis, matches the target image to a corresponding client for display.
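Finally, a sketch of how the three positioning results of the visual positioning module might combine into coordinates. read_rfid_position() and read_distance() are hypothetical interfaces, and the metres-per-pixel scale is an assumed calibration constant; the patent defines the axes but not these details.

```python
METERS_PER_PIXEL = 0.005  # assumed ground-sampling distance at flight height

def robot_coordinates(robot, frame_width_px, track_center_px):
    """Combine RFID, image-centerline, and rangefinder readings into (x, y, z)."""
    x = robot.read_rfid_position()        # along-track position from the RFID tag
    image_center_px = frame_width_px / 2.0
    # y: lateral offset of the image centerline from the track centerline.
    y = (track_center_px - image_center_px) * METERS_PER_PIXEL
    z = robot.read_distance()             # flight height from the distance sensor
    return (x, y, z)
```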
CN201810126117.2A 2018-02-08 2018-02-08 Track detection method and robot Expired - Fee Related CN108491758B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810126117.2A CN108491758B (en) 2018-02-08 2018-02-08 Track detection method and robot


Publications (2)

Publication Number Publication Date
CN108491758A CN108491758A (en) 2018-09-04
CN108491758B true CN108491758B (en) 2020-11-20

Family

ID=63339994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810126117.2A Expired - Fee Related CN108491758B (en) 2018-02-08 2018-02-08 Track detection method and robot

Country Status (1)

Country Link
CN (1) CN108491758B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109580137B (en) * 2018-11-29 2020-08-11 东南大学 Bridge structure displacement influence line actual measurement method based on computer vision technology
CN111380537A (en) * 2018-12-27 2020-07-07 奥迪股份公司 Method and device for positioning target position in navigation map
CN111668925A (en) * 2019-03-05 2020-09-15 特变电工智能电气有限责任公司 Transformer inspection tour inspection device based on intelligent vision
CN112444519B (en) * 2019-08-30 2022-07-15 比亚迪股份有限公司 Vehicle fault detection device and method
CN111024431B (en) * 2019-12-26 2022-03-11 江西交通职业技术学院 Bridge rapid detection vehicle based on multi-sensor unmanned driving
CN111077159A (en) * 2019-12-31 2020-04-28 北京京天威科技发展有限公司 Fault detection method, system, equipment and readable medium for track circuit box
CN112014848B (en) * 2020-02-11 2023-06-23 深圳技术大学 Sleeper positioning method, sleeper positioning device and electronic equipment
CN111672045B (en) * 2020-05-21 2021-11-30 国网湖南省电力有限公司 Fire-fighting robot, fire-fighting system and fire-fighting control method
CN112508911A (en) * 2020-12-03 2021-03-16 合肥科大智能机器人技术有限公司 Rail joint touch net suspension support component crack detection system based on inspection robot and detection method thereof
CN113111704B (en) * 2021-03-02 2023-05-12 郑州大学 Airport pavement disease foreign matter detection method and system based on deep learning
CN113085923B (en) * 2021-04-15 2022-01-25 北京智川科技发展有限公司 Track detection method and device, automatic track detection vehicle and storage medium
CN114821165A (en) * 2022-04-19 2022-07-29 北京运达华开科技有限公司 Track detection image acquisition and analysis method
CN114973694B (en) * 2022-05-19 2024-05-24 杭州中威电子股份有限公司 Tunnel traffic flow monitoring system and method based on inspection robot
CN115056264B (en) * 2022-06-30 2024-09-27 广州华方智能科技有限公司 System and method for assisting precise positioning of double-angle steel track inspection robot
CN115601719B (en) * 2022-12-13 2023-03-31 中铁十二局集团有限公司 Climbing robot and method for detecting invasion of foreign objects in subway tunnel
CN115760989B (en) * 2023-01-10 2023-05-05 西安华创马科智能控制系统有限公司 Hydraulic support robot track alignment method and device
CN116596731A (en) * 2023-05-25 2023-08-15 北京贝能达信息技术股份有限公司 Rail transit intelligent operation and maintenance big data management method and system
CN118570190B (en) * 2024-07-24 2024-10-18 常州路航轨道交通科技有限公司 Rail defect measurement and identification system and method based on machine vision

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101519981A (en) * 2009-03-19 2009-09-02 重庆大学 Mine locomotive anti-collision early warning system based on monocular vision and early warning method thereof
CN202178515U (en) * 2011-07-30 2012-03-28 山东鲁能智能技术有限公司 Transformer station intelligent robot inspection system
CN102490764A (en) * 2011-12-13 2012-06-13 天津卓朗科技发展有限公司 Automatic detection method of track turnout notch
CN104331910A (en) * 2014-11-24 2015-02-04 沈阳建筑大学 Track obstacle detection system based on machine vision
CN104796664A (en) * 2015-03-26 2015-07-22 成都市斯达鑫辉视讯科技有限公司 Video monitoring device
CN105700532A (en) * 2016-04-19 2016-06-22 长沙理工大学 Vision-based transformer substation inspection robot navigation positioning control method
CN106341661A (en) * 2016-09-13 2017-01-18 深圳市大道智创科技有限公司 Patrol robot
CN106428558A (en) * 2016-11-28 2017-02-22 北京交通大学 Rail comprehensive inspection method based on air-rail double-purpose unmanned aerial vehicle
CN106444588A (en) * 2016-11-30 2017-02-22 国家电网公司 Inspection system and inspection method of valve hall robot based on video monitoring linkage system
CN107071344A (en) * 2017-01-22 2017-08-18 深圳英飞拓科技股份有限公司 A kind of large-scale distributed monitor video data processing method and device
CN206653397U (en) * 2017-03-29 2017-11-21 张胜雷 The towed electricity of hanger rail leads to integrated piping lane environmental monitoring intelligent miniature robot
CN107084754A (en) * 2017-04-27 2017-08-22 深圳万发创新进出口贸易有限公司 A kind of transformer fault detection device
CN107433952A (en) * 2017-05-12 2017-12-05 北京瑞途科技有限公司 A kind of intelligent inspection robot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on an Autonomous Fixed-Track Inspection Robot System for Substations; Liu Fei; China Master's Theses Full-text Database (Electronic Journal), Engineering Science and Technology II; 20170615; C042-403 *

Also Published As

Publication number Publication date
CN108491758A (en) 2018-09-04

Similar Documents

Publication Publication Date Title
CN108491758B (en) Track detection method and robot
KR101949525B1 (en) Safety management system using unmanned detector
CN109878552B (en) Foreign matter monitoring devices between track traffic platform door and train based on machine vision
US8140250B2 (en) Rail vehicle identification and processing
CN110085029B (en) Highway inspection system and method based on rail type inspection robot
CN102759347B (en) Online in-process quality control device and method for high-speed rail contact networks and composed high-speed rail contact network detection system thereof
CN106954042B (en) Unmanned aerial vehicle railway line inspection device, system and method
CN104183133B (en) A kind of method gathered and transmit road traffic flow state information
KR101602376B1 (en) A train faulty monitoring system
CN109238756B (en) Dynamic image detection equipment and detection method for freight car operation fault
CN201429413Y (en) Pantograph performance on-line automatic detection system for high-speed trains
CN113011252B (en) Rail foreign matter intrusion detection system and method
CA3190996A1 (en) Systems and methods for determining defects in physical objects
CN105115605A (en) Track train infrared detection system and detection method
US20200034637A1 (en) Real-Time Track Asset Recognition and Position Determination
CN202083641U (en) Vehicular railway track vision detection device based on linear array scanning technology
CN104410820A (en) Vehicle-mounted trackside equipment box cover and cable appearance inspection system
CN103422417A (en) Dynamic identification system and method for detecting road surface damages
CN113371028A (en) Intelligent inspection system and method for electric bus-mounted track
Berry et al. High speed video inspection of joint bars using advanced image collection and processing techniques
CN207809418U (en) A kind of Railway wheelset dynamic detection system
CN114067278A (en) Railway freight goods inspection system and method thereof
CN211335993U (en) 360-degree fault image detection system for metro vehicle
CN112285111A (en) Pantograph front carbon sliding plate defect detection method, device, system and medium
CN116519703A (en) System and method for detecting carbon slide plate image of collector shoe based on line scanning 3D image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20201120