CN114735044A - Intelligent railway vehicle inspection robot - Google Patents


Info

Publication number
CN114735044A
Authority
CN
China
Prior art keywords
image
inspection robot
detection
inspection
dimensional
Prior art date
Legal status
Pending
Application number
CN202210244398.8A
Other languages
Chinese (zh)
Inventor
赵毅
李永达
Current Assignee
Suzhou Shikai Intelligent Technology Co ltd
Original Assignee
Suzhou Shikai Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Shikai Intelligent Technology Co ltd filed Critical Suzhou Shikai Intelligent Technology Co ltd
Priority to CN202210244398.8A
Publication of CN114735044A
Legal status: Pending



Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B61 RAILWAYS
    • B61K AUXILIARY EQUIPMENT SPECIALLY ADAPTED FOR RAILWAYS, NOT OTHERWISE PROVIDED FOR
    • B61K9/00 Railway vehicle profile gauges; Detecting or indicating overheating of components; Apparatus on locomotives or cars to indicate bad track sections; General design of track recording vehicles
    • B61K9/08 Measuring installations for surveying permanent way
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J18/00 Arms
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/021 Optical sensing devices
    • B25J19/023 Optical sensing devices including video camera means
    • B25J5/00 Manipulators mounted on wheels or on carriages
    • B25J5/02 Manipulators mounted on wheels or on carriages travelling along a guideway
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1615 Programme controls characterised by special kind of manipulator, e.g. planar, scara, gantry, cantilever, space, closed chain, passive/active joints and tendon driven manipulators
    • B25J9/162 Mobile manipulator, movable base with manipulator arm mounted on it
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Abstract

The invention provides an intelligent railway vehicle inspection robot comprising a system platform and an inspection robot that interact over a wireless network, a mobile base being arranged at the lower part of the inspection robot. The inspection robot carries an image acquisition system, a safety protection system and an automatic positioning system. The image acquisition system acquires inspection images of the rail vehicle and comprises a mechanical arm and a two-dimensional image acquisition assembly and/or a three-dimensional image acquisition assembly mounted on the arm; the mechanical arm is a six-degree-of-freedom arm; the two-dimensional image acquisition assembly comprises an LED lamp and a binocular camera; the three-dimensional image acquisition assembly comprises a projector and a binocular camera. The invention aims to provide an intelligent railway vehicle inspection robot suited to rail vehicle detection that improves the recognition accuracy of key vehicle components and reduces false alarms caused by water stains, illumination changes and the like.

Description

Intelligent railway vehicle inspection robot
Technical Field
The invention belongs to the field of inspection robots, and particularly relates to an intelligent rail vehicle inspection robot.
Background
Rail vehicles are usually inspected manually, mainly by hammer tapping, hand checks, visual inspection and measurement. Manual work is inefficient, and the results are affected by human factors. Inspection robots have been developed for metro rail vehicle inspection: the robot carries a mobile base and image acquisition equipment, the base drives the robot along the track to capture pictures, and a system platform interacts with the robot over a wireless network to receive the inspection pictures and perform vehicle detection on them; such robots greatly improve inspection efficiency. However, their accuracy in identifying key vehicle components remains low, and the detection of the corresponding vehicle components suffers from false alarms caused by water stains, dirt, illumination changes and the like.
Summary of the invention
In view of the above, the present invention provides an intelligent railway vehicle inspection robot that is suited to rail vehicle detection, improves the accuracy of key-component identification, and reduces false alarms caused by water stains, dirt, illumination changes and the like.
To achieve this purpose, the technical solution of the invention is realized as follows:
The intelligent rail vehicle inspection robot comprises a system platform and an inspection robot that interact over a wireless network, with a mobile base arranged at the lower part of the inspection robot. The inspection robot is provided with an image acquisition system, a safety protection system and an automatic positioning system. The image acquisition system acquires inspection images of the rail vehicle and comprises a mechanical arm and a two-dimensional image acquisition assembly and/or a three-dimensional image acquisition assembly mounted on the arm; the mechanical arm is a six-degree-of-freedom arm; the two-dimensional image acquisition assembly comprises an LED lamp and a binocular camera; the three-dimensional image acquisition assembly comprises a projector and a binocular camera. The safety protection system comprises ultrasonic radars arranged around the mobile base of the inspection robot to scan for surrounding obstacles. The automatic positioning system uses an SLAM navigation system.
Further, the system platform acquires the inspection images and identifies them as follows: the target recognition and detection scheme combines judgment by machine-vision identification of image features with fault analysis by a deep learning method.
Furthermore, image feature identification is based on object feature recognition, i.e. verification using marker features or behavioural features that belong to the inspected object and uniquely identify it; object feature recognition comprises image acquisition, image detection, image preprocessing, image feature extraction, and image matching and recognition.
Further, the system platform acquires the inspection images and detects them by the following steps:
step one, key-component recognition based on deep learning: a deep-learning algorithm recognizes and localizes key components such as front-view bolts, side-view bolts and cable joints, enabling detection based on component state;
step two, foreign-object detection based on deep learning: a deep-learning YOLO classification algorithm detects foreign objects and thereby reduces misjudgement;
and step three, rapid three-dimensional imaging based on binocular vision and deep learning: an end-to-end deep-learning algorithm enables rapid binocular three-dimensional measurement, improving three-dimensional detection accuracy and speed and reducing false alarms caused by water stains, illumination changes and the like.
Further, the system platform acquires the inspection images and detects them as follows:
for the detection of two-dimensional images of underfloor components, the SSIM of the two registered images is computed directly; for two-dimensional images of bogie components, a point-feature method is used for feature detection, feature matching and rejection of mismatches, and the SSIM is used to score the registration result.
Further, the system platform acquires the inspection images and detects them as follows:
for the detection of three-dimensional images of underfloor components, the three-dimensional contour of bogie components is measured with a binocular camera and structured light, comprising binocular image matching and three-dimensional contour restoration; binocular calibration and matching mainly use Zhang's calibration model; the cameras are calibrated with a calibration algorithm, feature-point matching is performed on the binocular texture images, and the registered points are triangulated using the binocular calibration result to obtain a large number of matched points from which the three-dimensional shape is restored.
Furthermore, the system platform acquires the inspection images using layered transmission: the original image data of the inspection images is processed in layers based on layered image encoding, decoding and transmission.
Further, the system platform acquires the inspection images and detects them by the following steps:
S1, analysing the external condition of the train by texture comparison;
S2, monitoring the grey level of region images to judge whether a foreign object is present;
S3, learning the appearance structure with a neural-network algorithm and detecting whether the fastener net is loose, missing or displaced;
S4, detecting whether the anti-loosening wire has shifted, to judge whether the screw has loosened;
and S5, measuring and monitoring the geometric form of components, to detect whether key components are deformed.
Furthermore, the inspection robot switches between station tracks via a lifting platform; the lifting platform is a hydraulic lift provided with a wireless communication module used for wireless communication with the inspection robot.
Furthermore, the mechanical arm 2 is a single-arm mechanical arm comprising no fewer than six rotating shafts 3.
Compared with the prior art, the intelligent railway vehicle inspection robot of the invention has the following advantages:
the intelligent railway vehicle inspection robot uses autonomous navigation, high-definition imaging, multi-view three-dimensional detection and wireless data transmission to perform fully automatic inspection of the bogies and underframes of urban motor train units over the maintenance trenches of a parking depot; a deep-learning algorithm recognizes and localizes key components, such as front-view bolts, side-view bolts and cable joints, achieving detection based on component state; and these algorithms reduce false alarms caused by water stains, dirt, illumination changes and the like.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation of the invention. In the drawings:
fig. 1 is a schematic structural view of an inspection robot according to an embodiment of the present invention;
FIG. 2 is a flowchart of a deep learning method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the result of the test verification according to the embodiment of the present invention;
fig. 4 shows a binocular calibration shot image and a binocular texture target image according to an embodiment of the present invention;
FIG. 5 shows the results of feature-point matching and of a conventional registration algorithm according to an embodiment of the present invention;
FIG. 6 shows the three-dimensional restoration result according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a layered transmission according to an embodiment of the present invention.
Description of reference numerals:
1-a two-dimensional image acquisition component; 2-a mechanical arm; 3-a rotating shaft; 4-moving the base.
Detailed Description
It should be noted that the embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
The invention will be described in detail below with reference to embodiments and the accompanying drawings.
As shown in fig. 1, the intelligent railway vehicle inspection robot comprises a system platform and an inspection robot that interact over a wireless network, with a mobile base 4 arranged at the lower part of the inspection robot. The inspection robot is provided with an image acquisition system, a safety protection system and an automatic positioning system. The image acquisition system acquires inspection images of the rail vehicle and comprises a mechanical arm 2 and a two-dimensional image acquisition assembly 1 and/or a three-dimensional image acquisition assembly mounted on the arm; the mechanical arm is a six-degree-of-freedom arm; the two-dimensional image acquisition assembly comprises an LED lamp and a binocular camera; the three-dimensional image acquisition assembly comprises a projector and a binocular camera. The safety protection system comprises ultrasonic radars arranged around the mobile base of the inspection robot to scan for surrounding obstacles. The automatic positioning system uses an SLAM navigation system.
The intelligent railway vehicle inspection robot uses autonomous navigation, high-definition imaging, multi-view three-dimensional detection and wireless data transmission to perform fully automatic inspection of the bogies and underframes of urban motor train units over the maintenance trenches of a parking depot. The robot implements a targeted vehicle-inspection solution with two functional modules: intelligent travel and intelligent inspection. Intelligent travel means that the robot moves automatically during inspection, without manual operation. Intelligent inspection means that fault detection by the robot is likewise intelligent and free of manual operation: the behaviour of current inspection personnel during inspection is analysed and mapped onto the robot's intelligent inspection functions.
With SLAM navigation, the robot localizes itself using its internal sensors (encoders, IMU, etc.) and external sensors (a laser or vision sensor), and incrementally builds an environment map from the information acquired by the external sensors. In an AGV using natural-environment navigation, odometry is computed from the encoders and IMU while the robot moves, and an initial pose estimate is obtained through the robot motion model; laser data from the onboard laser sensor is then combined with the observation model (laser scan matching) to correct the pose and obtain an accurate localization; finally, the laser data is added to a grid map on the basis of this accurate pose. Repeating this process as the robot moves through the environment completes the construction of the whole scene map. Once the scene map is built, AGV navigation requires position and path planning on that map. During AGV motion, the odometry is combined with the laser data and matched against the map to continuously obtain the AGV's accurate pose in real time; at the same time, a path is planned from the current position to the task destination (a dynamic or fixed route, slightly different each time) and control commands are sent to the AGV along the planned trajectory so that it travels automatically. Image capture uses a multi-view imaging assembly to acquire two-dimensional and three-dimensional images of the target simultaneously.
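By way of illustration only, the following Python/NumPy sketch runs one iteration of the predict, correct and map-update loop described above with synthetic values; the scan-matching result, the blending weight and the grid parameters are placeholders and do not reproduce the navigation software actually used.

```python
import numpy as np

def predict_pose(pose, d_odom):
    """Motion-model prediction: apply an odometry increment (dx, dy, dtheta),
    measured in the robot frame by the encoders and IMU, to the world-frame pose."""
    x, y, th = pose
    dx, dy, dth = d_odom
    return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                     y + dx * np.sin(th) + dy * np.cos(th),
                     th + dth])

def correct_pose(predicted, scan_match_pose, weight=0.8):
    """Blend the motion-model prediction with the pose returned by laser scan
    matching; the fixed weight is an illustrative placeholder, not a tuned value."""
    return (1.0 - weight) * predicted + weight * scan_match_pose

def update_grid(grid, pose, scan_xy, resolution=0.05, origin=(-10.0, -10.0)):
    """Transform laser hit points from the robot frame to the world frame with the
    corrected pose and mark the corresponding occupancy-grid cells as occupied."""
    x, y, th = pose
    for px, py in scan_xy:
        wx = x + px * np.cos(th) - py * np.sin(th)
        wy = y + px * np.sin(th) + py * np.cos(th)
        i = int((wx - origin[0]) / resolution)
        j = int((wy - origin[1]) / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1
    return grid

# One iteration of the predict / correct / map-update loop (all values synthetic).
grid = np.zeros((400, 400), dtype=np.uint8)
pose = np.array([0.0, 0.0, 0.0])
pose = predict_pose(pose, d_odom=(0.10, 0.00, 0.02))                      # encoders + IMU
pose = correct_pose(pose, scan_match_pose=np.array([0.09, 0.01, 0.021]))  # laser matching
grid = update_grid(grid, pose, scan_xy=[(1.0, 0.5), (1.0, 0.6)])          # grid mapping
```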
The system platform acquires the inspection images and identifies them: the target recognition and detection scheme combines judgment by machine-vision identification of image features with fault analysis by a deep learning method. As shown in fig. 2, the deep-learning workflow is as follows: (1) the AI open platform learns from a small number of pictures; (2) the AI open platform issues a preliminary model to the industry application platform; (3) the model is deployed on the industry application platform; (4) edge devices such as the inspection robot capture pictures and the industry application platform obtains them; (5) the industry application platform outputs the pictures; (6) the output pictures are checked manually; (7) the manually verified results are continuously fed back into the AI open platform for training and validation. Through this optimization and upgrading of the recognition algorithm and model, the recognition accuracy can meet the technical requirements.
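As a toy illustration of the closed loop in fig. 2, the sketch below cycles capture, platform output, manual check and retraining; every function, threshold and file name is a hypothetical stand-in, since the patent does not disclose the interfaces of the AI open platform or the industry application platform.

```python
import random

def train_model(samples):
    """Stand-in for the AI open platform's training step (hypothetical)."""
    return {"version": len(samples), "threshold": 0.5}

def run_inference(model, image_name):
    """Stand-in for industry-platform inference on a captured inspection image."""
    return {"image": image_name, "defect": random.random() > model["threshold"]}

def manual_review(prediction):
    """Stand-in for the manual check of step 6; here it simply accepts the output."""
    return {"image": prediction["image"], "label": prediction["defect"]}

# Closed-loop model improvement following the flow of fig. 2 (all data is synthetic).
training_set = [{"image": f"seed_{i}.jpg", "label": False} for i in range(5)]
model = train_model(training_set)                                # steps 1-3: train and deploy
for batch in range(3):                                           # repeated inspection runs
    captured = [f"run{batch}_{i}.jpg" for i in range(4)]         # step 4: robot snapshots
    outputs = [run_inference(model, name) for name in captured]  # step 5: platform output
    reviewed = [manual_review(out) for out in outputs]           # step 6: manual check
    training_set.extend(reviewed)                                # step 7: feed back
    model = train_model(training_set)                            # retrain on verified data
print(model["version"])
```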
Image feature identification is based on object feature recognition, i.e. verification using marker features or behavioural features that belong to the inspected object and uniquely identify it. Object feature recognition comprises image acquisition, image detection, image preprocessing, image feature extraction, and image matching and recognition:
1) Image acquisition: a video or image containing the object is captured by a front-end camera, based on the object's characteristics.
2) Image detection: the object image contains rich pattern features, such as histogram features, colour features, template features, structural features, Haar features and the like; image detection uses this feature information to detect the object in the input video or image and to accurately locate its position and size.
3) Image preprocessing: based on the detection result, the selected image is optimized with intelligent algorithms such as grey correction and noise filtering to produce an optimal image that serves the feature extraction stage. Preprocessing mainly includes light compensation, grey-scale transformation, histogram equalization, normalization, geometric correction, filtering and so on.
4) Image feature extraction: features usable for image recognition are generally classified into visual features, pixel statistical features, image transform coefficient features, image algebraic features and the like. Feature extraction targets certain features of the object and is usually implemented with knowledge-based characterization methods, which derive feature data useful for classification from the shape description of the inspected objects and the distances between them; typical feature components include the Euclidean distance, curvature and angle between feature points. The geometric description of parts and the structural relationships between parts can serve as important features of the recognized object.
5) Image matching and recognition: the extracted feature data is searched and matched against the feature templates stored in the database; a threshold is set, and when the deviation exceeds the threshold the matching result is output. Recognition compares the object features to be recognized with the stored feature template and judges the image's identity from the similarity. This process falls into two categories: confirmation, which is a one-to-one image comparison, and identification, which is a one-to-many image matching comparison.
The first step of the image recognition process is to obtain a source image from the front-end acquisition equipment and obtain an image containing the object by detection; the image is then preprocessed (e.g. normalization, wavelet decomposition) to filter out external interference such as illumination and mud spots while retaining the most essential parts of the object, which are the most useful for feature extraction. A feature extraction algorithm is then applied to the preprocessed image, and finally a matching result is obtained by comparison with the test image.
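A minimal sketch of such an acquire, preprocess, extract and match pipeline is given below, using OpenCV with ORB features and brute-force matching as generic stand-ins; the patent does not fix a particular feature extractor, and the file names and thresholds are placeholders.

```python
import cv2

def preprocess(path):
    """Grayscale load, histogram equalisation and light denoising, standing in for the
    grey correction / filtering steps described above (file paths are placeholders)."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image = cv2.equalizeHist(image)
    return cv2.GaussianBlur(image, (3, 3), 0)

def matches_template(query_path, template_path, max_distance=50, min_ratio=0.3):
    """Extract ORB features from both images, match them with a brute-force matcher,
    and decide by threshold; both thresholds are illustrative only."""
    query, template = preprocess(query_path), preprocess(template_path)
    orb = cv2.ORB_create()
    _, des_query = orb.detectAndCompute(query, None)
    _, des_template = orb.detectAndCompute(template, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_query, des_template)
    good = [m for m in matches if m.distance < max_distance]
    return len(good) / max(len(matches), 1) >= min_ratio

# True if the captured component image matches the stored template closely enough.
print(matches_template("captured_part.jpg", "template_part.jpg"))
```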
The system platform acquires the inspection images and detects them as follows. Step one, key-component recognition based on deep learning: a deep-learning algorithm recognizes and localizes key components such as front-view bolts, side-view bolts and cable joints, enabling detection based on component state. Step two, foreign-object detection based on deep learning: a deep-learning YOLO classification algorithm detects foreign objects and thereby reduces misjudgement. Step three, rapid three-dimensional imaging based on binocular vision and deep learning: an end-to-end deep-learning algorithm enables rapid binocular three-dimensional measurement, improving three-dimensional detection accuracy and speed and reducing false alarms caused by water stains, illumination changes and the like.
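As an illustration of step two, the sketch below runs a YOLO detector on one captured image using the open-source ultralytics package as a stand-in; the weights file and image path are placeholders, and the patent does not specify which YOLO implementation is used.

```python
from ultralytics import YOLO  # open-source YOLO implementation, used here as a stand-in

# Hypothetical weights fine-tuned on under-floor inspection images; both the weights
# file and the image path are placeholders, not artefacts disclosed in the patent.
model = YOLO("bogie_foreign_object.pt")
results = model("underfloor_view.jpg")

for box in results[0].boxes:                      # one entry per detected foreign object
    class_name = model.names[int(box.cls[0])]
    confidence = float(box.conf[0])
    x1, y1, x2, y2 = (float(v) for v in box.xyxy[0])
    print(f"{class_name}: conf={confidence:.2f}, box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```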
The system platform acquires the inspection images and detects them as follows. For two-dimensional images of underfloor components, the SSIM of the two registered images is computed directly; for two-dimensional images of bogie components, a point-feature method is used for feature detection, feature matching and rejection of mismatches, and the SSIM is used to score the registration. In verification, white areas indicate relatively high similarity, while black or grey areas (similarity below 0.6) indicate relatively low similarity. A small SSIM after registration indicates a large positional difference, which may correspond to a missing or deformed component. The verified algorithm can realize general-purpose anomaly detection; the detection verification result is shown in fig. 3.
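A minimal sketch of the SSIM comparison follows, assuming the two images are already registered and of the same size; it uses scikit-image's structural_similarity, the file names are placeholders, and the 0.6 threshold is the low-similarity level mentioned above.

```python
import cv2
from skimage.metrics import structural_similarity

# Registered reference image and current inspection image (file names are placeholders).
reference = cv2.imread("reference_underfloor.png", cv2.IMREAD_GRAYSCALE)
current = cv2.imread("current_underfloor.png", cv2.IMREAD_GRAYSCALE)

score, ssim_map = structural_similarity(reference, current, full=True)
print(f"global SSIM = {score:.3f}")

# Regions with local SSIM below 0.6 are flagged as candidate missing or deformed parts.
suspect_mask = (ssim_map < 0.6).astype("uint8") * 255
cv2.imwrite("suspect_regions.png", suspect_mask)
```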
The system platform acquires the inspection images and detects them as follows. For three-dimensional images of underfloor components, the three-dimensional contour of bogie components is measured with a binocular camera and structured light, comprising two parts: binocular image matching and three-dimensional contour restoration. Binocular calibration and matching mainly use Zhang's calibration model. As shown in fig. 4, the left image is a binocular calibration shot and the right image is a binocular texture target. The cameras are calibrated with a calibration algorithm, and feature-point matching on the binocular texture images gives more accurate results than a conventional registration algorithm; a comparison with the registration algorithm is shown in fig. 5, and matching based on the binocular camera markedly improves matching accuracy and effect. On this basis, the registered points are triangulated using the binocular calibration result to obtain a large number of matched points, from which the three-dimensional shape is restored; the restoration result is shown in fig. 6.
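The triangulation step can be illustrated as follows with OpenCV; the intrinsic matrix, baseline and matched pixel coordinates below are placeholders standing in for the results of Zhang's calibration and of the feature matching, not measured values.

```python
import numpy as np
import cv2

# Intrinsics and extrinsics would come from Zhang's calibration
# (cv2.calibrateCamera / cv2.stereoCalibrate on chessboard images);
# the numbers below are illustrative placeholders only.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                             # rotation of the right camera w.r.t. the left
T = np.array([[-120.0], [0.0], [0.0]])    # 120 mm baseline along x

# Projection matrices for the left and right cameras.
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([R, T])

# Matched feature points (pixel coordinates) found on the binocular texture images.
pts_left = np.array([[630.0, 700.0, 615.0],
                     [470.0, 480.0, 500.0]])
pts_right = np.array([[590.0, 655.0, 572.0],
                      [470.0, 480.0, 500.0]])

# Triangulate to homogeneous 3-D points and convert to Euclidean coordinates.
points_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
points_3d = (points_h[:3] / points_h[3]).T
print(points_3d)   # one (x, y, z) row per matched point, in the left-camera frame
```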
The system platform acquires the inspection images using layered transmission: the original image data is processed in layers based on layered image encoding, decoding and transmission. The images acquired by the system are high-definition and large, so data transmission is particularly important for automatic detection equipment with high real-time requirements. The robot detection system processes the original image data in layers using a narrow-bandwidth, ultra-high-definition image transmission technique based on layered encoding and decoding. Under limited bandwidth, the useful information is transmitted first, so the user obtains the required data in a short time; high-definition images are then loaded dynamically and asynchronously on demand, reducing the waiting time for viewing images and improving the user experience. The layered transmission principle is shown in fig. 7.
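A minimal sketch of one possible layered split follows, using a Laplacian-pyramid style decomposition: a coarse base layer is produced for transmission first, and detail layers restore the full-resolution image as they arrive. The patent does not disclose the actual codec, so this decomposition is an illustrative assumption.

```python
import cv2
import numpy as np

def build_layers(image, levels=3):
    """Split an image into a coarse base layer plus detail layers (to be sent first
    and later, respectively); image dimensions are assumed divisible by 2**levels."""
    pyramid = [image]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    base = pyramid[-1]
    details = []
    for level in range(levels, 0, -1):
        size = (pyramid[level - 1].shape[1], pyramid[level - 1].shape[0])
        upsampled = cv2.pyrUp(pyramid[level], dstsize=size)
        details.append(pyramid[level - 1].astype(np.int16) - upsampled.astype(np.int16))
    return base, details

def reconstruct(base, details):
    """Progressively restore the full-resolution image as detail layers arrive."""
    image = base
    for detail in details:
        size = (detail.shape[1], detail.shape[0])
        upsampled = cv2.pyrUp(image, dstsize=size).astype(np.int16)
        image = np.clip(upsampled + detail, 0, 255).astype(np.uint8)
    return image

frame = np.random.randint(0, 256, (960, 1280), dtype=np.uint8)  # stand-in for a capture
base, details = build_layers(frame)
restored = reconstruct(base, details)
print(base.shape, restored.shape)   # (120, 160) base layer; full frame after all layers
```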
The system platform acquires the inspection images and detects them as follows: S1, analysing the external condition of the train by texture comparison; S2, monitoring the grey level of region images to judge whether a foreign object is present; S3, learning the appearance structure with a neural-network algorithm and detecting whether the fastener net is loose, missing or displaced; S4, detecting whether the anti-loosening wire has shifted, to judge whether the screw has loosened; and S5, measuring and monitoring the geometric form of components, to detect whether key components are deformed.
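As an illustration of step S2, the sketch below compares the mean grey level of a monitored region against a reference image; the file paths, region of interest and tolerance are placeholders, not values disclosed in the patent.

```python
import cv2
import numpy as np

def region_gray_check(reference_path, current_path, roi, tolerance=20.0):
    """Compare the mean grey level of a monitored region (x, y, w, h) against the
    reference image; a large shift is flagged as a possible foreign object."""
    x, y, w, h = roi
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)[y:y + h, x:x + w]
    cur = cv2.imread(current_path, cv2.IMREAD_GRAYSCALE)[y:y + h, x:x + w]
    shift = abs(float(np.mean(cur)) - float(np.mean(ref)))
    return shift > tolerance   # True: grey level changed enough to raise an alert

print(region_gray_check("reference_underfloor.png", "current_underfloor.png",
                        roi=(200, 150, 120, 80)))
```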
The inspection robot switches between station tracks via a lifting platform, a hydraulic lift provided with a wireless communication module used for wireless communication with the inspection robot. The lifting platform is installed at the end side of the depot berth; it is required to be no smaller than 1500 mm x 1500 mm in length and width, is lifted hydraulically, and must carry a load of no less than 200 kg. Its wireless communication module allows it to communicate wirelessly with the intelligent inspection robot.
The lifting platform is normally in the raised state, flush with the ground. After reaching the designated area, the intelligent inspection robot communicates with the lifting platform to check that its state is normal, then moves onto the platform and, once in position, sends a command so that the platform descends to the bottom of the trench. When the descent is complete, the platform reports that it has finished, the inspection robot drives off the platform, and the platform, on receiving the robot's command, returns to the raised state. The robot then switches to the trench map and begins the scheduled inspection of the vehicle underframe.
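The handshake can be sketched as a small state machine; the message names below are invented for illustration, since the patent does not define a wireless message format.

```python
from enum import Enum, auto

class PlatformState(Enum):
    RAISED = auto()    # flush with the ground, ready to receive the robot
    LOWERED = auto()   # at the bottom of the maintenance trench

class LiftPlatform:
    """Toy model of the lift platform's side of the wireless handshake."""
    def __init__(self):
        self.state = PlatformState.RAISED

    def handle(self, message):
        if message == "QUERY_STATE":
            return self.state.name
        if message == "DESCEND" and self.state == PlatformState.RAISED:
            self.state = PlatformState.LOWERED        # hydraulics omitted in this sketch
            return "DESCEND_DONE"
        if message == "RAISE" and self.state == PlatformState.LOWERED:
            self.state = PlatformState.RAISED
            return "RAISE_DONE"
        return "REJECTED"

# The robot's side of the track-switching sequence described above.
platform = LiftPlatform()
assert platform.handle("QUERY_STATE") == "RAISED"    # robot checks the platform is ready
assert platform.handle("DESCEND") == "DESCEND_DONE"  # robot in position, ride down to the trench
# ...the robot drives off into the trench, then releases the platform...
assert platform.handle("RAISE") == "RAISE_DONE"      # platform returns flush with the ground
```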
The mechanical arm 2 is a single-arm mechanical arm comprising no fewer than six rotating shafts 3. The arm is a six-degree-of-freedom collaborative arm; each degree of freedom can rotate through 360 degrees, ensuring the range and speed of motion while guaranteeing the safety of human-robot collaboration. The inspection robot has a miniaturized design, can make use of the track bridge gaps of the maintenance trench, and can move and carry out maintenance freely between the trench and the ground and between different trenches. When several robots (up to 20) operate in one application scene, the control and scheduling program can assign them arbitrarily to inspect a train simultaneously, with each robot responsible for one or more carriages, improving the efficiency of daily train maintenance.
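For illustration, a simple round-robin assignment of carriages to robots is sketched below; the actual control and scheduling program is not disclosed in the patent, so this is only a stand-in, and the robot and carriage names are placeholders.

```python
def assign_carriages(robots, carriages):
    """Round-robin assignment of carriages to available robots."""
    plan = {robot: [] for robot in robots}
    for index, carriage in enumerate(carriages):
        plan[robots[index % len(robots)]].append(carriage)
    return plan

robots = [f"robot_{i}" for i in range(1, 4)]          # 3 of a possible 20 robots
carriages = [f"carriage_{i}" for i in range(1, 7)]    # a 6-carriage train
for robot, assigned in assign_carriages(robots, carriages).items():
    print(robot, assigned)
```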
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the invention, so that any modifications, equivalents, improvements and the like, which are within the spirit and principle of the present invention, should be included in the scope of the present invention.

Claims (10)

1. An intelligent rail vehicle inspection robot, comprising a system platform and an inspection robot that interact over a wireless network, a mobile base (4) being arranged at the lower part of the inspection robot, characterized in that: the inspection robot is provided with an image acquisition system, a safety protection system and an automatic positioning system;
the image acquisition system is used for acquiring inspection images of the rail vehicle and comprises a mechanical arm (2) and a two-dimensional image acquisition assembly (1) and/or a three-dimensional image acquisition assembly carried on the mechanical arm (2); the mechanical arm is a six-degree-of-freedom mechanical arm; the two-dimensional image acquisition assembly comprises an LED lamp and a binocular camera; the three-dimensional image acquisition assembly comprises a projector and a binocular camera;
the safety protection system comprises ultrasonic radars arranged around the mobile base of the inspection robot for scanning surrounding obstacles; and the automatic positioning system uses an SLAM navigation system.
2. The intelligent rail vehicle inspection robot according to claim 1, wherein: the system platform acquires the inspection images and identifies them as follows: the target recognition and detection scheme combines judgment by machine-vision identification of image features with fault analysis by a deep learning method.
3. The intelligent rail vehicle inspection robot according to claim 2, wherein: the image feature identification is based on object feature recognition, i.e. verification using marker features or behavioural features that belong to the inspected object and uniquely identify it; object feature recognition comprises image acquisition, image detection, image preprocessing, image feature extraction, and image matching and recognition.
4. The intelligent rail vehicle inspection robot according to claim 1, wherein: the system platform acquires the inspection images and detects them by the following steps:
step one, key-component recognition based on deep learning: a deep-learning algorithm recognizes and localizes key components such as front-view bolts, side-view bolts and cable joints, enabling detection based on component state;
step two, foreign-object detection based on deep learning: a deep-learning YOLO classification algorithm detects foreign objects and thereby reduces misjudgement;
and step three, rapid three-dimensional imaging based on binocular vision and deep learning: an end-to-end deep-learning algorithm enables rapid binocular three-dimensional measurement, improving three-dimensional detection accuracy and speed and reducing false alarms caused by water stains, illumination changes and the like.
5. The intelligent rail vehicle inspection robot according to claim 1, wherein: the system platform acquires the inspection images and detects them as follows:
for the detection of two-dimensional images of underfloor components, the SSIM of the two registered images is computed directly; for two-dimensional images of bogie components, a point-feature method is used for feature detection, feature matching and rejection of mismatches, and the SSIM is used to score the registration result.
6. The intelligent rail vehicle inspection robot according to claim 1, wherein: the system platform acquires the inspection images and detects them as follows:
for the detection of three-dimensional images of underfloor components, the three-dimensional contour of bogie components is measured with a binocular camera and structured light, comprising binocular image matching and three-dimensional contour restoration; binocular calibration and matching mainly use Zhang's calibration model; the cameras are calibrated with a calibration algorithm, feature-point matching is performed on the binocular texture images, and the registered points are triangulated using the binocular calibration result to obtain a large number of matched points from which the three-dimensional shape is restored.
7. The intelligent rail vehicle inspection robot according to claim 1, wherein: the system platform acquires the inspection images using layered transmission, the original image data of the inspection images being processed in layers based on layered image encoding, decoding and transmission.
8. The intelligent rail vehicle inspection robot according to claim 1, wherein: the system platform acquires the inspection images and detects them by the following steps:
S1, analysing the external condition of the train by texture comparison;
S2, monitoring the grey level of region images to judge whether a foreign object is present;
S3, learning the appearance structure with a neural-network algorithm and detecting whether the fastener net is loose, missing or displaced;
S4, detecting whether the anti-loosening wire has shifted, to judge whether the screw has loosened;
and S5, measuring and monitoring the geometric form of components, to detect whether key components are deformed.
9. The intelligent rail vehicle inspection robot according to claim 1, wherein: the inspection robot switches between station tracks via a lifting platform; the lifting platform is a hydraulic lift provided with a wireless communication module, and the wireless communication module is used for wireless communication with the inspection robot.
10. The intelligent rail vehicle inspection robot according to claim 1, wherein: the mechanical arm (2) is a single-arm mechanical arm comprising no fewer than six rotating shafts (3).
Application CN202210244398.8A, priority date 2022-03-14, filed 2022-03-14: Intelligent railway vehicle inspection robot. Published as CN114735044A; status: Pending.

Priority Applications (1)

Application number: CN202210244398.8A; priority date: 2022-03-14; filing date: 2022-03-14; title: Intelligent railway vehicle inspection robot

Applications Claiming Priority (1)

Application number: CN202210244398.8A; priority date: 2022-03-14; filing date: 2022-03-14; title: Intelligent railway vehicle inspection robot

Publications (1)

Publication number: CN114735044A; publication date: 2022-07-12

Family

ID=82275688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210244398.8A Pending CN114735044A (en) 2022-03-14 2022-03-14 Intelligent railway vehicle inspection robot

Country Status (1)

Country Link
CN (1) CN114735044A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116090802A (en) * 2023-04-12 2023-05-09 成都盛锴科技有限公司 Train inspection task intelligent distribution and scheduling system oriented to vehicle bottom part identification


Similar Documents

Publication Publication Date Title
CN112418103B (en) Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision
CN1315715C (en) Camera for monitoring escalator and mobile footway
CN103837087B (en) Pantograph automatic testing method based on active shape model
CN109238756B (en) Dynamic image detection equipment and detection method for freight car operation fault
CN109840900A (en) A kind of line detection system for failure and detection method applied to intelligence manufacture workshop
CN108805868B (en) Image processing method and fault detection method for fault detection of running gear equipment under electric vehicle carrying vehicle
CN113822840A (en) Vehicle bottom inspection system and method
CN111292294A (en) Method and system for detecting abnormality of in-warehouse bottom piece
CN112528979B (en) Transformer substation inspection robot obstacle distinguishing method and system
CN115482195B (en) Train part deformation detection method based on three-dimensional point cloud
CN114037703B (en) Subway valve state detection method based on two-dimensional positioning and three-dimensional attitude calculation
CN112634269B (en) Railway vehicle body detection method
CN102622614A (en) Knife switch closing reliability judging method based on distance between knife switch arm feature point and fixing end
CN114735044A (en) Intelligent railway vehicle inspection robot
CN110667726A (en) Four-foot walking inspection robot applied to subway train inspection warehouse
CN110509272B (en) Vehicle inspection method and system and composite inspection robot
CN111855667A (en) Novel intelligent train inspection system and detection method suitable for metro vehicle
CN105068139B (en) A kind of characterization processes of piston cooling nozzle installment state
Di Stefano et al. Automatic 2D-3D vision based assessment of the attitude of a train pantograph
CN117369460A (en) Intelligent inspection method and system for loosening faults of vehicle bolts
CN115115768A (en) Object coordinate recognition system, method, device and medium based on stereoscopic vision
CN110696016A (en) Intelligent robot suitable for subway vehicle train inspection work
CN115857040A (en) Dynamic visual detection device and method for foreign matters on locomotive roof
CN114905512A (en) Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot
CN116486287A (en) Target detection method and system based on environment self-adaptive robot vision system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220712