CN113222907A - Detection robot based on bend rail - Google Patents
- Publication number
- CN113222907A (application CN202110439768.9A)
- Authority
- CN
- China
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30136—Metal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
Abstract
The invention discloses a detection robot based on a curved rail, comprising a chassis; a first camera rotatably connected to one side of the chassis and capable of photographing images of one side of the rail; a second camera rotatably connected to the other side of the chassis and capable of photographing images of the other side of the rail; a laser radar navigation module detachably connected to the chassis and used for planning the traveling path of the chassis; a traveling mechanism rotatably connected to the bottom of the chassis; and a controller in communication connection with the laser radar navigation module, the first camera and the second camera, capable of analyzing the images and identifying defects of the rail. As the detection robot travels along the rail, it collects images of both sides of the rail; the invention further discloses a method by which the controller analyzes the images and identifies rail defects, realizing automatic detection of the rail at curves, reducing labor cost, and achieving high detection performance.
Description
Technical Field
The invention relates to the technical field of rail detection, in particular to a detection robot based on a bent rail.
Background
With the development of railway technology, train speeds have increased and quality requirements for rails have risen accordingly. Railways in many countries have adopted high-speed, heavy-load, high-density transportation modes, which worsen the service conditions of rails and cause rail defects; curved rails in particular are prone to wear under the centripetal force generated by passing trains. To improve the inspection efficiency and defect-detection capability for curved rails, it is necessary to provide an inspection robot based on the curved rail.
Disclosure of Invention
The invention discloses a detection robot based on a curved rail, which comprises a chassis, a first camera, a second camera, a laser radar navigation module, a traveling mechanism and a controller. The detection robot moves on the rail, images on two sides of the rail are collected, the controller analyzes the images to identify the defects of the rail, and automatic detection of the rail on the curve is achieved.
The technical scheme of the invention is as follows:
a curve rail-based inspection robot comprising:
a chassis;
a first camera rotatably coupled to one side of the chassis and capable of photographing an image of the rail side;
the second camera is rotatably connected with the other side of the chassis and can shoot images of the other side of the rail;
the laser radar navigation module is detachably connected with the chassis and used for planning the walking path of the chassis;
the traveling mechanism is rotatably connected with the bottom of the chassis;
and the controller is in communication connection with the laser radar navigation module, the first camera and the second camera, and can analyze the images and identify the defects of the rail.
Preferably, the traveling mechanism includes:
a first suspension elastically supported at one side of the chassis;
a second suspension elastically supported at the other side of the chassis;
a first rotating wheel rotatably connected to the first suspension;
a second rotating wheel rotatably connected to the second suspension;
and the pressure instrument is connected with the first rotating wheel and the second rotating wheel and can monitor the bearing pressure of the first rotating wheel and the second rotating wheel.
Preferably, the first and second suspensions each include:
a first cantilever;
one end of the second cantilever is hinged with one end of the first cantilever;
and a spring elastically supported between the first and second cantilever arms.
Preferably, the first rotating wheel and the second rotating wheel each include:
the front rotating wheel is rotatably connected with the other end of the first cantilever;
and the rear rotating wheel is rotatably connected with the other end of the second cantilever.
Preferably, the rail defect marking device further comprises a marker which is detachably connected with the chassis and is used for marking the defect of the rail.
Preferably, still include solar cell panel, it and first camera, second camera, lidar navigation module, running gear and controller electric connection.
Preferably, the defect identification comprises the steps of:
step one, obtaining images on two sides of a rail, and screening out candidate blocks;
step two, performing rail defect edge detection on the candidate blocks;
and step three, extracting the defect outline by adopting a Freeman chain code, and identifying the defect type.
Preferably, the process of screening out the candidate blocks is as follows:
step a, performing gray scale output on the images on the two sides of the rail,
V_gray = 0.28R + 0.51G + 0.15B
where V_gray represents the gray value, R the red channel, G the green channel, and B the blue channel;
b, denoising by adopting a median filtering method,
r_k = med(r_1, r_2, r_3, ..., r_n)
where r_k represents the filtered pixel value, med the median operation, and r_1, r_2, r_3, ..., r_n all pixel points in the neighborhood window of a single pixel point;
step c, carrying out space coordinate conversion calibration on the image to obtain a calibration image,
where x represents the horizontal-axis calibration value, y the vertical-axis calibration value, f and μ the calibration parameters, X the horizontal-axis coordinate value of the space coordinate, Y the vertical-axis coordinate value, Z the depth-axis coordinate value, M the distance between the camera and the track, a the pressure on the front rotating wheel, b the pressure on the rear rotating wheel, ρ the rotating speed of the wheel, ρ_0 the average speed of the wheel, ω the yaw rate, σ the standard deviation, v_x the lateral acceleration, and v_y the vertical acceleration;
Step d, dividing the calibration image into a plurality of blocks, acquiring the contrast of the blocks,
the formula for calculating the contrast is:
where D denotes the contrast, ξ = m·n with m the number of horizontal pixels and n the number of vertical pixels, i denotes a single pixel, t the vertical deviation, W_T the color offset depth, W_S the texture offset depth, λ the horizontal deviation, and φ(p*) the gray level;
step e, using the block with the contrast larger than the contrast threshold as the candidate block,
the contrast threshold is calculated as:
where D* represents the contrast threshold, η the evaluation coefficient, and X_i an edge feature or a region feature.
preferably, the edge detection includes the following processes:
calculating the gradient amplitude in a 3 × 3 neighborhood, where the amplitude calculation formulas are:
E(x, y) = E_x²(x, y) + E_y²(x, y);
E*(x, y) = E_45°²(x, y) + E_135°²(x, y)
where E denotes the gradient amplitude along the horizontal and vertical directions, E* along the diagonal directions, E_x the horizontal amplitude, E_y the vertical amplitude, E_45° the amplitude in the 45° direction, and E_135° the amplitude in the 135° direction;
the image is divided into a foreground and a background, the between-class variance of the foreground and the background is calculated, and the calculation formula of the variance is as follows:
where τ² represents the between-class variance, the distribution probability being taken over all pixels in the gray-value interval (k−n, k+n), a denotes the setting parameter, p_1 the probability of the foreground region, p_2 the probability of the background region, m_1 the mean gray value of the foreground region, m_2 the mean gray value of the background region, and m_G the global mean gray value;
taking the maximum value of the inter-class variance as a threshold value, carrying out image segmentation, and outputting an image gray level histogram;
and extracting straight line edges according to the histogram.
Preferably, the extracting the defect contour includes the following processes:
respectively setting 8 adjacent pixel points of a single pixel point to be 0-7;
taking the lower-left corner as the starting point of chain-code extraction, proceeding rightward column by column and upward row by row, extracting the chain codes and generating binary curve tracks;
traversing pixel points in the straight line edge, and counting the repetition times of code values 1, 2 and 3 on different curve tracks;
and extracting the line segment with the maximum occurrence frequency to obtain a defect outline.
The invention has the beneficial effects that:
1. according to the invention, the detection robot travels on the rail, the images of the two sides of the rail are acquired, and the controller analyzes the images to identify the defects of the rail, so that the automatic detection of the rail on the curve is realized.
2. According to the invention, the solar cell panel is arranged on the robot chassis, so that when rail detection is carried out under the condition of sunshine, solar energy can be converted into electric energy through the solar cell panel, and the electric energy is provided for each electronic component of the robot, so that the energy is saved.
3. According to the invention, the marker is arranged on the robot chassis, and when the rail defect is detected, the marker is marked at the rail position, so that the rail is convenient to replace and maintain.
4. The invention also provides a method for detecting the rail defect by the robot, which can detect the rail defect in real time in the moving process of the robot and has high detection efficiency.
Drawings
Fig. 1 is a schematic structural diagram of a detection robot based on a curved track according to the present invention.
FIG. 2 is a schematic view of the bottom structure of the detection robot based on a curved rail according to the present invention.
Fig. 3 is a schematic structural view of the traveling mechanism in an embodiment of the present invention.
Fig. 4 is a schematic diagram of the suspension and turning wheel mechanism in one embodiment of the invention.
FIG. 5 is a flow chart of image analysis for identifying rail defects in accordance with an embodiment of the present invention.
Fig. 6 is a flowchart illustrating candidate block filtering according to an embodiment of the present invention.
FIG. 7 is a flow chart of edge detection in an embodiment of the invention.
FIG. 8 is a flow chart of defect contour extraction in an embodiment of the present invention.
Detailed Description
The present invention is described through particular embodiments; other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. It is to be understood that the described embodiments are merely exemplary and are not intended to limit the invention to the particular forms disclosed. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
It should be noted that in the description of the present invention, the terms "in", "upper", "lower", "lateral", "inner", etc. indicate directions or positional relationships based on those shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Furthermore, it should be noted that, in the description of the present invention, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; may be a mechanical connection; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
As shown in fig. 1-2, a curved track-based inspection robot includes a chassis 100, a first camera 200, a second camera 300, a lidar navigation module 400, a traveling mechanism 500, and a controller 600.
The first camera 200 is rotatably connected to one side of the chassis and can photograph images of one side of the rail; the second camera 300 is rotatably connected to the other side of the chassis and can photograph images of the other side of the rail. The laser radar navigation module 400 is detachably connected to the chassis and plans the traveling path of the chassis; the traveling mechanism 500 is rotatably connected to the bottom of the chassis. The controller 600 is in communication connection with the laser radar navigation module 400, the first camera 200 and the second camera 300, and can analyze the images and identify defects of the rail.
The laser radar navigation module 400 plans the route, the traveling mechanism 500 rotates, and the detection robot travels along the rail. The first camera 200 and the second camera 300 acquire images of both sides of the rail and upload them to the controller 600, which analyzes the images and identifies rail defects, realizing automatic detection of the curved rail.
Further, as shown in fig. 3, the traveling mechanism 500 includes a first suspension 510, a second suspension 520, a first rotating wheel 530, a second rotating wheel 540, and a pressure gauge 550.
The first suspension 510 is elastically supported on one side of the chassis 100, the second suspension 520 is elastically supported on the other side of the chassis 100, the first rotating wheel 530 is rotatably connected with the first suspension 510, and the second rotating wheel 540 is rotatably connected with the second suspension 520. The pressure gauge 550 is connected with the first rotating wheel 530 and the second rotating wheel 540 and can monitor the pressure borne by each.
The first suspension 510 and the second suspension 520 have the same structure, and as shown in fig. 4, the first suspension 510 includes, for example, a first suspension arm 511, a second suspension arm 512, and a spring 513. One end of the second suspension arm 512 is hinged to one end of the first suspension arm 511, and the spring 513 is elastically supported between the first suspension arm 511 and the second suspension arm 512.
The first rotating wheel 530 and the second rotating wheel 540 have the same structure, and each includes a front rotating wheel 531 and a rear rotating wheel 532. The front wheel 531 is rotatably connected to the other end of the first cantilever 511, and the rear wheel 532 is rotatably connected to the other end of the second cantilever 512. As the robot advances, the front wheel 531 and the rear wheel 532 bear load and the spring 513 extends and contracts, which on the one hand absorbs shock; on the other hand, when the robot passes through a curved rail, the forces on the front wheel 531 and the rear wheel 532 differ. The pressure gauge 550 detects the pressures on the two wheels in real time, and this pressure difference allows the rail images collected by the cameras to be calibrated in the space coordinate system, accurately recovering the actual projection of the rail for subsequent defect identification.
Further, the robot includes a marker 700 detachably connected with the chassis 100 and used for marking defects on the rail, which can greatly improve the efficiency of rail replacement and repair.
Further, each camera includes a motorized pan-tilt head and an infrared camera connected to it; the pan-tilt head is rotatably connected to the chassis 100, enabling rotational shooting of the camera with a high degree of freedom.
Further, the robot includes a solar cell panel 800 electrically connected with the first camera 200, the second camera 300, the lidar navigation module 400, the traveling mechanism 500 and the controller 600. When rail detection is performed in sunshine, the solar panel converts solar energy into electric energy and supplies it to the electronic components of the detection robot, saving energy.
Further, the defect identification comprises the following steps:
s110, obtaining images of two sides of a rail, and screening out candidate blocks;
s120, detecting the rail defect edge of the candidate block;
s130, extracting the defect outline by adopting a Freeman chain code, and identifying the defect type.
Further, the process of screening out the candidate blocks is as follows:
s111, performing gray-scale output on the images on the two sides of the rail,
V_gray = 0.28R + 0.51G + 0.15B
where V_gray represents the gray value, R the red channel, G the green channel, and B the blue channel;
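Step s111's weighted conversion can be sketched as a minimal NumPy illustration using the coefficients from the formula above (note they differ from the common ITU-R BT.601 weights 0.299/0.587/0.114):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Grayscale output V_gray = 0.28R + 0.51G + 0.15B,
    applied per pixel over an H x W x 3 RGB image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.28 * r + 0.51 * g + 0.15 * b

# a 1 x 2 test image: one white pixel, one black pixel
img = np.array([[[255.0, 255.0, 255.0], [0.0, 0.0, 0.0]]])
gray = to_gray(img)  # white maps to 0.94 * 255 = 239.7, black to 0.0
```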
s112, denoising by adopting a median filtering method,
r_k = med(r_1, r_2, r_3, ..., r_n)
where r_k represents the filtered pixel value, med the median operation, and r_1, r_2, r_3, ..., r_n all pixel points in the neighborhood window of a single pixel point;
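Step s112's denoising can be sketched as a plain NumPy median filter. The patent does not specify the window size or border handling, so a 3 × 3 window with reflective padding is assumed here:

```python
import numpy as np

def median_filter(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Median filtering r_k = med(r_1, ..., r_n) over each pixel's
    k x k neighborhood; borders are handled by reflective padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

noisy = np.array([[10.0, 10.0, 10.0],
                  [10.0, 255.0, 10.0],   # salt noise in the center
                  [10.0, 10.0, 10.0]])
clean = median_filter(noisy)             # the 255 outlier is suppressed
```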
s113, carrying out space coordinate conversion calibration on the image to obtain a calibration image,
where x represents the horizontal-axis calibration value, y the vertical-axis calibration value, f and μ the calibration parameters, X the horizontal-axis coordinate value of the space coordinate, Y the vertical-axis coordinate value, Z the depth-axis coordinate value, M the distance between the camera and the track, a the pressure on the front rotating wheel, b the pressure on the rear rotating wheel, ρ the rotating speed of the wheel, ρ_0 the average speed of the wheel, ω the yaw rate, σ the standard deviation, v_x the lateral acceleration, and v_y the vertical acceleration;
s114, dividing the calibration image into a plurality of blocks, acquiring the contrast of the blocks,
the formula for calculating the contrast is:
where D denotes the contrast, ξ = m·n with m the number of horizontal pixels and n the number of vertical pixels, i denotes a single pixel, t the vertical deviation, W_T the color offset depth, W_S the texture offset depth, λ the horizontal deviation, and φ(p*) the gray level;
s115, taking the block with the contrast larger than the contrast threshold as a candidate block,
the contrast threshold is calculated as:
where D* represents the contrast threshold, η the evaluation coefficient, and X_i an edge feature or a region feature.
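Steps S114–S115 can be illustrated with a minimal block-screening sketch. The patent's exact contrast and threshold formulas did not survive extraction, so RMS contrast (the block's gray-level standard deviation) and a fixed threshold D* are stand-ins, assumed purely for illustration:

```python
import numpy as np

def screen_candidate_blocks(gray: np.ndarray, block: int = 32,
                            d_star: float = 20.0):
    """Divide the calibrated image into blocks and keep those whose
    contrast D exceeds the threshold D*. RMS contrast stands in for
    the patent's unrecovered contrast formula."""
    candidates = []
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = gray[y:y + block, x:x + block]
            if blk.std() > d_star:          # D > D* -> candidate block
                candidates.append((y, x))
    return candidates

# flat left block (no texture) vs. high-contrast striped right block
gray = np.zeros((32, 64))
gray[::2, 32:] = 255.0
found = screen_candidate_blocks(gray)       # only the right block passes
```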
further, the edge detection includes the following processes:
s121, calculating the gradient amplitude in a 3 x 3 neighborhood, wherein the amplitude calculation formula is as follows:
E(x, y) = E_x²(x, y) + E_y²(x, y);
E*(x, y) = E_45°²(x, y) + E_135°²(x, y)
where E denotes the gradient amplitude along the horizontal and vertical directions, E* along the diagonal directions, E_x the horizontal amplitude, E_y the vertical amplitude, E_45° the amplitude in the 45° direction, and E_135° the amplitude in the 135° direction;
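Step s121 can be sketched as follows. The patent does not name the 3 × 3 operator, so simple central differences along the four directions are an assumed choice for illustration:

```python
import numpy as np

def gradient_amplitudes(gray: np.ndarray):
    """E(x,y) = E_x^2 + E_y^2 and E*(x,y) = E_45^2 + E_135^2, with
    the directional amplitudes taken as central differences over
    each interior pixel's 3 x 3 neighborhood."""
    g = gray.astype(np.float64)
    ex = g[1:-1, 2:] - g[1:-1, :-2]     # horizontal amplitude E_x
    ey = g[2:, 1:-1] - g[:-2, 1:-1]     # vertical amplitude E_y
    e45 = g[2:, 2:] - g[:-2, :-2]       # 45-degree amplitude
    e135 = g[2:, :-2] - g[:-2, 2:]      # 135-degree amplitude
    return ex**2 + ey**2, e45**2 + e135**2

# vertical step edge: columns 0-1 are 0, columns 2-3 are 1
step = np.zeros((3, 4))
step[:, 2:] = 1.0
e, e_star = gradient_amplitudes(step)   # both amplitudes fire at the edge
```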
s122, dividing the image into a foreground and a background, and calculating the between-class variance of the foreground and the background, wherein the calculation formula of the variance is as follows:
where τ² represents the between-class variance, the distribution probability being taken over all pixels in the gray-value interval (k−n, k+n), a denotes the setting parameter, p_1 the probability of the foreground region, p_2 the probability of the background region, m_1 the mean gray value of the foreground region, m_2 the mean gray value of the background region, and m_G the global mean gray value;
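Steps S122–S123 follow the classical Otsu approach: sweep candidate thresholds and keep the one maximizing the between-class variance. The sketch below uses the standard two-class form τ² = p₁(m₁ − m_G)² + p₂(m₂ − m_G)²; the patent's exact expression (its windowed probability term and setting parameter a) was lost in extraction, so this is an assumed stand-in:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Pick the threshold k maximizing the between-class variance of
    foreground (gray levels below k) vs. background (k and above)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                    # gray-level probabilities
    levels = np.arange(256)
    m_g = float(np.sum(levels * p))          # global mean gray value m_G
    best_k, best_var = 0, -1.0
    for k in range(1, 256):
        p1 = p[:k].sum()                     # foreground probability p_1
        p2 = 1.0 - p1                        # background probability p_2
        if p1 == 0.0 or p2 == 0.0:
            continue
        m1 = float(np.sum(levels[:k] * p[:k])) / p1   # foreground mean
        m2 = float(np.sum(levels[k:] * p[k:])) / p2   # background mean
        var = p1 * (m1 - m_g) ** 2 + p2 * (m2 - m_g) ** 2
        if var > best_var:
            best_k, best_var = k, var
    return best_k

# bimodal image: the chosen threshold separates the two gray levels
bimodal = np.array([50] * 100 + [200] * 100)
k = otsu_threshold(bimodal)
```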
s123, carrying out image segmentation by taking the maximum value of the inter-class variance as a threshold value, and outputting an image gray histogram;
and S124, extracting a straight line edge according to the histogram.
Further, the defect contour extraction includes the following processes:
s131, respectively setting 8 adjacent pixel points of a single pixel point to be 0-7;
s132, taking the lower-left corner as the starting point of chain-code extraction, proceeding rightward column by column and upward row by row, extracting the chain codes and generating binary curve tracks;
s133, traversing pixel points in the straight line edge, and counting the repetition times of code values 1, 2 and 3 on different curve tracks;
s134, the line segment with the largest occurrence frequency is extracted to obtain a defect outline.
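Steps s131–s134 can be sketched as follows. The code numbers the 8 neighbors of a pixel 0–7 in the Freeman convention (code 0 pointing east, counter-clockwise) and encodes each move along a curve track as one code value; tallying codes 1, 2 and 3 then mirrors step s133. The specific track below is illustrative only:

```python
# Freeman 8-direction chain code: (row, col) offsets for codes 0-7,
# with code 0 = east and codes increasing counter-clockwise
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def freeman_chain(points):
    """Encode an ordered list of 8-connected pixels as Freeman codes:
    each step between consecutive pixels emits one code value."""
    return [OFFSETS.index((r1 - r0, c1 - c0))
            for (r0, c0), (r1, c1) in zip(points, points[1:])]

# an illustrative track rising toward the upper-left:
# moves up-right (1), up (2), up-left (3)
track = [(5, 0), (4, 1), (3, 1), (2, 0)]
codes = freeman_chain(track)
counts = {c: codes.count(c) for c in (1, 2, 3)}  # step s133 tally
```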
The invention provides a working process of a detection robot based on a curved rail, which comprises the following steps:
the laser radar navigation module 400 performs route planning, the traveling mechanism 500 rotates, the detection robot travels on a rail, the first camera 200 and the second camera 300 acquire images of two sides of the rail, the pressure gauge 550 acquires pressure of a rotating wheel and uploads the pressure to the controller 600, the controller 600 analyzes the images and identifies defects of the rail, automatic detection of the bent rail is achieved, if the rail is defective, the marker 700 automatically marks the positions of the defects, meanwhile, in the process of traveling, the solar cell panel 800 converts solar energy into electric energy, the electric energy is provided for all electronic components of the inspection robot, and efficient operation of the inspection robot is achieved.
With the detection robot based on the curved rail described above, the robot travels along the rail and collects images of both sides of the rail, and the controller analyzes the images to identify rail defects, realizing automatic detection of the curved rail with reduced labor cost, high detection efficiency and high accuracy.
In the above embodiments, the technical features may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the technical features.
The above descriptions are only examples of the present invention; common general knowledge such as known specific structures and characteristics is not described herein in detail, and those skilled in the art will readily understand that the scope of the present invention is not limited to these specific embodiments. Several changes and modifications can be made without departing from the invention; these shall also be regarded as falling within the protection scope of the invention and do not affect the effect of the invention or the practicality of the patent.
Claims (10)
1. A bend rail-based inspection robot, comprising:
a chassis;
a first camera rotatably coupled to one side of the chassis and capable of photographing an image of the rail side;
the second camera is rotatably connected with the other side of the chassis and can shoot images of the other side of the rail;
the laser radar navigation module is detachably connected with the chassis and used for planning the walking path of the chassis;
the traveling mechanism is rotatably connected with the bottom of the chassis;
and the controller is in communication connection with the laser radar navigation module, the first camera and the second camera, and can analyze the images and identify defects of the rail.
2. A curve track-based inspection robot as claimed in claim 1, wherein said traveling mechanism comprises:
a first suspension elastically supported at one side of the chassis;
a second suspension elastically supported at the other side of the chassis;
a first rotation wheel rotatably connected to the first suspension;
a second rotating wheel rotatably coupled to the second suspension;
and the pressure instrument is connected with the first rotating wheel and the second rotating wheel and can monitor the bearing pressure of the first rotating wheel and the second rotating wheel.
3. A curve rail based inspection robot as claimed in claim 2, wherein the first and second suspensions each comprise:
a first cantilever;
one end of the second cantilever is hinged with one end of the first cantilever;
a spring elastically supported between the first and second cantilever arms.
4. A curve rail based inspection robot as recited in claim 3, wherein each of the first rotatable wheel and the second rotatable wheel comprises:
the front rotating wheel is rotatably connected with the other end of the first cantilever;
and the rear rotating wheel is rotatably connected with the other end of the second cantilever.
5. A curve track based inspection robot as claimed in claim 4, further comprising a marker removably attached to said chassis for marking defects of said track.
6. A curve track based inspection robot as claimed in claim 5, further comprising a solar panel electrically connected to said first camera, said second camera, said lidar navigation module, said travel mechanism and said controller.
7. A curve track based inspection robot as claimed in claim 6, wherein said defect identification comprises the steps of:
step one, obtaining images on two sides of a rail, and screening out candidate blocks;
step two, performing rail defect edge detection on the candidate blocks;
and step three, extracting the defect contour with a Freeman chain code and identifying the defect type.
8. A curve track based inspection robot as claimed in claim 7, wherein the process of screening out candidate blocks is:
step a, converting the images of the two sides of the rail to grayscale,
Vgray = 0.28R + 0.51G + 0.15B
wherein Vgray represents the gray value, R the red channel, G the green channel, and B the blue channel;
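A minimal sketch of step a's weighted grayscale conversion. The coefficients 0.28/0.51/0.15 are the claim's own (they differ from the standard ITU-R BT.601 weights 0.299/0.587/0.114); the function and variable names are illustrative.

```python
import numpy as np

def to_gray(image_rgb):
    """Weighted grayscale conversion per the claim:
    Vgray = 0.28*R + 0.51*G + 0.15*B."""
    rgb = np.asarray(image_rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.28 * r + 0.51 * g + 0.15 * b
```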
step b, denoising by median filtering,
rk = med(r1, r2, r3 ··· rn)
wherein rk represents the filtered value of a pixel, med the median operation, and r1, r2, r3 ··· rn all pixel points in the neighborhood window of the pixel;
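Step b's median filter, rk = med(r1 ··· rn), can be sketched as follows. The 3×3 window size and reflect-padding at the image border are assumptions (the claim does not specify them).

```python
import numpy as np

def median_filter(gray, k=3):
    """Replace each pixel by the median of its k x k neighborhood
    window; image borders are handled by reflection padding."""
    gray = np.asarray(gray, dtype=np.float64)
    pad = k // 2
    padded = np.pad(gray, pad, mode="reflect")
    out = np.empty_like(gray)
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

Impulse (salt-and-pepper) noise, the typical target of median filtering, is removed because a single outlier can never be the median of a nine-pixel window.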
step c, converting and calibrating the spatial coordinates of the image to obtain a calibrated image,
wherein x represents the horizontal-axis calibration value, y the vertical-axis calibration value, f and μ the calibration parameters, X the horizontal-axis spatial coordinate, Y the vertical-axis spatial coordinate, Z the depth-axis spatial coordinate, M the distance between the camera and the track, a the pressure on the front rotating wheel, b the pressure on the rear rotating wheel, ρ the rotating speed of the wheels, ρ0 the average wheel speed, ω the yaw rate, σ the standard deviation, vx the lateral acceleration, and vy the vertical acceleration;
step d, dividing the calibrated image into a plurality of blocks and acquiring the contrast of each block,
the formula for calculating the contrast is:
wherein D denotes the contrast, ξ = m·n, m the number of horizontal pixels, n the number of vertical pixels, i a single pixel, t the vertical deviation, WT the color offset depth, WS the texture offset depth, λ the horizontal deviation, and φ(p*) a gray level;
step e, taking blocks whose contrast exceeds a contrast threshold as candidate blocks,
wherein the contrast threshold is calculated as:
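The block-screening of steps d and e can be sketched as below. Since the claim's exact contrast and threshold formulas are not reproduced above, per-block standard deviation (RMS contrast) stands in as an assumed contrast measure, and the default threshold (the mean block contrast) is likewise an assumption; block size and function names are illustrative.

```python
import numpy as np

def candidate_blocks(gray, block=32, threshold=None):
    """Divide the calibrated gray image into blocks and keep those
    whose contrast exceeds the threshold. RMS contrast (per-block
    standard deviation) is used as a stand-in contrast measure."""
    gray = np.asarray(gray, dtype=np.float64)
    h, w = gray.shape
    blocks, scores = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = gray[i:i + block, j:j + block]
            blocks.append((i, j))
            scores.append(patch.std())
    scores = np.array(scores)
    if threshold is None:
        threshold = scores.mean()  # assumed default threshold
    return [b for b, s in zip(blocks, scores) if s > threshold]
```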
9. a curve rail based inspection robot as claimed in claim 8, wherein said edge detection comprises the process of:
calculating the gradient amplitude in a 3 × 3 neighborhood, wherein the amplitude formulas are:
E(x,y) = Ex²(x,y) + Ey²(x,y)
E*(x,y) = E45°²(x,y) + E135°²(x,y)
wherein E denotes the gradient amplitude, Ex the amplitude in the horizontal direction, Ey the amplitude in the vertical direction, E45° the amplitude in the 45° direction, and E135° the amplitude in the 135° direction;
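A sketch of the four-direction gradient step. The claim does not specify the directional operators, so simple central differences over the 3×3 neighborhood are assumed here; the squared-sum combinations E = Ex² + Ey² and E* = E45°² + E135°² follow the formulas above.

```python
import numpy as np

def gradient_magnitudes(gray):
    """Squared gradient amplitudes E = Ex^2 + Ey^2 and
    E* = E45^2 + E135^2, using central differences in a
    3x3 neighborhood (borders are left at zero)."""
    g = np.asarray(gray, dtype=np.float64)
    ex = np.zeros_like(g)
    ey = np.zeros_like(g)
    e45 = np.zeros_like(g)
    e135 = np.zeros_like(g)
    ex[:, 1:-1] = g[:, 2:] - g[:, :-2]          # horizontal
    ey[1:-1, :] = g[2:, :] - g[:-2, :]          # vertical
    e45[1:-1, 1:-1] = g[:-2, 2:] - g[2:, :-2]   # 45 deg diagonal
    e135[1:-1, 1:-1] = g[:-2, :-2] - g[2:, 2:]  # 135 deg diagonal
    return ex**2 + ey**2, e45**2 + e135**2
```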
dividing the image into a foreground and a background and calculating the between-class variance of the two, wherein the variance is calculated as:
wherein τ² denotes the between-class variance, the weighting term denotes the distribution probability of all pixels in the gray-value interval (k−n, k+n), a denotes a setting parameter, p1 the probability of the foreground region, p2 the probability of the background region, m1 the mean gray value of the foreground region, m2 the mean gray value of the background region, and mG the global mean gray value;
taking the maximum value of the inter-class variance as a threshold value, carrying out image segmentation, and outputting an image gray level histogram;
and extracting a straight line edge according to the histogram.
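The thresholding step above is essentially Otsu's method: sweep the threshold k and keep the value maximizing the between-class variance τ² = p1(m1 − mG)² + p2(m2 − mG)². The sketch below implements this classic form; the claim's additional neighborhood-probability weighting and setting parameter a are omitted.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level k maximizing the between-class variance
    tau^2 = p1*(m1 - mG)^2 + p2*(m2 - mG)^2 (classic Otsu)."""
    g = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(g.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=np.float64)
    m_g = (prob * levels).sum()          # global mean gray value
    best_k, best_var = 0, -1.0
    for k in range(1, 256):
        p1 = prob[:k].sum()              # class-1 probability
        p2 = 1.0 - p1                    # class-2 probability
        if p1 == 0.0 or p2 == 0.0:
            continue
        m1 = (prob[:k] * levels[:k]).sum() / p1
        m2 = (prob[k:] * levels[k:]).sum() / p2
        var = p1 * (m1 - m_g) ** 2 + p2 * (m2 - m_g) ** 2
        if var > best_var:
            best_var, best_k = var, k
    return best_k
```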
10. A curve rail based inspection robot as claimed in claim 9, wherein said extracting a defect profile comprises the process of:
labeling the 8 neighboring pixel points of a single pixel point 0 to 7;
setting the starting point of chain-code extraction at the lower-left corner and scanning each column to the right and each row upward to extract chain codes and generate binary curve tracks;
traversing the pixel points on the straight-line edges and counting the repetitions of code values 1, 2 and 3 on the different curve tracks;
and extracting the line segment with the largest number of occurrences to obtain the defect contour.
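The Freeman chain-code steps above can be sketched as follows. The direction labeling (0 = east, counting counter-clockwise through the 8 neighbors) is one common convention and an assumption here; the claim fixes only the 0-7 labeling and the traversal order.

```python
from collections import Counter

# 8-connected Freeman directions: code i -> (dx, dy) step
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1),
        (-1, 0), (-1, 1), (0, 1), (1, 1)]
CODE = {d: i for i, d in enumerate(DIRS)}

def freeman_chain(points):
    """Chain code of an ordered 8-connected pixel path: each step
    to a neighboring pixel is encoded as a value 0-7."""
    return [CODE[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

def count_codes(codes, wanted=(1, 2, 3)):
    """Count repetitions of the code values of interest (the claim
    counts codes 1, 2 and 3 along each curve track)."""
    c = Counter(codes)
    return {k: c[k] for k in wanted}
```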
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110439768.9A CN113222907B (en) | 2021-04-23 | 2021-04-23 | Detection robot based on curved rail |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113222907A true CN113222907A (en) | 2021-08-06 |
CN113222907B CN113222907B (en) | 2023-04-28 |
Family
ID=77088774
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110439768.9A Active CN113222907B (en) | 2021-04-23 | 2021-04-23 | Detection robot based on curved rail |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113222907B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114310932A (en) * | 2021-12-16 | 2022-04-12 | 杭州申昊科技股份有限公司 | Positioning device for rail-hanging type inspection robot |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102607467A (en) * | 2012-03-07 | 2012-07-25 | 上海交通大学 | Device and method for detecting elevator guide rail perpendicularity based on visual measurement |
US20130163847A1 (en) * | 2004-06-09 | 2013-06-27 | Cognex Corporation | Method and Apparatus for Locating Objects |
US20160195856A1 (en) * | 2014-01-08 | 2016-07-07 | Yechezkal Evan Spero | Integrated Docking System for Intelligent Devices |
CN110211101A (en) * | 2019-05-22 | 2019-09-06 | 武汉理工大学 | A kind of rail surface defect rapid detection system and method |
CN111553948A (en) * | 2020-04-27 | 2020-08-18 | 冀中能源峰峰集团有限公司 | Heading machine cutting head positioning system and method based on double tracers |
Also Published As
Publication number | Publication date |
---|---|
CN113222907B (en) | 2023-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liu et al. | A review of applications of visual inspection technology based on image processing in the railway industry | |
CN103217111B (en) | A kind of non-contact contact line geometric parameter detection method | |
CN111666947B (en) | Pantograph head offset measuring method and system based on 3D imaging | |
US6909514B2 (en) | Wheel profile inspection apparatus and method | |
CN109359409A (en) | A kind of vehicle passability detection system of view-based access control model and laser radar sensor | |
CN109633674A (en) | Three-dimensional Track automatic planning is maked an inspection tour in transmission of electricity based on laser point cloud data | |
CN106428558B (en) | A kind of track synthesis method for inspecting based on the dual-purpose unmanned plane of sky-rail | |
US20120300060A1 (en) | Vision system for imaging and measuring rail deflection | |
CN111609813B (en) | Pantograph abrasion measurement method and system based on 3D imaging | |
CN110736999B (en) | Railway turnout detection method based on laser radar | |
CN103837087B (en) | Pantograph automatic testing method based on active shape model | |
CN102252859B (en) | Road train straight-line running transverse stability automatic identification system | |
CN105426894B (en) | Railway plug pin image detecting method and device | |
CN111768417B (en) | Railway wagon overrun detection method based on monocular vision 3D reconstruction technology | |
CN113129268B (en) | Quality detection method for riveting pier head of airplane | |
CN112070756A (en) | Three-dimensional road surface disease measuring method based on unmanned aerial vehicle oblique photography | |
CN113222907B (en) | Detection robot based on curved rail | |
CN112651988A (en) | Finger-shaped image segmentation, finger-shaped plate dislocation and fastener abnormality detection method based on double-pointer positioning | |
CN108596968B (en) | Sleeper counting method based on track 3D depth image | |
CN111640155B (en) | Pantograph head inclination angle measurement method and system based on 3D imaging | |
CN115797338B (en) | Panoramic pavement multi-performance index calculation method and system based on binocular vision | |
CN203766824U (en) | On-line rail detecting device of electric locomotive electrified boot | |
CN111539278A (en) | Detection method and system for target vehicle | |
CN116520351A (en) | Train state monitoring method, system, storage medium and terminal | |
CN105957057A (en) | Real-time snowfall intensity estimation method based on video analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||