CN114638835B: Sleeper foreign matter detection method based on a depth camera (Google Patents)
Publication number: CN114638835B; application number: CN202210561931.3A
Authority: CN (China)
Prior art keywords: sleeper, plane, image, foreign matter, depth
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

 B (Performing operations; transporting) > B61 (Railways) > B61K (Auxiliary equipment specially adapted for railways, not otherwise provided for) > B61K9/00 (Railway vehicle profile gauges; detecting or indicating overheating of components; apparatus to indicate bad track sections; general design of track recording vehicles) > B61K9/08 (Measuring installations for surveying permanent way)
 G (Physics) > G06 (Computing; calculating or counting) > G06T (Image data processing or generation, in general) > G06T7/00 (Image analysis) > G06T7/10 (Segmentation; edge detection) > G06T7/13 (Edge detection)
 G (Physics) > G06 (Computing; calculating or counting) > G06T (Image data processing or generation, in general) > G06T7/00 (Image analysis) > G06T7/70 (Determining position or orientation of objects or cameras) > G06T7/73 (using feature-based methods)
Abstract
The invention discloses a sleeper foreign matter detection method based on a depth camera. The camera collects sleeper color images under different illumination conditions, which are made into a sleeper data set used to obtain the outer bounding box of the sleeper. Background filtering then removes the pixel information outside the bounding box of the target detection result and sets it to black; plane filtering removes the pixel points belonging to the sleeper plane in the color image; edge detection, with an area threshold set, defines any region whose area exceeds the threshold as a region containing sleeper foreign matter and regresses the three-dimensional coordinates of the foreign matter's center point from the depth information; finally, the foreign-matter signal and the center-point coordinates are issued to an upper computer to await removal. Foreign matter detection on sleepers is thus realized with a depth camera by training only on the sleeper environment, without training for specific foreign matters, so the method adapts to foreign matters of various shapes and types.
Description
Technical Field
The invention relates to the technical field of track foreign matter detection, in particular to a sleeper foreign matter detection method based on a depth camera.
Background
The rail is the basis of train operation, and normal railway transportation can be guaranteed only through continuous periodic maintenance and overhaul. Because railway lines cover long mileages and pass through complex, changeable environments, driving safety can be affected both by stones splashed or dislodged by environmental factors and by garbage and foreign matter left behind by people, so timely detection of foreign matter on the track is of great significance. Conventional methods adopt classical image processing algorithms: the track lines are extracted by the Hough transform or template matching, and foreign matter within them is detected by optical flow or frame differencing between video frames. These methods have low detection robustness and cannot handle targets against a complex background. Deep-learning-based methods, owing to their late arrival in the rail field, are not yet widely applied there; they generally detect common targets such as plants and pedestrians and therefore cannot detect unknown objects.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a sleeper foreign matter detection method based on a depth camera, which realizes rail foreign matter detection through a depth camera by training only on the rail environment, needs no training for specific foreign matters, and is suitable for detecting foreign matters of various shapes and types.
In order to solve the technical problems, the invention adopts the technical scheme that:
a sleeper foreign matter detection method based on a depth camera comprises the following steps:
s1, target detection neural network training, namely fixing a camera on a railway inspection platform, collecting sleeper color images under different illumination positions by the camera, labeling sleepers in the sleeper color images, making a sleeper data set, sending the sleeper data set into a CenterNet target detection neural network for training, and performing sleeper detection on the sleeper color images by using the trained network to obtain an outer surrounding frame of the sleepers;
s2, background filtering, namely, carrying out target detection on the realtime image acquired by the camera through the neural network trained in the step S1, and filtering pixel information outside the outer surrounding frame by adopting a mask function in an OpenCV library in the sleeper color image and the corresponding depth image according to the outer surrounding frame of a target detection result and setting the pixel information to be black;
s3, performing plane filtering, namely identifying the sleeper color image to a sleeper plane area corresponding to a corresponding area of the depth image, identifying plane parameters in the depth image by adopting a least square method, and filtering pixel points of a plane in the color image according to the plane parameters;
step S4, edge detection, namely defining that a sleeper foreign matter exists in an area larger than a threshold value area in the setting of an area threshold value, and regressing a threedimensional coordinate of a sleeper foreign matter center point through depth information;
and S5, issuing a signal indicating the existence of the sleeper foreign matter and the threedimensional coordinate of the central point of the sleeper foreign matter to an upper computer, removing the sleeper foreign matter, and jumping to S2 until all sleepers in a section of track are detected.
Further, the method employs a sleeper foreign matter detection device comprising a railway inspection platform and an upper computer; the railway inspection platform is provided with a mechanical arm, a depth camera, a clamping jaw and a collecting box; the mechanical arm and the depth camera are mounted on the railway inspection platform and the clamping jaw is mounted at the end of the mechanical arm; on acquiring the foreign-matter signal and the three-dimensional coordinates of the foreign matter's center point, the upper computer quickly plans a motion path for the mechanical arm, which grips and transfers the sleeper foreign matter with the clamping jaw at its end and places it in the collecting box.
Further, in step S1, the sleeper data set includes 300 to 400 sleeper color images taken under different illumination conditions.
Further, in step S1, the different illumination conditions include at least one of the following complex illumination interference phenomena: insufficient illumination, ground reflection, intense illumination and dark shadows.
Further, in step S3 the sleeper plane is filtered. In plane fitting by the least squares method, the parameters of the target plane are calculated by taking the sum of squared distances from each discrete point to the target plane as the optimization function, where the general expression of a three-dimensional plane is:
Ax + By + Cz + D = 0 (1)
where A, B, C and D are the plane parameter values and (x, y, z) is a point in three-dimensional space;
the distance d from any point in three-dimensional space to the plane can be expressed as:
d = |Ax₁ + By₁ + Cz₁ + D| / √(A² + B² + C²) (2)
which, after rearranging the parameters, gives:
d = a₀x₁ + a₁y₁ + a₂z₁ + a₃ (3)
where (x₁, y₁, z₁) are the coordinates of any point in three-dimensional space;
according to the least squares method, to find the plane parameters closest to all points, the sum of squared point-to-plane distances is taken as:
S = Σ(a₀xᵢ + a₁yᵢ + a₂zᵢ + a₃)² (4)
the parameters of the plane closest to all points satisfy, in matrix form:
AₚX = b (5)
where Aₚ = (xᵢ, yᵢ, zᵢ) is the three-dimensional coordinate matrix of the fitted points, X is the vector of fitted plane parameters, and b is a matrix of ones with the same shape as AₚX;
when S is required to be minimal, then:
S = ‖AₚX − b‖² (6)
so that taking the derivative of equation (6) with respect to X and setting it to zero gives:
Aₚᵀ(AₚX − b) = 0 (7)
namely:
X = (AₚᵀAₚ)⁻¹Aₚᵀb (8)
Substituting the RealSense point cloud data, in (n, 3) format, into equation (8) yields the plane parameters of the sleeper; from the x, y, z of each pixel point obtained from the depth image, the normal vector parameters of the sleeper plane are calculated; using the OpenCV library, the distance from each point in the depth image to the sleeper plane is calculated from these normal vector parameters, the color information of the pixels whose distance is smaller than a set threshold is set to black in the color image, and the plane filtering is complete.
Further, step S4 adopts the Canny algorithm for edge detection.
Further, the values of the three RGB channels are computed with the OpenCV library and the image is converted to a grayscale image; a Gaussian filter is used for noise reduction, implemented either as two passes of a one-dimensional Gaussian kernel or as a single two-dimensional Gaussian kernel convolution, with the gray value after convolution being:
g(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)) ∗ f(x, y)
where σ denotes the standard deviation, f(x, y) is the gray value of each point in the image coordinate system and ∗ denotes convolution;
the first-order difference of the noise-reduced image is then calculated by convolving the image with the Sobel operator, giving the gradient matrices of the image along the x and y axes:
Gₓ = [[−1 0 1], [−2 0 2], [−1 0 1]] ∗ A,  G_y = [[−1 −2 −1], [0 0 0], [1 2 1]] ∗ A
where A is the grayscale image matrix after Gaussian convolution and Gₓ, G_y are the computed gradient matrices in the two directions; from the gradients in the two directions, the total gradient magnitude G = √(Gₓ² + G_y²) and direction θ = arctan(G_y / Gₓ) are obtained.
For the Gaussian-filtered image, refined edges are selected from the blurred edge information by comparing each edge pixel with its neighbors and keeping only the point with the locally maximal gradient, which yields clear, sharp edges.
Further, edges are detected and connected by a double-threshold method: a high and a low threshold on the gradient magnitude screen the image; filtering with the high threshold yields an image with few false edges, and the edge endpoints in the high-threshold image are connected according to the low threshold to obtain a complete closed contour curve.
Further, in step S4, from calibration of the depth camera's sleeper color image and depth image, the intrinsic and extrinsic parameters of both images are obtained:
Z_rgb·[x_rgb, y_rgb, 1]ᵀ = K_rgb·(R_rgb·[X_w, Y_w, Z_w]ᵀ + T_rgb)
Z_d·[x_d, y_d, 1]ᵀ = K_d·(R_d·[X_w, Y_w, Z_w]ᵀ + T_d)
where (x_rgb, y_rgb) and (x_d, y_d) are coordinates in the pixel coordinate systems of the sleeper color image and the depth image, Z_rgb and Z_d are scale factors, the intrinsic matrix K is built from the camera focal length f and the origin coordinates (u₀, v₀) of the image coordinate system, R_rgb, T_rgb and R_d, T_d are the relative rotation and position matrices in the extrinsic parameters of the color camera and the depth camera respectively, and (X_w, Y_w, Z_w) is the target three-dimensional coordinate, a point in the world coordinate system;
combining the intrinsic matrices realizes the registration of the sleeper color image and the depth image:
[x_rgb, y_rgb, z_rgb, 1]ᵀ = M·[x_d, y_d, z_d, 1]ᵀ
where M is a 4 × 4 transformation matrix obtainable by substituting corresponding points from several groups of depth and color images; registration of the color and depth images is thus realized according to the coordinate values of the sleeper color image's pixel coordinate system as required, yielding the target three-dimensional coordinates (X_w, Y_w, Z_w).
Compared with the prior art, the invention has the following advantages and beneficial effects:
according to the depth camerabased sleeper foreign matter detection method, a selfmade sleeper data set is learned by adopting CenterNet, target detection is utilized to filter image information outside a sleeper, a background plane is extracted by a least square method for depth image information and filtered, foreign matter information is obtained through edge detection, a threedimensional position of an object is obtained by combining a depth image and a color image, foreign matter detection of the sleeper can be achieved through a single depth camera only by training the sleeper environment, training for different foreign matters is not needed, the method is suitable for foreign matter detection of various shapes and types, and is convenient and rapid, and rapid detection can be achieved.
Drawings
Fig. 1 is a flowchart of a sleeper foreign matter detection method of the present invention.
Fig. 2 is a hardware device to which the present invention relates.
Fig. 3 is a schematic illustration of the sleeper detection effect of the present invention.
FIG. 4 is a schematic diagram of depth image plane filtering results of the present invention.
FIG. 5 is a schematic diagram of the edge detection effect of the present invention.
Fig. 6 is a schematic illustration of the final effect in one embodiment of the invention.
Fig. 7 is a schematic illustration of the final effect in another embodiment of the invention.
Fig. 8(a) is a foreign matter information map of the sleeper color image after filtering according to the depth information.
Fig. 8(b) is a sleeper color image original of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The invention provides a sleeper foreign matter detection method based on a depth camera, which, as shown in figures 1 to 7, comprises the following steps:
s1, target detection neural network training, namely fixing a camera on a railway inspection platform, collecting sleeper color images under different illumination positions by the camera, labeling sleepers in the sleeper color images, making a sleeper data set, sending the sleeper data set into a CenterNet target detection neural network for training, and performing sleeper detection on the sleeper color images by using the trained network to obtain an outer surrounding frame of the sleepers;
according to the invention, sleeper color images under different illumination position conditions are collected by a camera to manufacture a sleeper data set, the sleeper data set is sent to a CenterNet target detection neural network for training, in the actual detection process, the detection is not influenced by external illumination and changing environment, and the purpose of rapid detection is realized.
S2, background filtering, namely, carrying out target detection on the realtime image acquired by the camera through the neural network trained in the step S1, and filtering pixel information outside the outer surrounding frame by adopting a mask function in an OpenCV library in the sleeper color image and the corresponding depth image according to the outer surrounding frame of a target detection result and setting the pixel information to be black;
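This masking step can be sketched in a few lines of NumPy (a minimal equivalent of the OpenCV mask-based copy the patent describes; the function name, the (x1, y1, x2, y2) bounding-box format and the image shapes are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def filter_background(color_img, depth_img, bbox):
    """Black out all pixels outside the sleeper bounding box.

    bbox = (x1, y1, x2, y2) is assumed to come from the CenterNet
    detector; color_img is (H, W, 3), depth_img is (H, W).
    """
    x1, y1, x2, y2 = bbox
    mask = np.zeros(color_img.shape[:2], dtype=np.uint8)
    mask[y1:y2, x1:x2] = 255          # keep only the sleeper region
    color_out = color_img.copy()
    depth_out = depth_img.copy()
    color_out[mask == 0] = 0          # background pixels set to black
    depth_out[mask == 0] = 0          # same filtering on the depth image
    return color_out, depth_out
```

The same effect is obtained in OpenCV with a zeroed mask image and a masked copy; the NumPy form is shown here only because it makes the "set to black" semantics explicit.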
the sleeper is arranged on the ballast, so that the ballast is easily influenced by the external environment and moves to the sleeper, and the ballast also belongs to one type of foreign matters of the sleeper.
Step S3, plane filtering: the sleeper plane region of the color image is identified in the corresponding region of the depth image, the plane parameters are identified in the depth image by the least squares method, and the pixel points belonging to the plane are filtered out of the color image according to those parameters.
Through plane filtering, the presence of sleeper foreign matter of unknown shape and size can be identified from the depth information and the corresponding region mapped onto the color image, which also makes its position convenient to obtain.
Step S4, edge detection: with an area threshold set, any region whose area exceeds the threshold is defined as containing a sleeper foreign matter, and the three-dimensional coordinates of the foreign matter's center point are regressed from the depth information.
Step S5, the foreign-matter signal and the three-dimensional coordinates of the foreign matter's center point are issued to the upper computer, the foreign matter is removed, the railway inspection platform moves along the track, and the method jumps to step S2 until all sleepers in a section of track have been detected.
According to the method, a self-made sleeper data set is learned with CenterNet, target detection filters out image information outside the sleeper, the background plane is extracted from the depth image by the least squares method and filtered, foreign matter information is obtained through edge detection, and the three-dimensional position of the object is obtained by combining the depth and color images; foreign matter detection on sleepers is thus realized through a depth camera by training only on the sleeper environment, without training for specific foreign matters, and the method suits foreign matters of various shapes and types and achieves rapid detection.
While the inspection platform moves, the system can rapidly detect previously unseen foreign-matter samples under different illumination conditions and obtain their position and planar pose; it detects foreign matter of unknown shape and type together with its position and placing posture, whereas common algorithms only recognize trained samples and cannot return a position.
As shown in fig. 2, the depth-camera-based sleeper foreign matter detection method employs a sleeper foreign matter detection device comprising a railway inspection platform and an upper computer. The railway inspection platform is provided with a mechanical arm, a depth camera, a clamping jaw and a collecting box; the mechanical arm and the depth camera are mounted on the platform and the clamping jaw is mounted at the end of the mechanical arm. On acquiring the foreign-matter signal and the three-dimensional coordinates of the foreign matter's center point, the upper computer quickly plans a motion path for the mechanical arm, which grips and transfers the sleeper foreign matter with the clamping jaw at its end and places it in the collecting box.
The mechanical arm, clamping jaw and depth camera are connected to the upper computer by dedicated cables.
In step S1, as shown in fig. 3, the sleeper data set includes 300 to 400 sleeper color images taken under different illumination conditions, which include at least one of the following complex illumination interference phenomena: insufficient illumination, ground reflection, intense illumination and dark shadows. Foreign matter on sleepers can thus be inspected under the illumination conditions of different places, enhancing the applicability of the sleeper foreign matter identification.
As shown in fig. 4, in step S3 the sleeper plane is filtered: in plane fitting by the least squares method, the parameters of the target plane are calculated by taking the sum of squared distances from each discrete point to the target plane as the optimization function.
The specific implementation process is as follows:
the general expression for the threedimensional plane is:
Ax+By+Cz+D＝0 (1)
wherein A, B, C and D are plane parameter values, and (x, y and z) are points in a threedimensional space;
the distance d to the threedimensional plane for any point in threedimensional space can be expressed as:
the finishing can be carried out as follows:
d＝a _{0} x _{1} +a _{1} y _{1} +a _{2} z _{1} +a _{3} (3)
in the formula (x) _{1} ，y _{1} ，z _{1} ) The coordinates of any point in the threedimensional space,
according to the least square method, if the distance plane and the nearest plane parameter of each point need to be found, the distance between the point and the plane can be calculated as follows:
S＝∑(a _{0} x _{i} +a _{1} y _{i} +a _{2} z _{i} +a _{3} ) ^{2} (4)
the parameters of each point from the plane and the nearest plane are as follows:
in the formula, A _{p} ＝(x _{i} ，y _{i} ，z _{i} ) Is a threedimensional coordinate matrix of a fitted plane,is the vector of the parameters of the fitted plane, b is the sum A _{p} X identity matrixes with the same shape;
when S min is required, then:
so that the derivation of X from equation (6) can be obtained
Namely, it is
X＝(A _{p} ^{T} A _{p} ) ^{1} A _{p} ^{T} b (8)
The plane parameters of the sleeper can be calculated by substituting the RealSense point cloud data into the above formula (8) in the format of (n, 3).
And calculating normal vector parameters of a sleeper plane according to x, y and z of pixel points in the depth image obtained picture according to the above formula, calculating the distance between each point in the depth image and the sleeper plane according to the normal vector parameters of the sleeper plane by using an OpenCV (open circuit vehicle library), setting the color information of the corresponding pixel in the color picture to be black according to the point with the distance smaller than a set threshold value, and finishing plane filtering.
Through the plane filtering, points other than sleeper foreign matter can be filtered out of the sleeper color image, leaving only the foreign matter.
Specifically, the distance between each discrete data point and the fitted plane is calculated from the depth information; a threshold is set, points below it are regarded as plane points, and the pixel information of the corresponding plane points in the color image is filtered with the mask function of the OpenCV library, so that an image containing only the target object in the field of view is obtained, as shown in figs. 8(a) and 8(b), where fig. 8(b) is the sleeper color image and fig. 8(a) is the foreign matter information obtained by filtering the sleeper color image according to the depth information. It can be seen that after the camera collects the sleeper plane information, the presence of foreign matter on the sleeper plane can be identified from the depth image: the black area in fig. 8(a) is the result of filtering the sleeper region, the blue area is the pixel part left after the plane extraction, and further feature extraction is subsequently performed on the color image on the basis of this foreign matter identification.
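The least-squares fit of equation (8) and the distance thresholding above can be sketched with NumPy (a hedged illustration: the function names and the convention of fitting a₀x + a₁y + a₂z = 1, i.e. b a vector of ones, are assumptions consistent with the derivation, while the patent feeds RealSense point clouds in (n, 3) format):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit a0*x + a1*y + a2*z = 1, equation (8).

    points: (n, 3) array, e.g. a RealSense point cloud.
    Returns X = (Ap^T Ap)^-1 Ap^T b with b a vector of ones.
    """
    A_p = np.asarray(points, dtype=float)        # (n, 3) coordinate matrix
    b = np.ones(len(A_p))                        # right-hand side of ones
    X, *_ = np.linalg.lstsq(A_p, b, rcond=None)  # solves the normal equations
    return X

def plane_distances(points, X):
    """Unsigned distance of each point to the fitted plane a.p = 1."""
    a = np.asarray(X)
    return np.abs(np.asarray(points) @ a - 1.0) / np.linalg.norm(a)
```

In the plane-filtering step, points whose `plane_distances` value falls below the chosen threshold would be treated as sleeper-plane pixels and their counterparts in the color image set to black.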
As shown in fig. 5, step S4 adopts the Canny algorithm for edge detection.
The values of the three RGB channels are computed with the OpenCV library and the image is converted to a grayscale image, after which Gaussian filtering is adopted for noise reduction. The Gaussian filter can be implemented either as two passes of a one-dimensional Gaussian kernel or as a single two-dimensional Gaussian kernel convolution, with the gray value after convolution being:
g(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)) ∗ f(x, y)
where σ denotes the standard deviation, f(x, y) is the gray value of each point in the image coordinate system and ∗ denotes convolution.
After convolution the image becomes smoother and high-frequency pixel noise is removed, although the width of edges increases. The first-order difference of the noise-reduced image is then calculated by convolving the image with the Sobel operator, giving the gradient matrices of the image along the x and y axes:
Gₓ = [[−1 0 1], [−2 0 2], [−1 0 1]] ∗ A,  G_y = [[−1 −2 −1], [0 0 0], [1 2 1]] ∗ A
where A is the grayscale image matrix after Gaussian convolution and Gₓ, G_y are the computed gradient matrices in the two directions. From the gradients in the two directions, the total gradient magnitude G = √(Gₓ² + G_y²) and direction θ = arctan(G_y / Gₓ) are obtained.
For the Gaussian-filtered image, refined edges are selected from the blurred edge information by comparing each edge pixel with its neighbors and keeping only the point with the locally maximal gradient, which yields clear, sharp edges. Edges are then detected and connected by a double-threshold method: first a high and a low threshold on the gradient magnitude are set to screen the image; filtering with the high threshold yields an image with few false edges, and the edge endpoints in the high-threshold image are connected according to the low threshold to obtain a complete closed contour curve.
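The Sobel gradient step just described can be made concrete with a direct NumPy implementation of the 3 × 3 convolution (in practice `cv2.Canny` in OpenCV performs the whole chain of smoothing, gradients, suppression and double thresholding; this sketch computes only G and θ on the valid interior region, and the function name is an illustration, not the patent's code):

```python
import numpy as np

def sobel_gradients(A):
    """Gradient magnitude G and direction theta of a grayscale image A,
    using the 3x3 Sobel kernels from the text (valid region only)."""
    Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    Ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
    h, w = A.shape
    Gx = np.zeros((h - 2, w - 2))
    Gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = A[i:i + 3, j:j + 3]
            Gx[i, j] = np.sum(Kx * patch)   # x-direction difference
            Gy[i, j] = np.sum(Ky * patch)   # y-direction difference
    G = np.sqrt(Gx**2 + Gy**2)              # total gradient magnitude
    theta = np.arctan2(Gy, Gx)              # direction; arctan2 handles Gx = 0
    return G, theta
```

On a vertical step edge the kernels respond only in Gₓ, so θ comes out as 0 and G as the step height scaled by the kernel weights.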
As shown in figs. 6 and 7, in step S4, after the region containing the foreign matter has been mapped onto the sleeper color image by plane filtering, edge detection is performed on that region of the color image; the minimum outer bounding box of the edge gives the position of the center point in the color image and the rotation angle of the object, and fusing the depth image with the corresponding position information of the sleeper color image gives the three-dimensional coordinates of the foreign matter's center point in the camera coordinate system.
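Regressing the center point's camera-frame coordinates from the depth information can be sketched as a pinhole back-projection of the detected region's centroid (a minimal illustration: the mask/intrinsics interface below is an assumption, and in the patent the planar rotation angle additionally comes from the minimum outer bounding box of the edge, e.g. OpenCV's minimum-area rectangle):

```python
import numpy as np

def center_point_3d(mask, depth, fx, fy, u0, v0):
    """Back-project the centroid of the foreign-object region to
    camera coordinates with the pinhole model.

    mask  : boolean (H, W) array of foreign-object pixels (e.g. the
            filled contour that passed the area-threshold check)
    depth : (H, W) depth image in metres, registered to the color image
    fx, fy, u0, v0 : camera intrinsics (focal lengths, principal point)
    """
    vs, us = np.nonzero(mask)
    u, v = us.mean(), vs.mean()      # pixel centroid of the region
    Z = depth[mask].mean()           # mean depth over the region
    X = (u - u0) * Z / fx            # pinhole back-projection
    Y = (v - v0) * Z / fy
    return X, Y, Z
```

The returned (X, Y, Z) is what the method issues to the upper computer for path planning of the mechanical arm.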
Specifically, from calibration of the depth camera's sleeper color image and depth image, the intrinsic and extrinsic parameters of both images are obtained:
Z_rgb·[x_rgb, y_rgb, 1]ᵀ = K_rgb·(R_rgb·[X_w, Y_w, Z_w]ᵀ + T_rgb)
Z_d·[x_d, y_d, 1]ᵀ = K_d·(R_d·[X_w, Y_w, Z_w]ᵀ + T_d)
where (x_rgb, y_rgb) and (x_d, y_d) are coordinates in the pixel coordinate systems of the sleeper color image and the depth image, Z_rgb and Z_d are scale factors, the intrinsic matrix K is built from the camera focal length f and the origin coordinates (u₀, v₀) of the image coordinate system, R_rgb, T_rgb and R_d, T_d are the relative rotation and position matrices in the extrinsic parameters of the color camera and the depth camera respectively, and (X_w, Y_w, Z_w) is the target three-dimensional coordinate, a point in the world coordinate system;
combining the intrinsic matrices realizes the registration of the sleeper color image and the depth image:
[x_rgb, y_rgb, z_rgb, 1]ᵀ = M·[x_d, y_d, z_d, 1]ᵀ
where M is a 4 × 4 transformation matrix that can be obtained by substituting corresponding points from several groups of depth and color images; registration of the color and depth images is thus realized according to the coordinate values of the sleeper color image's pixel coordinate system as required, yielding the target three-dimensional coordinates (X_w, Y_w, Z_w).
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (9)
1. A sleeper foreign matter detection method based on a depth camera is characterized by comprising the following steps:
s1, target detection neural network training, namely fixing a camera on a railway inspection platform, collecting sleeper color images under different illumination positions by the camera, labeling sleepers in the sleeper color images, making a sleeper data set, sending the sleeper data set into a CenterNet target detection neural network for training, and performing sleeper detection on the sleeper color images by using the trained network to obtain an outer surrounding frame of the sleepers;
s2, background filtering, namely, carrying out target detection on the realtime image acquired by the camera through the neural network trained in the step S1, and filtering pixel information outside the outer surrounding frame by adopting a mask function in an OpenCV library in the sleeper color image and the corresponding depth image according to the outer surrounding frame of a target detection result and setting the pixel information to be black;
s3, performing plane filtering, namely identifying the sleeper color image to a sleeper plane area corresponding to a corresponding area of the depth image, identifying plane parameters in the depth image by adopting a least square method, and filtering pixel points of a plane in the color image according to the plane parameters;
step S4, edge detection, namely defining that a sleeper foreign matter exists in an area larger than a threshold value area in the setting of an area threshold value, and regressing a threedimensional coordinate of a sleeper foreign matter center point through depth information;
and S5, issuing a signal indicating the existence of the sleeper foreign matter and the threedimensional coordinate of the central point of the sleeper foreign matter to an upper computer, removing the sleeper foreign matter, and jumping to S2 until all sleepers in a section of track are detected.
2. The depth-camera-based sleeper foreign matter detection method as claimed in claim 1, characterized in that: the method employs a sleeper foreign matter detection device comprising a railway inspection platform and an upper computer; the railway inspection platform is provided with a mechanical arm, a depth camera, a clamping jaw and a collecting box; the mechanical arm and the depth camera are mounted on the railway inspection platform and the clamping jaw is mounted at the end of the mechanical arm; on acquiring the foreign-matter signal and the three-dimensional coordinates of the foreign matter's center point, the upper computer quickly plans a motion path for the mechanical arm, which grips and transfers the sleeper foreign matter with the clamping jaw at its end and places it in the collecting box.
3. The depth-camera-based sleeper foreign matter detection method as claimed in claim 1, characterized in that: in step S1, the sleeper data set includes 300 to 400 sleeper color images taken under different illumination conditions.
4. The depth-camera-based sleeper foreign matter detection method as claimed in claim 1, characterized in that: in step S1, the different illumination conditions include at least one of the following complex illumination interference phenomena: insufficient illumination, ground reflection, intense illumination and dark shadows.
5. The depth-camera-based sleeper foreign object detection method as claimed in claim 1, characterized in that: step S3 filters the sleeper plane; during plane fitting by the least squares method, the parameters of the target plane are computed using the sum of squared distances from each discrete point to the target plane as the optimization function, the general expression of a three-dimensional plane being:
Ax + By + Cz + D = 0 (1)
where A, B, C and D are the plane parameter values and (x, y, z) is a point in three-dimensional space;
the distance d from any point (x₁, y₁, z₁) in three-dimensional space to the plane can be expressed as:

d = |Ax₁ + By₁ + Cz₁ + D| / √(A² + B² + C²) (2)

which after rearrangement gives:

d = a₀x₁ + a₁y₁ + a₂z₁ + a₃ (3)

where (x₁, y₁, z₁) are the coordinates of the point and a₀, a₁, a₂, a₃ are the plane parameters normalized by √(A² + B² + C²);

according to the least squares method, finding the plane parameters closest to all points requires minimizing the sum of squared point-to-plane distances:

S = Σ(a₀xᵢ + a₁yᵢ + a₂zᵢ + a₃)² (4)

written in matrix form:

S = ‖AₚX − b‖² (5)

where Aₚ = (xᵢ, yᵢ, zᵢ) is the three-dimensional coordinate matrix of the fitted plane, X is the vector of fitted plane parameters, and b is a vector of the same shape as AₚX;

when the minimum of S is required:

∂S/∂X = 0 (6)

differentiating S with respect to X and setting the derivative to zero gives:

AₚᵀAₚX = Aₚᵀb (7)

namely:

X = (AₚᵀAₚ)⁻¹Aₚᵀb (8)
The RealSense point cloud data, in (n, 3) format, are substituted into equation (8) to compute the sleeper plane parameters: the x, y and z values of the pixels are obtained from the depth image and used to compute the normal vector of the sleeper plane; the distance from each point of the depth image to the sleeper plane is then computed with the OpenCV library; for every point whose distance is smaller than a set threshold, the color information of the corresponding pixel in the color image is set to black, completing the plane filtering.
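A minimal NumPy sketch of the claim-5 fit and filter, run on synthetic data (the actual point-cloud source, e.g. a RealSense frame reshaped to (n, 3), is assumed). It fits z = a₀x + a₁y + a₂ by least squares, matching X = (AₚᵀAₚ)⁻¹Aₚᵀb in equation (8), then masks out points close to the plane:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a0*x + a1*y + a2 over an (n, 3) point array."""
    Ap = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    b = points[:, 2]
    X, *_ = np.linalg.lstsq(Ap, b, rcond=None)  # solves (Ap^T Ap) X = Ap^T b
    return X  # (a0, a1, a2)

def plane_distances(points, X):
    """Perpendicular distance of each point to the plane a0*x + a1*y - z + a2 = 0."""
    a0, a1, a2 = X
    num = np.abs(a0 * points[:, 0] + a1 * points[:, 1] - points[:, 2] + a2)
    return num / np.sqrt(a0**2 + a1**2 + 1.0)

# Synthetic sleeper plane z = 0.1x + 0.2y + 5, plus one point well above it
# standing in for a foreign object.
pts = np.array([[x, y, 0.1 * x + 0.2 * y + 5.0]
                for x in range(10) for y in range(10)])
pts = np.vstack([pts, [5.0, 5.0, 9.0]])
X = fit_plane(pts)
d = plane_distances(pts, X)
mask = d > 0.5  # points farther than the threshold survive plane filtering
```

In the claimed method the surviving pixels would be kept while all others are painted black in the color image; here the boolean mask plays that role.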
6. The depth-camera-based sleeper foreign object detection method as claimed in claim 1, characterized in that: step S4 performs edge detection with the Canny algorithm.
7. The depth-camera-based sleeper foreign object detection method of claim 6, characterized in that: the RGB values of the three channels are first combined using the OpenCV library to convert the image into a grayscale image, and noise is reduced by Gaussian filtering; the Gaussian filtering is realized either by two weighted passes of a one-dimensional Gaussian kernel or by a single convolution with a two-dimensional Gaussian kernel, the gray value after convolution being:

g(x, y) = f(x, y) ∗ (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))

where σ represents the standard deviation and f(x, y) is the gray value of each point in the image coordinate system; the first-order difference of the noise-reduced image is then computed by convolving the image with the Sobel operator to obtain the gradient matrices in the x and y directions:

G_x = S_x ∗ A,  G_y = S_y ∗ A,  S_x = [−1 0 1; −2 0 2; −1 0 1],  S_y = [−1 −2 −1; 0 0 0; 1 2 1]

where A is the image gray matrix after Gaussian convolution and G_x, G_y are the computed gradient matrices of the image in the two directions;

with the gradients in both directions obtained, the total gradient is computed with magnitude G = √(G_x² + G_y²) and direction θ = arctan(G_y / G_x);
for the Gaussian-filtered image, refined edges are determined from the blurred edge information by comparing each edge pixel with its adjacent pixels and retaining only the point with the local gradient maximum, yielding a clear, sharp edge.
8. The depth-camera-based sleeper foreign object detection method of claim 7, characterized in that: edges are detected and connected by a double-threshold method: a high threshold and a low threshold are set on the gradient values to screen the image; filtering with the high threshold yields an image with few false edges, and the low threshold is then used to connect the edge endpoints of the high-threshold image into complete closed contour curves.
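The gradient step of claims 7 and 8 can be illustrated with a pure-NumPy sketch on a synthetic step edge. The Sobel kernels S_x, S_y and the hand-rolled `conv2` are stand-ins for what an OpenCV pipeline would do after Gaussian smoothing; thresholding the resulting magnitude G is where the double-threshold screening of claim 8 would apply.

```python
import numpy as np

# Sobel kernels for the x and y gradients (claim 7).
Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
Ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def conv2(img, k):
    """'valid' 2-D correlation; enough for this 3x3-kernel demo."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

img = np.zeros((8, 8))
img[:, 4:] = 255.0                  # vertical step edge at column 4

Gx, Gy = conv2(img, Kx), conv2(img, Ky)
G = np.hypot(Gx, Gy)                # total gradient magnitude sqrt(Gx^2 + Gy^2)
theta = np.arctan2(Gy, Gx)          # gradient direction arctan(Gy / Gx)
```

The magnitude peaks only in the two output columns whose 3×3 windows straddle the step, which is exactly the response a high threshold would keep.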
9. The depth-camera-based sleeper foreign object detection method as claimed in claim 1, characterized in that: in step S4, the sleeper color image and the depth image of the depth camera are calibrated to obtain the intrinsic and extrinsic parameters of both images:

Z_rgb · [x_rgb, y_rgb, 1]ᵀ = K_rgb · (R_rgb · [X_w, Y_w, Z_w]ᵀ + T_rgb)
Z_d · [x_d, y_d, 1]ᵀ = K_d · (R_d · [X_w, Y_w, Z_w]ᵀ + T_d)

where (x_rgb, y_rgb) and (x_d, y_d) are the coordinates of the sleeper color image and the depth image in their pixel coordinate systems, Z_rgb and Z_d are scale factors, K_rgb and K_d are the intrinsic matrices built from the camera focal length f and the image-coordinate-system origin (u₀, v₀), R_rgb, T_rgb and R_d, T_d are the relative rotation and translation matrices of the color-camera and depth-camera extrinsic parameters respectively, and (X_w, Y_w, Z_w) is the target three-dimensional coordinate, a point in the world coordinate system;
the intrinsic matrices are then combined to register the sleeper color image with the depth image:

[x_rgb, y_rgb, z_rgb, 1]ᵀ = M · [x_d, y_d, z_d, 1]ᵀ

where M is a 4×4 transformation matrix obtained by substituting several groups of corresponding points from the depth and color images; the registration of the color and depth images is thus realized, so that the target three-dimensional coordinates (X_w, Y_w, Z_w) can be obtained as required from the pixel coordinates of the sleeper color image.
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN202210561931.3A CN114638835B (en)  20220523  20220523  Sleeper foreign matter detection method based on depth camera 
Publications (2)
Publication Number  Publication Date 

CN114638835A CN114638835A (en)  20220617 
CN114638835B true CN114638835B (en)  20220816 
Family
ID=81952883
Country Status (1)
Country  Link 

CN (1)  CN114638835B (en) 
Families Citing this family (1)
Publication number  Priority date  Publication date  Assignee  Title 

CN115352490B (en) *  20221024  20230324  四川赛博创新科技有限公司  Railway track crack detection device 
Legal Events
Date  Code  Title  Description

PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant