CN114638835B - Sleeper foreign matter detection method based on depth camera - Google Patents

Sleeper foreign matter detection method based on depth camera

Info

Publication number
CN114638835B
CN114638835B (application CN202210561931.3A)
Authority
CN
China
Prior art keywords
sleeper
plane
image
foreign matter
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210561931.3A
Other languages
Chinese (zh)
Other versions
CN114638835A (en)
Inventor
肖晓晖
左晨乐
吴少诚
程佳慧
周世煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202210561931.3A priority Critical patent/CN114638835B/en
Publication of CN114638835A publication Critical patent/CN114638835A/en
Application granted granted Critical
Publication of CN114638835B publication Critical patent/CN114638835B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/13 — Edge detection
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B61 — RAILWAYS
    • B61K — AUXILIARY EQUIPMENT SPECIALLY ADAPTED FOR RAILWAYS, NOT OTHERWISE PROVIDED FOR
    • B61K 9/00 — Railway vehicle profile gauges; Detecting or indicating overheating of components; Apparatus on locomotives or cars to indicate bad track sections; General design of track recording vehicles
    • B61K 9/08 — Measuring installations for surveying permanent way

Abstract

The invention discloses a sleeper foreign matter detection method based on a depth camera. A camera collects sleeper color images under different illumination conditions, which are made into a sleeper data set used to train a detector that outputs an outer bounding box for each sleeper. Background filtering removes the pixel information outside the bounding box of the target-detection result and sets it to black; plane filtering removes the pixel points belonging to the sleeper plane in the color image; edge detection sets an area threshold and treats any region larger than the threshold as containing sleeper foreign matter, returning the three-dimensional coordinates of the foreign-matter center point from the depth information. The foreign-matter signal and the center-point coordinates are issued to an upper computer so the foreign matter can be removed. Since only the sleeper environment needs to be trained, sleeper foreign matter detection is realized with nothing more than a depth camera, without training for individual foreign objects, and the method adapts to foreign matter of various shapes and types.

Description

Sleeper foreign matter detection method based on depth camera
Technical Field
The invention relates to the technical field of track foreign matter detection, in particular to a sleeper foreign matter detection method based on a depth camera.
Background
Rails are the foundation of train operation, and railway transport can run normally only with continuous periodic maintenance and inspection. Because railway lines cover long distances through complex and changeable environments, driving safety can be affected both by stones dislodged by environmental factors and by garbage or foreign objects left behind by people, so timely detection of foreign matter on the track is of great significance. Conventional methods use classical image-processing algorithms: track lines are extracted by the Hough transform or template matching, and foreign objects inside the track are detected from the differences between video frames with optical-flow or frame-difference methods. These methods have low detection robustness and cannot handle targets against a complex background. Deep-learning-based methods, which developed later, are not yet widely applied in the rail field; they generally detect common targets such as plants and pedestrians and therefore cannot detect unknown objects.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a sleeper foreign matter detection method based on a depth camera that realizes track foreign matter detection with a depth camera after training only on the track environment, without training for individual foreign objects, and that is suitable for detecting foreign matter of various shapes and types.
In order to solve the technical problems, the invention adopts the technical scheme that:
a sleeper foreign matter detection method based on a depth camera comprises the following steps:
s1, target detection neural network training, namely fixing a camera on a railway inspection platform, collecting sleeper color images under different illumination positions by the camera, labeling sleepers in the sleeper color images, making a sleeper data set, sending the sleeper data set into a CenterNet target detection neural network for training, and performing sleeper detection on the sleeper color images by using the trained network to obtain an outer surrounding frame of the sleepers;
s2, background filtering, namely, carrying out target detection on the real-time image acquired by the camera through the neural network trained in the step S1, and filtering pixel information outside the outer surrounding frame by adopting a mask function in an OpenCV library in the sleeper color image and the corresponding depth image according to the outer surrounding frame of a target detection result and setting the pixel information to be black;
s3, performing plane filtering, namely identifying the sleeper color image to a sleeper plane area corresponding to a corresponding area of the depth image, identifying plane parameters in the depth image by adopting a least square method, and filtering pixel points of a plane in the color image according to the plane parameters;
Step S4, edge detection, namely setting an area threshold and defining any region whose area exceeds the threshold as containing sleeper foreign matter, and regressing the three-dimensional coordinate of the sleeper foreign matter center point through the depth information;
and S5, issuing a signal indicating the existence of the sleeper foreign matter and the three-dimensional coordinate of the central point of the sleeper foreign matter to an upper computer, removing the sleeper foreign matter, and jumping to S2 until all sleepers in a section of track are detected.
Further, the method uses a sleeper foreign matter detection device comprising a railway inspection platform and an upper computer. The railway inspection platform is equipped with a mechanical arm, the depth camera, a clamping jaw and a collecting box; the mechanical arm and the depth camera are arranged on the railway inspection platform, and the clamping jaw is mounted at the end of the mechanical arm. When the upper computer receives the sleeper foreign matter signal and the three-dimensional coordinate of the foreign-matter center point, it rapidly plans a motion path for the mechanical arm; the clamping jaw at the end of the arm grips and transfers the sleeper foreign matter and places it in the collecting box.
Further, in step S1, the sleeper data set includes 300 to 400 sleeper color images under different illumination conditions.
Further, in step S1, the different illumination conditions include one of the following complex illumination interference phenomena: insufficient illumination, ground reflection, intense illumination and dark shadows.
Further, step S3 filters the sleeper plane; in the plane fitting by the least-squares method, the parameters of the target plane can be calculated by taking the sum of squared distances from each discrete point to the target plane as the optimization function, where the general expression of a three-dimensional plane is:
Ax+By+Cz+D=0 (1)
wherein A, B, C and D are plane parameter values, and (x, y and z) are points in a three-dimensional space;
the distance d from any point in three-dimensional space to the plane can be expressed as:
d = (Ax₁ + By₁ + Cz₁ + D)/√(A² + B² + C²) (2)
which can be rearranged to obtain:
d = a₀x₁ + a₁y₁ + a₂z₁ + a₃ (3)
where (x₁, y₁, z₁) are the coordinates of any point in three-dimensional space, and
a₀ = A/√(A² + B² + C²), a₁ = B/√(A² + B² + C²), a₂ = C/√(A² + B² + C²), a₃ = D/√(A² + B² + C²);
according to the least-squares method, finding the plane parameters closest to all of the points means minimizing the sum of their squared point-to-plane distances:
S = Σ(a₀xᵢ + a₁yᵢ + a₂zᵢ + a₃)² (4)
In matrix form the quantity to be minimized is:
S = (AₚX − b)ᵀ(AₚX − b) (5)
where Aₚ = (xᵢ, yᵢ, zᵢ) is the three-dimensional coordinate matrix of the points used to fit the plane, X is the vector of fitted-plane parameters, and b is an all-ones column vector of the same shape as AₚX (the plane being normalized so that a₃ = −1);
when S is required to be minimal, then:
∂S/∂X = 0 (6)
so that differentiating equation (5) with respect to X gives
AₚᵀAₚX = Aₚᵀb (7)
namely
X = (AₚᵀAₚ)⁻¹Aₚᵀb (8)
Substituting the RealSense point cloud data, in (n, 3) format, into equation (8) yields the plane parameters of the sleeper: the x, y, z values of the pixels in the picture are obtained from the depth image and used to compute the normal-vector parameters of the sleeper plane. With these parameters, the distance from each point in the depth image to the sleeper plane is computed with the OpenCV library, the color information of the corresponding pixels whose distance is smaller than a set threshold is set to black in the color picture, and the plane filtering is complete.
Further, the step S4 adopts Canny algorithm edge detection.
Further, the values of the three RGB channels are computed with the OpenCV library and the image is converted to a gray image; a Gaussian filter is used for noise reduction. The Gaussian filter can be implemented as two one-dimensional Gaussian-kernel passes or as a single two-dimensional Gaussian-kernel convolution, and the gray value after convolution is:
g(x, y) = f(x, y) * (1/(2πσ²)) e^(−(x²+y²)/(2σ²)) (9)
where σ denotes the standard deviation and f(x, y) is the gray value of each point in the image coordinate system;
the first-order difference of the noise-reduced image is then computed by convolving the image with the Sobel operator to obtain the gradient matrices in the x and y directions:
Gₓ = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]] * A,  G_y = [[−1, −2, −1], [0, 0, 0], [1, 2, 1]] * A (10)
where A is the image gray matrix after Gaussian convolution and Gₓ, G_y are the computed gradient matrices of the image in the two directions; from the two directional gradients the total gradient is computed, with amplitude
G = √(Gₓ² + G_y²)
and direction θ = arctan(G_y/Gₓ);
For the Gaussian-filtered image, refined edges are determined from the somewhat blurred edge information: each edge pixel is compared with its neighboring pixels and only the point with the locally maximum gradient is kept, giving clear, sharp edges.
Further, edges are detected and connected with a double-threshold method: high and low thresholds on the gradient value are set to screen the image; filtering with the high threshold yields an image with few false edges, and the edge endpoints in the high-threshold image are then connected according to the low threshold to obtain a complete closed contour curve.
Further, in step S4, the internal and external parameters of the depth image and the sleeper color image are obtained by calibration of the depth camera's sleeper color image and depth image:
Z_rgb·[x_rgb, y_rgb, 1]ᵀ = K_rgb·[R_rgb | T_rgb]·[X_w, Y_w, Z_w, 1]ᵀ (11)
Z_d·[x_d, y_d, 1]ᵀ = K_d·[R_d | T_d]·[X_w, Y_w, Z_w, 1]ᵀ (12)
where (x_rgb, y_rgb) and (x_d, y_d) are coordinates in the pixel coordinate systems of the sleeper color image and the depth image, Z_rgb and Z_d are scale factors, K = [[f, 0, u₀], [0, f, v₀], [0, 0, 1]] is the internal reference matrix with f the camera focal length and (u₀, v₀) the origin coordinates of the image coordinate system, R_rgb, T_rgb and R_d, T_d are respectively the relative rotation and position matrices in the color-camera and depth-camera external parameters, and (X_w, Y_w, Z_w) is the target three-dimensional coordinate, a point in the world coordinate system;
combining the internal reference matrices realizes the registration of the sleeper color image and the depth image:
P_rgb = M·P_d (13)
where P_rgb and P_d are the homogeneous coordinates of the same point in the color-camera and depth-camera frames and M is a 4 × 4 transformation matrix that can be obtained by substituting several groups of corresponding points from the depth and color images; the color and depth images are thereby registered, so that from the pixel coordinate system of the sleeper color image the target three-dimensional coordinates (X_w, Y_w, Z_w) can be obtained as required.
Compared with the prior art, the invention has the following advantages and beneficial effects:
according to the depth camera-based sleeper foreign matter detection method, a self-made sleeper data set is learned by adopting CenterNet, target detection is utilized to filter image information outside a sleeper, a background plane is extracted by a least square method for depth image information and filtered, foreign matter information is obtained through edge detection, a three-dimensional position of an object is obtained by combining a depth image and a color image, foreign matter detection of the sleeper can be achieved through a single depth camera only by training the sleeper environment, training for different foreign matters is not needed, the method is suitable for foreign matter detection of various shapes and types, and is convenient and rapid, and rapid detection can be achieved.
Drawings
Fig. 1 is a flowchart of a sleeper foreign matter detection method of the present invention.
Fig. 2 is a hardware device to which the present invention relates.
Fig. 3 is a schematic illustration of the sleeper detection effect of the present invention.
FIG. 4 is a schematic diagram of depth image plane filtering results of the present invention.
FIG. 5 is a schematic diagram of the edge detection effect of the present invention.
Fig. 6 is a schematic illustration of the final effect in one embodiment of the invention.
Fig. 7 is a schematic illustration of the final effect in another embodiment of the invention.
Fig. 8(a) is a foreign matter information map of the sleeper color image after filtering according to the depth information.
Fig. 8(b) is a sleeper color image original of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. The invention provides a sleeper foreign matter detection method based on a depth camera, which comprises the following steps as shown in figures 1-7:
s1, target detection neural network training, namely fixing a camera on a railway inspection platform, collecting sleeper color images under different illumination positions by the camera, labeling sleepers in the sleeper color images, making a sleeper data set, sending the sleeper data set into a CenterNet target detection neural network for training, and performing sleeper detection on the sleeper color images by using the trained network to obtain an outer surrounding frame of the sleepers;
according to the invention, sleeper color images under different illumination position conditions are collected by a camera to manufacture a sleeper data set, the sleeper data set is sent to a CenterNet target detection neural network for training, in the actual detection process, the detection is not influenced by external illumination and changing environment, and the purpose of rapid detection is realized.
S2, background filtering, namely carrying out target detection on the real-time image acquired by the camera with the neural network trained in step S1, and, according to the outer bounding box of the target-detection result, filtering the pixel information outside the box in the sleeper color image and the corresponding depth image with a mask function of the OpenCV library and setting it to black;
the sleeper is arranged on the ballast, so that the ballast is easily influenced by the external environment and moves to the sleeper, and the ballast also belongs to one type of foreign matters of the sleeper.
S3, plane filtering, namely mapping the sleeper plane area identified in the sleeper color image to the corresponding area of the depth image, identifying the plane parameters in the depth image by the least-squares method, and filtering out the pixel points of the plane in the color image according to the plane parameters;
through plane filtering, the sleeper foreign matter existence information of unknown shape and size can be identified through depth information, and the corresponding area is mapped to a color image, and meanwhile, the position is convenient to obtain.
Step S4, edge detection, namely setting an area threshold and defining any region whose area exceeds the threshold as containing sleeper foreign matter, and regressing the three-dimensional coordinate of the sleeper foreign matter center point through the depth information;
s5, the signal that the sleeper foreign matter exists and the three-dimensional coordinate of the central point of the sleeper foreign matter are issued to an upper computer, the sleeper foreign matter is removed, the railway inspection platform moves along the track, and the step S2 is skipped until all sleepers in a section of track are detected.
In this method, CenterNet learns the self-made sleeper data set; target detection filters out image information outside the sleeper; the background plane is extracted from the depth-image information by the least-squares method and filtered out; foreign-matter information is obtained through edge detection; and the three-dimensional position of the object is obtained by combining the depth image and the color image. Training only the sleeper environment suffices for detecting foreign matter of various shapes and types with a depth camera, conveniently and rapidly.
While the inspection platform is moving, the invention can rapidly detect foreign-object samples unknown to the system under different illumination conditions and obtain their position and in-plane pose. It detects foreign matter of unknown shape and type together with its position and placing posture, whereas common algorithms only recognize trained samples and cannot return the position.
As shown in fig. 2, the sleeper foreign matter detection method based on a depth camera uses a sleeper foreign matter detection device comprising a railway inspection platform and an upper computer. The railway inspection platform is equipped with a mechanical arm, the depth camera, a clamping jaw and a collecting box; the mechanical arm and the depth camera are arranged on the railway inspection platform, and the clamping jaw is mounted at the end of the mechanical arm. When the upper computer receives the sleeper foreign matter signal and the three-dimensional coordinate of the foreign-matter center point, it rapidly plans a motion path for the mechanical arm; the clamping jaw at the end of the arm grips and transfers the sleeper foreign matter and places it in the collecting box.
The mechanical arm, clamping jaw and depth camera are connected to the upper computer through dedicated cables.
In step S1, as shown in fig. 3, the sleeper data set includes 300 to 400 sleeper color images under different illumination conditions, which include one of the following complex illumination interference phenomena: insufficient illumination, ground reflection, intense illumination and dark shadows. Foreign matter on sleepers can therefore be inspected under the illumination conditions of different places, which enhances the applicability of sleeper foreign matter identification.
As shown in fig. 4, in step S3 plane filtering is performed on the sleeper; in the plane fitting by the least-squares method, the parameters of the target plane can be calculated by taking the sum of squared distances from each discrete point to the target plane as the optimization function.
The specific implementation process is as follows:
the general expression for the three-dimensional plane is:
Ax+By+Cz+D=0 (1)
wherein A, B, C and D are plane parameter values, and (x, y and z) are points in a three-dimensional space;
the distance d from any point in three-dimensional space to the plane can be expressed as:
d = (Ax₁ + By₁ + Cz₁ + D)/√(A² + B² + C²) (2)
which can be rearranged to obtain:
d = a₀x₁ + a₁y₁ + a₂z₁ + a₃ (3)
where (x₁, y₁, z₁) are the coordinates of any point in three-dimensional space, and
a₀ = A/√(A² + B² + C²), a₁ = B/√(A² + B² + C²), a₂ = C/√(A² + B² + C²), a₃ = D/√(A² + B² + C²);
according to the least-squares method, finding the plane parameters closest to all of the points means minimizing the sum of their squared point-to-plane distances:
S = Σ(a₀xᵢ + a₁yᵢ + a₂zᵢ + a₃)² (4)
In matrix form the quantity to be minimized is:
S = (AₚX − b)ᵀ(AₚX − b) (5)
where Aₚ = (xᵢ, yᵢ, zᵢ) is the three-dimensional coordinate matrix of the points used to fit the plane, X is the vector of fitted-plane parameters, and b is an all-ones column vector of the same shape as AₚX (the plane being normalized so that a₃ = −1);
when S is required to be minimal, then:
∂S/∂X = 0 (6)
so that differentiating equation (5) with respect to X gives
AₚᵀAₚX = Aₚᵀb (7)
namely
X = (AₚᵀAₚ)⁻¹Aₚᵀb (8)
Substituting the RealSense point cloud data, in (n, 3) format, into equation (8) yields the plane parameters of the sleeper. The x, y, z values of the pixels in the picture are obtained from the depth image and used to compute the normal-vector parameters of the sleeper plane; with these parameters, the distance from each point in the depth image to the sleeper plane is computed with the OpenCV library, the color information of the corresponding pixels whose distance is smaller than the set threshold is set to black in the color picture, and the plane filtering is complete.
Through plane filtering, all points in the sleeper color image other than sleeper foreign matter can be filtered out, leaving only the foreign matter.
Specifically, the distance between each discrete data point and the fitted plane is calculated from the depth information; a threshold is set and points below it are regarded as plane points, whose pixel information in the color image is filtered out with a mask function in the OpenCV library, giving an image that contains only the target object in the field of view, as shown in fig. 8(a) and 8(b): fig. 8(b) is the sleeper color image and fig. 8(a) is the foreign-matter information obtained by filtering it according to the depth information. After the camera collects the sleeper plane information, the presence of foreign matter on the sleeper plane can be identified through the depth image; the black area in fig. 8(a) is the filtered sleeper area and the blue area is the pixel region remaining after plane extraction, on which further feature extraction of the color image is subsequently performed for foreign-matter identification.
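The least-squares fit of equation (8) and the subsequent distance thresholding can be sketched as follows. Normalizing the plane to a₀x + a₁y + a₂z = 1, so that b is an all-ones vector, is one common convention and an assumption here; it cannot represent planes through the origin, a case that does not arise for a camera looking down at a sleeper surface. Function names are illustrative, not from the patent.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane a0*x + a1*y + a2*z = 1 through an (n, 3) point cloud.

    Solves X = (Ap^T Ap)^-1 Ap^T b from equation (8) with b a column of ones;
    np.linalg.lstsq computes the same solution in a numerically stabler way.
    """
    A_p = np.asarray(points, dtype=float)
    b = np.ones(len(A_p))
    X, *_ = np.linalg.lstsq(A_p, b, rcond=None)
    return X

def plane_point_mask(points, X, threshold=0.01):
    """True for points within `threshold` of the fitted plane.

    These are the pixels whose color information is then set to black.
    """
    pts = np.asarray(points, dtype=float)
    d = np.abs(pts @ X - 1.0) / np.linalg.norm(X)  # point-to-plane distance
    return d < threshold
```

A RealSense point cloud already arrives as an (n, 3) array, so it can be passed to `fit_plane` directly, as the text above describes.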
As shown in fig. 5, step S4 employs Canny algorithm edge detection.
The values of the three RGB channels are computed with the OpenCV library and the image is converted to a gray image; Gaussian filtering is used for noise reduction. The Gaussian filter can be implemented as two one-dimensional Gaussian-kernel passes or as a single two-dimensional Gaussian-kernel convolution, and the gray value after convolution is:
g(x, y) = f(x, y) * (1/(2πσ²)) e^(−(x²+y²)/(2σ²)) (9)
where σ denotes the standard deviation and f(x, y) is the gray value of each point in the image coordinate system.
After convolution the image becomes smoother: high-frequency pixel noise is removed, though the width of the edges increases. The first-order difference of the noise-reduced image is then computed by convolving it with the Sobel operator, giving the gradient matrices in the x and y directions:
Gₓ = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]] * A,  G_y = [[−1, −2, −1], [0, 0, 0], [1, 2, 1]] * A (10)
where A is the image gray matrix after Gaussian convolution and Gₓ, G_y are the resulting gradient matrices in the two directions. From the two directional gradients the total gradient is computed, with amplitude
G = √(Gₓ² + G_y²)
and direction θ = arctan(G_y/Gₓ).
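The Sobel gradient matrices Gₓ, G_y and the magnitude/direction formulas described above can be written out directly; the following is a minimal NumPy sketch over the valid interior region (no border padding), with illustrative names.

```python
import numpy as np

# 3x3 Sobel kernels for the x and y directions.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_gradients(img):
    """Return gradient amplitude sqrt(Gx^2 + Gy^2) and direction arctan(Gy/Gx)."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]       # 3x3 window around the pixel
            gx[i, j] = float((KX * win).sum())
            gy[i, j] = float((KY * win).sum())
    return np.hypot(gx, gy), np.arctan2(gy, gx)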
For the Gaussian-filtered image, refined edges are determined from the somewhat blurred edge information: each edge pixel is compared with its neighboring pixels and only the point with the locally maximum gradient is kept, giving clear, sharp edges. Edges are then detected and connected with a double-threshold method: first, high and low thresholds on the gradient value are set to screen the image; filtering with the high threshold yields an image with few false edges, and the edge endpoints in the high-threshold image are connected according to the low threshold to obtain a complete closed contour curve.
As shown in fig. 6 and 7, in step S4, after the area containing foreign matter has been mapped to the sleeper color image by plane filtering, edge detection is performed on the foreign-matter area of the color image; the minimum outer bounding box of the edge gives the position of the center point in the color image and the rotation angle of the object, and fusing the depth image with the corresponding position information of the color image gives the three-dimensional coordinates of the foreign-matter center point in the camera coordinate system.
Specifically, the internal and external parameters of the depth image and the sleeper color image are obtained by calibration of the depth camera's sleeper color image and depth image:
Z_rgb·[x_rgb, y_rgb, 1]ᵀ = K_rgb·[R_rgb | T_rgb]·[X_w, Y_w, Z_w, 1]ᵀ (11)
Z_d·[x_d, y_d, 1]ᵀ = K_d·[R_d | T_d]·[X_w, Y_w, Z_w, 1]ᵀ (12)
where (x_rgb, y_rgb) and (x_d, y_d) are coordinates in the pixel coordinate systems of the sleeper color image and the depth image, Z_rgb and Z_d are scale factors, K = [[f, 0, u₀], [0, f, v₀], [0, 0, 1]] is the internal reference matrix with f the camera focal length and (u₀, v₀) the origin coordinates of the image coordinate system, R_rgb, T_rgb and R_d, T_d are respectively the relative rotation and position matrices in the color-camera and depth-camera external parameters, and (X_w, Y_w, Z_w) is the target three-dimensional coordinate, a point in the world coordinate system;
combining the internal reference matrices realizes the registration of the sleeper color image and the depth image:
P_rgb = M·P_d (13)
where P_rgb and P_d are the homogeneous coordinates of the same point in the color-camera and depth-camera frames and M is a 4 × 4 transformation matrix that can be obtained by substituting several groups of corresponding points from the depth and color images; the color and depth images are thereby registered, so that from the pixel coordinate system of the sleeper color image the target three-dimensional coordinates (X_w, Y_w, Z_w) can be obtained as required.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A sleeper foreign matter detection method based on a depth camera is characterized by comprising the following steps:
s1, target detection neural network training, namely fixing a camera on a railway inspection platform, collecting sleeper color images under different illumination positions by the camera, labeling sleepers in the sleeper color images, making a sleeper data set, sending the sleeper data set into a CenterNet target detection neural network for training, and performing sleeper detection on the sleeper color images by using the trained network to obtain an outer surrounding frame of the sleepers;
s2, background filtering, namely, carrying out target detection on the real-time image acquired by the camera through the neural network trained in the step S1, and filtering pixel information outside the outer surrounding frame by adopting a mask function in an OpenCV library in the sleeper color image and the corresponding depth image according to the outer surrounding frame of a target detection result and setting the pixel information to be black;
s3, performing plane filtering, namely identifying the sleeper color image to a sleeper plane area corresponding to a corresponding area of the depth image, identifying plane parameters in the depth image by adopting a least square method, and filtering pixel points of a plane in the color image according to the plane parameters;
Step S4, edge detection, namely setting an area threshold and defining any region whose area exceeds the threshold as containing sleeper foreign matter, and regressing the three-dimensional coordinate of the sleeper foreign matter center point through the depth information;
and S5, issuing a signal indicating the existence of the sleeper foreign matter and the three-dimensional coordinate of the central point of the sleeper foreign matter to an upper computer, removing the sleeper foreign matter, and jumping to S2 until all sleepers in a section of track are detected.
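As an illustrative sketch (not part of the claims) of the background-filtering step s2, the pixels outside the detected bounding box can be blacked out in both the color image and the depth image. The function name and the bounding-box values below are hypothetical; in the claimed method the box comes from the CenterNet detector and the masking is done with an OpenCV mask function.

```python
import numpy as np

def filter_background(color_img, depth_img, bbox):
    """Set all pixels outside the sleeper bounding box to black,
    mimicking step s2. bbox = (x0, y0, x1, y1) is assumed to come
    from the target detector; the names here are illustrative."""
    x0, y0, x1, y1 = bbox
    mask = np.zeros(color_img.shape[:2], dtype=bool)
    mask[y0:y1, x0:x1] = True
    # keep pixel values inside the box, zero (black) everywhere else
    color_out = np.where(mask[..., None], color_img, 0)
    depth_out = np.where(mask, depth_img, 0)
    return color_out, depth_out

# toy 100x100 white image with a constant depth map
color = np.full((100, 100, 3), 255, dtype=np.uint8)
depth = np.full((100, 100), 1000, dtype=np.uint16)
c, d = filter_background(color, depth, (20, 30, 80, 90))
```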
2. The depth camera-based sleeper foreign object detection method as claimed in claim 1, characterized in that: the method uses a sleeper foreign matter detection device comprising a railway inspection platform and an upper computer, the railway inspection platform being equipped with a mechanical arm, a depth camera, a gripper and a collection box; the mechanical arm and the depth camera are arranged on the railway inspection platform, and the gripper is mounted at the end of the mechanical arm; upon receiving the sleeper-foreign-matter signal and the three-dimensional coordinates of the foreign matter center point, the upper computer rapidly plans a motion path for the mechanical arm, and the gripper at the end of the mechanical arm grasps the sleeper foreign matter and places it in the collection box.
3. The depth camera-based sleeper foreign object detection method as claimed in claim 1, characterized in that: in step s1, the sleeper data set includes 300 to 400 sleeper color images under different illumination conditions.
4. The depth camera-based sleeper foreign object detection method as claimed in claim 1, characterized in that: in step s1, the different illumination conditions include at least one of the following complex illumination interference phenomena: insufficient illumination, ground reflection, intense illumination and dark shadows.
5. The depth camera-based sleeper foreign object detection method as claimed in claim 1, characterized in that: step s3 filters the sleeper plane; in the plane fitting by the least square method, the parameters of the target plane are calculated using the sum of squared distances from each discrete point to the target plane as the optimization function, the general expression of a three-dimensional plane being:
Ax+By+Cz+D=0 (1)
wherein A, B, C and D are plane parameter values, and (x, y and z) are points in a three-dimensional space;
the distance d to the three-dimensional plane for any point in three-dimensional space can be expressed as:
Figure FDA0003736973170000021
finishing to obtain:
d=a 0 x 1 +a 1 y 1 +a 2 z 1 +a 3 (3)
in the formula (x) 1 ,y 1 ,z 1 ) The coordinates of any point in the three-dimensional space,
Figure FDA0003736973170000031
Figure FDA0003736973170000032
according to the least square method, to find the plane parameters closest to all the points, the sum of the squared distances between the points and the plane is calculated as:

S = ∑(a₀xᵢ + a₁yᵢ + a₂zᵢ + a₃)² (4)

written in matrix form, the optimization objective becomes:

S = ‖AₚX − b‖² (5)

where Aₚ = (xᵢ, yᵢ, zᵢ) is the three-dimensional coordinate matrix of the fitted plane, X = (a₀, a₁, a₂)ᵀ is the parameter vector of the fitted plane, and b is a column vector of ones with the same shape as AₚX (the plane equation being normalised so that its right-hand side equals one);
when S is required to be minimal, then:

∂S/∂X = 0 (6)

so that differentiating equation (5) with respect to X gives:

∂S/∂X = 2Aₚᵀ(AₚX − b) = 0 (7)

namely:

X = (AₚᵀAₚ)⁻¹Aₚᵀb (8)
substituting the RealSense point cloud data in (n, 3) format into equation (8) yields the plane parameters of the sleeper; the x, y and z values of the pixel points in the picture are obtained from the depth image to compute the normal vector parameters of the sleeper plane; the distance from each point in the depth image to the sleeper plane is then computed with the OpenCV library according to these normal vector parameters, and for points whose distance is smaller than the set threshold, the color information of the corresponding pixels in the color picture is set to black, completing the plane filtering.
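Equation (8) can be exercised with a short NumPy sketch: synthetic points are sampled on a known plane, the plane equation is normalised so that the right-hand side b is a vector of ones, and the recovered parameter vector X is compared against the normalised ground truth. All data below are made up for illustration; in the claimed method Aₚ would hold the RealSense point cloud in (n, 3) format.

```python
import numpy as np

# synthetic point cloud on the plane z = 0.5 - 0.2x - 0.1y,
# i.e. 0.2x + 0.1y + z = 0.5, or normalised: 0.4x + 0.2y + 2z = 1
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(200, 2))
z = 0.5 - 0.2 * xy[:, 0] - 0.1 * xy[:, 1]

A_p = np.column_stack([xy, z])   # (n, 3) coordinate matrix of the plane points
b = np.ones((A_p.shape[0], 1))   # right-hand side: column vector of ones

# X = (A_p^T A_p)^{-1} A_p^T b, i.e. equation (8)
X = np.linalg.solve(A_p.T @ A_p, A_p.T @ b)

# distance of each point to the fitted plane (should be ~0 for exact data)
dist = np.abs(A_p @ X - 1).ravel() / np.linalg.norm(X)
```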
6. The depth camera-based sleeper foreign object detection method as claimed in claim 1, characterized in that: step s4 adopts the Canny algorithm for edge detection.
7. The depth camera-based sleeper foreign object detection method of claim 6, characterized in that: the RGB values of the three channels are first computed with the OpenCV library to convert the image into a grayscale image, and noise reduction is performed by Gaussian filtering; the Gaussian filtering is realized either by two weightings with a one-dimensional Gaussian kernel or by a single convolution with a two-dimensional Gaussian kernel, the gray value after convolution being:

g(x, y) = f(x, y) ∗ (1/(2πσ²)) e^(−(x² + y²)/(2σ²)) (9)
where σ denotes the standard deviation, f(x, y) is the gray value of each point in the image coordinate system, and ∗ denotes convolution; the first-order difference of the noise-reduced image is then calculated by convolving the image with the Sobel operator to obtain the gradient matrices of the image in the x-axis and y-axis directions:

Gx = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]] ∗ A,  Gy = [[−1, −2, −1], [0, 0, 0], [1, 2, 1]] ∗ A (10)

where A is the image gray matrix after the Gaussian convolution and Gx, Gy are the gradient matrices of the image in the two directions; from the gradients in the two directions, the total gradient has magnitude

G = √(Gx² + Gy²) (11)

and direction θ = arctan(Gy/Gx);
for the Gaussian-filtered image, the refined edge is determined from the blurred edge information by comparing each edge pixel with its neighboring pixels and retaining only the points of locally maximal gradient, yielding a clean, sharp edge.
8. The depth camera-based sleeper foreign object detection method of claim 7, characterized in that: edges are detected and connected by a double-threshold method: a high threshold and a low threshold are set on the gradient values to screen the image; filtering with the high threshold yields an image with few false edges, and the low threshold is then used to connect the edge endpoints in the high-threshold image into a complete closed contour curve.
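A simplified numerical sketch of the gradient step in claims 7 and 8 (Sobel convolution, then gradient magnitude and direction) is given below. This is an explicit-loop illustration only; a production system would use OpenCV's cv2.Sobel or cv2.Canny, and the vertical step-edge test image is made up.

```python
import numpy as np

def sobel_gradients(img):
    """Convolve a grayscale image with the two Sobel kernels and
    return the gradient magnitude and direction (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    mag = np.hypot(gx, gy)      # total gradient magnitude
    theta = np.arctan2(gy, gx)  # gradient direction
    return mag, theta

# vertical step edge: left half 0, right half 255
img = np.zeros((10, 10))
img[:, 5:] = 255.0
mag, theta = sobel_gradients(img)
```

The double-threshold stage of claim 8 would then keep pixels with magnitude above the high threshold as strong edges and use the low threshold to connect them into closed contours.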
9. The depth camera-based sleeper foreign object detection method as claimed in claim 1, characterized in that: in step s4, the sleeper color image and the depth image of the depth camera are calibrated to obtain the intrinsic and extrinsic parameters of the depth image and the sleeper color image:
Z_rgb [x_rgb, y_rgb, 1]ᵀ = [[f, 0, u₀], [0, f, v₀], [0, 0, 1]] [R_rgb | T_rgb] [X_w, Y_w, Z_w, 1]ᵀ (12)

Z_d [x_d, y_d, 1]ᵀ = [[f, 0, u₀], [0, f, v₀], [0, 0, 1]] [R_d | T_d] [X_w, Y_w, Z_w, 1]ᵀ (13)

where (x_rgb, y_rgb) and (x_d, y_d) are the coordinates of the sleeper color image and the depth image in their pixel coordinate systems, Z_rgb and Z_d are scale factors, f is the camera focal length, (u₀, v₀) are the origin coordinates of the image coordinate system, R_rgb, T_rgb and R_d, T_d are the relative rotation matrices and relative position matrices in the extrinsic parameters of the color camera and the depth camera respectively, and (X_w, Y_w, Z_w) is the target three-dimensional coordinate, a point in the world coordinate system;
combining the intrinsic matrices, the registration of the sleeper color image and the depth image is realized:

[x_rgb Z_rgb, y_rgb Z_rgb, Z_rgb, 1]ᵀ = M [x_d Z_d, y_d Z_d, Z_d, 1]ᵀ (14)

where M is a 4×4 transformation matrix that can be obtained by substituting several groups of corresponding points from the depth image and the color image; the registration of the color image and the depth image is thus realized from the coordinate values in the pixel coordinate system of the sleeper color image as required, and the target three-dimensional coordinate (X_w, Y_w, Z_w) is then obtained.
CN202210561931.3A 2022-05-23 2022-05-23 Sleeper foreign matter detection method based on depth camera Active CN114638835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210561931.3A CN114638835B (en) 2022-05-23 2022-05-23 Sleeper foreign matter detection method based on depth camera


Publications (2)

Publication Number Publication Date
CN114638835A CN114638835A (en) 2022-06-17
CN114638835B true CN114638835B (en) 2022-08-16

Family

ID=81952883


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115352490B (en) * 2022-10-24 2023-03-24 四川赛博创新科技有限公司 Railway track crack detection device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005120924A1 (en) * 2004-06-11 2005-12-22 Stratech Systems Limited Method and system for rail track scanning and foreign object detection
US20190039633A1 (en) * 2017-08-02 2019-02-07 Panton, Inc. Railroad track anomaly detection
CN114248819B (en) * 2020-09-25 2023-12-29 中车株洲电力机车研究所有限公司 Railway intrusion foreign matter unmanned aerial vehicle detection method, device and system based on deep learning
CN112949482B (en) * 2021-03-01 2022-04-29 浙江大学 Non-contact type rail sleeper relative displacement real-time measurement method based on deep learning and visual positioning
CN113011283B (en) * 2021-03-01 2022-04-29 浙江大学 Non-contact type rail sleeper relative displacement real-time measurement method based on video



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant