CN117433447A - Three-dimensional geometric detection system for rail fastener - Google Patents

Three-dimensional geometric detection system for rail fastener

Info

Publication number: CN117433447A
Application number: CN202311377492.1A
Authority: CN
Prior art keywords: fastener, image, dimensional, camera, detection
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 李治蒙 (Li Zhimeng), 张志毅 (Zhang Zhiyi), 杨磊 (Yang Lei)
Current assignee: Northwest A&F University
Original assignee: Northwest A&F University
Application filed by Northwest A&F University; priority to CN202311377492.1A

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/2433: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures for measuring outlines by shadow casting
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

The application discloses a three-dimensional geometric detection system for rail fasteners, comprising: a measuring device, which comprises a camera and a laser projector, the laser projector projecting light bars onto the rail fastener and the camera collecting fastener images containing the light bars; and a computer provided with detection software, which determines the surface three-dimensional information of the rail fastener from the gray information of the light bars in the fastener image, converts the surface three-dimensional information into corresponding elevation three-dimensional data referenced to the substrate surface, encodes the elevation data to form an actual elevation code, and compares the actual elevation code with a standard elevation code to determine the state of the rail fastener. Because the rail fastener is detected with a three-dimensional scanning technique, which collects and analyzes three-dimensional information of the fastener surface, the state of the fastener can be detected automatically, which reduces labor cost, shortens detection time, and improves the accuracy of the detection result.

Description

Three-dimensional geometric detection system for rail fastener
Technical Field
The application relates to the technical field of industrial equipment detection, in particular to a three-dimensional geometric detection system for a steel rail fastener.
Background
With the rapid development of China's railways, the load carried by rail transportation keeps growing, wear on track infrastructure increases accordingly, and stricter requirements are imposed on transportation safety. As the scale of railway operation expands, driving safety has become one of the major bottlenecks in the development of rail transportation. The railway line is an important component of the railway system, and rail fasteners are key parts of it whose state directly affects driving safety. Good fastener condition is therefore an important guarantee of safe railway transportation, and developing an accurate, efficient and safe rail fastener detection method is essential for safety monitoring along the line.
Traditional rail fastener inspection is mainly manual. Manual inspection can maintain the safety state of fasteners to a certain extent, but it suffers from low efficiency, high cost and missed detections, and exposes inspectors to safety risks. On this basis, modern detection methods based on computer vision have been developed. At present there are two main approaches for rail fastener detection. The first is traditional image processing: an image is captured by a high-speed camera, preprocessing such as edge detection and morphological processing is applied to the fastener image, and feature information is searched in the image to detect defects. However, this method is disturbed by the complex track environment, and its detection accuracy is low. The second is track image detection based on deep learning, which is trained on large amounts of track image data; feature training is difficult, however, because large, well-curated sets of railway fastener defect images are lacking. In addition, the heavy computation and resource consumption of this method make it difficult to deploy in practice.
Disclosure of Invention
The embodiment of the application provides a three-dimensional geometric detection system for rail fasteners, which addresses the low detection accuracy and high cost of detection methods in the prior art.
In one aspect, embodiments of the present application provide a rail fastener three-dimensional geometry detection system comprising:
a measuring device comprising a camera and a laser projector, wherein the laser projector projects light bars onto the rail fastener and the camera collects fastener images containing the light bars;
and a computer in communication connection with the camera. Detection software installed on the computer performs positioning and marking on the fastener image to determine whether a rail fastener appears in it. If so, the detection software determines the surface three-dimensional information of the rail fastener from the gray information of the light bars in the fastener image, converts the surface three-dimensional information into corresponding elevation three-dimensional data referenced to the substrate surface, and encodes the elevation data to form an actual elevation code. The detection software then compares the actual elevation code with a standard elevation code to determine the state of the rail fastener.
The three-dimensional geometric detection system of the steel rail fastener has the following advantages:
the three-dimensional scanning technology is adopted to detect the steel rail fasteners, the technology collects three-dimensional information of the steel rail fasteners by combining technologies such as image processing, computer vision, quick high-resolution camera calibration and the like, and the detection of the state of the steel rail fasteners can be realized by analyzing and processing the three-dimensional information, so that the labor cost is reduced, the detection time is shortened, and the accuracy of a detection result is improved. In addition, because the technical hardware principle is simple, the three-dimensional information acquisition of the surface of the fastener can be realized only by a camera, a laser projector and detection software, and the cost is greatly reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of the three-dimensional geometric detection system for rail fasteners and the position of the rail fasteners provided in embodiments of the present application;
FIG. 2 is a schematic view of a detection plane of a rail fastener according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the camera imaging process according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a structured light measurement model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a multi-line structured light measurement model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the elevation three-dimensional data of the first light bar according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the elevation three-dimensional data of the second light bar according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the elevation three-dimensional data of the third light bar according to an embodiment of the present application;
FIG. 9 is a schematic diagram of the elevation three-dimensional data of the fourth light bar according to an embodiment of the present application;
FIG. 10 is a schematic diagram of the elevation three-dimensional data of the fifth light bar according to an embodiment of the present application;
FIG. 11 is a three-dimensional elevation schematic of a plurality of light bar points according to an embodiment of the present application;
FIG. 12 is a three-dimensional elevation schematic of a plurality of light bar points in the first light bar according to an embodiment of the present application;
FIG. 13 is a three-dimensional elevation schematic of a plurality of light bar points in the second light bar according to an embodiment of the present application;
FIG. 14 is a three-dimensional elevation schematic of a plurality of light bar points in the third light bar according to an embodiment of the present application;
FIG. 15 is a three-dimensional elevation schematic of a plurality of light bar points in the fourth light bar according to an embodiment of the present application;
FIG. 16 is a three-dimensional elevation schematic of a plurality of light bar points in the fifth light bar according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is made clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art without inventive effort based on the present disclosure fall within the scope of the present disclosure.
Fig. 1 is a schematic diagram of a three-dimensional geometric detection system for a rail fastener and a position of the rail fastener according to an embodiment of the present application. The embodiment of the application provides a three-dimensional geometric detection system of rail fastener, including:
a measuring device comprising a camera and a laser projector, wherein the laser projector projects light bars onto the rail fastener and the camera collects fastener images containing the light bars;
and a computer in communication connection with the camera. Detection software installed on the computer performs positioning and marking on the fastener image to determine whether a rail fastener appears in it. If so, the detection software determines the surface three-dimensional information of the rail fastener from the gray information of the light bars in the fastener image, converts the surface three-dimensional information into corresponding elevation three-dimensional data referenced to the substrate surface, and encodes the elevation data to form an actual elevation code. The detection software then compares the actual elevation code with a standard elevation code to determine the state of the rail fastener.
For example, the measuring device may be disposed at the bottom of a rail car. As the rail car moves along the track, the camera in the measuring device continuously collects fastener images to form an image sequence. After the detection software screens the sequence, images containing rail fastener components such as the substrate, bolts, sleeves or spring strips are retained, and the other images are deleted.
Further, since the two rails run in parallel and fasteners are arranged on both sides of each rail, four measuring devices are used in the embodiment of the application, arranged in two groups of two; the two measuring devices in each group face the fasteners on the two sides of the same rail, so that the fasteners of both rails are detected simultaneously.
Experimental comparison and verification showed that five light bars meet the system requirements and allow the fastener to be detected comprehensively. Therefore, five light bars were selected for the experiments in the final system measurements.
From the detection process of the measuring device, a plan view of fastener detection can be obtained as shown in fig. 2. In actual detection, the laser projector projects five light planes onto the fastener area; the planes intersect the surfaces of the rail, spring strip, sleeve and bolt to form bright light bars on those surfaces. The camera then captures an image of the fastener, and the detection software obtains three-dimensional point cloud data of the light bar points by laser triangulation. The five light bars serve different purposes. The first light bar is projected on the rail surface and fixes the light bar detection area, so that subsequent processing is restricted to the light bar region of the image to increase detection speed. The second light bar is projected at the junction where the spring strip holds the rail and detects whether the spring strip is missing. The third light bar is projected on the surfaces of the sleeve and the spring strip and detects whether the sleeve is cracked and whether the spring strip is complete. The fourth light bar is projected on the surfaces of the bolt and the spring strip and detects bolt loss and spring strip completeness. The fifth light bar is projected on the spring strip surface to confirm its integrity.
The camera in the application forms the fastener image according to the pinhole imaging principle. Four coordinate systems are involved in the imaging process: the world coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system, corresponding respectively to coordinates in three-dimensional space, coordinates in camera space, coordinates on the image plane and positions of image pixels. The process is shown in fig. 3.
In practical applications, it is often necessary to perform conversion between different coordinate systems, for example, converting coordinates of a three-dimensional object into corresponding two-dimensional pixel coordinates, or converting coordinates in a world coordinate system into coordinates in a camera coordinate system. Therefore, it is important to accurately understand these four coordinate systems and the conversion relationships therebetween.
1. Definition of the coordinate systems
The world coordinate system is used to describe the spatial position of the object, the camera coordinate system describes the position and orientation of the object in the camera imaging space, the image coordinate system describes the actual position of the object on the image plane, and the pixel coordinate system describes the position of the object on the digital image. Specifically, taking fig. 3 as an example, the world coordinate system is a fixed coordinate system established with a certain point on the measured object as reference, denoted $O_w X_w Y_w Z_w$.
The camera coordinate system takes the camera optical center as its origin; the $Z_c$ axis coincides with the principal optical axis, and the $X_c$ and $Y_c$ axes are parallel to two sides of the CCD plane. It is denoted $O_c X_c Y_c Z_c$.
The image coordinate system takes the intersection of the principal optical axis and the CCD plane as its origin, with the $x'$ and $y'$ axes parallel to two sides of the CCD plane; it is denoted $O_r x'y'$.
The pixel coordinate system takes the upper-left corner of the digital image as its origin, with the $u$ and $v$ axes parallel to the $x'$ and $y'$ axes; it is denoted $O_i uv$.
It should be noted that, to simplify coordinate calculation, a virtual imaging plane is assumed during camera imaging that coincides with the direction of the world coordinate system. A point $P$ in the world coordinate system is projected as an inverted image $P_r$ on the actual imaging plane and as an upright image $P_v$ on the virtual imaging plane. Since no sign issues need to be considered when converting coordinates on the virtual imaging plane, subsequent coordinate conversions are carried out on it. The virtual imaging plane uses the $O_v xy$ coordinate system.
2. Conversion between coordinate systems
The world coordinate system is converted to the camera coordinate system: to describe the positional relationship between the camera and objects in space, a reference coordinate system, the world coordinate system, is chosen in space, defined by the $X_w$, $Y_w$, $Z_w$ axes. The conversion from the world coordinate system to the camera coordinate system is a rigid transformation, represented by a rotation matrix $R$ and a translation vector $t$. Let the coordinates of a point $P$ in the world and camera coordinate systems be $P_w = (X_w, Y_w, Z_w)^T$ and $P_c = (X_c, Y_c, Z_c)^T$. The following transformation relationship holds between them:

$$\begin{bmatrix} P_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} P_w \\ 1 \end{bmatrix} \tag{1}$$

where $R$ is a $3 \times 3$ rotation matrix, $t$ is a $3 \times 1$ translation vector, and $P_w$ and $P_c$ are the coordinates of the point $P$ in the world and camera coordinate systems respectively. Through this transformation, points in the world coordinate system can be converted into the camera coordinate system.
The camera coordinate system is converted to the image coordinate system: the transformation from the camera coordinate system to the image coordinate system is a projective transformation that maps points in three-dimensional space onto a two-dimensional plane. According to the pinhole imaging model, the projection $p$ of any point $P$ in space onto the image plane is the intersection of the line $OP$ through the optical center and the point $P$ with the image plane. This relationship, also called central projection, yields the following constraints:

$$x = f\,\frac{X_c}{Z_c}, \qquad y = f\,\frac{Y_c}{Z_c} \tag{2}$$

where $(x, y)$ are the image coordinates of the point $p$, $(X_c, Y_c, Z_c)$ are the coordinates of the spatial point $P$ in the camera coordinate system, and $f$ is the focal length of the camera. Expressed in homogeneous coordinates and matrix form, the projection relation is:

$$s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \tag{3}$$

Here the factor $s$ multiplying the left side of the equation is a scaling factor, in fact equal to the value of $Z_c$. Through this projective transformation, points in three-dimensional space are converted into points on the two-dimensional image plane.
The image coordinate system is converted to the pixel coordinate system: the image is an $M \times N$ array in which each element represents the gray value of the corresponding image point. The $O_i uv$ coordinate system is the pixel coordinate system, and $(u, v)$ denote the column and row of a pixel in the array. The $O_v xy$ coordinate system is the image coordinate system in millimetres (mm), with the $x$ and $y$ axes parallel to the $u$ and $v$ axes respectively. The intersection of the camera optical axis with the image plane is the origin $O_v$ of the image coordinate system, typically at the center of the image, although there may be an offset. If the coordinates of $O_v$ in the $uv$ coordinate system are $(c_x, c_y)$ and the physical dimensions of each pixel in the $x$ and $y$ directions are $dx$ and $dy$, then the coordinates of any pixel in the two coordinate systems satisfy:

$$u = \frac{x}{dx} + c_x, \qquad v = \frac{y}{dy} + c_y \tag{4}$$

Expressed in homogeneous coordinates and matrix form:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & c_x \\ 0 & \frac{1}{dy} & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{5}$$

where $(x, y)$ are the image coordinates of any pixel, $(u, v)$ is the index of that pixel in the array, $dx$ and $dy$ are the width and height of a single pixel in mm/pixel, and $(c_x, c_y)$ are the pixel coordinates of the principal point of the image, i.e. the intersection of the optical axis with the CCD plane.
The world coordinate system is converted to the pixel coordinate system: the world coordinate system describes the position and orientation of an object in real space, and the pixel coordinate system is a two-dimensional image coordinate system describing the row and column of each pixel in the array. To convert from the world coordinate system to the pixel coordinate system, three steps are chained together: the transformation from the world coordinate system to the camera coordinate system, the projection from the camera coordinate system to the image coordinate system, and the scaling and translation from the image coordinate system to the pixel coordinate system. The complete conversion formula is then:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = K M \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

In the above formula several parameters must be defined. $f_x$ and $f_y$ are commonly referred to as the normalized focal lengths; they equal the focal length $f$ divided by the physical pixel dimensions $dx$ and $dy$ in the $x$ and $y$ directions. $K$ is the intrinsic matrix of the camera, defined by the four parameters $f_x$, $f_y$, $c_x$ and $c_y$. $M$ refers to the extrinsic matrix of the camera.
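As a concrete illustration of the chain of transforms above, the following minimal Python/NumPy sketch projects a world point to pixel coordinates under the pinhole model; the function name and all numeric parameter values are illustrative assumptions, not values from the application.

```python
import numpy as np

def world_to_pixel(P_w, R, t, fx, fy, cx, cy):
    """Project a 3-D world point to pixel coordinates (pinhole model, no distortion)."""
    P_c = R @ P_w + t                        # world -> camera (rigid transform, eq. (1))
    x, y = P_c[0] / P_c[2], P_c[1] / P_c[2]  # camera -> normalized image plane (eq. (2))
    u = fx * x + cx                          # image -> pixel: scale by normalized focal
    v = fy * y + cy                          #   length, shift by principal point (eq. (4))
    return u, v

# Illustrative parameters (not calibrated values from the application):
R = np.eye(3)                   # rotation world -> camera
t = np.array([0.0, 0.0, 0.5])   # translation, metres
u, v = world_to_pixel(np.array([0.01, 0.02, 0.0]), R, t,
                      fx=1200.0, fy=1200.0, cx=640.0, cy=480.0)
print(u, v)   # 664.0 528.0
```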
In practice, camera imaging is not a perfectly linear transformation; there may be errors between the actual pixel location and the ideal pixel location, which manifest as image distortion. Lens distortion is generally classified into radial distortion and tangential distortion. Based on the distortion principle, a mathematical model describing the distortion process is established, expressed by the following polynomials:

$$\begin{aligned} x_d &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2) \\ y_d &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y \end{aligned} \tag{6}$$

where $r^2 = x^2 + y^2$, $k_1$, $k_2$, $k_3$ are the radial distortion parameters, $p_1$, $p_2$ are the tangential distortion coefficients, $(x_d, y_d)$ is the distorted image point, and $(x, y)$ is the ideal imaging point.
Currently, the internal parameters of the camera can be obtained from the linear camera imaging model, including the linear model parameters $f_x$, $f_y$, $c_x$ and $c_y$. From the nonlinear camera imaging model the nonlinear distortion parameters $k_1$, $k_2$, $k_3$, $p_1$ and $p_2$ can also be obtained; together these constitute the camera internal parameters under the nonlinear model. With the calibrated internal parameters, the image can be corrected for distortion, as follows (a code sketch follows this list):
(1) Take the coordinates of any point $(x, y)$ of the undistorted image, whose pixel value is $F(x, y)$;
(2) Apply the distortion model to obtain the corresponding distorted point $(x_d, y_d)$;
(3) Obtain the gray value $F(x_d, y_d)$ of the distorted point using interpolation;
(4) Assign the value $F(x_d, y_d)$ to $F(x, y)$.
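The four steps above can be sketched in Python as follows. This is a simplified illustration that uses nearest-neighbour sampling in place of the interpolation of step (3); all names are hypothetical.

```python
import numpy as np

def undistort(img, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    """Remove lens distortion by inverse mapping: for every pixel of the
    undistorted output, apply the distortion model of eq. (6) to find where
    to sample the distorted input (steps (1)-(4) above). `img` is a 2-D
    grayscale array; nearest-neighbour sampling is used for brevity."""
    h, w = img.shape
    out = np.zeros_like(img)
    for v in range(h):
        for u in range(w):
            # (1) ideal pixel -> normalized image coordinates
            x, y = (u - cx) / fx, (v - cy) / fy
            # (2) add distortion (radial + tangential terms)
            r2 = x * x + y * y
            radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
            xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
            yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
            # (3) back to pixel coordinates and sample; (4) assign the value
            ud, vd = int(round(xd * fx + cx)), int(round(yd * fy + cy))
            if 0 <= ud < w and 0 <= vd < h:
                out[v, u] = img[vd, ud]
    return out
```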
In one possible embodiment, the camera is calibrated before capturing fastener images to determine its parameters, and after camera calibration is completed, light plane calibration is performed for the light bars to determine the light plane equations.
For example, camera calibration describes the mapping between the three-dimensional world coordinate system and the two-dimensional image coordinate system during camera imaging. Points in three-dimensional space are first transformed from the world coordinate system to the camera coordinate system by a rigid transformation, and are then projected onto the two-dimensional image plane according to the pinhole imaging principle. Because of lens distortion, the image must be corrected with the corresponding distortion model. Finally, the pixel points of the corrected image are mapped onto the pixel coordinate system by coordinate conversion. The whole process can be expressed by the following mathematical model:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & c_x \\ 0 & \frac{1}{dy} & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{7}$$

which simplifies to:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K M \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{8}$$

where:

$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad M = \begin{bmatrix} R & t \end{bmatrix} \tag{9}$$
The purpose of camera calibration is thus to accurately determine these parameters, including the internal parameters and distortion coefficients of the camera and the external parameter matrices $R$ and $t$. During calibration an objective function is established and the parameters are solved by maximum likelihood estimation. However, distortion, rotation and other factors make the solving process relatively complicated, so the calibration parameters are usually not obtained by direct solution. The common practice is first to compute an initial value for linear optimization, for example with the direct linear transformation (DLT) method or the Zhang Zhengyou calibration method, and then to use nonlinear optimization, iterating to a local minimum and continuously updating the parameters so as to minimize the objective function, thereby obtaining more accurate calibration parameters.
The Zhang Zhengyou calibration method is one of the classical camera calibration algorithms. By processing checkerboard images captured from different angles and combining nonlinear optimization with the least squares method, it solves for the internal and external parameters of the camera, the distortion coefficients and the other calibration parameters. Compared with traditional calibration methods it offers higher precision, a smaller computational load and a relatively simple algorithm.
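For reference, this calibration procedure is commonly run through OpenCV's built-in routines. The sketch below is a generic usage example; the image folder name and the checkerboard geometry are assumptions, not details from the application.

```python
import cv2
import glob
import numpy as np

# Checkerboard with 9x6 inner corners and 25 mm squares (illustrative values).
pattern, square = (9, 6), 25.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Returns the RMS reprojection error, K (fx, fy, cx, cy), the distortion
# coefficients (k1, k2, p1, p2, k3), and per-view extrinsics R (as rvecs), t.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```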
To construct high-precision 2D-3D corresponding feature points, certain requirements are placed on the calibration plate: precision, stability, uniformity and pattern design. Precision and stability mainly concern the accuracy and reliability of the 3D feature points on the plate. In the Zhang Zhengyou calibration method, the calibrated world coordinate system is set on the calibration plate and its points are regarded as coplanar; the flatness of the plate therefore has to meet certain requirements to ensure that the feature points are coplanar, and the spacing between feature points on the plate must also be accurate. Stability requires the plate to provide the same accurate and reliable 3D feature points at different temperatures, so the influence of thermal expansion on the plate must be reduced. Uniformity and pattern design mainly concern the reliability of the 2D image coordinates of the feature points. To reduce ambient light interference and improve extraction accuracy, a backlit calibration target is usually preferred to ensure uniformity. In addition, the pattern must satisfy two requirements: the feature points (usually extracted as corners or dots) and their coordinates must be easy to extract.
Typically, calibration is performed using checkerboard or circular calibration targets. Generally, the extraction accuracy of a circular or ring calibration plate is higher than that of a checkerboard.
Further, after camera calibration is completed, the calibration accuracy is also evaluated.
Camera calibration accuracy assessment judges, by analyzing and comparing the calibration results, whether the mapping accuracy of the calibration parameters between three-dimensional space and the two-dimensional image meets expectations. Usually an error metric such as the reprojection error or binocular epipolar correction is adopted to evaluate the quality of the calibration result. The purpose of the assessment is to determine the accuracy of camera imaging so that more accurate three-dimensional information can be obtained in subsequent applications.
For the evaluation of calibration parameters, the following three main modes are currently available:
(1) Reprojection error: corresponding 3D-2D feature points are constructed and the camera calibration parameters are solved; the computed projection matrix is then used to project the coordinates of each space point onto the image plane, and the difference between the pixel coordinates of the projected point and the mapped coordinates of the actual three-dimensional point in the image is calculated. Because of calibration error this difference cannot be exactly zero, so it is minimized to obtain an accurate calibration result (a code sketch follows this list).
(2) Binocular epipolar correction: for binocular vision, epipolar correction may be used to evaluate calibration accuracy. First a single camera is calibrated, then an image is captured with the other camera, homonymous feature points are extracted and matched, and the corrected error and whether line alignment is achieved are observed to judge whether the expected precision is reached.
(3) 3D reconstruction of a standard target: two-dimensional coordinates are extracted with a checkerboard or circular target and back-projected into the camera coordinate system using the solved calibration parameters. Specifically, the external parameter matrix can be solved with the calibration plate, and the three-dimensional point coordinates in the world coordinate system are then converted into the camera coordinate system. The accuracy of the calibration result can be evaluated by checking whether the designed center spacing is consistent with the actual back-projected dimensions.
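A minimal sketch of evaluation mode (1), the reprojection error, using OpenCV's projectPoints; the helper name and the quoted quality threshold in the comment are illustrative assumptions.

```python
import cv2
import numpy as np

def mean_reprojection_error(obj_pts, img_pts, K, dist, rvecs, tvecs):
    """Evaluation mode (1): project the 3-D calibration points back through
    the solved parameters and average the pixel distance to the detected
    2-D points. Smaller is better; sub-pixel values are usually expected
    of a good calibration."""
    total, n = 0.0, 0
    for objp, imgp, rvec, tvec in zip(obj_pts, img_pts, rvecs, tvecs):
        proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        diffs = proj.reshape(-1, 2) - imgp.reshape(-1, 2)
        total += np.sum(np.linalg.norm(diffs, axis=1))
        n += len(objp)
    return total / n
```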
Structured light three-dimensional imaging is an active three-dimensional vision measurement method, and generally consists of hardware such as a camera and a laser projector. In this method, a laser projector emits structural information (e.g., laser stripes, gray codes, sinusoidal fringes, etc.) in some form onto the surface of the object under test, and then captures an image of the surface under test using a single or multiple cameras. Finally, based on the triangulation principle, three-dimensional point cloud information of the measured object can be obtained by processing and analyzing the two-dimensional images.
(1) Line-plane intersecting model
During measurement the laser projector actively projects a light plane onto the object in space, and the plane intersects the measured surface to form a light bar image. After the light bar center point coordinates are extracted with a light bar extraction algorithm, the pixel coordinates of the light bar are obtained and must be converted into the image coordinate system. By the camera imaging principle, the image coordinate of each light bar center point corresponds uniquely to one ray through the camera optical center in space, so a ray equation through the optical center and the light bar point can be constructed. In general, the light plane equation is obtained from a light plane calibration algorithm. Once the equations of the light plane and the rays are available, the three-dimensional coordinates of the light bar center point in the camera coordinate system are uniquely determined by the intersection of the light plane and the ray in space. The measurement model is shown in fig. 4.
(2) Principle of line-plane intersection calculation
The three-dimensional coordinates are solved from the intersection of the ray through the optical center and the light bar point with the light plane equation, as follows:
A ray is defined by a base point $P$ and a direction $d$:

$$r(t) = P + t\,d, \qquad t \ge 0 \tag{10}$$

Suppose a pair of points $(P_0, P_1)$ represents the line segment $S$. The segment can be handled with the same intersection algorithm used for ray/plane objects by converting it into ray form:

$$d = P_1 - P_0, \qquad r(t) = P_0 + t\,d, \qquad 0 \le t \le 1 \tag{11}$$

That is, the direction vector $d$ is defined by two distinct points, typically with $\|d\| = 1$. Since only $t \ge 0$ is required, it suffices to check whether the value of $t$ obtained from the line-plane intersection function is greater than or equal to zero, and to confirm or reject the intersection accordingly.
A ray through the camera optical center is constructed for the point in space uniquely corresponding to the image coordinates of the light bar center point. Assume the optical center is $P_0(0, 0, 0)$ and the light bar center point is $P_1(x, y, z)$, where $x$, $y$, $z$ are obtained by converting pixel coordinates into image coordinates. The process is as follows:
The formula for converting the pixel coordinate system into the image coordinate system is:

$$x = (u - c_x)\,dx, \qquad y = (v - c_y)\,dy \tag{12}$$

The ray through $P_0$ and $P_1$ is then constructed in parametric form:

$$\begin{cases} X = t\,x \\ Y = t\,y \\ Z = t\,z \end{cases} \tag{13}$$
in general, the light plane equation can be fitted by means of structured light field calibration, and the plane equation expression is:
(14)
wherein a, b, c, d are parameters of the light plane equation, and c.noteq.0 for convenient calculation, the original equation is divided by-d to obtain a simplified equation as follows:
(15)
wherein,
the ray equation and the light plane equation are subjected to intersection operation, and an expression of t can be obtained:
(16)
During the solution $dx$, $dy$ and $f$ are individually unknown, but when the camera model is actually solved the calibrated values $f_x = f/dx$ and $f_y = f/dy$ are known. The light bar center point in the image coordinate system can be projected onto the unit focal plane, i.e. the plane with $z = 1$, giving a new constraint:

$$P_1 = \left( \frac{u - c_x}{f_x},\; \frac{v - c_y}{f_y},\; 1 \right) \tag{17}$$

The expression for the parameter $t$ becomes:

$$t = \frac{1}{a_0\,\dfrac{u - c_x}{f_x} + b_0\,\dfrac{v - c_y}{f_y} + c_0} \tag{18}$$

where $dx/f$ is the reciprocal of the calibration parameter $f_x$ and $dy/f$ is the reciprocal of $f_y$.
Through this conversion no unknown term remains in the equation, and once $t$ is computed, the three-dimensional coordinates $(X_c, Y_c, Z_c)$ of the light bar center point in the camera coordinate system are:

$$X_c = t\,\frac{u - c_x}{f_x}, \qquad Y_c = t\,\frac{v - c_y}{f_y}, \qquad Z_c = t \tag{19}$$
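Equations (17)-(19) reduce to a few lines of code. The following Python sketch computes the camera-frame coordinates of a light bar center point from its pixel coordinates, the calibrated intrinsics and a calibrated light plane; the function name and all numeric values are illustrative assumptions.

```python
import numpy as np

def light_bar_point_3d(u, v, fx, fy, cx, cy, a0, b0, c0):
    """Intersect the ray through the optical centre and the light-bar centre
    point (u, v) with the calibrated light plane a0*X + b0*Y + c0*Z = 1,
    in the camera coordinate system (equations (17)-(19))."""
    x = (u - cx) / fx          # project onto the unit focal plane Z = 1
    y = (v - cy) / fy
    t = 1.0 / (a0 * x + b0 * y + c0)
    return np.array([t * x, t * y, t])   # (Xc, Yc, Zc)

# Illustrative intrinsics and plane coefficients (not calibrated values):
print(light_bar_point_3d(700, 500, 1200, 1200, 640, 480, 0.0, 0.1, 0.002))
```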
Calibration of the light plane equation is an important step in structured light three-dimensional imaging. The three-dimensional coordinates of light bar points are obtained by relating the two-dimensional coordinates of points on the light plane in the camera image to world coordinates through certain constraint relationships, for example by using the coplanarity of a planar target and the light bar line, and the light plane equation is then fitted.
The method comprises two main steps: first, extracting the light bar center points to obtain accurate center coordinates; second, solving the three-dimensional coordinates of the light bar points with a series of spatial constraints.
(1) Light bar extraction algorithm
The gray-level centroid method is adopted as the light bar extraction algorithm in the system. For each row, the pixels above a set threshold are taken as a group of light bar cross-section data; the gray value of each pixel on the cross-section is multiplied by its column coordinate, and the sum is divided by the sum of the gray values of all cross-section data, giving the gray centroid coordinate of that row. The method is fast and computationally light. Note, however, that in practical applications image binarization preprocessing is needed so that the gray value distribution of the light bar better matches a Gaussian distribution.
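A minimal per-row implementation of the gray-level centroid method might look as follows, assuming a single light bar per row for simplicity (with five bars, each thresholded cross-section is treated separately, as described later); the function name is hypothetical.

```python
import numpy as np

def gray_centroid_rows(img, threshold):
    """Gray-level centroid method: for each image row, take the pixels at or
    above `threshold` as the light-bar cross-section and return the
    intensity-weighted mean column as the sub-pixel centre of the bar."""
    centres = {}
    for r, row in enumerate(img.astype(np.float64)):
        cols = np.where(row >= threshold)[0]
        if cols.size:
            w = row[cols]
            centres[r] = float((w * cols).sum() / w.sum())
    return centres   # {row index: sub-pixel column of the light-bar centre}
```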
(2) Light plane calibration method
In an embodiment of the present application, each laser projector projects a plurality of light bars onto the rail fastener simultaneously, and the camera captures a fastener image containing the plurality of light bars. For this multi-line structured light measuring device, the application calibrates the light planes based on the structured light general field calibration method of Sun Junhua. Light bar center extraction algorithms were also compared, and the gray-level centroid method was finally chosen for extracting the light bar centers. The multi-line structured light scanning model is shown in fig. 5.
During multi-line structured light scanning, the poses of the camera and the laser projector are first fixed; the light planes are then projected onto a flat plate carrying a two-dimensional circular target, and the light bar images projected on the whiteboard are captured by the camera, ensuring that both the two-dimensional target and the projected light bars are within the camera's field of view. After image acquisition, the target plane is first fitted; the light bar center points are then extracted with the light bar extraction algorithm and classified with a classification algorithm. The three-dimensional coordinates of each light bar point are then solved from the ray equation through the optical center and the light bar point together with the coplanar target plane equation. Finally, the whiteboard is moved and several images are captured, yielding three-dimensional points on multiple light planes, and a least-squares fit is performed per light plane to obtain the different light plane equations.
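The final least-squares fit can use the simplified plane form of equation (15) directly, as in this sketch (the function name is hypothetical):

```python
import numpy as np

def fit_light_plane(points):
    """Least-squares fit of a light plane in the simplified form
    a0*X + b0*Y + c0*Z = 1 (equation (15)) to the 3-D light-bar points
    gathered from several whiteboard positions. This form is valid here
    because a light plane never passes through the camera optical centre
    (the origin), where the right-hand side would have to be zero."""
    P = np.asarray(points, dtype=np.float64)            # shape (N, 3)
    coeffs, *_ = np.linalg.lstsq(P, np.ones(len(P)), rcond=None)
    return coeffs                                       # (a0, b0, c0)
```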
In one possible embodiment, after light plane calibration of the light bars, the camera also collects a whiteboard image after a whiteboard is placed on the substrate surface of the rail fastener; the whiteboard image contains the light bars, and the detection software analyzes it to determine the plane equation of the substrate surface.
Illustratively, in the real world the heights of the substrate, rail, bolt, sleeve and spring strip differ. The substrate surface lies in the lowest plane, followed by the surfaces of the rail, bolt, sleeve and spring strip. Accordingly, the substrate surface can be set as the bottommost plane in space, with the remaining components corresponding to different heights above it. The three-dimensional point cloud data of the fastener can therefore be converted into elevation three-dimensional data referenced to the substrate surface.
Specifically, a blank whiteboard is placed at the height of the substrate surface and light bars are projected onto it; the three-dimensional coordinates of the light bar points are solved, and these points can be regarded as belonging to the whiteboard plane in space. Fitting the three-dimensional coordinates of the light bar points then yields the equation of the whiteboard plane; since the whiteboard and the substrate surface lie at the same height, this is the plane equation of the substrate surface. With the substrate plane equation obtained, the distances from all points of the fastener point cloud to the substrate surface are solved with the point-to-plane distance formula, giving the fastener elevation three-dimensional data.
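Converting the point cloud to elevation data is then a point-to-plane distance computation, sketched below under the assumption that the substrate plane is given in the general form $aX + bY + cZ + d = 0$; the function name is hypothetical.

```python
import numpy as np

def elevation_data(points, a, b, c, d):
    """Convert fastener point-cloud coordinates into elevation values, i.e.
    signed point-to-plane distances from the substrate-surface plane
    a*X + b*Y + c*Z + d = 0 fitted from the whiteboard light bars."""
    P = np.asarray(points, dtype=np.float64)   # shape (N, 3)
    n = np.array([a, b, c])
    return (P @ n + d) / np.linalg.norm(n)     # one elevation per point
```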
In one possible embodiment, after light plane calibration of the light bars is completed, the detection software also estimates the working range parameters. When estimating them, it first determines the range in which all light bars appear, and then determines the region boundaries of the first and second light bars within that range.
Illustratively, in the three-dimensional geometric detection system for rail fasteners, working range parameter estimation is required after the calibration stage. It consists mainly of two parts: determining the range (upper, lower, left and right boundaries) in which all light bars appear, and determining the region boundaries of the first and second light bars within the light bar range.
(1) Light bar position area
The positions of all light bars in the two-dimensional image are determined, i.e. the upper, lower, left and right boundaries of the light bar region in the image are estimated. The fixed poses of the camera and the laser projector do not change, and the light bars occupy only a small proportion of the camera's field of view. In the captured image the light bars shift to some extent with height, the surroundings of the fastener show periodic similarity, and the light bar image can therefore be confined within a certain range. During image processing, to improve detection efficiency, the light bar position area is determined first, and only the image within that range is processed afterwards. First, under static conditions, the light bars are projected onto the surface of a complete fastener and the center points of the first light bar are extracted with the gray-level centroid method. The column coordinates of all center points are then averaged, and the mean is shifted left by a certain interval to give the left boundary value. Since the spacing between light bar centers is fixed, five such intervals are added to the left boundary value to determine the right boundary value. The upper and lower boundaries are determined by shifting the first point of the first bar and the last point of the last bar by a fixed interval. The interval size is adjusted in real time during detection and loaded into the system as an input parameter.
(2) First and second light bar region boundary
Determining the region boundaries of the first and second light bars within the light bar range is an important guarantee of system stability and accuracy. The first light bar is projected onto the rail and is used to detect whether the laser projection has strayed off the rail. If no light bar center point is detected within the boundaries of the first light bar region, the detection system has a hardware fault and detection should stop. The second light bar is projected at the junction where the fastener holds the rail and detects whether the spring strip is missing. If no light bar point is detected in the second light bar region, the system has not yet reached a fastener, and the next row of data is examined.
The first light bar is projected on the rail surface and essentially does not shift in pixels, so its left and right region boundaries can be determined from the median of the column coordinates of its center points. Specifically, the median column coordinate of all center points is computed and then shifted left and right by a certain interval to determine the region boundary of the first light bar within the light bar range. The interval size is adjusted in real time during detection and loaded into the system as an input parameter.
The second light bar is projected on the spring strip surface, and its position shifts in pixels as the spring strip height changes. The threshold of the right boundary is therefore calculated by projecting the coordinates of the highest point in the light bar's three-dimensional point cloud onto the two-dimensional plane, and then shifting right by a certain interval to determine the right boundary of the second light bar within the light bar range. Similarly, the threshold of the left boundary is obtained from the projection of the lowest point's three-dimensional coordinates, and shifting left by a certain interval determines the left boundary of the second light bar within the light bar range.
In one possible embodiment, after the detection software acquires a fastener image it first preprocesses it. Preprocessing the fastener image includes:
cropping the fastener image according to the working range parameters to obtain a preprocessed image matched to the working range parameters.
For example, the system is designed with four cameras and laser projectors that simultaneously capture four images inside and outside the two rails, so the demands on system response speed are high. To guarantee detection speed, the images must be preprocessed. During image loading, preprocessing is performed according to the working range parameters from the calibration module: the upper, lower, left and right boundaries determine the light bar position detection area, and a new image of the same size as that area is constructed. The pixel values within the light bar position area are then mapped into the new image, producing an image the same size as the light bar position detection area; this image, used for acquiring the light bar information, is called the preprocessed image.
Once the preprocessed image is obtained, the current image state is judged. If a light bar center point exists within the boundary of the first light bar in the light bar area, the next operation, light bar classification and identification, can proceed. If no center point exists, the current light bar is not projected onto the rail, the hardware may be faulty, and detection must stop immediately for maintenance.
After preprocessing, light bar identification and classification are performed on the light bar position detection area. The system reads the preprocessed image and examines the color value (or gray value) of each pixel row by row, starting from the first row. If no pixel in a row reaches the preset gray threshold, the row has not reached the light bar processing area and the system continues with the next row. When the system detects a region above the preset threshold, it divides the region according to the set threshold and obtains the light bar center point with the gray-level centroid method. Then, from the obtained center points, a light bar classification algorithm assigns the light bars to their corresponding light plane equations.
For light bar classification the system adopts a high/low threshold judgment algorithm. With a complete fastener image captured under static conditions, the three-dimensional point cloud data of each light bar are extracted separately; the lowest and highest points are then found in the point cloud, the highest point is projected into the two-dimensional image to determine the right boundary, and the lowest point is projected to determine the left boundary, giving the region boundary of each light bar.
For the system of the present application, five light planes are projected onto the object surface, so five light bars appear in the resulting fastener image. To detect the light bars, the light bar position area is estimated from the working range, and the light bar center points are then extracted row by row. In this process the gray-level centroid method computes the center point positions, and in theory five light bar center points are obtained for each image row. The five light bars are then classified according to the classification algorithm and matched to their different light plane equations.
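A sketch of the high/low threshold classification described above; the boundary list would come from the per-bar left/right boundaries estimated during calibration, and the function name is hypothetical.

```python
def classify_light_bar(col, boundaries):
    """High/low threshold judgment: assign a light-bar centre point to one of
    the five bars by the left/right column boundaries estimated for each bar
    region, e.g. boundaries = [(l1, r1), ..., (l5, r5)]. Returns the bar
    index (0-4), or None if the point falls outside every region."""
    for i, (left, right) in enumerate(boundaries):
        if left <= col <= right:
            return i
    return None
```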
In one possible embodiment, the computer stores a code table containing the standard elevation codes of different rail fastener types, and the detection software compares the actual elevation code with the different standard elevation codes to determine the type of rail fastener currently detected.
Illustratively, railway fastener systems come in many types, including spring fastener systems, buckle fastener systems and expansion-bolt fastener systems; about 30 types and sizes are in use at home and abroad. The application proposes elevation-coding the different rail fasteners and saving the codes in a code table. During detection, different fastener types can be identified by comparison against the code table, and standard defect information codes can later be built for the defects of each fastener type; multi-type fastener identification and defect detection are then achieved simply by comparing against the standard codes during detection.
After the fastener elevation three-dimensional code is obtained, standard defect type information codes can be constructed; during detection the actually obtained fastener code is compared with the standard defect information codes, so that the defect state of the current fastener can be detected. Under static conditions, standard elevation three-dimensional codes are constructed for the different fastener states, such as the complete state, the spring strip missing state and the bolt missing state, and each state's code is unique. The construction of the standard defect elevation three-dimensional codes is discussed in detail in the system calibration module. Once the standard codes are determined, the current defect type can be detected rapidly and accurately simply by comparing the actually detected code value with the standard codes. Experimental tests show that the method can detect the complete fastener state, the spring strip missing state and the bolt missing state.
After the rail fastener type is determined, the actual elevation code is compared with the several standard elevation codes of that type to determine whether the rail fastener is defective.
After the fastener elevation three-dimensional data are solved, the data can be divided into 32/16/8 × 5 × 4 grids; one point is selected from each grid as a mark point, down-sampling the overall fastener elevation information while preserving the structural information of the fastener. Here 32/16/8 means that 32, 16 or 8 light bar points are taken on each light bar as mark points of the fastener, 5 denotes the five light bars, and 4 denotes the division of the elevation data into four grades.
The principle is illustrated with a set of theoretical data. For clearer observation, assume the substrate surface lies at a height of 100 mm, the rail surface at 120 mm, the spring strip surface at 130-160 mm, the bolt surface at 120 mm, and the sleeve surface at 120 mm. The fastener elevation three-dimensional data are shown in figs. 6-10.
Taking eight points selected on each light bar as an example and dividing the data into four grades, suppose the elevation grades from low to high are 90-110 mm, 110-130 mm, 130-150 mm and 150-170 mm; the elevation three-dimensional data obtained are shown in Table 1:
Table 1: 8-point fastener elevation data
From the obtained 8-point fastener elevation data, the elevation data can be displayed as shown in fig. 11.
After the elevation three-dimensional data are obtained, the elevation is divided into four grades from low to high, 90-110 mm, 110-130 mm, 130-150 mm and 150-170 mm, and the fastener can be encoded with the four corresponding code values A, B, C, D. The elevation data values are shown in figs. 12-16.
The fastener information is encoded according to the elevation grades, giving the complete fastener code shown in Table 2 below:
Table 2: Fastener elevation encoding
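Using the four grades and code values A-D described above, the elevation coding of the down-sampled mark points can be sketched as follows (the grade boundaries are the theoretical values from the example; the sample input and the resulting code string are purely illustrative):

```python
import numpy as np

def encode_elevation(marks):
    """Quantize the down-sampled mark-point elevations (e.g. 8 points per
    bar x 5 bars) into the four grades 90-110, 110-130, 130-150 and
    150-170 mm and map them to the code values A, B, C and D."""
    bins = [110.0, 130.0, 150.0]   # grade boundaries in mm
    return "".join("ABCD"[int(np.digitize(h, bins))] for h in marks)

# Illustrative: one bar crossing substrate (100 mm), rail/bolt level
# (120 mm) and spring strip (155 mm):
print(encode_elevation([100, 120, 120, 155, 155, 120, 120, 100]))  # ABBDDBBA
```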
In one possible embodiment, after determining the status of the rail fastener, the detection software also emits an audible and visual alarm based on the specific condition of the status.
Illustratively, in order to find the quality defect of the fastener in time and perform alarm processing, the system is provided with an audible and visual alarm module. When the system detects abnormal information, abnormal reminding can be carried out on a software interface, and recorded audio playing can be carried out. In the concrete implementation, qt is used as a main interface of system software, and a real-time dynamic detection page is set in a system homepage. For the image currently being detected, the system can process the image in real time and completely output related information such as image names, image paths, image states and the like in a text form on a system main interface. Meanwhile, the color of the characters can be changed according to the different defect types.
In addition, in practical application the software system is built inside the rail car, monitoring interfaces are arranged on both sides of the rail car, real-time data is displayed on the monitoring screens, and the detection state of the fasteners can be observed. Meanwhile, for different defect types, the system can control the hardware; for example, when the first light bar state code is abnormal, the system judges that the hardware is faulty, and the rail car is immediately stopped and a warning is given so that test personnel can overhaul it. For other types of defects, the system only records the defect grading information and performs early-warning processing.
Further, in the processing flow, the fastener defect data storage module plays an important role in the normal operation of the system. To help operators manage and maintain the detection information, besides displaying related information such as the fastener position number, defect type and defect position on the main interface of the system, the information is also stored in a text file with a standardized name. In these text files, the specific defect information is presented as a simple text description, and the record format is: detection time, detection picture name, mark coding, defect type, as shown in table 3:
TABLE 3 fastener defect file format
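A sketch of appending one defect record in the four-field format above (the comma separator and UTF-8 encoding are assumptions; the text specifies only the field order):

```python
from datetime import datetime

def append_defect_record(log_path: str, picture_name: str,
                         mark_code: str, defect_type: str) -> None:
    """Append one record in the order: detection time, detection
    picture name, mark coding, defect type."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(f"{stamp},{picture_name},{mark_code},{defect_type}\n")

# append_defect_record("fastener_defects.txt", "0007.png",
#                      "BBDDB...", "bolt_missing")
```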
In addition, these text files are saved in the same folder as the original images, to prevent management confusion caused by scattered files. In the subsequent expansion of the system, a complete fastener management system can be constructed: the detection-information text files are sent to the management system in a fixed format, so that the whole defect life cycle can be tracked and managed. The management system provides a web service through which the current detection state can be checked in real time and information such as defect progress and processing results is recorded, thereby improving operator efficiency and avoiding delays in detection and processing caused by missing or poorly managed defect information.
Meanwhile, during processing, the original images must be numbered and named in the format detection time, number, defect type, and placed under a designated folder so that defect states can be conveniently distinguished. In this way, when multiple batches and multiple types of detection data appear, the data can be effectively distinguished and classified, avoiding confusion and misjudgment. To ensure standardization, readability and usability of the data, the defect-detection timestamp, the corresponding fastener number and the defect type are combined, the file is named in the format YYYY-MM-DD-HH:MM:ss-number-defect type, and the generated file is placed under the designated folder for subsequent inspection and management.
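A sketch of building such a file name (kept literally in the described YYYY-MM-DD-HH:MM:ss-number-defect type format; as a practical caveat, ':' is not permitted in Windows file names, so a deployed system might substitute another separator):

```python
from datetime import datetime

def defect_file_name(fastener_number: int, defect_type: str,
                     ext: str = ".png") -> str:
    """Build 'YYYY-MM-DD-HH:MM:ss-number-defect_type' file names,
    e.g. '2023-10-24-09:15:32-0007-bolt_missing.png'."""
    stamp = datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
    return f"{stamp}-{fastener_number:04d}-{defect_type}{ext}"
```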
Description of the experiment
To verify the principle and feasibility of the three-dimensional geometric detection system for rail fasteners, three items are mainly verified in the experimental process: fastener elevation three-dimensional code type identification, fastener defect detection, and sequence-image fastener counting and information registration.
Fastener elevation three-dimensional code type identification mainly means building the three-dimensional codes of different fastener types and storing them in a standard information coding table; during detection, the type of the currently detected fastener can be determined simply by looking it up in the standard information coding table, after which defect information coding can be performed for that type. In this way the system has a corresponding defect detection method for each fastener type, achieving the goal of detecting multiple fastener types. At present, because the experimental environment is limited and no other fastener types are available, only the "III"-type fastener has been tested; the system was verified with a set of artificially disturbed comparison images, and the experimental results show that the identification accuracy for the "III"-type fastener reaches more than 97%.
For the fastener defect state test, the detection of three fastener states is mainly completed: the complete fastener state, the spring strip missing state and the bolt missing state. Before actual detection, the complete fastener state, the bolt missing state and the spring strip missing state must be encoded under static conditions according to the standard fastener defect information encoding method. In subsequent detection, the actually detected code value is compared with the standard codes to determine the defect type of the fastener. Five groups of control experiments were set up, and the defect detection accuracy and time efficiency were analyzed; according to the experimental results, the final processing speed reaches 80 ms per frame and the detection accuracy reaches more than 96%.
Finally, the sequence-image fastener counting and information registration is verified. When the system detects a dynamic sequence of images, image parts that do not touch the substrate surface, elastic strip or bolt signal are not processed; for images that do touch a signal, the fasteners are counted and the elevation three-dimensional coding is completed, and by comparing the detection codes of two consecutive images the information registration process is completed. The detected codes are then compared with the standard defect codes to determine the defect type of the fastener. In the experiment, the fastener counting and information registration process was completed.
1. Fastener elevation three-dimensional code type identification
Due to experimental environment limitations, only "III"-profile rail fasteners are currently available in the laboratory. An artificial comparison experiment was set up: interference mark points were placed at different positions on the fastener, interference images were collected, the fastener elevation three-dimensional codes of the interference images were compared with the standard code, and the experimental results were observed. In total 100 images were acquired for testing, of which 80 were normal images and 20 were interference images; the interference images covered four mark point positions, with 5 images acquired at each position.
Five groups of control experiments were arranged, with 40 normal images and 10 interference images randomly drawn for each group. The detection results are shown in table 4 below; the identification accuracy for the "III"-type fastener reaches more than 97%. Some redundant noise information is produced when the laser intensity is too high, and a better result could be achieved with later optimization.
TABLE 4 accuracy of fastener type identification
2. Fastener defect status detection
In the experiment, detection is carried out for the fastener defect types, mainly completing the detection of three fastener states: the complete fastener state, the spring strip missing state and the bolt missing state. During the experiment, the camera and the laser are fixed to the ground and their positions do not change. Therefore, in fastener state detection the first light bar is not needed to judge whether the scanning device has left the track, and only the data of the other four light bars is used to judge the fastener state. Before actual detection, the complete fastener state, the bolt missing state and the spring strip missing state must be encoded under static conditions according to the standard fastener information encoding method; in subsequent detection, the actually detected code value is compared with these standard codes. In the detection process, a fastener image is shot, the three-dimensional data of the fastener surface is solved, the data is converted into an elevation three-dimensional code with the substrate surface equation as the base, the fastener information is encoded by grading, and the obtained test data code is compared with the standard code to determine the current fastener state. Through system testing, the detection of the three fastener states was successfully realized.
In the experimental environment, a performance experiment for the fastener state detection algorithm was carried out to evaluate the accuracy and time efficiency of fastener defect state detection. Because a three-dimensional scanning system cannot be built in a real subway environment to acquire fastener surface images, all the data used here are fastener images shot in the laboratory. For the test data set, images were actually shot in the complete fastener state, the spring strip missing state and the bolt missing state, and artificial interference was applied to the three states for comparison experiments. In total 500 pictures of the three fastener states were collected, of which 100 were artificially disturbed interference images and 400 were normal images, divided into 5 groups for testing; that is, each group used 80 fastener state images and 20 interference images. The detection time was recorded to judge the efficiency of the algorithm.
The three fastener detection state results are as follows:
(1) The detection results of the complete fastener state are shown in table 5.
(2) The detection results of the spring strip missing state are shown in table 6.
(3) The detection results of the missing bolt state are shown in table 7.
TABLE 5 complete fastener status detection
TABLE 6 spring strip missing status detection
TABLE 7 detection of missing bolt status
The above experimental comparison verifies the feasibility of the system detection. In the laboratory environment, defect detection can be completed for the three fastener states with an accuracy of more than 96%. Second, the processing speed reaches 80 ms per image. Finally, for fastener defect state detection, a group of detection data from a real environment illustrates the detection process: a fastener state image is shot; 32 image points are extracted from each light bar in the light bar position area; the three-dimensional coordinates of the 32 points are solved and converted into elevation values with the substrate surface as the base; the elevation values are divided according to the four grades 90–110 mm, 110–130 mm, 130–150 mm and 150–170 mm and encoded with the corresponding four code values A, B, C and D; and finally the elevation three-dimensional coding table of the fastener state is obtained. The obtained fastener elevation three-dimensional code table is then compared with the static standard fastener code table to determine the defect type. A sketch of the elevation conversion step follows below.
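A sketch of the elevation conversion step, writing the substrate plane as ax + by + cz + d = 0 and taking the signed point-to-plane distance as the elevation; the base_height offset that places the result in the same 90–170 mm frame as the worked example is an assumption:

```python
import numpy as np

def elevation_above_plane(point, plane, base_height: float = 100.0) -> float:
    """Signed distance from a 3D surface point to the substrate plane
    ax + by + cz + d = 0, shifted by base_height so the value lies in
    the same frame as the grade boundaries (the offset is assumed)."""
    a, b, c, d = plane
    n = np.array([a, b, c], dtype=float)
    return float((np.dot(n, np.asarray(point, dtype=float)) + d)
                 / np.linalg.norm(n) + base_height)

# With the theoretical data: the substrate plane z = 100 mm is
# plane = [0, 0, 1, -100]; a rail-surface point at z = 120 mm yields
# 120 mm, which falls in the 110-130 mm grade and encodes as B.
```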
3. Sequence image fastener count and information registration
The sequence-image fastener counting and information registration process is verified experimentally. First, a group of sequence images is shot and judged with the estimated working range parameters from the system calibration module; detection is carried out within the boundary values of the first and second light bars, and image parts that do not touch the substrate surface, spring bar or bolt signal are not detected. For the image parts that do touch a signal, the fasteners are counted and the elevation three-dimensional coding is performed. The detection codes of two images are compared, which completes the information registration process; the detected codes are then compared with the standard defect codes to determine the defect type of the fastener.
A group of actual test data illustrates this. In the laboratory environment the camera and laser are fixed to the ground, so the light bar area cannot shift and the first light bar is not needed to position the fastener relative to the track; therefore only the other four light bars are used for detection. A group of 13 fastener sequence images was shot in the experiment and the fastener information was encoded, and the information registration process can be observed from the code value table between each pair of images. From the experimental results, the fastener counting can be completed, any two consecutive images can be registered according to the code-value comparison table, and finally the fastener defect detection can be completed by comparing the actual fastener information code with the standard code.
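A sketch of the counting-and-registration idea over a sequence (the rule for matching consecutive images by their code values is an assumption; the text states only that the detection codes of two images are compared to register information):

```python
from typing import List, Optional

def register_sequence(codes: List[Optional[str]]) -> int:
    """Count fasteners in a sequence of per-image elevation codes.
    Images whose light bars touched no fastener signal contribute
    None; a new fastener is counted when a coded image follows a gap
    or a different code, while identical consecutive codes are taken
    as the same fastener seen twice (assumed registration rule)."""
    count, previous = 0, None
    for code in codes:
        if code is not None and code != previous:
            count += 1
        previous = code
    return count

# Example: None marks images where no fastener signal was touched.
# register_sequence([None, "BB..", "BB..", None, "BD..", None])  # -> 2
```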
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (9)

1. The three-dimensional geometric detection system of rail fastener, its characterized in that includes:
the measuring device comprises a camera and a laser projector, wherein the laser projector is used for projecting light bars to the steel rail fasteners, and the camera is used for collecting fastener images containing the light bars;
and the computer is in communication connection with the camera, the computer is internally provided with detection software, the detection software is used for carrying out positioning marking on the fastener image so as to determine whether a steel rail fastener appears in the fastener image, if so, the detection software determines the surface three-dimensional information of the steel rail fastener according to the gray information of the light bar in the fastener image, converts the surface three-dimensional information into corresponding elevation three-dimensional data based on a substrate surface, then encodes the elevation three-dimensional data to form an actual elevation code, and the detection software also compares the actual elevation code with a standard elevation code to determine the state of the steel rail fastener.
2. The system of claim 1, wherein the camera is calibrated first to determine parameters of the camera before capturing the fastener image, and the light bar is calibrated after the calibration of the camera is completed to determine a light plane equation.
3. A rail fastener three-dimensional geometry inspection system according to claim 2, wherein calibration accuracy is also assessed after calibration of the camera is completed.
4. The three-dimensional geometric inspection system of rail fasteners according to claim 2, wherein after the light plane calibration of said light bar is completed, said camera further captures a whiteboard image of a whiteboard placed on a substrate surface of the rail fasteners, said whiteboard image including the light bar, and said inspection software analyzes said whiteboard image to determine a plane equation of said substrate surface.
5. A rail fastener three-dimensional geometrical detection system according to claim 2, wherein the detection software is further adapted to estimate working range parameters after the light bar has been calibrated, and wherein, in estimating the working range parameters, the range of occurrence of all the light bars is first determined, and then the zone boundaries of the first and second light bars within that range are determined.
6. The system of claim 5, wherein the software for detecting the three-dimensional geometry of the rail fastener, after acquiring the fastener image, first pre-processes the fastener image, the pre-processing comprising:
cropping the fastener image according to the working range parameters to obtain a preprocessed image matched to the working range parameters.
7. The system of claim 1, wherein the computer has a code table stored therein, wherein the code table has stored therein standard elevation codes corresponding to different rail fastener types, and wherein the detection software compares the actual elevation codes with the different standard elevation codes to determine the type of rail fastener currently detected.
8. The three-dimensional geometric inspection system of a rail fastener of claim 1, wherein said laser projector simultaneously projects a plurality of said light bars onto the rail fastener, and said camera captures said fastener image comprising the plurality of said light bars.
9. A rail fastener three-dimensional geometry inspection system according to claim 1 wherein after determining the condition of the rail fastener, the inspection software also emits an audible and visual alarm based on the specifics of the condition.