CN113884002B - Pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion - Google Patents


Info

Publication number: CN113884002B (granted publication of application CN202110936962.8A; earlier publication CN113884002A)
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 黄磊, 李苏祺
Original and current assignee: Jiangsu Jicui Intelligent Photoelectric System Research Institute Co., Ltd.
Priority and filing date: 2021-08-16
Legal status: Active (granted)


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/002: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates


Abstract

The invention discloses a pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion. The online abrasion measurement module of the system is connected with the linear array camera, the area array camera and the three-dimensional abrasion display module; the trigger module is connected with the linear array camera and the area array camera and triggers them synchronously. The laser sensor projects laser onto the pantograph at the top of the train; after the pantograph is raised, the linear array camera and the area array camera synchronously acquire two-dimensional and three-dimensional image data of the pantograph slide plate respectively, the data are transmitted to the online abrasion measurement module for processing, and the three-dimensional abrasion display module displays the resulting images. By fusing the two-dimensional and three-dimensional information, the system and method measure the abrasion of the upper surface of the pantograph slide plate with high precision, and the line-area array structure body, which combines a linear array camera with an area array camera, provides the hardware support necessary for acquiring the two-dimensional and three-dimensional information of the pantograph with high precision.

Description

Pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion
Technical Field
The invention belongs to the technical field of pantograph slide plate detection, and particularly relates to a system and method for detecting the upper surface of a pantograph slide plate based on two-dimensional and three-dimensional information fusion.
Background
The traditional pantograph slide plate abrasion measuring method is as follows: the locomotive enters a depot section, stops and lowers the pantograph, the power supply is cut off, a worker climbs onto the roof, and the abrasion of the pantograph slide plate is measured with a special measuring tool to check for abnormal wear. This method has low efficiency, low precision and poor repeatability, and involves human factors and potential safety hazards. The currently common method detects and identifies over-limit abrasion of the pantograph slide plate at the two-dimensional image level, through slide plate edge positioning, image segmentation, image enhancement, image restoration and similar processing; however, its accuracy needs improvement, and it cannot be used in combination with three-dimensional data.
Fusing depth data with two-dimensional data to detect pantograph slide plate abrasion can, in theory, accurately identify the over-limit abrasion region and measure the abrasion value, but related research is currently very scarce and offers little theoretical support.
Disclosure of Invention
In order to solve the above technical problems in the prior art, the invention aims to provide a pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion that are suitable for practical installation and high-precision acquisition and realize online detection of pantograph slide plate wear.
In order to achieve the above purpose and achieve the above technical effects, the invention adopts the following technical scheme:
the upper surface detection system of the pantograph slide plate based on the fusion of two three-dimensional information comprises a linear array structure body, a trigger module, an online abrasion measurement module and a three-dimensional abrasion display module, wherein the linear array structure body is fixed on a portal frame at the top of a train, the linear array structure body comprises a linear array camera, an area array camera and a laser sensor, the online abrasion measurement module is connected with the linear array camera, the area array camera and the three-dimensional abrasion display module, the trigger module is connected with the linear array camera and the area array camera and synchronously triggers the linear array camera and the area array camera, the laser sensor sends laser to a pantograph at the top of the train, two-dimensional and three-dimensional image data of the pantograph slide plate after the pantograph is lifted are respectively and synchronously collected through the linear array camera and the area array camera and uploaded to the online abrasion measurement module for processing, and the three-dimensional abrasion display module displays the generated images of the abrasion area.
Further, the area array camera is mounted on the left side of the line-area array structure body, with a field of view covering the whole pantograph, and the linear array camera and the laser sensor are mounted on the right side of the line-area array structure body.
Further, the laser sensor and the linear array camera are arranged perpendicular to the ground, and the central line of the light bar of the laser sensor coincides with the scanning line of the linear array camera.
Furthermore, the trigger lines of the linear array camera and the area array camera are connected to the same trigger module, so that synchronous acquisition is realized.
The invention also discloses a detection method of the pantograph slide plate upper surface detection system based on two-dimensional and three-dimensional information fusion, which comprises the following steps:
step 11: installing and preparing the line-area array structure body;
step 12: calibrating the area array camera;
step 13: when the train runs, the trigger module synchronously triggers the linear array camera and the area array camera to acquire the two-dimensional image and the three-dimensional depth bitmap information of the pantograph and transmit them to the online abrasion measurement module;
step 14: the online abrasion measurement module processes the image information uploaded by the linear array camera and the area array camera, performs time scale normalization alignment between the two-dimensional image uploaded by the linear array camera and the three-dimensional depth bitmap information uploaded by the area array camera, and locates the region of the pantograph in the three-dimensional depth bitmap by using a target detection algorithm;
step 15: the online abrasion measurement module separates the pantograph region, extracts the three-dimensional point cloud of the upper surface of the pantograph, and obtains the abrasion value through matching of the three-dimensional point clouds;
step 16: the online abrasion measurement module generates an image of the abrasion area, which is displayed by the three-dimensional abrasion display module, so that the abrasion of the pantograph slide plate is reflected in the two-dimensional pixels.
Further, in step 14, the processing by the online abrasion measurement module of the image information uploaded by the linear array camera and the area array camera comprises the following steps:
step 141: the area array camera obtains three-dimensional point cloud information of the upper surface of the pantograph by shooting an image of laser emitted by the laser sensor and projected onto the upper surface of the pantograph, and solves three-dimensional coordinate information of a laser position according to a spatial analysis method;
the camera coordinate system $O_c\text{-}x_c y_c z_c$ is a right-handed coordinate system; $O_i\text{-}XY$ is the image plane coordinate system; the module coordinate system $O_L\text{-}x_L y_L z_L$ is a right-handed coordinate system whose $x_L O_L y_L$ coordinate plane lies in the structured light plane $\pi$ and whose $O_L z_L$ axis is orthogonal to $\pi$ and consistent with the direction of the normal vector of $\pi$; the equation of the light plane in the module coordinate system is:

$z_L = 0$ (1)

the spatial positional relationship between the camera coordinate system and the module coordinate system can be expressed by the following formula:

$X = f\,\dfrac{r_1 x_L + r_2 y_L + r_3 z_L + t_x}{r_7 x_L + r_8 y_L + r_9 z_L + t_z}, \qquad Y = f\,\dfrac{r_4 x_L + r_5 y_L + r_6 z_L + t_y}{r_7 x_L + r_8 y_L + r_9 z_L + t_z}$ (2)

wherein $(r_1, r_4, r_7)$, $(r_2, r_5, r_8)$ and $(r_3, r_6, r_9)$ respectively represent the direction vectors of the $x_L$ axis, the $y_L$ axis and the $z_L$ axis of the coordinate system $O_L\text{-}x_L y_L z_L$ in the camera coordinate system, $(t_x, t_y, t_z)$ are the coordinates of the origin $O_L$ in the camera coordinate system, and $f$ is the focal length of the area array camera;

when $z_L \neq 0$, a constraint must be added to find the unique corresponding three-dimensional point coordinates $(x_L, y_L, z_L)$; a light plane equation is therefore introduced, the structured light plane $\pi$ satisfying, in the module coordinate system $O_L\text{-}x_L y_L z_L$, the equation:

$a x_L + b y_L + c z_L + d = 0$ (3)

wherein $a, b, c, d$ are the coefficients of the light plane equation;

combining expression (2) and expression (3) gives a linear system in the three unknowns whose unique solution is the coordinates of the three-dimensional point, formula (4); the coefficients of the light plane equation are given physical meaning by re-parameterizing them, formula (5), in terms of $\alpha$ and $\beta$, the rotation angles of the light plane relative to the X axis and the Y axis respectively; and from the light plane equation and the camera model the model of the structured light sensor is obtained, formula (6), which maps each image point of the laser stripe to its three-dimensional coordinates;
step 142: correcting the three-dimensional depth bitmap through improved time scale normalization to achieve the same appearance as the two-dimensional image.
In step 15, the step of obtaining the wear value includes:
step 151: the aligned three-dimensional depth bitmap has the same morphological characteristics as the two-dimensional line-scan image, so the pixel coordinates of the pantograph in the two images are approximately the same; by locating the bounding box of the pantograph in the two-dimensional image, the pantograph can be locked at the same position in the three-dimensional depth bitmap, thereby obtaining the three-dimensional point cloud of the upper slide plate surface of the pantograph;
step 152: the abrasion value at the corresponding position of the worn bow is obtained by matching and subtracting the three-dimensional point clouds of the standard abrasion-free bow and the worn bow.
Compared with the prior art, the invention has the beneficial effects that:
the invention discloses a pantograph slide plate upper surface detection system and a pantograph slide plate upper surface detection method based on two three-dimensional information fusion. According to the system and the method for detecting the upper surface of the pantograph slide plate based on the fusion of the two three-dimensional information, the abrasion of the upper surface of the pantograph slide plate is measured with high precision in an innovative manner by utilizing the fusion of the two three-dimensional information, and the linear array structure provides necessary hardware support for acquiring the two three-dimensional information of the pantograph with high precision by utilizing a mode of combining a linear array camera and an area array camera.
Drawings
FIG. 1 is a flow chart of the present invention;
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings so that the advantages and features of the present invention can be more easily understood by those skilled in the art, thereby making clear and unambiguous the scope of the present invention.
As shown in FIG. 1, the pantograph slide plate upper surface detection system based on two-dimensional and three-dimensional information fusion comprises a line-area array structure body, a trigger module, an online abrasion measurement module and a three-dimensional abrasion display module. The line-area array structure body is fixed on a portal frame above the train roof and comprises a linear array camera, an area array camera and a laser sensor. The online abrasion measurement module is connected with the linear array camera, the area array camera and the three-dimensional abrasion display module. The trigger module is connected with the linear array camera and the area array camera and triggers them synchronously. The laser sensor projects laser onto the pantograph at the top of the train; after the pantograph is raised, two-dimensional and three-dimensional image data of the pantograph slide plate are synchronously acquired by the linear array camera and the area array camera respectively and uploaded to the online abrasion measurement module for processing, and the three-dimensional abrasion display module displays the generated images of the abrasion area.
The area array camera is mounted on the left side of the line-area array structure body, with a field of view covering the whole pantograph; the linear array camera and the laser sensor are mounted on the right side of the line-area array structure body.
The laser sensor and the linear array camera are arranged perpendicular to the ground, and the central line of the light bar of the laser sensor coincides with the scanning line of the linear array camera.
The trigger lines of the linear array camera and the area array camera are connected to the same trigger module, so that synchronous acquisition is realized.
The detection method of the pantograph slide plate upper surface detection system based on two-dimensional and three-dimensional information fusion comprises the following steps:
step 11: the line-area array structure body is fixed on a portal frame above the train roof; the area array camera and the linear array camera inside the structure body are adjusted so that they can capture images of the pantograph slide plate after the pantograph is raised, with the lenses of the area array camera and the linear array camera aimed at the slide plate; synchronous triggering of the area array camera and the linear array camera is realized through the trigger module;
step 12: the area array camera is calibrated by acquiring target images of a high-precision sawtooth target over the large 2.5-3 m field of view at train-roof height; a generic calibration sketch is given after this step;
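For reference, the following is a minimal calibration sketch using OpenCV's standard routine; a checkerboard stands in for the patent's high-precision sawtooth target, and the pattern size, square size and file names are assumptions rather than parameters from the patent.

```python
import cv2
import numpy as np

PATTERN = (9, 6)      # inner-corner count of the substitute checkerboard (assumption)
SQUARE_MM = 25.0      # printed square size in millimetres (assumption)

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts, size = [], [], None
for path in ["roof_view_%02d.png" % i for i in range(20)]:  # hypothetical shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    size = gray.shape[::-1]
    ok, corners = cv2.findChessboardCorners(gray, PATTERN)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms)   # K contains the focal length f of equation (2)
```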
step 13: when the train runs, the trigger module triggers the linear array camera and the area array camera simultaneously, and the two-dimensional image information collected by the linear array camera and the three-dimensional depth information collected by the area array camera are transmitted to the server in real time;
step 14: the online abrasion measurement module processes the incoming images in real time: it aligns the two-dimensional and three-dimensional information, locates the region of the pantograph in the three-dimensional depth bitmap by using a target detection algorithm, and obtains the abrasion size through three-dimensional point cloud matching, specifically comprising the following steps:
step 141:
the area array camera shoots the image of the light bar emitted by the laser sensor and projected onto the upper surface of the pantograph; after the image is processed by the online abrasion measurement module, the three-dimensional point cloud information of the upper surface of the pantograph is obtained. Parameters such as the included angle between the laser sensor and the optical axis of the area array camera are known, and the three-dimensional information at the laser position is solved by a spatial analysis method. The camera coordinate system $O_c\text{-}x_c y_c z_c$ is a right-handed coordinate system; $O_i\text{-}XY$ is the image plane coordinate system; the module coordinate system $O_L\text{-}x_L y_L z_L$ is a right-handed coordinate system whose $x_L O_L y_L$ coordinate plane lies in the structured light plane $\pi$ and whose $O_L z_L$ axis is orthogonal to $\pi$ and consistent with the direction of its normal vector. The equation of the light plane in the module coordinate system is:

$z_L = 0$ (1)

The spatial positional relationship between the camera coordinate system and the module coordinate system can be expressed by the following formula:

$X = f\,\dfrac{r_1 x_L + r_2 y_L + r_3 z_L + t_x}{r_7 x_L + r_8 y_L + r_9 z_L + t_z}, \qquad Y = f\,\dfrac{r_4 x_L + r_5 y_L + r_6 z_L + t_y}{r_7 x_L + r_8 y_L + r_9 z_L + t_z}$ (2)

wherein $(r_1, r_4, r_7)$, $(r_2, r_5, r_8)$ and $(r_3, r_6, r_9)$ respectively represent the direction vectors of the $x_L$ axis, the $y_L$ axis and the $z_L$ axis of the coordinate system $O_L\text{-}x_L y_L z_L$ in the camera coordinate system, $(t_x, t_y, t_z)$ are the coordinates of the origin $O_L$ in the camera coordinate system, and $f$ is the focal length of the area array camera. Substituting $z_L = 0$ into (2) yields the reduced model:

$X = f\,\dfrac{r_1 x_L + r_2 y_L + t_x}{r_7 x_L + r_8 y_L + t_z}, \qquad Y = f\,\dfrac{r_4 x_L + r_5 y_L + t_y}{r_7 x_L + r_8 y_L + t_z}$

When $z_L \neq 0$, a constraint must be added to find the unique corresponding three-dimensional point coordinates $(x_L, y_L, z_L)$; the light plane equation is therefore introduced.

The structured light plane $\pi$ satisfies, in the module coordinate system $O_L\text{-}x_L y_L z_L$, the equation:

$a x_L + b y_L + c z_L + d = 0$ (3)

wherein $a, b, c, d$ are the coefficients of the light plane equation.

Combining expression (2) and expression (3) gives a linear system whose unique solution is the coordinates of the three-dimensional point, formula (4). The coefficients of the light plane equation are given more specific physical meaning by re-parameterizing them, formula (5), in terms of $\alpha$ and $\beta$, the rotation angles of the light plane relative to the X axis and the Y axis respectively. From the light plane equation and the camera model, the model of the structured light sensor is obtained, formula (6), which maps each image point of the laser stripe to its three-dimensional coordinates.
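The triangulation of equations (2) to (4) can be sketched numerically as follows; this is a minimal sketch assuming the light plane coefficients are expressed in the camera coordinate system and that R, T are the module-to-camera rotation and translation of equation (2).

```python
import numpy as np

def triangulate_stripe(stripe_xy, f, R, T, plane):
    """Intersect each viewing ray of the laser stripe with the light plane
    a*x + b*y + c*z + d = 0 (coefficients taken in camera coordinates here,
    which is an assumption) and return the points in module coordinates."""
    a, b, c, d = plane
    out = []
    for X, Y in stripe_xy:
        ray = np.array([X / f, Y / f, 1.0])          # ray direction through pixel (X, Y)
        z_c = -d / (a * ray[0] + b * ray[1] + c)     # depth where the ray meets the plane
        p_cam = ray * z_c                            # 3D point in camera coordinates
        out.append(R.T @ (p_cam - T))                # back to module coordinates
    return np.asarray(out)
```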
step 142:
in order to obtain the correspondence between the two images and reflect the three-dimensional point cloud in the two-dimensional image, the three-dimensional depth bitmap is corrected by improved time scale normalization to achieve the same appearance as the two-dimensional image. High-precision image alignment requires a high-performance feature matching algorithm to provide correct matching feature points.
Step 1421:
The GMS (grid-based motion statistics) feature matching algorithm is superior to other matching methods (e.g., SIFT) in both speed and accuracy. GMS encapsulates motion smoothness in the statistical likelihood of a certain number of matches in a region, and therefore requires the probability that feature point $x_i$ is matched from region a (in image $I_a$) into region b (in image $I_b$). Only when the regions {a, b} are located at the same position in the 3D scene and the match of the support feature $f_a$ lies in region b can the match be considered true; conversely, when {a, b} correspond to different locations of the real scene and $f_a$ still matches into b, the match is a mismatch. The neighborhood of a correctly matched point has more positive matches to support it and therefore obtains a higher confidence score.

Let $f_a$ be one of the N support features in region a, and let its probability of correct matching be t. Define $p_t$ as the probability that, given that the region pair {a, b} observes the same location (event $T_{ab}$), the match of feature $f_a$ falls in region b, i.e. that the center feature $x_i$ is matched correctly:

$p_t = P(f_a \in b \mid T_{ab})$ (7)

Define $p_f$ as the probability that the nearest neighbor of feature $x_i$ falls in region b although the regions {a, b} correspond to different regions (event $F_{ab}$), meaning that $x_i$ is matched in error:

$p_f = P(f_a \in b \mid F_{ab})$ (8)

From $p_t$ and $p_f$ the distribution of the support count N over the whole grid can be approximated; through this operation each pair of matching points obtains a score by which the correctness of the match is judged. However, when features are located in areas of high brightness and little texture, false positive matches with higher scores cause confusion: such cases are much smaller in number but concentrated in distribution, and when a mismatch is treated as a true match, GMS performs poorly. They can be attributed to the following two cases.

For case 1, the regions {a, b} view the same 3D region, but the features inside are matched incorrectly. Because these erroneous matches are highly concentrated and similar, they obtain a high score; like positive matches, they have many support points in their neighborhood.

For case 2, the regions {a, b} view different 3D regions, which certainly produces false matches, but also many neighboring support points and thus a higher confidence score. Exploiting the invariance of the sequence images in the vertical direction, a strong geometric constraint is proposed to reject such false positive matches: the probability of correct matching $p_t$ is corrected to require in addition that the matched features keep approximately the same vertical coordinate (9), where $T_{ab}$ again denotes the event that the regions {a, b} observe the same region and $f_a$ matches correctly with probability t; the probability of matching error is corrected in the same way (10). A simplified sketch of this grid-support scoring with the vertical constraint is given after this paragraph.
step 1422:
first, a line is scanned in a horizontal direction into a two-dimensional image I a And three-dimensional depth bitmap I b Is divided into equally spaced portions. The improved time scale normalization alignment method corrects each sub-block by solving the scaling of each sub-block in the horizontal direction. The rectified sub-blocks are stitched into a complete image. The characteristic points in each sub-block are subjected to k-means clustering to obtain a typical characteristic point. I b The distance between adjacent representative feature points in the horizontal direction needs to be corrected to be equal to I a The distances between the corresponding adjacent characteristic points are equal, so that the purpose of morphology scaling is achieved.
And calculating the scaling ratio of the current operation sub-block, and the result is inaccurate. It is necessary to extend the current sub-block to contain more information. Let { x } 1b ,x 2b },{x 1b ',x 2b ' indicates images { I }, respectively a ,I b Characteristic feature point pairs of adjacent sub-blocks in the sequence. Let { x } 1r ,x 2r },{x 1r ',x 2r ' indicates images { I }, respectively a ,I b Characteristic feature point pairs of adjacent extension sub-blocks in the sequence.
The correction ratio of the extended sub-block and the scaling ratio of the original sub-block are respectively:
to improve algorithm robustness, add error correction terms:
wherein { x i ,x i ′} i=1,2,…,m Representative image { I } a ,I b All feature point pairs of the corresponding sub-block. ratio of r ,ratio b The bias three terms together constitute the correction ratio for the current operating block:
ratio s =α·ratio r +β·ratio b +γ·bias (13)
wherein, alpha is 0.8, beta is 0.15, and gamma is 0.05.
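A sketch of the correction ratio of equation (13), assuming the typical feature points have already been obtained by k-means; the form of the bias term (the mean coordinate ratio over all matched pairs of the sub-block) is an assumption, since the patent's closed form for it is not reproduced above.

```python
import numpy as np

def correction_ratio(x_a, x_b, x_a_ext, x_b_ext, pairs_a, pairs_b,
                     alpha=0.8, beta=0.15, gamma=0.05):
    """x_a/x_b: horizontal coordinates of the two typical feature points of the
    current sub-block in I_a and I_b; x_a_ext/x_b_ext: the same for the extended
    sub-block; pairs_a/pairs_b: all matched horizontal coordinates in the block."""
    ratio_b = (x_a[1] - x_a[0]) / (x_b[1] - x_b[0])                   # equation (11)
    ratio_r = (x_a_ext[1] - x_a_ext[0]) / (x_b_ext[1] - x_b_ext[0])
    bias = float(np.mean(np.asarray(pairs_a) / np.asarray(pairs_b)))  # assumed form of (12)
    return alpha * ratio_r + beta * ratio_b + gamma * bias            # equation (13)
```

Each sub-block of $I_b$ is then resampled horizontally by its ratio and the corrected sub-blocks are stitched back together.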
Step 15: the method comprises the steps of extracting three-dimensional point clouds of the upper surface of the pantograph from three-dimensional point clouds of the upper surface of the pantograph in a separation area of an online abrasion measurement module, and obtaining the abrasion value through matching of the three-dimensional point clouds, and specifically comprises the following steps:
step 151: the aligned three-bit depth bitmap has the same topographical features as the two-dimensional line scan image, so the pixel coordinates of the pantograph in both images are approximately the same. And the pantograph can be locked at the same position in the three-dimensional depth bitmap by positioning the bounding box of the pantograph in the two-dimensional image, so that the three-dimensional point cloud on the surface of the upper sliding plate of the pantograph is obtained. The target detection method used in the step is a deep learning algorithm, and the current algorithm can detect the pantograph.
Step 152:
the premise of obtaining the abrasion is a set of three-dimensional point clouds of the standard bow; the abrasion value at the corresponding position of the worn bow is obtained by matching and subtracting the three-dimensional point clouds of the standard bow and the worn bow. The overall goal is as follows: let P and G be the two point cloud data sets of the standard bow and the worn bow; to transform P into G (assuming the two clouds contain parts with similar local geometric features), a 4x4 rotation-translation matrix is required. After each rotation-translation transformation, the distances from all (sampled) points of P to their corresponding (nearest) points in G are computed and the least-squares error (the variance) is evaluated; if it is smaller than a set value, or the number of iterations reaches an upper limit, or the error no longer changes appreciably from one iteration to the next, the computation ends; otherwise, iteration continues.
The matrices for rotation about the three axes are:

$R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix},\quad R_y(\alpha) = \begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix},\quad R_z(\beta) = \begin{bmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (14)

The overall rotation matrix is expressed as:

$R = R_z(\beta)\, R_y(\alpha)\, R_x(\theta)$ (15)

The rotation process here is: first rotate by the angle $\theta$ about the x-axis, then by the angle $\alpha$ about the y-axis, and finally by the angle $\beta$ about the z-axis, as in the sketch below.
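Equations (14) and (15) in code form (angles in radians):

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def compose_rotation(theta, alpha, beta):
    """Equation (15): rotate about x by theta, then y by alpha, then z by beta."""
    return rot_z(beta) @ rot_y(alpha) @ rot_x(theta)
```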
Step 1521:
NaN data points are removed, and downsampling filtering is adopted to reduce the number of points in the three-dimensional point cloud while preserving its shape characteristics, improving the speed of the algorithm. The voxel grid class implemented in PCL creates a three-dimensional voxel grid (which can be thought of as a collection of small spatial three-dimensional cubes) from the input point cloud data; within each voxel (three-dimensional cube), all points are approximated by their center of gravity, so that all points in the voxel are finally represented by one centroid point, and the filtered point cloud is obtained after all voxels are processed. A pure-NumPy equivalent is sketched below.
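The patent uses PCL's voxel grid class; an equivalent centroid-per-voxel filter can be sketched in NumPy as follows, with the 5 mm voxel size an assumed placeholder.

```python
import numpy as np

def voxel_downsample(points, voxel=5.0):
    """points: (N, 3) array, units consistent with `voxel`."""
    pts = points[~np.isnan(points).any(axis=1)]   # step 1521: drop NaN returns first
    idx = np.floor(pts / voxel).astype(np.int64)  # voxel index of every point
    _, inv, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inv, pts)                     # accumulate the points in each voxel
    return sums / counts[:, None]                 # the centroid represents the voxel
```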
Step 1522:
The normals and the FPFH (fast point feature histogram) features of the point cloud are computed. Specifically, given a point cloud P with n points, let k be the number of neighborhood points of each point p in P; the time complexity is O(nk). For each query point $p_q$, a simplified point feature histogram SPFH is first computed from the group of angular features $(\alpha, \phi, \theta)$ between $p_q$ and the points in its neighborhood, from which the FPFH of the point is derived as:

$FPFH(p_q) = SPFH(p_q) + \dfrac{1}{k}\sum_{i=1}^{k}\dfrac{1}{\omega_i}\, SPFH(p_i)$ (16)

wherein the weight $\omega_i$ represents the distance, in a given metric space, between the query point $p_q$ and a neighborhood point $p_i$, and thereby scores the point pair $(p_q, p_i)$; a different metric may be selected if required, which is what gives the weighting scheme its importance.

Thus, for a given query point $p_q$, the algorithm first estimates its SPFH value by creating pairs between itself and its neighbors; this is repeated for all points in the data set, and the SPFH values of the neighbors $p_k$ are then used to re-weight the SPFH value of $p_q$, yielding the FPFH of $p_q$. The extra FPFH connections introduced by this weighting scheme mean that some value pairs are counted twice. An Open3D-based sketch is given below.
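A sketch of the normal and FPFH computation, with Open3D standing in for the PCL routines the description implies; the search radii and neighbor caps are assumptions.

```python
import open3d as o3d

def fpfh_features(pcd, normal_radius=10.0, feature_radius=25.0):
    """pcd: o3d.geometry.PointCloud. Returns the FPFH feature of equation (16)."""
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=normal_radius, max_nn=30))
    return o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=feature_radius, max_nn=100))
```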
Step 1523:
Coarse registration uses SAC-IA (the sample consensus initial alignment algorithm). For each sampling point in the point cloud P, one or more points with similar FPFH features are searched in the target point cloud Q, and one of these similar points is randomly selected as the one-to-one corresponding point of the point cloud P in the target point cloud Q. The rigid body transformation matrix between the corresponding points is computed, and the performance of the current registration transformation is judged by the sum of distance errors after the corresponding points are transformed. The distance error sum function here uses the Huber penalty function $\sum_i H(l_i)$, with:

$H(l_i) = \begin{cases} \frac{1}{2}\, l_i^2, & \lVert l_i \rVert \le m_i \\ \frac{1}{2}\, m_i\,(2\lVert l_i \rVert - m_i), & \lVert l_i \rVert > m_i \end{cases}$ (17)

wherein $m_i$ is a preset value and $l_i$ is the distance difference of the i-th group of corresponding points after transformation. The final purpose of the registration is to find, among all transformations, the optimal transformation minimizing the error function; that transformation is the final registration transformation matrix, from which the registration result is obtained. A coarse-registration sketch is given below.
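Open3D does not ship SAC-IA under that name; its RANSAC-based feature matching is used below as a close analogue of the coarse registration described above, and the distance threshold and iteration counts are assumptions.

```python
import open3d as o3d

def coarse_register(src, dst, src_fpfh, dst_fpfh, dist=15.0):
    """Coarse alignment of the worn-bow cloud `src` to the standard-bow cloud `dst`."""
    return o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, dst, src_fpfh, dst_fpfh,
        True,                      # mutual_filter: keep only mutually best matches
        dist,                      # maximum correspondence distance
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,                         # ransac_n: points drawn per hypothesis
        [],                        # no extra correspondence checkers
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
```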
Step 1524:
Accurate registration uses ICP:
1) Search for the closest point: take a point $p_i$ in P and find the point $q_i$ in Q nearest to $p_i$; then $(p_i, q_i)$ forms a pair of corresponding points. The rotation and translation relation (R, T) between $p_i$ and $q_i$ is what is sought;
2) Solve the transformation relation (R, T): the n pairs $(p_i, q_i)$ give n systems of equations from which (R, T) can be obtained by mathematical methods; an iterative algorithm is adopted to solve for a more accurate transformation relation;
3) Apply the transformation: each point $p_i$ in the point set P is transformed by the relation to obtain the point set P', and a function E is defined:

$E = \dfrac{1}{n}\sum_{i=1}^{n} \lVert q_i - (R\, p_i + T) \rVert^2$ (18)

4) According to the accuracy requirement, a condition for terminating the iteration is defined, namely that the iteration terminates when E is smaller than a specific value.
If the newly transformed point set and the reference point set meet the requirement of the objective function, i.e. the average distance between the two point sets is smaller than a given threshold, the iterative computation stops; otherwise the new transformed point set P' is taken as the new P and iteration continues until the requirement of the objective function is met. A minimal ICP sketch following steps 1) to 4) is given below.
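A minimal point-to-point ICP sketch following steps 1) to 4), with an SVD solve standing in for the generic solution of the equation systems in step 2); the iteration cap and tolerance are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(P, Q, max_iters=50, tol=1e-6):
    """P: worn-bow points (N, 3); Q: standard-bow points (M, 3).
    Returns R, T and the final mean-square error E of equation (18)."""
    R, T = np.eye(3), np.zeros(3)
    tree, prev_e, e = cKDTree(Q), np.inf, np.inf
    for _ in range(max_iters):
        P_t = P @ R.T + T
        d, j = tree.query(P_t)                   # 1) nearest q_i for every p_i
        e = np.mean(d ** 2)                      # objective E of equation (18)
        if abs(prev_e - e) < tol:                # 4) stop when E settles
            break
        prev_e = e
        mu_p, mu_q = P_t.mean(0), Q[j].mean(0)
        H = (P_t - mu_p).T @ (Q[j] - mu_q)       # 2) solve (R, T) by SVD
        U, _, Vt = np.linalg.svd(H)
        if np.linalg.det(Vt.T @ U.T) < 0:        # guard against a reflection
            Vt[-1] *= -1
        R_step = Vt.T @ U.T
        T_step = mu_q - R_step @ mu_p
        R, T = R_step @ R, R_step @ T + T_step   # 3) accumulate the transform
    return R, T, e
```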
In order to achieve the improvement of the processing speed, the processing of point cloud alignment is simplified into three-dimensional points of the claw areas on two sides of the standard bow and the abrasion bow to be calculated. Abrasion is concentrated on the contact part of the pantograph slide plate and the power grid, so that the matching precision is reduced, and a re-matching link is removed.
Step 16:
The wear information is mapped to the corresponding position in the two-dimensional image. Because the linear array camera and the area array camera are triggered and acquire simultaneously, and their scanning line frequencies are consistent, the two-dimensional and three-dimensional data correspond one-to-one in theory. If the number of lines of the two-dimensional information and of the wear information is inconsistent, the two-dimensional image is scaled to correspond to the wear information, and the result is displayed by the three-dimensional abrasion display module, as sketched below.
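A sketch of the row correspondence of step 16: the per-row wear profile is resampled to the row count of the line-scan image and over-limit rows are flagged; the 8 mm limit is a placeholder assumption.

```python
import numpy as np

def overlay_wear(img2d, wear_rows, limit=8.0):
    """img2d: (H, W) grayscale line-scan image; wear_rows: per-row wear values."""
    rows = np.linspace(0, len(wear_rows) - 1, img2d.shape[0])
    wear = np.interp(rows, np.arange(len(wear_rows)), wear_rows)  # rescale the rows
    out = np.dstack([img2d] * 3).astype(np.uint8)                 # to a 3-channel image
    out[wear > limit] = (0, 0, 255)              # paint over-limit rows red (BGR)
    return out
```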
The parts of the invention not described in detail adopt the prior art and are therefore not described herein.
The foregoing description is only an embodiment of the present invention and is not intended to limit the scope of the invention; all equivalent structures or equivalent process transformations, and all direct or indirect applications in other related technical fields, are likewise included in the scope of patent protection of the present invention.

Claims (5)

1. The detection method of the pantograph slide plate upper surface detection system based on two-dimensional and three-dimensional information fusion is characterized in that the pantograph slide plate upper surface detection system based on two-dimensional and three-dimensional information fusion comprises a line-area array structure body, a trigger module, an online abrasion measurement module and a three-dimensional abrasion display module, wherein the line-area array structure body is fixed on a portal frame above the train roof, the line-area array structure body comprises a linear array camera, an area array camera and a laser sensor, the online abrasion measurement module is connected with the linear array camera, the area array camera and the three-dimensional abrasion display module, the trigger module is connected with the linear array camera and the area array camera and triggers them synchronously, the laser sensor projects laser onto the pantograph at the top of the train, the linear array camera and the area array camera respectively and synchronously acquire the two-dimensional and three-dimensional image data of the pantograph slide plate after the pantograph is raised and upload them to the online abrasion measurement module for processing, and the three-dimensional abrasion display module displays the generated images of the abrasion area;
the method comprises the following steps:
step 11: installing and preparing the line-area array structure body;
step 12: calibrating the area array camera;
step 13: when the train runs, the trigger module synchronously triggers the linear array camera and the area array camera to acquire the two-dimensional image and the three-dimensional depth bitmap information of the pantograph and transmit them to the online abrasion measurement module;
step 14: the online abrasion measurement module processes the image information uploaded by the linear array camera and the area array camera, performs time scale normalization alignment between the two-dimensional image uploaded by the linear array camera and the three-dimensional depth bitmap information uploaded by the area array camera, and locates the region of the pantograph in the three-dimensional depth bitmap by using a target detection algorithm;
step 15: the online abrasion measurement module separates the pantograph region, extracts the three-dimensional point cloud of the upper surface of the pantograph, and obtains the abrasion value through matching of the three-dimensional point clouds;
step 16: the online abrasion measurement module generates an image of the abrasion area, which is displayed by the three-dimensional abrasion display module, so that the abrasion of the pantograph slide plate is reflected in the two-dimensional pixels; in step 14, the processing by the online abrasion measurement module of the image information uploaded by the linear array camera and the area array camera comprises:
step 141: the area array camera obtains three-dimensional point cloud information of the upper surface of the pantograph by shooting an image of laser emitted by the laser sensor and projected onto the upper surface of the pantograph, and solves three-dimensional coordinate information of a laser position according to a spatial analysis method;
the camera coordinate system $O_c\text{-}x_c y_c z_c$ is a right-handed coordinate system; $O_i\text{-}XY$ is the image plane coordinate system; the module coordinate system $O_L\text{-}x_L y_L z_L$ is a right-handed coordinate system whose $x_L O_L y_L$ coordinate plane lies in the structured light plane $\pi$ and whose $O_L z_L$ axis is orthogonal to the structured light plane $\pi$ and consistent with the direction of its normal vector; the equation of the light plane in the module coordinate system is:

$z_L = 0$ (1)

the spatial positional relationship between the camera coordinate system and the module coordinate system can be expressed by the following formula:

$X = f\,\dfrac{r_1 x_L + r_2 y_L + r_3 z_L + t_x}{r_7 x_L + r_8 y_L + r_9 z_L + t_z}, \qquad Y = f\,\dfrac{r_4 x_L + r_5 y_L + r_6 z_L + t_y}{r_7 x_L + r_8 y_L + r_9 z_L + t_z}$ (2)

wherein $(r_1, r_4, r_7)$, $(r_2, r_5, r_8)$ and $(r_3, r_6, r_9)$ respectively represent the direction vectors of the $x_L$ axis, the $y_L$ axis and the $z_L$ axis of the coordinate system $O_L\text{-}x_L y_L z_L$ in the camera coordinate system, $(t_x, t_y, t_z)$ are the coordinates of the origin $O_L$ in the camera coordinate system, and $f$ is the focal length of the area array camera;

when $z_L \neq 0$, a constraint must be added to find the unique corresponding three-dimensional point coordinates $(x_L, y_L, z_L)$; a light plane equation is therefore introduced, the structured light plane $\pi$ satisfying, in the module coordinate system $O_L\text{-}x_L y_L z_L$, the equation:

$a x_L + b y_L + c z_L + d = 0$ (3)

wherein $a, b, c, d$ are the coefficients of the light plane equation;

combining expression (2) and expression (3) gives a linear system in the three unknowns whose unique solution is the coordinates of the three-dimensional point, formula (4); the coefficients of the light plane equation are given physical meaning by re-parameterizing them, formula (5), in terms of $\alpha$ and $\beta$, the rotation angles of the light plane relative to the X axis and the Y axis respectively; and from the light plane equation and the camera model the model of the structured light sensor is obtained, formula (6), which maps each image point of the laser stripe to its three-dimensional coordinates;
step 142: correcting the three-dimensional depth bitmap through improved time scale normalization to achieve the same appearance as the two-dimensional image.
2. The detection method of the pantograph slide plate upper surface detection system based on two-dimensional and three-dimensional information fusion according to claim 1, wherein in step 15 the step of obtaining the abrasion value comprises:
step 151: the aligned three-dimensional depth bitmap has the same morphological characteristics as the two-dimensional line-scan image, so the pixel coordinates of the pantograph in the two images are approximately the same; by locating the bounding box of the pantograph in the two-dimensional image, the pantograph can be locked at the same position in the three-dimensional depth bitmap, thereby obtaining the three-dimensional point cloud of the upper slide plate surface of the pantograph;
step 152: the abrasion value at the corresponding position of the worn bow is obtained by matching and subtracting the three-dimensional point clouds of the standard abrasion-free bow and the worn bow.
3. The detection method of the pantograph slide plate upper surface detection system based on two-dimensional and three-dimensional information fusion according to claim 1, wherein the area array camera is mounted on the left side of the line-area array structure body, the field of view of the area array camera covers the whole pantograph, and the linear array camera and the laser sensor are mounted on the right side of the line-area array structure body.
4. The detection method of the pantograph slide plate upper surface detection system based on two-dimensional and three-dimensional information fusion according to claim 1, wherein the laser sensor and the linear array camera are placed perpendicular to the ground, and the light bar center line of the laser sensor coincides with the scanning line of the linear array camera.
5. The detection method of the pantograph slide plate upper surface detection system based on two-dimensional and three-dimensional information fusion according to claim 1, wherein the trigger lines of the linear array camera and the area array camera are connected to the same trigger module, so that synchronous acquisition is realized.
CN202110936962.8A 2021-08-16 2021-08-16 Pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion Active CN113884002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110936962.8A CN113884002B (en) 2021-08-16 2021-08-16 Pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion


Publications (2)

Publication Number Publication Date
CN113884002A CN113884002A (en) 2022-01-04
CN113884002B true CN113884002B (en) 2023-08-29

Family

ID=79010665


Country Status (1)

Country Link
CN (1) CN113884002B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972042B (en) * 2022-08-01 2022-10-21 成都唐源电气股份有限公司 Pantograph point cloud splicing method, system, equipment and medium based on standard model
CN115578524B (en) * 2022-12-09 2023-03-31 华东交通大学 Infrared three-dimensional reconstruction method, infrared three-dimensional reconstruction system, storage medium and computer equipment
CN116147525B (en) * 2023-04-17 2023-07-04 南京理工大学 Pantograph contour detection method and system based on improved ICP algorithm
CN116678321B (en) * 2023-08-03 2023-10-13 北京科技大学 Rail abrasion measuring method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2312269A1 (en) * 2009-08-26 2011-04-20 Image House A/S A method and a system for inspecting a pantograph
CN102507600A (en) * 2011-11-08 2012-06-20 南京大学 Automatic abrasion detection device for pantograph pan of high-speed locomotive
CN104567729A (en) * 2014-12-31 2015-04-29 苏州华兴致远电子科技有限公司 Monitoring method and device for operating state of pantograph
CN106289106A (en) * 2016-08-04 2017-01-04 北京航空航天大学 Stereo vision sensor that a kind of line-scan digital camera and area array cameras combine and scaling method
CN112146591A (en) * 2020-09-21 2020-12-29 北京运达华开科技有限公司 Pantograph slide plate abrasion detection method and system
CN112762827A (en) * 2020-12-25 2021-05-07 江苏集萃智能光电系统研究所有限公司 Pantograph comprehensive geometric parameter online detection device and method based on laser projection


Also Published As

Publication number Publication date
CN113884002A (en) 2022-01-04

Similar Documents

Publication Publication Date Title
CN113884002B (en) Pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion
CN113436260B (en) Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN109598762B (en) High-precision binocular camera calibration method
CN108597009B (en) Method for detecting three-dimensional target based on direction angle information
CN111311679B (en) Free floating target pose estimation method based on depth camera
CN109035200A (en) A kind of bolt positioning and position and posture detection method based on the collaboration of single binocular vision
CN111223133A (en) Registration method of heterogeneous images
CN110763204B (en) Planar coding target and pose measurement method thereof
CN112484746B (en) Monocular vision auxiliary laser radar odometer method based on ground plane
CN109341668B (en) Multi-camera measuring method based on refraction projection model and light beam tracking method
CN111897349A (en) Underwater robot autonomous obstacle avoidance method based on binocular vision
CN110930365B (en) Orthogonal vanishing point detection method under traffic scene
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN111060006A (en) Viewpoint planning method based on three-dimensional model
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN113313116B (en) Underwater artificial target accurate detection and positioning method based on vision
CN115774265A (en) Two-dimensional code and laser radar fusion positioning method and device for industrial robot
CN113313659A (en) High-precision image splicing method under multi-machine cooperative constraint
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
CN116358449A (en) Aircraft rivet concave-convex amount measuring method based on binocular surface structured light
CN114820474A (en) Train wheel defect detection method based on three-dimensional information
Zhang LILO: A novel LiDAR–IMU SLAM system with loop optimization
CN110030979B (en) Spatial non-cooperative target relative pose measurement method based on sequence images
CN111738971B (en) Circuit board stereoscopic scanning detection method based on line laser binocular stereoscopic vision
CN114660579A (en) Full-automatic laser radar and camera calibration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant