CN114820474A - Train wheel defect detection method based on three-dimensional information - Google Patents


Info

Publication number
CN114820474A
CN114820474A (application CN202210371673.2A)
Authority
CN
China
Prior art keywords
wheel
laser
dimensional
data
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210371673.2A
Other languages
Chinese (zh)
Inventor
郭其昌
梅劲松
王干
吴松野
李祥勇
董智源
张兆贵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tycho Information Technology Co ltd
Original Assignee
Nanjing Tycho Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Tycho Information Technology Co ltd filed Critical Nanjing Tycho Information Technology Co ltd
Priority to CN202210371673.2A
Publication of CN114820474A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 3/02 Affine transformations
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 7/155 Segmentation; edge detection involving morphological operators
    • G06T 7/187 Segmentation; edge detection involving region growing, region merging or connected component labelling
    • G06T 7/50 Depth or shape recovery
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/20036 Morphological image processing
    • G06T 2207/20221 Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a train wheel defect detection method based on three-dimensional information. A combination of several line-scan cameras and 3D laser scanner modules generates multiple two-dimensional images and laser scanning data with depth information. Wheel laser centerline data are extracted by an image processing pipeline, the three-dimensional wheel profile with depth information is reconstructed, and the partial profiles are then spliced into the three-dimensional information of the complete wheel. Regions with large depth changes are found by comparing local depth information, and finally defect detection is performed on the tread and rim face of the complete wheel to obtain the area, position, depth and other information of each defect region; a detection report is generated dynamically and in time for the client to review and overhaul.

Description

Train wheel defect detection method based on three-dimensional information
Technical Field
The invention belongs to the field of rail transit, and particularly relates to a train wheel defect detection method based on three-dimensional information, which is applied to a train wheel set online detection product.
Background
With the rapid development of rail transit in China, conventional manual wheel defect inspection can no longer meet daily operation requirements, and high-precision intelligent detection equipment has emerged to guarantee the safe operation of trains. During train operation the wheels are key parts for driving safety; running wheels develop defects such as abrasion, scratching and peeling, and severe defects can cause train derailment accidents, so train wheels need to be inspected dynamically.
At present most wheel detection equipment uses a two-dimensional camera to acquire wheel image data. A tread image obtained by a two-dimensional camera cannot capture the depth of a defect region, and such equipment suffers from low image quality and strong influence of environmental factors, which hinders analysis and processing of the wheel images. Laser scanners providing depth information are therefore increasingly used in on-line wheel detection systems.
Chinese patent ZL202110047893 discloses a three-dimensional detection method and system for wheel tread defects. That method completes three-dimensional reconstruction of only part of the wheel and does not reconstruct complete three-dimensional wheel data, so the wheel cannot be inspected in all directions; moreover, defects are detected by comparison against a reference, which consumes more storage space. Chinese patent ZL201410798858 discloses a tread defect information detection system and method that acquires multiple curved-surface images of the wheel tread, constructs the spatial surface of the train wheel tread from them, and compares the constructed surface with preset tread surface information to obtain the defect information of the tread. This method also detects tread defects by comparison and consumes more storage space and processing time.
Disclosure of Invention
In view of the above disadvantages and shortcomings in the prior art, the present invention is directed to a method for detecting train wheel defects based on three-dimensional information.
The invention adopts the following technical scheme for solving the technical problems:
a train wheel defect detection method based on three-dimensional information comprises the following steps:
acquiring and preprocessing an image to obtain a laser line image;
extracting a laser central line: extracting a central line of the obtained laser image, wherein the two-dimensional coordinate is (x, y);
laser coordinate transformation: converting (x, y) into coordinates (X_w, Y_w, Z_w) in a real world coordinate system;
And (3) three-dimensional wheel data splicing: the coordinate system of a certain acquisition device is used as a reference object and is defined as a world coordinate system, the coordinate systems of other acquisition devices are subjected to affine transformation, and the overlapped areas are subjected to fusion processing to obtain three-dimensional information of the complete wheel;
and (3) defect detection: and according to the local depth information of the wheel, completing defect detection by adopting a multi-stage judgment strategy.
Further, the defect detection: firstly, a row of data is taken along the tread direction of the wheel, the depth information is compared with that of the adjacent row in the column direction, the point with the larger value is marked as 1, and the point with the smaller value is marked as 0; then traversing the whole wheel, and finding out the position of the area with larger change, namely all areas marked as 1 are suspicious defect positions; and finally, carrying out local clustering processing on the regions marked as 1, comparing the clustered regions one by one, finding out all regions meeting the conditions as defects, and counting the defect regions to calculate the area, position and maximum depth information of the defect regions.
Further, the laser centerline extraction: adjusting and amplifying the preprocessed laser line image by multiple times to achieve a sub-pixel precision image; detecting the edge points of the amplified laser line image by adopting a self-adaptive edge detection algorithm, and performing morphological expansion processing on the edge detection image; calculating a self-adaptive threshold value through a histogram of the laser line image after statistical amplification; dividing the amplified laser line image by using a self-adaptive threshold value, and thinning the divided image; and combining the expanded points through region growing processing, and finally extracting the central line of the obtained laser image, wherein the coordinate of the central line is (x, y).
Further, the image acquisition: a plurality of acquisition devices are distributed beside the track to form an acquisition device array, and each acquisition device comprises a line-scan camera and a laser scanner.
Further, the laser coordinate transformation: the relationship between (x, y) and the coordinates (X_w, Y_w, Z_w) in the world coordinate system is as follows:
x = f · X_c / Z_c
y = f · Y_c / Z_c
[X_c, Y_c, Z_c, 1]^T = H · [X_w, Y_w, Z_w, 1]^T
wherein f is the focal length of the line-scan camera, (X_c, Y_c, Z_c) are coordinates in the line-scan camera coordinate system, and H is a parameter obtained by calibrating the line-scan camera; the formula gives the correspondence between two-dimensional image point coordinates and the corresponding target three-dimensional coordinates.
Further, the three-dimensional wheel information is corrected in the world coordinate system O-X_wY_wZ_w to facilitate splicing and reconstruction of the wheel data: point cloud data of the non-wheel parts are segmented out with the point cloud clustering algorithm DBSCAN, point cloud data containing only wheel information are extracted, the arc-shaped wheel is corrected into a regular rectangular one by a coordinate mapping method, and laser line data after coordinate transformation and correction are obtained.
Further, the three-dimensional wheel data stitching: the method comprises the steps of taking a camera coordinate system of one acquisition device as a reference object and defining the camera coordinate system as a world coordinate system, obtaining an external parameter rotation matrix R0 and a translational vector T0 of other acquisition devices relative to the reference object through measurement and calibration, taking R0 and T0 as initial values, calculating transformation parameters between adjacent point cloud data through an NICP algorithm to obtain more accurate rotation parameters R and translation parameters T, finally carrying out affine transformation on respective data through R and T, carrying out fusion processing on overlapping areas, and completing the splicing of wheel data.
Further, the preprocessing: the original data are denoised by Gaussian filtering to obtain a smooth laser line image, the definition of the laser line image is improved by contrast stretching, and finally interference data irrelevant to the laser line are deleted by means of contour extraction and judgment.

The train wheel defect detection method based on three-dimensional information adopts a combination of several line-scan cameras and 3D laser scanner modules to generate multiple two-dimensional images and laser scanning data with depth information; wheel laser centerline data are extracted by an image processing pipeline, the three-dimensional wheel profile with depth information is reconstructed, and the partial profiles are then spliced into the three-dimensional information of the complete wheel. Regions with large depth changes are found by comparing local depth information, and finally defect detection is performed on the tread and rim face of the complete wheel to obtain the area, position, depth and other information of each defect region; a detection report is generated dynamically and in time for the client to recheck and overhaul.
Drawings
FIG. 1 is a general flow diagram of the present invention;
FIG. 2 is a schematic view of the acquisition apparatus of the present invention: (a) collecting an angle schematic diagram by collection equipment; (b) a plurality of acquisition device setup schematics;
FIG. 3 is a laser line drawing of a raw wheel of the present invention;
FIG. 4 is a laser line extraction diagram of the present invention;
FIG. 5 is a three-dimensional coordinate transformation diagram of the present invention;
fig. 6 is a three-dimensional map of the wheel of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings and examples.
Referring to fig. 1, a general flowchart of a train wheel defect detection method based on three-dimensional information according to this embodiment is shown.
Fig. 2(a) is a schematic diagram of a certain collection device of the present invention for collecting wheel data, and fig. 2(b) is a schematic diagram of a collection device of the present invention.
As can be seen from fig. 1, the train wheel defect detection method based on three-dimensional information according to the embodiment has 6 main implementation steps, namely, data acquisition, data preprocessing, laser line extraction, laser coordinate transformation, three-dimensional wheel splicing and defect detection, and the implementation of each step is as follows:
firstly, data acquisition:
the collecting equipment of this embodiment is installed in the train and goes out the track both sides of putting in and out the storehouse throat section, will select the quantity of installation data acquisition equipment according to the wheel size, and the size that every collecting equipment gathered the wheel is certain, and the wheel is big more, and the collecting equipment quantity that needs is just more. The acquisition equipment comprises a laser scanner, a linear array camera, a light source, a transmitter, a receiver, temperature control and other components. In this embodiment, 10 above-mentioned collection devices are installed in the detection shed of the bullet train, 5 above-mentioned collection devices are installed on one side of the track, the layout of the devices is shown in fig. 2(b), which shows the layout of the devices on the left side of the track, and the layout on the right side is the same as the layout on the left side. In addition, the collecting equipment is arranged at the edge of the rail, keeps a safe distance with the edge of the rail, cannot be higher than the height of the rail surface, the angle between the collecting equipment and the rail is about 5 degrees, and the distance between the collecting equipment and the rail is about 600 millimeters.
Because the wheel image acquisition devices image from fixed positions, each passing wheel is scanned thousands of times, capturing its detailed state; a schematic of a device acquiring wheel data is shown in FIG. 2(a). The acquired wheel images are numbered in sequence according to the device layout and shooting order and transmitted over TCP to the server for storage, facilitating subsequent data processing and analysis.
Secondly, data preprocessing:
as shown in fig. 3, a part of the wheel original laser images acquired by a certain acquiring device may affect subsequent data processing and measurement accuracy due to interference of light sources, noise, and the like in the acquired data.
In this embodiment, Gaussian filtering is applied to the raw data to obtain a smooth laser line image, contrast stretching improves the definition of the laser line image, interference data irrelevant to the laser line are deleted by means of contour extraction and judgment, and the preprocessed laser line image is recorded as Img.
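The Gaussian-filter and contrast-stretch preprocessing can be sketched in NumPy. This is a minimal illustration, not the patent's implementation: the sigma, kernel radius and percentile limits are assumed values, and the laser frame below is synthetic.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Sampled 1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian filtering: convolve rows, then columns."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma) or 1)
    pad = len(k) // 2
    conv = lambda v: np.convolve(np.pad(v, pad, mode='edge'), k, 'valid')
    out = np.apply_along_axis(conv, 1, img.astype(float))  # rows
    return np.apply_along_axis(conv, 0, out)               # columns

def contrast_stretch(img, lo_pct=1, hi_pct=99):
    """Linearly stretch intensities between two percentiles to [0, 255]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    if hi <= lo:
        return np.zeros_like(img, dtype=np.uint8)
    out = (np.clip(img, lo, hi) - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)

# Preprocess a synthetic noisy laser-line frame.
rng = np.random.default_rng(0)
frame = rng.normal(20, 5, (64, 64))
frame[30:33, :] += 180            # bright horizontal laser stripe
img = contrast_stretch(gaussian_smooth(frame, sigma=1.5))
```

The stripe survives smoothing while sensor noise is suppressed, so the stretched image keeps the laser line bright against a dark background.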
Third, laser centerline extraction
For the laser line image obtained in step two, the center of the laser line needs to be further extracted to improve data processing precision.
The method comprises the following specific steps:
(1) adjusting and amplifying the Img by 2 times to achieve a sub-pixel precision image, and recording the sub-pixel precision image as ImgResize;
(2) detecting the edge point of the amplified laser line image ImgResize by adopting a self-adaptive edge detection algorithm, and performing morphological expansion processing on the edge detection image, wherein the image after the expansion processing is recorded as ImgDilate;
(3) counting a histogram of the ImgResize, and calculating an adaptive threshold according to histogram information;
(4) dividing the image ImgResize by using the adaptive threshold obtained in the step (3), and thinning the divided image, and recording as ImgThin;
(5) ImgThin is processed by region growing and merged with the ImgDilate points, and the center line of the laser image is finally extracted; as shown in fig. 4, the center line coordinates are (x, y).
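The patent's centerline extractor combines adaptive edge detection, dilation, adaptive thresholding, thinning and region growing. As a much-simplified illustration of the same goal, a sub-pixel (x, y) centerline, the sketch below uses a column-wise intensity centroid instead; the intensity threshold and the synthetic stripe parameters are assumed.

```python
import numpy as np

def extract_centerline(img, min_intensity=50):
    """Sub-pixel laser centerline by column-wise intensity centroid.

    For each image column, the center y is the intensity-weighted mean of
    pixels at or above min_intensity; columns with no laser signal are
    skipped.  Returns an (N, 2) array of (x, y) points.
    """
    pts = []
    rows = np.arange(img.shape[0], dtype=float)
    for x in range(img.shape[1]):
        col = img[:, x].astype(float)
        w = np.where(col >= min_intensity, col, 0.0)
        total = w.sum()
        if total > 0:
            pts.append((float(x), float((rows * w).sum() / total)))
    return np.array(pts)

# Synthetic frame: a Gaussian laser stripe whose center drifts with x.
img = np.zeros((60, 40))
for x in range(40):
    c = 25 + 0.2 * x                     # true (sub-pixel) center
    img[:, x] = 200 * np.exp(-0.5 * ((np.arange(60) - c) / 1.5) ** 2)
line = extract_centerline(img)
```

The centroid recovers the drifting center to well under a pixel, which is the precision the patent's resize-to-sub-pixel step is after.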
Laser line coordinate transformation
Convert the extracted set of image points (x, y) generated by the wheel-surface laser line into coordinates (X_w, Y_w, Z_w) in the real world coordinate system, i.e. convert the data shown in fig. 4 into the data shown in fig. 5; according to the principles of photogrammetry, the relationship between them is as follows:
x = f · X_c / Z_c
y = f · Y_c / Z_c
[X_c, Y_c, Z_c, 1]^T = H · [X_w, Y_w, Z_w, 1]^T
where f is the focal length of the camera and (X_c, Y_c, Z_c) are the coordinates in the camera coordinate system; H is the parameter obtained by calibration. The formula gives the correspondence between two-dimensional image point coordinates and the corresponding target three-dimensional coordinates.
In this embodiment, the installation positions of the laser scanner and the line-scan camera in the acquisition device are relatively fixed, and the camera is calibrated in advance through the black and white checkerboard to obtain the calibrated conversion parameter H of the camera.
Because the coordinates of the laser line image and the position of the calibration object are known, the laser scanner is calibrated by fitting the laser plane equation a·X_w + b·Y_w + c·Z_w + d = 0, written as AX = B, where a^2 + b^2 + c^2 = 1, A = [a, b, c], X = [X_w, Y_w, Z_w]^T and B = -d; the parameter A of the calibrated laser scanner is obtained by solving with the SVD (Singular Value Decomposition) method.
Once the laser scanner and the line camera are calibrated, any laser point on the two-dimensional image of the wheel can be converted through the coordinates to obtain the position of the wheel in the world coordinate system (the mounting positions of the line camera and the laser scanner are relatively fixed, the camera is calibrated first, then the laser scanner is calibrated to obtain the position of the laser scanner relative to the camera, and the data of the laser scanner is reflected through the imaging of the camera, so that the data of the two-dimensional laser image can be converted into the three-dimensional world coordinate system).
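The calibrate-then-triangulate principle described above can be illustrated with a small sketch: fit the laser plane a·X_w + b·Y_w + c·Z_w + d = 0 to known calibration points with SVD, then intersect the camera ray through a laser pixel with that plane. For simplicity the sketch assumes the world frame coincides with the camera frame (i.e. H is the identity), and all numeric values are invented.

```python
import numpy as np

def fit_plane_svd(points):
    """Fit plane a*X + b*Y + c*Z + d = 0 to 3-D points via SVD.

    The unit normal (a, b, c) is the right singular vector of the
    centered points with the smallest singular value.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                       # direction of least variance
    d = -n @ centroid
    return n[0], n[1], n[2], d

def pixel_to_world(x, y, f, plane):
    """Intersect the camera ray through pixel (x, y) with the laser plane.

    The ray is (X, Y, Z) = t * (x/f, y/f, 1); substituting into the plane
    equation gives t = -d / (a*x/f + b*y/f + c), i.e. t = Z.
    """
    a, b, c, d = plane
    t = -d / (a * x / f + b * y / f + c)
    return np.array([t * x / f, t * y / f, t])

# Calibrate against points known to lie on the plane Z = 500.
calib = [(0, 0, 500), (10, 0, 500), (0, 10, 500), (7, 3, 500)]
plane = fit_plane_svd(calib)
p = pixel_to_world(x=20.0, y=-8.0, f=1000.0, plane=plane)
```

With the plane Z = 500 and f = 1000, the pixel (20, -8) maps to the world point (10, -4, 500), showing how a 2-D laser point gains its depth coordinate.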
During data acquisition the distance between the wheel and the laser scanner changes: laser line data acquired at close range are wider and those acquired at longer range are narrower, so the acquired laser lines form an arc in the world coordinate system, and the three-dimensional wheel information must be corrected in the world coordinate system O-X_wY_wZ_w to facilitate subsequent splicing and reconstruction of the wheel data. Point cloud data of the non-wheel parts are segmented out with the point cloud clustering algorithm DBSCAN (Density-Based Spatial Clustering of Applications with Noise), point cloud data containing only wheel information are extracted, and finally the arc-shaped wheel is corrected into a regular rectangular one by a coordinate mapping method; as shown in FIG. 5, laser line data after coordinate transformation and correction are obtained.
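The DBSCAN segmentation step can be sketched with a tiny brute-force implementation; a production system would use a library version with spatial indexing, and the eps and min_pts values below are assumed.

```python
import numpy as np

def dbscan(points, eps=1.0, min_pts=4):
    """Minimal brute-force DBSCAN; returns one label per point (-1 = noise)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # Pairwise distances and neighborhood lists (O(n^2), fine for a sketch).
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                      # visited, or not a core point
        labels[i] = cluster
        seeds = list(neighbors[i])        # grow the cluster from point i
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    seeds.extend(neighbors[j])
        cluster += 1
    return labels

# A dense "wheel" cluster plus two far-away stray points.
rng = np.random.default_rng(1)
wheel = rng.normal(0, 0.2, (50, 3))
stray = np.array([[10.0, 10.0, 10.0], [-9.0, 12.0, 3.0]])
cloud = np.vstack([wheel, stray])
labels = dbscan(cloud, eps=1.0, min_pts=4)
wheel_only = cloud[labels == 0]           # keep only the wheel cluster
```

The dense cluster becomes label 0 and the strays stay at -1, which is exactly the non-wheel/wheel separation the patent performs before correction.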
Five, three-dimensional data stitching
In the embodiment, a plurality of acquisition devices are used for acquiring data of the same wheel, the acquisition devices are horizontally arranged, the data acquired by adjacent acquisition devices are overlapped, and the data of all the acquisition devices need to be spliced, so that complete three-dimensional information of the wheel is constructed.
In step four the coordinate conversion of the data acquired by each laser scanner has been completed; however, affine transformations still exist between the laser scanners, and to complete the three-dimensional wheel data splicing, the laser data acquired by all acquisition devices must be converted into the same world coordinate system. In this embodiment the camera coordinate system of the first acquisition device is defined as the world coordinate system. As shown in fig. 2(b), acquisition device 1 and acquisition device 2 are taken as an example, and the two groups of data to be spliced are recorded as data1 and data2 respectively. The specific steps are as follows:
(1) obtain the rotation matrix R0 and translation vector T0 of acquisition device 2 relative to acquisition device 1 through measurement and calibration; taking R0 and T0 as initial values, calculate the registration transformation parameters between adjacent point cloud data with the NICP (Normal Iterative Closest Point) algorithm to obtain more accurate rotation parameters R and translation parameters T;
(2) using R and T to process the data2 to complete affine transformation calculation, and recording the transformed data as data2 c;
(3) the overlapping area of data1 and data2c can be calculated from step (2); the overlapping areas are superimposed and fused, and the fused area is filtered by voxel filtering to remove redundant point cloud data, with the voxel size set to 2 × 2 × 2 mm. Through the above steps the splicing of the wheel data is completed.
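Steps (2) and (3), i.e. affine transformation with the calibrated parameters followed by voxel-grid fusion of the overlap, can be sketched as follows. The NICP refinement is omitted (R0 and T0 are taken as already exact) and all coordinates are invented.

```python
import numpy as np

def transform(points, R, T):
    """Apply the rigid transform p' = R @ p + T to an (N, 3) point array."""
    return points @ np.asarray(R).T + np.asarray(T)

def voxel_fuse(clouds, voxel=2.0):
    """Merge point clouds, keeping one point (the mean) per voxel cell."""
    pts = np.vstack(clouds)
    keys = np.floor(pts / voxel).astype(int)
    cells = {}
    for key, p in zip(map(tuple, keys), pts):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(c, axis=0) for c in cells.values()])

# data1 is in the reference frame; data2 is in its own frame, related to
# the reference by the calibrated (R0, T0).
data1 = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
R0 = np.eye(3)                   # calibrated rotation (identity here)
T0 = np.array([20.0, 0.0, 0.0])  # calibrated translation
data2 = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
data2c = transform(data2, R0, T0)            # lands at x = 20 and 30
fused = voxel_fuse([data1, data2c], voxel=2.0)
```

The duplicated point at x = 20 (seen by both devices) collapses into a single voxel, so the fused cloud has four points rather than five.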
Similarly, the data of other adjacent acquisition devices can be processed and spliced according to the method, and as shown in fig. 6, the three-dimensional data of the wheel spliced by the 5 acquisition devices is obtained.
Sixthly, detecting the defects of the wheels
From step five the three-dimensional information of the complete wheel is obtained; according to the local depth information of the wheel, a multi-stage judgment strategy is adopted to complete defect detection. The specific steps are as follows:
(1) take a certain row of data along the wheel tread direction and record it as R_i, then take the adjacent row of data and record it as R_{i+1}; compare the depth information of R_i and R_{i+1} in the column direction, marking the point with the larger value as 1 and the point with the smaller value as 0, where i = 1, 2, ..., N and N is the total number of rows of wheel data;
(2) traverse the whole wheel in sequence and process it as in step (1) to find the positions of regions with large change, i.e. all regions marked 1 are suspicious defect positions;
(3) cluster the regions marked 1, find all points meeting the conditions, and record each clustered region as an ROI;
(4) compare each ROI with its adjacent regions by depth mean, extract the regions meeting the conditions and determine them as defects, and count the defect regions to calculate the area, position and depth information of all defect regions of the wheel.
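The multi-stage strategy above can be sketched as follows. This is a simplified reading of steps (1) to (4): adjacent rows are compared by absolute depth difference against an assumed threshold rather than the patent's larger/smaller marking, and the clustering is plain 4-neighbour connected-component labelling; the thresholds and the depth map are invented.

```python
import numpy as np
from collections import deque

def detect_defects(depth, step_thresh=0.5, min_area=3):
    """Multi-stage defect screening on a wheel depth map (rows x cols).

    Stage 1: mark points whose depth differs from the next row by more
    than step_thresh.  Stage 2: cluster the marks into connected regions.
    Stage 3: keep regions of at least min_area and report statistics.
    """
    marks = np.zeros_like(depth, dtype=bool)
    marks[:-1, :] = np.abs(np.diff(depth, axis=0)) > step_thresh
    seen = np.zeros_like(marks)
    defects = []
    for r, c in zip(*np.nonzero(marks)):
        if seen[r, c]:
            continue
        region, queue = [], deque([(r, c)])
        seen[r, c] = True
        while queue:                      # 4-neighbour BFS labelling
            y, x = queue.popleft()
            region.append((y, x))
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if 0 <= ny < marks.shape[0] and 0 <= nx < marks.shape[1] \
                        and marks[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        if len(region) >= min_area:
            ys, xs = zip(*region)
            defects.append({
                "area": len(region),
                "position": (min(ys), min(xs), max(ys), max(xs)),
                "max_depth": float(depth[list(ys), list(xs)].max()),
            })
    return defects

# Flat tread (depth 0) with one pit of depth 2 covering a 3x3 patch.
depth = np.zeros((20, 20))
depth[8:11, 5:8] = 2.0
defects = detect_defects(depth)
```

The pit's leading and trailing edges each produce one marked region, and the report carries the area, bounding box and maximum depth the patent's step (4) asks for.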
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (8)

1. A train wheel defect detection method based on three-dimensional information is characterized by comprising the following steps:
acquiring and preprocessing an image to obtain a laser line image;
extracting a laser central line: extracting a central line of the obtained laser image, wherein the two-dimensional coordinate is (x, y);
laser coordinate transformation: converting the set of image points generated by the extracted wheel-surface laser lines, namely (x, y), into coordinates (X_w, Y_w, Z_w) in a real world coordinate system;
And (3) three-dimensional wheel data splicing: the coordinate system of a certain acquisition device is used as a reference object and is defined as a world coordinate system, the coordinate systems of other acquisition devices are subjected to affine transformation, and the overlapped areas are subjected to fusion processing to obtain three-dimensional information of the complete wheel;
and (3) defect detection: and according to the local depth information of the wheel, completing defect detection by adopting a multi-stage judgment strategy.
2. The train wheel defect detection method based on three-dimensional information as claimed in claim 1, wherein the defect detection comprises: firstly, a row of data is taken along the tread direction of the wheel, the depth information is compared with that of the adjacent row in the column direction, the point with the larger value is marked as 1, and the point with the smaller value is marked as 0; then traversing the whole wheel, and finding out the position of the area with larger change, namely all areas marked as 1 are suspicious defect positions; and finally, carrying out local clustering processing on the regions marked as 1, comparing the clustered regions one by one, finding out all regions meeting the conditions as defects, and counting the defect regions to calculate the area, position and maximum depth information of the defect regions.
3. The train wheel defect detection method based on three-dimensional information as claimed in claim 2, wherein the laser centerline extraction: adjusting and amplifying the preprocessed laser line image by multiple times to achieve a sub-pixel precision image; detecting the edge points of the amplified laser line image by adopting a self-adaptive edge detection algorithm, and performing morphological expansion processing on the edge detection image; calculating a self-adaptive threshold value through a histogram of the laser line image after statistical amplification; dividing the amplified laser line image by using a self-adaptive threshold value, and thinning the divided image; and combining the expanded points through region growing processing, and finally extracting the central line of the obtained laser image, wherein the coordinate of the central line is (x, y).
4. The train wheel defect detection method based on three-dimensional information as claimed in any one of claims 1-3, wherein the image acquisition: a plurality of acquisition devices are distributed beside the track to form an acquisition device array, and each acquisition device comprises a line-scan camera and a laser scanner.
5. The train wheel defect detection method based on three-dimensional information as claimed in any one of claims 1-3, wherein the laser coordinate transformation: the relationship between (x, y) and the coordinates (X_w, Y_w, Z_w) in the world coordinate system is as follows:
x = f · X_c / Z_c
y = f · Y_c / Z_c
[X_c, Y_c, Z_c, 1]^T = H · [X_w, Y_w, Z_w, 1]^T
wherein f is the focal length of the line-scan camera, (X_c, Y_c, Z_c) are coordinates in the line-scan camera coordinate system, and H is a parameter obtained by calibrating the line-scan camera; the formula gives the correspondence between two-dimensional image point coordinates and the corresponding target three-dimensional coordinates.
6. The method of claim 5, wherein the three-dimensional wheel information is corrected in the world coordinate system O-X_wY_wZ_w to facilitate splicing and reconstruction of the wheel data: point cloud data of the non-wheel parts are segmented out with the point cloud clustering algorithm DBSCAN, point cloud data containing only wheel information are extracted, the arc-shaped wheel is corrected into a regular rectangular one by a coordinate mapping method, and laser line data after coordinate transformation and correction are obtained.
7. The train wheel defect detection method based on three-dimensional information as claimed in any one of claims 1-3, wherein the three-dimensional wheel data stitching: the method comprises the steps of taking a camera coordinate system of one acquisition device as a reference object and defining the camera coordinate system as a world coordinate system, obtaining an external parameter rotation matrix R0 and a translational vector T0 of other acquisition devices relative to the reference object through measurement and calibration, taking R0 and T0 as initial values, calculating transformation parameters between adjacent point cloud data through an NICP algorithm to obtain more accurate rotation parameters R and translation parameters T, finally carrying out affine transformation on respective data through R and T, carrying out fusion processing on overlapping areas, and completing the splicing of wheel data.
8. The train wheel defect detection method based on three-dimensional information as claimed in any one of claims 1-3, wherein the preprocessing: the original data are denoised by Gaussian filtering to obtain a smooth laser line image, the definition of the laser line image is improved by contrast stretching, and finally interference data irrelevant to the laser line are deleted by means of contour extraction and judgment.
CN202210371673.2A 2022-04-11 2022-04-11 Train wheel defect detection method based on three-dimensional information Pending CN114820474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210371673.2A CN114820474A (en) 2022-04-11 2022-04-11 Train wheel defect detection method based on three-dimensional information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210371673.2A CN114820474A (en) 2022-04-11 2022-04-11 Train wheel defect detection method based on three-dimensional information

Publications (1)

Publication Number Publication Date
CN114820474A true CN114820474A (en) 2022-07-29

Family

ID=82534642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210371673.2A Pending CN114820474A (en) 2022-04-11 2022-04-11 Train wheel defect detection method based on three-dimensional information

Country Status (1)

Country Link
CN (1) CN114820474A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205278A (en) * 2022-08-02 2022-10-18 昆山斯沃普智能装备有限公司 Electric vehicle chassis scratch detection method and system
CN115205278B (en) * 2022-08-02 2023-05-02 昆山斯沃普智能装备有限公司 Electric automobile chassis scratch detection method and system
WO2024108971A1 (en) * 2022-11-21 2024-05-30 上海交通大学 Agv system for vehicle chassis corrosion evaluation
CN117523111A (en) * 2024-01-04 2024-02-06 山东省国土测绘院 Method and system for generating three-dimensional scenic spot cloud model
CN117523111B (en) * 2024-01-04 2024-03-22 山东省国土测绘院 Method and system for generating three-dimensional scenic spot cloud model

Similar Documents

Publication Publication Date Title
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN114820474A (en) Train wheel defect detection method based on three-dimensional information
Cheng et al. 3D building model reconstruction from multi-view aerial imagery and lidar data
CN110230998B (en) Rapid and precise three-dimensional measurement method and device based on line laser and binocular camera
CN108597009B (en) Method for detecting three-dimensional target based on direction angle information
CN110232389A (en) A kind of stereoscopic vision air navigation aid based on green crop feature extraction invariance
CN110569861B (en) Image matching positioning method based on point feature and contour feature fusion
CN106996748A (en) A kind of wheel footpath measuring method based on binocular vision
CN109961417A (en) Image processing method, device and mobile device control method
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
CN112184725B (en) Method for extracting center of structured light bar of asphalt pavement image
CN108510544B (en) Light strip positioning method based on feature clustering
CN116358449A (en) Aircraft rivet concave-convex amount measuring method based on binocular surface structured light
CN113884002A (en) Pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion
CN116486287A (en) Target detection method and system based on environment self-adaptive robot vision system
CN113066050A (en) Method for resolving course attitude of airdrop cargo bed based on vision
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN112288682A (en) Electric power equipment defect positioning method based on image registration
CN106709432B (en) Human head detection counting method based on binocular stereo vision
CN115330684A (en) Underwater structure apparent defect detection method based on binocular vision and line structured light
CN113219472B (en) Ranging system and method
CN111080685A (en) Airplane sheet metal part three-dimensional reconstruction method and system based on multi-view stereoscopic vision
Jiang et al. Foreign object recognition technology for port transportation channel based on automatic image recognition
CN114120354A (en) Human body detection and positioning method and device applied to air conditioner and intelligent sensing system
CN112016558B (en) Medium visibility recognition method based on image quality

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: 210019 floor 12, building 01, No. 8, Bailongjiang East Street, Jianye District, Nanjing, Jiangsu Province

Applicant after: NANJING TYCHO INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 113, 11 / F, building 03, No. 18, Jialing Jiangdong Street, Jianye District, Nanjing City, Jiangsu Province, 210019

Applicant before: NANJING TYCHO INFORMATION TECHNOLOGY Co.,Ltd.

SE01 Entry into force of request for substantive examination