CN115760828A - Method for detecting out-of-roundness of wheels of three-dimensional mapping train - Google Patents


Publication number
CN115760828A
Authority
CN
China
Prior art keywords
wheel
dimensional image
characteristic
train
dimensional
Prior art date
Legal status (assumed, not a legal conclusion)
Pending
Application number
CN202211512463.7A
Other languages
Chinese (zh)
Inventor
丁建明
徐梦楠
吴蔚
陆志豪
Current Assignee
Chengdu Tielianke Technology Co ltd
Original Assignee
Chengdu Tielianke Technology Co ltd
Priority date
Application filed by Chengdu Tielianke Technology Co ltd
Priority to CN202211512463.7A
Publication of CN115760828A


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a three-dimensional mapping train wheel out-of-roundness detection method in the technical field of railway wheel out-of-roundness detection. While the train runs, characteristic points are marked on the wheels and two-dimensional images of the local state of each wheel are acquired. The method further comprises the following steps: sequentially performing feature point extraction, spatial extreme point detection, accurate feature point positioning, feature point direction matching, feature point description and feature point matching on the local two-dimensional images to obtain the motion pose data of the wheels between adjacent images; optimizing the wheel motion pose data to obtain globally consistent wheel motion track data and wheel two-dimensional image data; building wheel-space three-dimensional image data from the wheel motion track data and the wheel two-dimensional image data; and comparing the wheel-space three-dimensional image data with standard wheel data to obtain the out-of-roundness information of the wheel. The invention readily captures the essential characteristics of the image and improves matching stability and noise resistance.

Description

Method for detecting out-of-roundness of wheels of three-dimensional mapping train
Technical Field
The invention relates to the technical field of railway wheel out-of-roundness detection, in particular to a three-dimensional mapping train wheel out-of-roundness detection method.
Background
After a railway vehicle has been in service for some time, the wheel tread wears unevenly. Once the wear reaches a certain level, the resulting out-of-roundness subjects the vehicle to additional vibration, impact and noise, degrading ride stability and threatening the running safety of the train.
Currently, the mainstream wheel out-of-roundness detection techniques include various manual gauges, the stress-strain method, ultrasonic flaw detection, electric signal monitoring and the like. These methods cannot perform online detection, consume considerable manpower and have low detection efficiency; as a result, wheels cannot be overhauled in time, and none of them can visualize the out-of-roundness of a wheel. A dynamic, non-contact measurement method for train wheel out-of-roundness is therefore urgently needed.
The central-point chord measuring method is a relatively new vehicle-mounted, non-contact, dynamic wheel out-of-roundness measuring method. It requires the detection device to be installed in advance: the main body carries three laser displacement sensors that measure from the measurement centre to the wheel tread, a rotary encoder records the acquisition step length, the data are transmitted to a host computer via a WIFI module, the measurements are restored through an inverse filter, and the real out-of-roundness of the wheel under test is finally obtained. The method is accurate, but installation is complex, each wheel needs its own adjusted measuring device, and it is applicable only in depots or when the train runs at low speed; online measurement of a running train is not possible.
Chinese patent ZL202210372005.1 discloses a train wheel out-of-roundness detection method based on three-dimensional information. It combines several linear-array cameras with a 3D laser scanner to generate two-dimensional images and laser scan data with depth information, extracts the tread areas swept by the laser, and reconstructs the wheel tread profile with depth information from an elliptical model. The measuring equipment is installed beside the track at the throat section of the depot, with five units on each side of the track. The method achieves dynamic, non-contact out-of-roundness measurement of train wheels, but its visual effect is poor, it cannot visualize the out-of-roundness of the wheel, and the laser scanner is costly.
Disclosure of Invention
The invention provides a method for detecting out-of-roundness of wheels of a three-dimensional mapping train.
In order to alleviate the above problems, the technical scheme adopted by the invention is as follows:
the invention provides a method for detecting out-of-roundness of a wheel of a three-dimensional mapping train, which is used for marking the characteristic point of the wheel when the train runs and acquiring a two-dimensional image of the local state of the wheel when the train runs, and further comprises the following steps:
s100, generating a Gaussian difference pyramid according to the local state two-dimensional image, and sequentially performing spatial extreme value feature point detection, feature point accurate positioning, feature point direction information matching, feature point description and feature point matching according to the Gaussian difference pyramid to obtain motion pose data of wheels between adjacent images;
s200, sequentially carrying out nonlinear least square optimization and loop detection on the motion pose image data of the wheels to obtain globally consistent wheel motion track data and wheel two-dimensional image data;
s300, performing monocular dense reconstruction according to the wheel motion trail data and the wheel two-dimensional image data to establish wheel space three-dimensional image data;
s400, calculating the distance between the wheel space three-dimensional image point cloud and the standard wheel point cloud to obtain the out-of-roundness information of the wheel.
In a preferred embodiment of the present invention,
marking the characteristic points of the wheels by adopting a wheel characteristic point marking device;
the wheel characteristic point marking device comprises two tread characteristic marking devices arranged on the outer side of a train track and two wheel flange characteristic marking devices arranged on the inner side of the track;
the characteristic marking device comprises a characteristic marking device box and a plurality of characteristic marking pens, wherein the characteristic marking pens are arranged on the characteristic marking device box, and when the wheel is in contact with the characteristic marking pens, the characteristic point marking of the wheel is realized;
acquiring a two-dimensional image of a local state of a wheel when a train runs by adopting a monocular camera set;
the monocular camera set is arranged on the train track and is positioned behind the wheel characteristic point marking device;
when the wheel passes through the feature marking pen closest to the monocular camera set, the monocular camera set starting switch is triggered, the monocular camera set starts to shoot and acquire a two-dimensional image of the local state of the wheel when the train runs, when the train completely passes through the wheel feature point marking device, the time for the wheel to contact the feature marking pen is longer than the set time, and the monocular camera set is closed.
In a preferred embodiment of the present invention,
The real-time running speed of the train and the bogie axle distance are calculated from the time difference between a wheel contacting the first and the last feature marking pens and the known distance between those two pens;
The shooting frequency of the monocular camera set is determined from the real-time running speed of the train: it must ensure that 24 local-state two-dimensional images are captured while one wheel completely passes the wheel characteristic point marking device. Provided this number of images guarantees robust three-dimensional reconstruction, keeping it low also reduces the real-time image-processing load on the computer.
In a preferred embodiment of the present invention, before step S100 is performed, the local-state two-dimensional images of the different wheels are classified and stored in separate libraries: based on the bogie axle distance of the train, the intervals between the capture times determine which images were taken of the same wheelset, so that images of the same wheel are grouped together.
In a preferred embodiment of the present invention, in step S100, the Gaussian difference pyramid is generated as follows: Gaussian blurs of different scales are applied to the wheel local-state two-dimensional image by convolving a blur template, computed from a Gaussian function, with the original image; the blurred image is then down-sampled several times, each down-sampling yielding one layer of a Gaussian pyramid; several such layers form one group of the Gaussian pyramid, and subtracting adjacent upper and lower layers within each group yields the Gaussian difference pyramid.
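A minimal numpy sketch of the difference-of-Gaussian construction just described (the sigma values and the test image are illustrative; a full implementation would also stack several down-sampled octaves):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalised 1-D Gaussian blur template."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian convolution: rows, then columns ('same' keeps size)."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_octave(img, sigmas=(1.0, 1.6, 2.56)):
    """Blur at increasing scales and subtract adjacent blurred layers."""
    layers = [blur(img, s) for s in sigmas]
    return [b - a for a, b in zip(layers, layers[1:])]

img = np.zeros((32, 32)); img[16, 16] = 1.0  # a single bright dot
dogs = dog_octave(img)
print(len(dogs))  # 2 difference-of-Gaussian layers from 3 blur scales
```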
Extracting the feature points in different scale spaces guarantees their scale invariance; the feature points are also invariant to image angle and rotation, and a direction is assigned to each feature point by computing its gradient.
In a preferred embodiment of the present invention, the spatial extreme feature point detection uses the Gaussian difference pyramid to identify potentially scale- and rotation-invariant feature points, and spatial extreme point detection yields a set of spatial extreme feature points; the accurate feature point positioning fits a three-dimensional quadratic function to this set to determine the positions and scales of the feature points precisely, while removing low-contrast feature points and unstable edge responses so as to enhance matching stability and noise resistance; the feature point direction matching obtains a stable direction for each point of the set using an image gradient method; the feature point description builds a descriptor, a group of feature vectors, for each spatial extreme feature point, so that the point is described in a way that does not change with varying imaging conditions; and the feature point matching estimates the motion pose data of the wheels from the set of spatial extreme feature points.
For the feature point description, the pixel region around each feature point is divided into blocks, a gradient histogram is computed within each block, and a vector abstractly expressing the image information of the region is generated; this information covers not only the feature point itself but also the neighbouring points that contribute to it.
The feature point matching is realized by computing the 128-dimensional Euclidean distance between two groups of feature points; the smaller the distance, the higher the similarity, and when the distance falls below a set threshold the match is judged successful. The relationship between the motion and the pose of the wheel is estimated by matching the feature points of each key frame: matched features are inserted per key frame, the localization of the frame is computed, the wheel landmark points are computed by triangulation, and the motion and pose of the wheel are estimated from all key frames and landmark points.
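The Euclidean-distance matching rule above can be sketched as follows; the 128-dimensional descriptors and the threshold are synthetic stand-ins for real SIFT-style descriptors.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, thresh):
    """Nearest-neighbour matching of 128-D descriptors by Euclidean
    distance; a pair is accepted only if the distance is below `thresh`."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        j = int(np.argmin(dists))
        if dists[j] < thresh:
            matches.append((i, j, float(dists[j])))
    return matches

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 128))
b = a + 0.01 * rng.normal(size=(5, 128))  # slightly perturbed copies of a
matches = match_descriptors(a, b, thresh=0.5)
print(len(matches))  # 5: each descriptor finds its perturbed twin
```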
In a preferred embodiment of the present invention, in step S200, the nonlinear least square optimization comprises: estimating the conditional distribution of the wheel state variables in batches by the Bayes rule and solving the maximum likelihood estimation to obtain better estimates of the wheel motion and pose; substituting these estimates into the motion and observation equations of SLAM, providing iterative initial values by a PnP algorithm, and finally fine-tuning the estimates by iterating a Gauss-Newton method until the minimum is reached, yielding locally consistent wheel motion track data and wheel two-dimensional image data.
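The Gauss-Newton fine-tuning named above can be illustrated on a small nonlinear least-squares problem; the exponential model below is only a stand-in, since the patent does not spell out its SLAM motion and observation equations.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=15):
    """Refine a parameter vector by repeated linearisation:
    solve the normal equations J^T J dx = -J^T r at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x

# toy problem: recover (a, b) from exact samples of y = a * exp(b * t)
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda x: x[0] * np.exp(x[1] * t) - y
jac = lambda x: np.stack([np.exp(x[1] * t),
                          x[0] * t * np.exp(x[1] * t)], axis=1)
est = gauss_newton(res, jac, x0=[1.0, -1.0])
print(np.round(est, 4))
```

Starting from the rough initial guess (1.0, -1.0), the iterates converge to the true parameters (2.0, -1.5), mirroring how a PnP initial value is refined.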
In a preferred embodiment of the present invention, the loop detection comprises: clustering the locally consistent wheel motion track data and wheel two-dimensional image data with the K-means algorithm and describing every clustered wheel motion and pose image with a description vector; computing the L1 norm of each description vector and comparing the norms to obtain the similarity between the description vectors, thereby defining the degree of similarity between the images. If the similarity exceeds 90%, the loop is judged to be detected successfully. Since a detected loop may span several frames, similar loops are clustered into one class so that the algorithm does not repeatedly detect loops of the same class, finally yielding globally consistent wheel motion track data and wheel two-dimensional image data.
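The L1-norm similarity test above can be sketched with bag-of-words-style description vectors; the histograms here are invented for illustration.

```python
import numpy as np

def l1_similarity(v1, v2):
    """Similarity score in [0, 1] from the L1 norm of the difference of
    normalised description vectors; 1 means identical distributions."""
    v1 = v1 / v1.sum()
    v2 = v2 / v2.sum()
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()

hist_a = np.array([4.0, 1.0, 0.0, 3.0])
hist_b = np.array([4.0, 1.0, 0.0, 3.0])  # same word distribution as hist_a
hist_c = np.array([0.0, 0.0, 8.0, 0.0])  # disjoint word distribution
s_ab = l1_similarity(hist_a, hist_b)
s_ac = l1_similarity(hist_a, hist_c)
print(s_ab, s_ac)  # 1.0 (loop candidate, > 0.9) and 0.0 (no loop)
```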
In a preferred embodiment of the present invention, step S300 specifically comprises: according to the wheel motion track data and the wheel two-dimensional image data, the depth values of the wheel motion and pose images are computed by triangulation and the uncertainty of the depth is derived from the geometric relation; the current depth observation is then fused into the previous estimate. Every pixel of the current wheel motion and pose image is traversed, with the first captured wheel pose image taken as the reference frame: each pixel of the reference frame is converted from pixel coordinates to three-dimensional coordinates in the camera coordinate system, multiplied by the rotation matrix to transform it into the camera coordinate system of the current frame, and then projected to pixel coordinates in the current frame. Under the maximum and minimum depth hypotheses, the transformed three-dimensional point of the reference frame is projected twice, yielding two projection coordinates whose connecting line is the epipolar line to be searched; the best matching block on this line is found by NCC, and after a successful search the depth map is updated, completing the construction of the wheel-space three-dimensional image data.
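The NCC block search along the epipolar line can be sketched as follows; the epipolar line is assumed to have been computed already and is handed in as a list of candidate pixel centres.

```python
import numpy as np

def ncc(block_a, block_b):
    """Zero-mean normalised cross-correlation between two image blocks."""
    a = block_a - block_a.mean()
    b = block_b - block_b.mean()
    return float((a * b).sum() /
                 (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def best_match_on_line(ref_block, img, line_pts, half=1):
    """Scan candidate centres along a (precomputed) epipolar line and
    return the centre whose neighbourhood best correlates with ref_block."""
    best, best_score = None, -2.0
    for (r, c) in line_pts:
        cand = img[r - half:r + half + 1, c - half:c + half + 1]
        s = ncc(ref_block, cand)
        if s > best_score:
            best, best_score = (r, c), s
    return best, best_score

img = np.zeros((9, 9)); img[4, 6] = 1.0  # bright feature at row 4, column 6
ref = np.zeros((3, 3)); ref[1, 1] = 1.0  # reference 3x3 patch with central peak
pt, score = best_match_on_line(ref, img, [(4, c) for c in range(1, 8)])
print(pt)  # (4, 6): the search locks onto the matching block
```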
In a preferred embodiment of the present invention, step S400 specifically comprises: transforming the wheel-space three-dimensional image point cloud and the standard wheel point cloud into the world coordinate system, and then computing the Euclidean distance between them to obtain the out-of-roundness information of the wheel.
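The point-cloud comparison of step S400 can be illustrated in two dimensions; here the standard wheel is modelled as an ideal circle of an assumed radius, a simplification of the standard wheel point cloud, and the measured profile is a synthetic third-order polygonised wheel.

```python
import numpy as np

def out_of_roundness(measured, radius):
    """Peak-to-peak radial deviation of a measured wheel profile
    (N x 2 points, wheel centre at the origin) from an ideal circle."""
    r = np.linalg.norm(measured, axis=1)  # distance of each point to the centre
    dev = r - radius                      # signed radial deviation per point
    return dev.max() - dev.min()

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r = 420.0 + 0.3 * np.cos(3 * theta)  # assumed 420 mm wheel, 0.3 mm 3rd-order ripple
cloud = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
oor = out_of_roundness(cloud, 420.0)
print(round(oor, 3))  # 0.6 mm peak-to-peak out-of-roundness
```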
Compared with the prior art, the invention has the beneficial effects that:
compared with the conventional detection method, the method for dynamically detecting the out-of-roundness of the wheel has the advantages that the three-dimensional image information of the out-of-roundness of the wheel, which is obtained by the method, can be well visually presented through a computer, so that the decision making can be better assisted by manpower;
the wheel image processing method has the advantages that the Gaussian difference pyramids of different wheel images are expressed, the Gaussian difference pyramids comprise a series of low-pass filters, the advantages of spanning a large frequency range are achieved, multi-scale description can be better conducted on the wheel images, the wheel image characteristics are easier to obtain, meanwhile, the problems of wheel image rotation, scale scaling, brightness change and the like are solved, the wheel image processing method has better stability on wheel image visual angle change, noise and the like, wheel characteristic points are accurately and quickly matched, and motion pose data of wheels between adjacent images are efficiently obtained;
compared with the traditional single-scale image information processing technology, the method for the scale space is easier to obtain the essential characteristics of the image, the fuzzy degree of each scale image in the scale space is gradually increased, and the forming process of the target on the retina when the distance from the target to the target is from near to far can be simulated;
by utilizing a loop detection technology, the constraint of the wheel pose is increased, the accumulated error is reduced, more accurate wheel global track data is obtained, the point cloud distance of the wheel is smaller than the initial point cloud distance through continuous optimization, the wheel pose deviation is corrected, the edge ghost image is reduced, and the wheel geometric structure is clear;
the positions and the scales of the key points are accurately determined by fitting a three-dimensional quadratic function, and meanwhile, the key points with low contrast and unstable edge response points are removed, so that the matching stability is enhanced, and the anti-noise capability is improved;
aiming at the problems of complex installation, low applicable vehicle speed, incapability of realizing out-of-roundness three-dimensional visualization and the like of the conventional vehicle-mounted non-contact dynamic wheel out-of-roundness measuring method based on a central point chord measuring method, the technical scheme does not need to repeatedly install and adjust detection equipment for each wheel, only needs to install the detection system on a train arrival line and regularly maintains the detection system to detect out-of-roundness information of a plurality of wheels, is suitable for measuring the vehicle speed of 40-80km/h which is higher than the measuring vehicle speed of 20km/h of the method, can provide the out-of-roundness three-dimensional visualization information of the wheels with better effect, and is convenient for a worker to observe, identify and write a report;
aiming at the problems that the visual effect of the train wheel out-of-roundness detection method based on three-dimensional information is poor and the cost of a laser scanner is high, which are disclosed in the Chinese patent ZL202210372005.1, the technical scheme adopts a monocular camera set, the hardware cost is greatly reduced, and the monocular camera has a better visual imaging effect than the laser scanner.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flow chart of the three-dimensional mapping wheel out-of-round detection method;
FIG. 2 is a layout view of a wheel out-of-round detection device;
FIG. 3 is a schematic view of a wheel feature point marking device;
FIG. 4 is a front view of a camera assembly;
FIG. 5 is a schematic diagram of a camera set for capturing dynamic wheels from the side;
fig. 6 is a flowchart of estimation of the wheel motion pose data.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of the train-wheel three-dimensional mapping and online visual wheel out-of-roundness detection method of this embodiment.
As shown in fig. 2, the wheel feature point marking devices 3 are disposed on the inner and outer sides of the rail 1, and the wheel feature point marking devices 3 include four feature marking devices, two tread feature marking devices disposed on the outer side of the rail, and two rim feature marking devices installed on the inner side of the rail, respectively.
Fig. 3 is a schematic diagram of a single feature marking device, which comprises a feature marking device box 302 and, matched to the configuration of the wheel 2, 24 feature marking pens 301 arranged on the box 302. The box 302 of the rim feature marking device is designed slightly lower than that of the tread feature marking device, and the feature marking pens 301 on each box 302 are equally spaced.
When the device is installed, the distance along the length of the rail 1 between the first (front-most) feature marking pen 301 and the last feature marking pen 301 on the feature marking device box 302 is made larger than the wheel circumference. This ensures that the tread and rim feature points are marked uniformly, clearly and densely as the wheel passes, reducing the image-processing difficulty and making the wheel feature matching more accurate.
As shown in fig. 2, the wheel feature point marking device 3 is arranged in front of the monocular camera set 4 (the travelling direction of the train being "front"), at a distance from the camera set larger than the wheel diameter. When a wheel rolls over the last feature marking pen 301 (the pen closest to the monocular camera set 4), the start switch of the camera set is triggered and shooting begins; once the train has completely passed the wheel feature point marking device 3 and no wheel has contacted a feature marking pen 301 for longer than the set time, the camera set is closed.
The real-time running speed of the train is obtained from the time difference between a wheel contacting the first feature marking pen 301 and the last feature marking pen 301, combined with the known distance between the two pens;
The system sets the shooting frequency of the monocular camera set according to the real-time running speed of the train: it must be high enough that 24 pictures are obtained for one wheel and, provided this number guarantees robust three-dimensional reconstruction, no higher, so that the real-time image-processing load on the computer is reduced and resources are saved.
Generally, a bogie of a train carries two wheelsets and one train has several bogies; the interval between the two wheelsets of one bogie is short. The time at which each wheelset contacts the last feature marking pen 301 (i.e. the marking time) is recorded, and from the recorded contact-time intervals the system numbers and sorts the local-state two-dimensional images shot by the camera set and stores them in separate libraries, so that at feature-matching time the images are ordered and belong to the same wheel.
The monocular camera set 4 comprises 24 monocular cameras, six arranged on each of the inner and outer sides of the left and right rails of the track 1. The spacing of the cameras is adjusted to the structure and condition of the track 1, the distance between the first and the last camera is larger than the wheel circumference, and the camera angles are adjusted toward the wheel.
As shown in fig. 4, the left and right monocular cameras must be able to photograph a large area of the wheel tread clearly while capturing as many of the wheel feature points marked by the feature marking pens 301 as possible.
In practice, the angle between a monocular camera and the horizontal plane lies in the range of 30-60 degrees: too small an angle shrinks the photographed tread area and reduces the number of visible wheel feature points, lowering the usefulness of each picture and complicating feature computation and matching, while too large an angle reduces the sharpness of the picture and hence the accuracy of the out-of-roundness calculation.
Fig. 5 is a schematic side view of the cameras shooting a wheel while the train runs. The wheel track position 201 is the position of the wheel 2 before it moves; the two monocular cameras 401 and 402 on the inner and outer sides of one rail photograph the local inner and outer portions of the wheel respectively, and the monocular camera set 4 records the complete displacement and movement track of the wheel over one revolution.
After the device is used for acquiring the two-dimensional image of the local state of the train wheel, the three-dimensional mapping out-of-roundness detection of the train wheel is carried out based on the acquired two-dimensional image of the local state, as shown in fig. 1 and 6, the specific steps are as follows:
step 100, generating a Gaussian difference pyramid according to the local state two-dimensional image, and sequentially performing space extreme value feature point detection, feature point accurate positioning, feature point direction information matching, feature point description and feature point matching according to the Gaussian difference pyramid to obtain motion pose data of wheels between adjacent images.
Step 101, generating a difference of gaussian pyramid (DOG pyramid).
Gaussian blurs of different scales are applied to the wheel local-state two-dimensional image: a blur template computed from a Gaussian function is convolved with the original image to blur it, the blurred image is down-sampled several times, each down-sampling yields one layer of a Gaussian pyramid, several such layers form one group of the Gaussian pyramid, and subtracting adjacent upper and lower layers within each group gives the Gaussian difference pyramid.
Compared with traditional single-scale image information processing, the scale-space method captures the essential characteristics of an image more easily; the blur increases gradually from scale to scale, simulating how a target forms on the human retina as the observer moves from near to far.
And 102, detecting the spatial extreme characteristic points to obtain a spatial extreme characteristic point set.
DOG function extreme points (spatial extreme characteristic points) are searched between adjacent layers of the Gaussian difference pyramid by comparing each pixel with all of its neighbours in both the image domain and the scale domain. In the two-dimensional image plane, the central pixel is compared with the 8 surrounding points of its 3×3 neighbourhood; within the same group of the scale space, it is also compared with the 2×9 points of the two layers directly above and below. Each candidate is therefore compared with 26 points in total, which ensures that extreme points are detected in both the scale space and the two-dimensional image space. The resulting spatial extreme characteristic point set completes the spatial extreme point detection.
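The 26-neighbour test above can be written compactly. An illustrative sketch (function name and tie-handling are assumptions): the candidate must be the maximum or the minimum of the 3×3×3 block spanning its own DoG layer and the two adjacent layers.

```python
import numpy as np

def is_scale_space_extremum(dog_below, dog_mid, dog_above, r, c):
    """True if pixel (r, c) of the middle DoG layer is an extremum among
    its 26 neighbours: 8 in its own layer plus 9 in each adjacent layer."""
    val = dog_mid[r, c]
    patch = np.stack([dog_below[r-1:r+2, c-1:c+2],
                      dog_mid[r-1:r+2, c-1:c+2],
                      dog_above[r-1:r+2, c-1:c+2]])
    return bool(val == patch.max() or val == patch.min())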
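The 26-neighbour test above can be written compactly. An illustrative sketch (function name and tie-handling are assumptions): the candidate must be the maximum or the minimum of the 3×3×3 block spanning its own DoG layer and the two adjacent layers.

```python
import numpy as np

def is_scale_space_extremum(dog_below, dog_mid, dog_above, r, c):
    """True if pixel (r, c) of the middle DoG layer is an extremum among
    its 26 neighbours: 8 in its own layer plus 9 in each adjacent layer."""
    val = dog_mid[r, c]
    patch = np.stack([dog_below[r-1:r+2, c-1:c+2],
                      dog_mid[r-1:r+2, c-1:c+2],
                      dog_above[r-1:r+2, c-1:c+2]])
    return bool(val == patch.max() or val == patch.min())
```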
And 103, accurately positioning the characteristic points to obtain a space extreme value characteristic point set with accurate positioning.
The detected discrete spatial extreme points (spatial extreme characteristic points) are first interpolated by sub-pixel interpolation to obtain continuous extreme points. A three-dimensional quadratic function is then fitted to the interpolated points and the offset of the interpolation centre is computed; whenever the centre offset in any dimension exceeds 0.5, the position of the current characteristic point is updated and the interpolation is repeated at the new position, iterating until convergence. Low-contrast and unstable characteristic points are removed at the same time, yielding the accurate positions of the spatial extreme characteristic points.
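The quadratic-fit offset can be illustrated as follows (an illustrative sketch; the function name and the central-difference derivatives are assumptions). Around a candidate extremum, the gradient g and Hessian H of the DoG values are estimated on the 3×3×3 neighbourhood, and the sub-voxel offset solves H·offset = −g; a component above 0.5 means the true extremum lies nearer a neighbouring sample.

```python
import numpy as np

def refine_offset(cube):
    """Sub-pixel offset of the extremum of a 3x3x3 DoG neighbourhood,
    indexed (scale, row, col); centre voxel is cube[1, 1, 1]."""
    g = np.array([
        (cube[2, 1, 1] - cube[0, 1, 1]) / 2.0,   # d/ds
        (cube[1, 2, 1] - cube[1, 0, 1]) / 2.0,   # d/dr
        (cube[1, 1, 2] - cube[1, 1, 0]) / 2.0,   # d/dc
    ])
    c0 = cube[1, 1, 1]
    dss = cube[2, 1, 1] - 2 * c0 + cube[0, 1, 1]
    drr = cube[1, 2, 1] - 2 * c0 + cube[1, 0, 1]
    dcc = cube[1, 1, 2] - 2 * c0 + cube[1, 1, 0]
    dsr = (cube[2, 2, 1] - cube[2, 0, 1] - cube[0, 2, 1] + cube[0, 0, 1]) / 4.0
    dsc = (cube[2, 1, 2] - cube[2, 1, 0] - cube[0, 1, 2] + cube[0, 1, 0]) / 4.0
    drc = (cube[1, 2, 2] - cube[1, 2, 0] - cube[1, 0, 2] + cube[1, 0, 0]) / 4.0
    H = np.array([[dss, dsr, dsc],
                  [dsr, drr, drc],
                  [dsc, drc, dcc]])
    return np.linalg.solve(H, -g)  # offset from the centre voxel
```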
The positions and the scales of the key points are accurately determined by fitting a three-dimensional quadratic function, and meanwhile, the key points with low contrast and unstable edge response points are removed, so that the matching stability is enhanced, and the anti-noise capability is improved.
And step 104, matching the direction information of the characteristic points, and solving the stable direction of the characteristic point set of the spatial extreme value by using an image gradient method.
Extracting the stable extreme points in different scale spaces guarantees the scale invariance of the feature points; to make the feature points also invariant to the rotation angle of the image, a direction is assigned to each extreme point by computing its gradient.
Specifically, the spatial extreme characteristic point set is traversed, and the gradient magnitude and direction of the pixels in a neighbourhood window of the Gaussian pyramid image where each point lies are computed. A histogram accumulates the gradient directions and magnitudes of the pixels in the neighbourhood of the characteristic point: it has 36 bins, each covering 10 degrees, with the horizontal axis giving the angle of the gradient direction and the vertical axis the accumulated gradient magnitude for that direction. The histogram is smoothed with a Gaussian function to strengthen the contribution of points near the characteristic point and reduce the influence of abrupt changes. Parabolic interpolation is then performed over the three bins closest to each peak, and any peak reaching 80% of the main peak is retained as an auxiliary direction of the characteristic point, which enhances the robustness of the direction information matching.
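The binning step can be sketched as follows (an illustrative sketch; function name and the omission of Gaussian smoothing and parabolic interpolation are simplifications): gradient magnitudes are accumulated into 36 bins of 10 degrees each, and the dominant orientation is the bin with the largest accumulated magnitude.

```python
import numpy as np

def orientation_histogram(mags, angles_deg, n_bins=36):
    """Accumulate gradient magnitudes into 36 bins of 10 degrees each;
    bin i covers [10*i, 10*i + 10) degrees."""
    hist = np.zeros(n_bins)
    for m, a in zip(mags, angles_deg):
        hist[int(a % 360) // 10] += m
    return hist
```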
And 105, describing the feature points, namely establishing a descriptor for each DOG function extreme point (spatial extreme feature point): each feature point is described by a group of feature vectors, so that the feature point does not change as imaging conditions vary.
Specifically, with a feature point as the centre, the coordinate axes are rotated to the main direction of the feature point, and a 16×16 window centred on it is selected, each cell being one pixel of the scale-space image in the neighbourhood of the feature point. The gradient magnitude and direction of each pixel are computed and weighted with a Gaussian window. An 8-direction gradient histogram is then drawn for each 4×4 sub-block and the accumulated value of each gradient direction is recorded, forming one seed point; each feature point thus has 4×4 = 16 seed points, each carrying gradient information in 8 directions, and the resulting 4×4×8 = 128 values form the feature vector of the feature point.
And 106, performing characteristic point matching on the spatial extreme characteristic point set, and estimating the motion pose data of the wheel.
Specifically, matching is realized by computing the Euclidean distance between the 128-dimensional descriptors of pairs of spatial extreme feature points (DOG function extreme points) in the spatial extreme feature point set: the smaller the Euclidean distance, the higher the similarity, and a match is judged successful when the distance falls below a set threshold. The wheel motion and pose relation is estimated by matching the feature points of each key frame; the matched key-frame features are inserted and the pose of the frame is computed. Wheel landmark points are calculated by triangulation, and all key frames and landmark points are then used to estimate the motion pose data of the wheels.
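The distance-threshold matching can be sketched as follows (an illustrative sketch; the function name, the greedy one-directional strategy, and the 2-D toy descriptors in the example are assumptions — real descriptors are 128-dimensional):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=0.7):
    """For each descriptor in set A, take the nearest descriptor in set B
    (Euclidean distance) and accept the pair only below the threshold."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches
```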
And 200, sequentially carrying out nonlinear least square optimization and loop detection on the motion pose image data of the wheels to obtain globally consistent wheel motion track data and wheel two-dimensional image data.
And step 201, optimizing by using a nonlinear least square method.
Due to noise, the wheel motion and pose obtained in the preceding steps do not correspond exactly, so the wheel motion pose data are optimized with a nonlinear least square method. First, the conditional distribution of the wheel state variables is batch-estimated using Bayes' rule, and the maximum likelihood estimate, i.e. the optimal wheel motion and pose estimate, is obtained. This estimate is then substituted into the motion and observation equations of SLAM; a PNP algorithm supplies the initial values for the iteration, and a Gauss-Newton method iteratively fine-tunes the wheel motion and pose estimate until the minimum is reached, completing the optimization and yielding locally consistent wheel motion track data and wheel two-dimensional image data.
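The Gauss-Newton refinement step can be illustrated on a toy least-squares problem (an illustrative sketch, not the patent's pose optimizer; the function names and the line-fitting example are assumptions). Each iteration linearises the residual and solves the normal equations J^T J Δx = −J^T r:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Generic Gauss-Newton iteration for a least-squares problem."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)  # normal equations
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:           # converged
            break
    return x

# toy problem: fit y = p0 * t + p1 to noiseless samples
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * t + 1.0
res = lambda p: p[0] * t + p[1] - y
jac = lambda p: np.stack([t, np.ones_like(t)], axis=1)
p = gauss_newton(res, jac, [0.0, 0.0])
```

In the pose-refinement setting the residual would be the reprojection error and the state the camera pose, with the PnP solution as the initial value.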
And 202, carrying out loop detection on the wheel motion and pose images.
Specifically, a K-means algorithm is adopted to cluster the locally consistent wheel motion track data and wheel two-dimensional image data, and all clustered wheel motion and pose images are described with description vectors so that the descriptions remain invariant. The L1 norm of the difference between each pair of description vectors is then calculated and compared to obtain the similarity of the description vectors, which defines the degree of similarity between the images. If the similarity exceeds 90%, the loop is judged to be successfully detected; since a detected loop may span multiple frames, similar loops are clustered into one class by the clustering algorithm so that loops of the same class are not detected repeatedly. Loop detection is thereby completed, giving globally consistent wheel motion track data and wheel two-dimensional image data.
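An L1-norm similarity of this kind can be sketched as follows (an illustrative sketch; the function names and the particular normalisation s = 1 − 0.5·‖a/|a|₁ − b/|b|₁‖₁ are assumptions — it evaluates to 1 for identical description vectors and 0 for disjoint ones):

```python
import numpy as np

def bow_similarity(v_a, v_b):
    """Similarity of two description vectors via the L1 norm of their
    L1-normalised difference."""
    a = v_a / np.abs(v_a).sum()
    b = v_b / np.abs(v_b).sum()
    return 1.0 - 0.5 * np.abs(a - b).sum()

def is_loop(v_a, v_b, threshold=0.9):
    # accept a loop when the similarity exceeds 90 %, as described above
    return bow_similarity(v_a, v_b) > threshold
```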
And step 300, performing monocular dense reconstruction according to the wheel motion track data and the wheel two-dimensional image data to establish wheel space three-dimensional image data.
Specifically, from the wheel motion trajectory data and the wheel two-dimensional image data, the depth values of the wheel motion and pose images are computed by triangulation, the uncertainty of the depth information is computed from the geometric relation, and the current observed depth is fused into the previous estimate. Each pixel of the current wheel motion and pose image is traversed, with the first captured wheel pose image taken as the reference frame. Each pixel of the reference frame is converted from pixel coordinates to three-dimensional coordinates in the camera coordinate system, multiplied by the rotation matrix to transform it into the camera coordinate system of the current frame, and then projected to the pixel coordinates of the current frame. Taking the variance of the depth into account, the transformed three-dimensional point of the reference frame is projected twice, once at the maximum and once at the minimum depth, giving two projection coordinates whose connecting line is the epipolar line to be searched. The best matching block on the epipolar line is found using NCC; after a successful search the depth map is updated, completing the construction of the wheel space three-dimensional image.
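The NCC score used for the epipolar-line search can be sketched as follows (an illustrative sketch; the function name is an assumption). Zero-mean normalised cross-correlation scores two equal-sized patches between −1 and 1, and the candidate block along the epipolar line with the highest score is taken as the match:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalised cross-correlation of two equal-sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```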
In the invention, three-dimensional monocular dense reconstruction is performed on the wheel and some outliers are removed by a point cloud filtering algorithm. In one embodiment, the number of feature points is reduced from 7127546 to 6587342, which mitigates the poor reconstruction quality and low efficiency caused by outliers; the finally reconstructed wheel three-dimensional image is accurate and comprehensive and shows the details of the wheel tread well.
And step 400, calculating the distance between the wheel space three-dimensional image point cloud and the standard wheel point cloud to obtain the out-of-roundness information of the wheel.
Specifically, a wheel space three-dimensional image point cloud and a standard wheel point cloud in the wheel space three-dimensional image data are converted into a world coordinate system, and then the Euclidean distance is calculated to obtain out-of-roundness information of the wheel, so that manual decision making is assisted.
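The cloud-to-cloud distance can be sketched as follows (an illustrative sketch; the function name and the brute-force nearest-neighbour search are assumptions — a k-d tree would be used for clouds of millions of points). For each measured point, the Euclidean distance to the nearest point of the standard-wheel cloud is taken; the spread of these distances around the tread indicates the out-of-roundness:

```python
import numpy as np

def out_of_roundness(measured, reference):
    """Per-point distance from the measured wheel cloud to the nearest
    point of the reference (standard wheel) cloud, both in world frame."""
    d = np.linalg.norm(measured[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)
```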
In one embodiment, the distance between the wheel space three-dimensional image point cloud and the standard wheel point cloud is calculated, finally, wheel out-of-roundness data obtained by three-dimensional mapping under polar coordinates is compared and matched with real data, the relative error of a sampling point is 0.06mm, and the measurement result meets the requirement.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A three-dimensional mapping train wheel out-of-roundness detection method is characterized in that when a train runs, characteristic points of wheels are marked, and a two-dimensional image of a local state of the wheels when the train runs is obtained, and the method further comprises the following steps:
s100, generating a Gaussian difference pyramid according to the two-dimensional image in the local state, and sequentially carrying out spatial extreme value feature point detection, feature point accurate positioning, feature point direction information matching, feature point description and feature point matching according to the Gaussian difference pyramid to obtain motion pose data of wheels between adjacent images;
s200, sequentially carrying out nonlinear least square optimization and loop detection on the motion pose image data of the wheels to obtain globally consistent wheel motion track data and wheel two-dimensional image data;
s300, performing monocular dense reconstruction according to the wheel motion trail data and the wheel two-dimensional image data to establish wheel space three-dimensional image data;
s400, calculating the distance between the wheel space three-dimensional image point cloud and the standard wheel point cloud to obtain the out-of-roundness information of the wheel.
2. The method for detecting out-of-roundness of a three-dimensional mapping train wheel according to claim 1, wherein:
marking the characteristic points of the wheels by adopting a wheel characteristic point marking device;
the wheel characteristic point marking device comprises two tread characteristic marking devices arranged on the outer side of a train track and two wheel flange characteristic marking devices arranged on the inner side of the track;
the characteristic marking device comprises a characteristic marking device box and a plurality of characteristic marking pens, wherein the characteristic marking pens are arranged on the characteristic marking device box, and when the wheel is in contact with the characteristic marking pens, the characteristic point marking of the wheel is realized;
acquiring a two-dimensional image of a local state of a wheel when a train runs by adopting a monocular camera set;
the monocular camera set is arranged on the train track and is positioned behind the wheel characteristic point marking device;
when the wheel passes the feature marking pen closest to the monocular camera set, the start switch of the monocular camera set is triggered and the monocular camera set begins to shoot and acquire two-dimensional images of the local state of the wheel when the train runs; when the train has completely passed the wheel characteristic point marking device and the time since a wheel last contacted a feature marking pen exceeds the set time, the monocular camera set is closed.
3. The method for detecting out-of-roundness of a three-dimensional mapping train wheel according to claim 2, wherein:
calculating the real-time running speed of the train and the axle distance of the bogie according to the time difference between the first characteristic marker pen and the last characteristic marker pen which are contacted by the wheels and the distance between the two characteristic marker pens;
the shooting frequency of the monocular camera set is determined from the real-time running speed of the train: the frequency is set so that the monocular camera set captures 24 local state two-dimensional images while one wheel completely passes the wheel feature point marking device, the number of captured local state two-dimensional images being chosen to guarantee strong robustness of the three-dimensional mapping while reducing the load of real-time image processing on the computer.
4. The method for detecting out-of-roundness of a wheel of a train according to claim 3, wherein, before the step S100, the local state two-dimensional images of different groups of wheels are classified and stored according to the distance between the axles of the train bogie and the time interval between the local state two-dimensional images of different groups of wheels shot by the same wheel set.
5. The method for detecting out-of-roundness of a train wheel on a three-dimensional mapping according to claim 1, wherein in step S100, the specific method for generating the gaussian difference pyramid comprises: the method comprises the steps of conducting Gaussian blur of different scales on a wheel local state two-dimensional image, calculating a blur template by using a Gaussian function, conducting convolution operation on the template and an original wheel local state two-dimensional image to blur the wheel local state two-dimensional image, conducting down-sampling on the blurred local state two-dimensional image for multiple times, obtaining one layer of image of a Gaussian pyramid by down-sampling each time, combining a plurality of images in each layer to form one group of the Gaussian pyramid, and subtracting adjacent upper and lower layers of images in each group of the Gaussian pyramid to obtain a Gaussian difference pyramid.
6. The method for detecting out-of-roundness of a wheel of a three-dimensional mapping train according to claim 5, wherein in step S100: the spatial extreme value feature point detection is to identify potential feature points with unchanged scale and rotation angle through a Gaussian difference pyramid and carry out spatial extreme value point detection to obtain a spatial extreme value feature point set; the precise positioning of the characteristic points is to further screen local extreme points detected in a scale space of a space extreme characteristic point set, namely to precisely determine the positions and the scales of the characteristic points by fitting a three-dimensional quadratic function to the space extreme characteristic point set and simultaneously remove characteristic points with low contrast and unstable edge response points; the characteristic point direction information matching is to use an image gradient method to obtain the stable direction of a space extreme characteristic point set; the characteristic point description is to establish a descriptor for each spatial extreme characteristic point, wherein the descriptor is a group of characteristic vectors; and the characteristic point matching is to estimate the motion pose data of the wheels according to the space extreme value characteristic point set.
7. The method according to claim 6, wherein the step S200 of non-linear least squares optimization comprises: estimating the condition distribution of the wheel state variables in batches by using a Bayes rule, solving the maximum likelihood estimation, and obtaining better wheel motion and pose estimation values; substituting the better wheel motion and pose estimation values into the motion and observation equation of the SLAM, providing iterative initial values of the wheel motion and pose estimation values by adopting a PNP algorithm, and finally performing continuous iterative fine adjustment on the wheel motion and pose estimation values by adopting a Gauss-Newton method to obtain a minimum value by solving, thereby obtaining locally consistent wheel motion track data and wheel two-dimensional image data.
8. The method for detecting the out-of-roundness of a wheel of a three-dimensional mapping train as set forth in claim 7, wherein the loop detection in step S200 includes: clustering the locally consistent wheel motion track data and wheel two-dimensional image data by a K-means algorithm, and describing all clustered wheel motion and pose images with description vectors; calculating the L1 norm of the difference between each pair of description vectors and comparing to obtain the similarity of the description vectors, thereby defining the degree of similarity between the images; and if the similarity is greater than 90%, judging that a loop is successfully detected, wherein the detected loop may span multiple frames, and similar loops are clustered into one class by a clustering algorithm so that the algorithm does not repeatedly detect loops of the same class, finally obtaining globally consistent wheel motion track data and wheel two-dimensional image data.
9. The method for detecting the out-of-roundness of the three-dimensional mapping train wheel according to claim 8, wherein the step S300 specifically includes: according to wheel motion track data and wheel two-dimensional image data, the depth values of wheel motion and pose images are calculated in a triangularization mode, the uncertainty of depth information is calculated according to a geometric relation, then the current observation depth is fused into the last estimation, each pixel of the current wheel motion and pose images is traversed, the shot first wheel pose image serves as a reference frame, pixel points of the reference frame are converted into three-dimensional coordinates under a camera coordinate system from pixel point coordinates, the three-dimensional coordinates are multiplied by a rotation matrix and converted into a camera coordinate system under the current frame, then the pixel coordinates are projected to the current frame, the converted three-dimensional coordinates in the reference frame are projected twice under the condition of the maximum and minimum depth, two projection coordinates are obtained, the connecting line of the two points is the polar line to be searched, the best matching block on the polar line is searched by utilizing the NCC, after the search is successful, a depth map is updated, and the establishment of the wheel space three-dimensional image data is completed.
10. The method for detecting out-of-roundness of a wheel of a three-dimensional mapping train according to claim 9, wherein the step S400 specifically includes: and transforming the wheel space three-dimensional image point cloud and the standard wheel point cloud in the wheel space three-dimensional image data to a world coordinate system, and then calculating the Euclidean distance of the wheel space three-dimensional image point cloud and the standard wheel point cloud to obtain out-of-roundness information of the wheel.
CN202211512463.7A 2022-11-28 2022-11-28 Method for detecting out-of-roundness of wheels of three-dimensional mapping train Pending CN115760828A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211512463.7A CN115760828A (en) 2022-11-28 2022-11-28 Method for detecting out-of-roundness of wheels of three-dimensional mapping train


Publications (1)

Publication Number Publication Date
CN115760828A true CN115760828A (en) 2023-03-07

Family

ID=85340352


Country Status (1)

Country Link
CN (1) CN115760828A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination