CN113947116A - Non-contact real-time detection method for train rail looseness based on camera shooting - Google Patents

Non-contact real-time detection method for train rail looseness based on camera shooting

Info

Publication number
CN113947116A
CN113947116A (application CN202111163756.4A)
Authority
CN
China
Prior art keywords
virtual feature
track
pixel
gray
gradient
Prior art date
Legal status
Granted
Application number
CN202111163756.4A
Other languages
Chinese (zh)
Other versions
CN113947116B (en)
Inventor
Xu Zili (徐自力)
Xin Cun (辛存)
Wang Cunjun (王存俊)
Li Kangdi (李康迪)
Current Assignee
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN202111163756.4A
Publication of CN113947116A
Application granted
Publication of CN113947116B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Abstract

The invention discloses a camera-based non-contact method for detecting train rail looseness in real time, addressing the low efficiency of current, largely manual inspection. A high-speed camera records a vibration video of the track while a train passes or the track is artificially excited, and, exploiting the staggered arrangement of rails and sleepers, virtual feature points are detected using the image pixel gray gradient and a feature-clustering algorithm. The natural vibration frequency of the track is obtained by FFT decomposition, and whether the track is loose is judged in real time from changes in that frequency. The measuring device of the method is simple, accurate, inexpensive, and easy to operate.

Description

Non-contact real-time detection method for train rail looseness based on camera shooting
Technical Field
The invention belongs to the technical field of structural motion measurement, and particularly relates to a camera-based non-contact real-time detection method for train rail looseness.
Background
The train track system is an important part of a transportation system and mainly comprises rails, sleepers, fasteners, and a roadbed. The rails are fixed to the sleepers by fasteners. As trains pass, periodic impact loads cause the fasteners to vibrate and, over time, the rails to loosen. As looseness worsens, the dynamic response amplitude of the track structure increases markedly and, in severe cases, can cause train derailment. Detecting rail looseness is therefore of great significance for ensuring the running safety of trains.
At present, rail looseness is detected mainly by manual inspection: experienced inspectors judge the state of the fasteners by eye. This approach is simple to perform but inefficient, costly, prone to missed detections, and carries substantial safety risks. In recent years, with the development of image processing, computer vision has gradually been applied to structural health monitoring of train tracks. Current vision-based methods require marking the structure and measuring rail vibration by tracking the marked points. However, because train tracks are numerous and exposed to the outdoor environment for long periods, the markers easily fall off, posing a great challenge to measurement.
Disclosure of Invention
In order to overcome the defects of the prior art and address the low efficiency and low accuracy of manual rail-looseness inspection, the invention provides a camera-based non-contact real-time detection method for train rail looseness. Exploiting the staggered arrangement of rails and sleepers, the method detects virtual feature points using the image pixel gray gradient and a feature-clustering algorithm, computes the optical flow of those points with an optical flow algorithm to measure the time-domain vibration of the track, obtains the natural vibration frequency of the track through FFT decomposition, and judges in real time whether the track is loose from changes in that frequency.
In order to achieve the purpose, the invention adopts the technical scheme that:
a pickup-based train rail looseness non-contact real-time detection method comprises the following steps:
step 1), carrying out video recording on the vibration of a train track by using a high-speed camera;
step 2), carrying out gray processing on the recorded video frame by frame;
step 3), calculating the gray gradient of the pixels of the first frame of image, clustering the gray gradient by adopting a K-means clustering algorithm, determining a track and a fastener region, and taking the gray extreme value of the pixels in the region as a virtual feature point of the region;
step 4), constructing an image multi-scale pyramid through an image multi-scale decomposition technology, establishing optical flow equations of images on different scales based on a short-time constant brightness theory, and calculating optical flows of virtual feature points on the images on the different scales by adopting a least square algorithm;
step 5), fusing the optical flows of the virtual feature points on the images with different scales by using the scale relation among the pyramid images to obtain the calculation result of the optical flows of the virtual feature points, and acquiring time domain vibration signals of the track and the fastener based on an image calibration technology;
step 6), performing frequency-domain analysis on the time-domain vibration signals of the track and fastener to obtain the natural frequency of the track, and detecting track looseness from the change characteristics of the natural frequency; an end-to-end sketch of this pipeline is given below.
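As an illustration of how steps 1) to 6) fit together, the following is a minimal Python sketch assuming OpenCV and NumPy. It is a sketch under stated assumptions, not the patented implementation: `cv2.goodFeaturesToTrack` stands in for the gradient-clustering virtual-feature-point detector of step 3), and the function and parameter names (`detect_rail_looseness`, `baseline_freq`, `tol`) are illustrative.

```python
import cv2
import numpy as np

def detect_rail_looseness(video_path, frame_rate, scale_factor, baseline_freq, tol=0.05):
    """Sketch of steps 1)-6): video -> feature tracking -> FFT -> looseness flag."""
    cap = cv2.VideoCapture(video_path)                 # step 1: recorded video
    ok, first = cap.read()
    gray0 = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)    # step 2: graying

    # Step 3 stand-in: a corner detector instead of gradient clustering.
    pts = cv2.goodFeaturesToTrack(gray0, maxCorners=20, qualityLevel=0.3, minDistance=7)

    prev, flow = gray0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Steps 4)-5): pyramidal Lucas-Kanade optical flow of the feature points.
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        flow.append((nxt - pts).reshape(-1, 2).mean(axis=0))  # mean flow, pixels/frame
        prev, pts = gray, nxt

    disp = scale_factor * np.cumsum(np.asarray(flow), axis=0)  # pixels -> mm displacement
    # Step 6): dominant frequency of the vertical component via FFT.
    spec = np.abs(np.fft.rfft(disp[:, 1] - disp[:, 1].mean()))
    freqs = np.fft.rfftfreq(len(disp), d=1.0 / frame_rate)
    f_nat = freqs[np.argmax(spec)]
    return f_nat, abs(f_nat - baseline_freq) / baseline_freq > tol
```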
Further, step 2 specifically comprises: in order to improve the screening efficiency of the virtual feature points, the color images captured by the camera are grayed frame by frame to obtain grayscale images:

$I(x, y) = 0.299\,R(x, y) + 0.587\,G(x, y) + 0.114\,B(x, y)$

in the formula: $I(x, y)$ is the gray value of pixel $(x, y)$; $R(x, y)$, $G(x, y)$, and $B(x, y)$ are the values of the three color channels at pixel $(x, y)$.
Further, the step 3 is specifically to screen the virtual feature points based on the image pixel gray gradient and a feature clustering algorithm.
In the present invention, the set of horizontal gradients computed from the pixels of the first frame image is defined as $G_x = \{g_{x,1}, g_{x,2}, \ldots, g_{x,l}\}$.

Cluster analysis is performed on the horizontal gradients with a K-means clustering algorithm:

$E_x = \min \sum_{i=1}^{v} \sum_{g_x \in C_i} \left( g_x - \bar{g}_{x,i} \right)^2$

in the formula: $E_x$ is the within-class sum of squares in the horizontal direction, and the clustering is optimal when it is minimal; $v$ is the cluster label, with $v = 2$ classes in the invention: $v = 1$ denotes the rail region and $v = 2$ the other regions; $\bar{g}_{x,i}$ is the mean horizontal gradient of class $i$; $l$ is the number of image pixels.
Let the set of vertical gradients computed from the pixels of the first frame image be $G_y = \{g_{y,1}, g_{y,2}, \ldots, g_{y,l}\}$.

Cluster analysis is performed on the vertical gradients with a K-means clustering algorithm:

$E_y = \min \sum_{i=1}^{h} \sum_{g_y \in C_i} \left( g_y - \bar{g}_{y,i} \right)^2$

in the formula: $E_y$ is the within-class sum of squares in the vertical direction, and the clustering is optimal when it is minimal; $h$ is the cluster label, with $h = 2$ classes in the invention: $h = 1$ denotes the fastener and sleeper regions and $h = 2$ the other regions; $\bar{g}_{y,i}$ is the mean vertical gradient of class $i$; $l$ is the number of image pixels.
In the present invention, the detected rail and fastener regions are denoted $\Omega = (\Omega_1, \Omega_2, \ldots, \Omega_q)$, where $q$ is the number of identified rail and fastener regions. The pixel gray extreme point of each region is used as that region's virtual feature point. For region $\Omega_1$, the gray extreme value of the pixels in the region is expressed as:

$I^{*} = \max \left\{ I(x_1, y_1),\ I(x_2, y_2),\ \ldots,\ I(x_{n_1}, y_{n_1}) \right\}, \qquad (x_j, y_j) \in \Omega_1$

in the formula: the pixel coordinate of the point with the maximum gray value is taken as the virtual feature point of region $\Omega_1$; $I^{*}$ is the gray value of the virtual feature point; $I(x, y)$ is the gray value at pixel $(x, y)$; $n_1$ is the number of pixels within region $\Omega_1$.
Gray extreme points are detected in the rail and fastener regions at different positions, yielding virtual feature points at different positions.
Further, step 4 specifically comprises: for a virtual feature point $(k_x, k_y)$, optical flow equations of the image at different scales are obtained based on the short-time constant-brightness theory and the spatial-consistency assumption. A neighborhood window of size $m \times m$ is selected around the virtual feature point; by the principle of pixel-motion consistency within the neighborhood, the $m \times m$ neighborhood pixels satisfy the overdetermined system:

$\begin{bmatrix} I_x(k_x - m, k_y - m) & I_y(k_x - m, k_y - m) \\ \vdots & \vdots \\ I_x(k_x + m, k_y + m) & I_y(k_x + m, k_y + m) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = - \begin{bmatrix} I_t(k_x - m, k_y - m) \\ \vdots \\ I_t(k_x + m, k_y + m) \end{bmatrix}$

in the formula: $u, v$ are the optical flows of the virtual feature point $(k_x, k_y)$ in the horizontal and vertical directions; $(k_x - m, k_y - m), \ldots, (k_x + m, k_y + m)$ are the pixel coordinates within the neighborhood of $(k_x, k_y)$; $I_x$ and $I_y$ denote the gradients of the neighborhood pixel gray levels in the $x$ and $y$ directions; $I_t$ denotes the derivative of the neighborhood pixel gray levels with respect to time $t$.
In the invention, the equation is solved by adopting a least square method, and the optical flow information of the virtual feature points under any scale can be obtained.
Further, the step 5 is specifically to fuse the optical flow information of the virtual feature points under different scales according to the scale relationship between the images to obtain the optical flow of the virtual feature points. And calculating the vibration of the track and the fastener under the pixel coordinate system by using the optical flow of the virtual feature points.
Let the optical flows of the virtual feature point in different frames be $\{u_k, v_k\}$, $k = 1, 2, 3, \ldots, K$, in the horizontal and vertical directions respectively, in units of pixels/frame, where $K$ is the total number of video frames; the motion of the structure is then obtained from the optical flow of the virtual feature point:

$m_x(k) = \sum_{i=1}^{k} u_i, \qquad m_y(k) = \sum_{i=1}^{k} v_i, \qquad t_k = k / f$

in the formula: $m_x$, $m_y$ are the motions of the virtual feature point in the horizontal and vertical directions, and $f$ is the shooting frame rate.
In the invention, a grid calibration board is adopted to calibrate the camera, a scale factor is obtained according to the size of the grid and the size relation of the grid in a pixel coordinate system, and the time domain vibration of the track and the fastener in a physical coordinate system is calculated.
Let the calibration grayscale image be $I(x, y)$; the gray gradients of the image in the horizontal and vertical directions are respectively:

$G_x(x, y) = H_x * I(x, y), \qquad G_y(x, y) = H_y * I(x, y)$

in the formula: $*$ denotes the convolution operation, and $H_x$, $H_y$ are the gradient operators in the $x$ and $y$ directions, respectively.
The gradient amplitude is:

$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}$
By comparing gradient amplitudes, the size of one grid square in the pixel coordinate system can be determined; denote it $J$, in pixels, and let the actual grid size be $R$, in millimeters (mm). The scale factor $SF$ is then:

$SF = R / J$
The time-domain vibration of the track is:

$S_x = SF \cdot m_x, \qquad S_y = SF \cdot m_y$

in the formula: $S_x$, $S_y$ are the time-domain vibrations of the rail and fastener in the horizontal and vertical directions in the physical coordinate system.
Further, step 6 specifically comprises performing FFT decomposition on the track's time-domain vibration signal to obtain the track's natural vibration frequency:

$X(n) = \sum_{k=1}^{K} S(k)\, e^{-j 2\pi n k / K}$

Whether the track is loose is then judged in real time from changes in the natural vibration frequency.
Compared with existing rail-looseness detection methods, the invention has the following beneficial effects:
1) high measurement efficiency: all tracks within the field of view can be monitored simultaneously;
2) virtual feature points replace manual markers, so no marking is required and the method has a wider range of application.
Drawings
FIG. 1 is a schematic view of the detection device of the rail-fastener monitoring method of the present invention.
Fig. 2 is a monitoring flow chart of the rail clip of the present invention.
FIG. 3 is a schematic diagram of a principle of track and fastener region identification based on pixel gradient and clustering algorithm.
Fig. 4 is a schematic diagram of detection of virtual feature points.
FIG. 5 is a schematic diagram of pixel position change and gray matrix change over a period of time.
FIG. 6 is a schematic diagram of a calculation flow of scale factors.
FIG. 7 is a schematic diagram of a calculation process of time-domain vibration of a fastener.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings and examples.
As shown in FIG. 1, the invention relates to a camera-based non-contact real-time detection method for train rail looseness: a high-speed camera captures a vibration video of the track while a train passes or the track is artificially excited, and virtual feature points are detected from the image pixel gray gradient with a feature-clustering algorithm, exploiting the staggered arrangement of rails and sleepers. The invention is further described below with reference to the accompanying drawings.
Step 1: as shown in FIG. 2, the vibration of the track fastener is recorded on video with a high-speed camera.
Step 2: in order to improve the screening efficiency of the virtual feature points, the color images captured by the camera are grayed frame by frame to obtain grayscale images:

$I(x, y) = 0.299\,R(x, y) + 0.587\,G(x, y) + 0.114\,B(x, y)$  (1)

in the formula: $I(x, y)$ is the gray value of pixel $(x, y)$; $R(x, y)$, $G(x, y)$, and $B(x, y)$ are the values of the three color channels at pixel $(x, y)$.
Step 3: as shown in FIG. 3, the set of horizontal gradients computed from the pixels of the first frame image is defined as $G_x = \{g_{x,1}, g_{x,2}, \ldots, g_{x,l}\}$, and cluster analysis is performed on the horizontal gradients with a K-means clustering algorithm:

$E_x = \min \sum_{i=1}^{v} \sum_{g_x \in C_i} \left( g_x - \bar{g}_{x,i} \right)^2$  (2)

in the formula: $E_x$ is the within-class sum of squares in the horizontal direction; $v$ is the cluster label, with $v = 2$ classes in the invention: $v = 1$ denotes the rail region and $v = 2$ the other regions; $\bar{g}_{x,i}$ is the mean horizontal gradient of class $i$; $l$ is the number of image pixels.
Let the set of vertical gradients computed from the pixels of the first frame image be $G_y = \{g_{y,1}, g_{y,2}, \ldots, g_{y,l}\}$, and perform cluster analysis on the vertical gradients with a K-means clustering algorithm:

$E_y = \min \sum_{i=1}^{h} \sum_{g_y \in C_i} \left( g_y - \bar{g}_{y,i} \right)^2$  (3)

in the formula: $E_y$ is the within-class sum of squares in the vertical direction; $h$ is the cluster label, with $h = 2$ classes in the invention: $h = 1$ denotes the fastener and sleeper regions and $h = 2$ the other regions; $\bar{g}_{y,i}$ is the mean vertical gradient of class $i$; $l$ is the number of image pixels.
In the present invention, as shown in FIG. 4, the detected rail and fastener regions are denoted $\Omega = (\Omega_1, \Omega_2, \ldots, \Omega_q)$, where $q$ is the number of identified rail and fastener regions; the pixel gray extreme point of each region is used as that region's virtual feature point. For region $\Omega_1$, the gray extreme value of the pixels in the region is expressed as:

$I^{*} = \max \left\{ I(x_1, y_1),\ I(x_2, y_2),\ \ldots,\ I(x_{n_1}, y_{n_1}) \right\}, \qquad (x_j, y_j) \in \Omega_1$  (4)

in the formula: the pixel coordinate of the point with the maximum gray value is taken as the virtual feature point of region $\Omega_1$; $I^{*}$ is the gray value of the virtual feature point; $I(x, y)$ is the gray value at pixel $(x, y)$; $n_1$ is the number of pixels within region $\Omega_1$.

Gray extreme points are detected in the rail and fastener regions at different positions, yielding virtual feature points at different positions; a hedged code sketch of this detector follows.
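The following is a minimal sketch of this step-3 detector, assuming NumPy, scikit-learn, and SciPy: two-class K-means is run on each gradient direction as in the clustering objectives above, and the gray-value maximum of each resulting connected region is taken as that region's virtual feature point. The helper name `virtual_feature_points` and the use of connected-component labeling to delimit regions are assumptions, not details fixed by the patent.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def virtual_feature_points(gray):
    """Gradient clustering (step 3) followed by per-region gray maxima."""
    # Absolute gray gradients of the first frame in both directions.
    gx = np.abs(np.gradient(gray.astype(float), axis=1)).ravel()
    gy = np.abs(np.gradient(gray.astype(float), axis=0)).ravel()

    # Two-class K-means per direction: the high-gradient class marks the
    # rail (horizontal) or fastener/sleeper (vertical) pixels.
    lab_x = KMeans(n_clusters=2, n_init=10).fit_predict(gx.reshape(-1, 1))
    lab_y = KMeans(n_clusters=2, n_init=10).fit_predict(gy.reshape(-1, 1))
    hi_x = lab_x == np.argmax([gx[lab_x == k].mean() for k in (0, 1)])
    hi_y = lab_y == np.argmax([gy[lab_y == k].mean() for k in (0, 1)])
    mask = (hi_x | hi_y).reshape(gray.shape)

    # One virtual feature point per connected region: its gray-value maximum.
    labels, q = ndimage.label(mask)
    return [np.unravel_index(np.argmax(np.where(labels == i, gray, 0)), gray.shape)
            for i in range(1, q + 1)]
```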
Step 4: as shown in FIG. 5, for a virtual feature point $(k_x, k_y)$, optical flow equations of the image at different scales are obtained based on the short-time constant-brightness theory and the spatial-consistency assumption. A neighborhood window of size $m \times m$ is selected around the virtual feature point; by the principle of pixel-motion consistency within the neighborhood, the $m \times m$ neighborhood pixels satisfy the overdetermined system:

$\begin{bmatrix} I_x(k_x - m, k_y - m) & I_y(k_x - m, k_y - m) \\ \vdots & \vdots \\ I_x(k_x + m, k_y + m) & I_y(k_x + m, k_y + m) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = - \begin{bmatrix} I_t(k_x - m, k_y - m) \\ \vdots \\ I_t(k_x + m, k_y + m) \end{bmatrix}$  (5)

in the formula: $u, v$ are the optical flows of the virtual feature point $(k_x, k_y)$ in the horizontal and vertical directions; $(k_x - m, k_y - m), \ldots, (k_x + m, k_y + m)$ are the pixel coordinates within the neighborhood of $(k_x, k_y)$; $I_x$ and $I_y$ denote the gradients of the neighborhood pixel gray levels in the $x$ and $y$ directions; $I_t$ denotes the derivative of the neighborhood pixel gray levels with respect to time $t$.
In the invention, the equation is solved by adopting a least square method, and the optical flow information of the virtual feature points under any scale can be obtained.
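A minimal sketch of this least-squares solution at one pyramid level, assuming NumPy: `Ix`, `Iy`, `It` are the spatial and temporal derivative arrays over the neighborhood window of one virtual feature point, and the function name `lk_flow` is illustrative.

```python
import numpy as np

def lk_flow(Ix, Iy, It):
    """Least-squares solution of the overdetermined system above: flow (u, v)."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # n x 2 matrix of gradients
    b = -It.ravel()                                 # right-hand side
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)      # (u, v) = argmin ||A uv - b||
    return uv                                       # optical flow in pixels/frame
```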
Step 5: the optical flow information of the virtual feature points at different scales is fused according to the scale relation between the images to obtain the optical flow of the virtual feature points, and the vibration of the track and fastener in the pixel coordinate system is calculated from that optical flow.
Let the optical flows of the virtual feature point in different frames be $\{u_k, v_k\}$, $k = 1, 2, 3, \ldots, K$, in the horizontal and vertical directions respectively, in units of pixels/frame, where $K$ is the total number of video frames; the motion of the structure is then obtained from the optical flow of the virtual feature point:

$m_x(k) = \sum_{i=1}^{k} u_i, \qquad m_y(k) = \sum_{i=1}^{k} v_i, \qquad t_k = k / f$  (6)

in the formula: $m_x$, $m_y$ are the motions of the virtual feature point in the horizontal and vertical directions, and $f$ is the shooting frame rate.
In the invention, a grid calibration board is adopted to calibrate the camera, a scale factor is obtained according to the size of the grid and the size relation of the grid in a pixel coordinate system, and the time domain vibration of the track and the fastener in a physical coordinate system is calculated.
Referring to FIG. 6 and FIG. 7, let the calibration grayscale image be $I(x, y)$; the gray gradients of the image in the horizontal and vertical directions are:

$G_x(x, y) = H_x * I(x, y), \qquad G_y(x, y) = H_y * I(x, y)$  (7)

in the formula: $*$ denotes the convolution operation, and $H_x$, $H_y$ are the gradient operators in the $x$ and $y$ directions, respectively.
The gradient amplitude is:

$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}$  (8)
By comparing gradient amplitudes, the size of one grid square in the pixel coordinate system can be determined; denote it $J$, in pixels, and let the actual grid size be $R$, in millimeters (mm). The scale factor $SF$ is then:

$SF = R / J$  (9)
The time-domain vibration of the track is:

$S_x = SF \cdot m_x, \qquad S_y = SF \cdot m_y$  (10)

in the formula: $S_x$, $S_y$ are the time-domain vibrations of the rail and fastener in the horizontal and vertical directions in the physical coordinate system.
Step 6: FFT decomposition is performed on the track's time-domain vibration signal to obtain the track's natural vibration frequency:

$X(n) = \sum_{k=1}^{K} S(k)\, e^{-j 2\pi n k / K}$  (11)

Whether the track is loose is then judged in real time from changes in the natural vibration frequency.
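A short NumPy sketch of this step: the peak of the magnitude spectrum of the vibration signal gives the natural frequency, and a relative shift from a known baseline flags looseness. The 5% threshold and the function names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def natural_frequency(s, frame_rate):
    """Peak frequency of the vibration signal s via FFT (formula (11))."""
    spec = np.abs(np.fft.rfft(s - np.mean(s)))           # magnitude spectrum, DC removed
    freqs = np.fft.rfftfreq(len(s), d=1.0 / frame_rate)
    return freqs[np.argmax(spec)]

def is_loose(f_now, f_baseline, rel_tol=0.05):
    """Flag looseness when the natural frequency shifts by more than rel_tol."""
    return abs(f_now - f_baseline) / f_baseline > rel_tol
```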

Claims (6)

1. A non-contact real-time detection method for train rail looseness based on camera shooting is characterized by comprising the following steps:
step 1), carrying out video recording on the vibration of a train track by using a high-speed camera;
step 2), carrying out gray processing on the recorded video frame by frame;
step 3), calculating the gray gradient of the pixels of the first frame of image, clustering the gray gradient by adopting a K-means clustering algorithm, determining a track and a fastener region, and taking the gray extreme value of the pixels in the region as a virtual feature point of the region;
step 4), constructing an image multi-scale pyramid through an image multi-scale decomposition technology, establishing optical flow equations of images on different scales based on a short-time constant brightness theory, and calculating optical flows of virtual feature points on the images on the different scales by adopting a least square algorithm;
step 5), fusing the optical flows of the virtual feature points on the images with different scales by using the scale relation among the pyramid images to obtain the calculation result of the optical flows of the virtual feature points, and acquiring time domain vibration signals of the track and the fastener by combining an image calibration technology;
and 6), carrying out frequency domain analysis on the track and the fastener time domain vibration signals to obtain the natural frequency of the track, and detecting the track looseness through the natural frequency change characteristics.
2. The camera-based non-contact real-time detection method for train rail looseness according to claim 1, wherein in step 3), the set of horizontal gray gradients computed from the first-frame pixels is $G_x = \{g_{x,1}, g_{x,2}, \ldots, g_{x,l}\}$, and cluster analysis is performed on the horizontal gray gradients with a K-means clustering algorithm:

$E_x = \min \sum_{i=1}^{v} \sum_{g_x \in C_i} \left( g_x - \bar{g}_{x,i} \right)^2$

in the formula: $E_x$ is the within-class sum of squares in the horizontal direction, and the clustering is optimal when it is minimal; $v$ is the cluster label: $v = 1$ denotes the rail region and $v = 2$ the other regions; $\bar{g}_{x,i}$ is the mean horizontal gradient of class $i$; $l$ is the number of image pixels;
let the set of vertical gray gradients computed from the first-frame pixels be $G_y = \{g_{y,1}, g_{y,2}, \ldots, g_{y,l}\}$, and perform cluster analysis on the vertical gray gradients with a K-means clustering algorithm:

$E_y = \min \sum_{i=1}^{h} \sum_{g_y \in C_i} \left( g_y - \bar{g}_{y,i} \right)^2$

in the formula: $E_y$ is the within-class sum of squares in the vertical direction, and the clustering is optimal when it is minimal; $h$ is the cluster label: $h = 1$ denotes the fastener and sleeper regions and $h = 2$ the other regions; $\bar{g}_{y,i}$ is the mean vertical gradient of class $i$; $l$ is the number of image pixels;
the detected rail and fastener regions are denoted $\Omega = (\Omega_1, \Omega_2, \ldots, \Omega_q)$, where $q$ is the number of identified rail and fastener regions, and the pixel gray extreme point of each region is used as that region's virtual feature point; for region $\Omega_1$, the gray extreme value of the pixels in the region is expressed as:

$I^{*} = \max \left\{ I(x_1, y_1),\ I(x_2, y_2),\ \ldots,\ I(x_{n_1}, y_{n_1}) \right\}, \qquad (x_j, y_j) \in \Omega_1$

in the formula: the pixel coordinate of the point with the maximum gray value is taken as the virtual feature point of region $\Omega_1$; $I^{*}$ is the gray value of the virtual feature point; $I(x, y)$ is the gray value at pixel $(x, y)$; $n_1$ is the number of pixels within region $\Omega_1$;

gray extreme points are detected in the rail and fastener regions at different positions, yielding virtual feature points at different positions.
3. The camera-based non-contact real-time detection method for train rail looseness according to claim 1, wherein in step 4), for a virtual feature point $(k_x, k_y)$, optical flow equations of the image at different scales are obtained based on the short-time constant-brightness theory and the spatial-consistency assumption; a neighborhood window of size $m \times m$ is selected around the virtual feature point, and by the principle of pixel-motion consistency within the neighborhood, the $m \times m$ neighborhood pixels satisfy the overdetermined system:

$\begin{bmatrix} I_x(k_x - m, k_y - m) & I_y(k_x - m, k_y - m) \\ \vdots & \vdots \\ I_x(k_x + m, k_y + m) & I_y(k_x + m, k_y + m) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = - \begin{bmatrix} I_t(k_x - m, k_y - m) \\ \vdots \\ I_t(k_x + m, k_y + m) \end{bmatrix}$

in the formula: $u, v$ are the optical flows of the virtual feature point $(k_x, k_y)$ in the horizontal and vertical directions; $(k_x - m, k_y - m), \ldots, (k_x + m, k_y + m)$ are the pixel coordinates within the neighborhood of $(k_x, k_y)$; $I_x$ and $I_y$ denote the gradients of the neighborhood pixel gray levels in the $x$ and $y$ directions; $I_t$ denotes the derivative of the neighborhood pixel gray levels with respect to time $t$;

the system is solved by the least-squares method to obtain the optical flows of the virtual feature points at different scales.
4. The camera-based non-contact real-time detection method for train rail looseness according to claim 1, wherein in step 5), the optical flows of the virtual feature point in different frames are $\{u_k, v_k\}$, $k = 1, 2, 3, \ldots, K$, in the horizontal and vertical directions respectively, in units of pixels/frame, where $K$ is the total number of video frames, and the motion of the structure is obtained from the optical flow of the virtual feature point:

$m_x(k) = \sum_{i=1}^{k} u_i, \qquad m_y(k) = \sum_{i=1}^{k} v_i, \qquad t_k = k / f$

in the formula: $m_x$, $m_y$ are the motions of the virtual feature point in the horizontal and vertical directions, and $f$ is the shooting frame rate;
and calibrating the camera by adopting a grid calibration board, acquiring a scale factor according to the size of the grid and the size relation of the grid under a pixel coordinate system, and calculating the time domain vibration of the track and the fastener under a physical coordinate system.
5. The camera-based non-contact real-time detection method for train rail looseness according to claim 4, wherein when the camera is calibrated, the calibration grayscale image is $I(x, y)$, and the gray gradients of the image in the horizontal and vertical directions are respectively:

$G_x(x, y) = H_x * I(x, y), \qquad G_y(x, y) = H_y * I(x, y)$

in the formula: $*$ denotes the convolution operation, and $H_x$, $H_y$ are the gradient operators in the $x$ and $y$ directions, respectively.
The gradient amplitude is:

$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}$
By comparing gradient amplitudes, the size of one grid square in the pixel coordinate system can be determined; denote it $J$, in pixels, and let the actual grid size be $R$, in millimeters (mm). The scale factor $SF$ is then:

$SF = R / J$
The time-domain vibration of the track is:

$S_x = SF \cdot m_x, \qquad S_y = SF \cdot m_y$

in the formula: $S_x$, $S_y$ are the time-domain vibrations of the rail and fastener in the horizontal and vertical directions in the physical coordinate system.
6. The camera-based non-contact real-time detection method for train rail looseness according to claim 1, wherein in step 6), the track's time-domain vibration signal is subjected to FFT decomposition to obtain the track's natural vibration frequency:

$X(n) = \sum_{k=1}^{K} S(k)\, e^{-j 2\pi n k / K}$

and whether the track is loose is judged in real time from changes in the natural vibration frequency.
CN202111163756.4A 2021-09-30 2021-09-30 Train track looseness non-contact real-time detection method based on camera shooting Active CN113947116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111163756.4A CN113947116B (en) 2021-09-30 2021-09-30 Train track looseness non-contact real-time detection method based on camera shooting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111163756.4A CN113947116B (en) 2021-09-30 2021-09-30 Train track looseness non-contact real-time detection method based on camera shooting

Publications (2)

Publication Number Publication Date
CN113947116A (en) 2022-01-18
CN113947116B CN113947116B (en) 2023-10-31

Family

ID=79329769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111163756.4A Active CN113947116B (en) 2021-09-30 2021-09-30 Train track looseness non-contact real-time detection method based on camera shooting

Country Status (1)

Country Link
CN (1) CN113947116B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845364A (en) * 2016-12-28 2017-06-13 中国航天电子技术研究院 A kind of fast automatic object detection method
CN111532295A (en) * 2019-12-28 2020-08-14 昆山高新轨道交通智能装备有限公司 Rail transit removes intelligent operation and maintenance detecting system
WO2021163928A1 (en) * 2020-02-19 2021-08-26 华为技术有限公司 Optical flow obtaining method and apparatus
CN112381860A (en) * 2020-11-21 2021-02-19 西安交通大学 Unmarked computer vision method for measuring dynamic frequency of rotating blade
CN112763904A (en) * 2020-12-29 2021-05-07 广州航天海特系统工程有限公司 Circuit breaker detection method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chu Linzhen; Yan Junhua; Hang Yiqing; Xu Junfeng: "Real-time detection of ground moving targets against a rotating-motion background based on an improved optical flow method", Journal of Data Acquisition and Processing, no. 06
Li Pengcheng; Zheng Shubin; Peng Lele; Li Liming: "Research on the regular distribution of feature points in track images", Computer Measurement & Control, no. 04

Also Published As

Publication number Publication date
CN113947116B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN107678036B (en) Vehicle-mounted non-contact type contact net geometric parameter dynamic detection system and method
Liu et al. Simple and fast rail wear measurement method based on structured light
CN102175219B (en) High-speed contact network locator gradient detection method and apparatus thereof based on video analysis
Li et al. A cyber-enabled visual inspection system for rail corrugation
CN109238756B (en) Dynamic image detection equipment and detection method for freight car operation fault
CN111882882B (en) Method for detecting cross-lane driving behavior of automobile in dynamic flat-plate scale weighing area
CN109029283B (en) Track fastener bolt floating detection method based on height comparison
CN105158257A (en) Sliding plate measurement method and device
Pan et al. On-site reliable wheel size measurement based on multisensor data fusion
CN110567680A (en) Track fastener looseness detection method based on angle comparison
CN107703513A (en) A kind of novel non-contact contact net relative position detection method based on image procossing
CN104574969A (en) Vehicle overload dynamic monitoring system and working method of vehicle overload dynamic monitoring system
US20220383478A1 (en) Computer vision-based system and method for assessment of load distribution, load rating, and vibration serviceability of structures
Qiu et al. Rail fastener positioning based on double template matching
CN203768774U (en) Rut form simulator for accuracy calibration of laser rut detection apparatus
CN113947116B (en) Train track looseness non-contact real-time detection method based on camera shooting
Li et al. A visual inspection system for rail corrugation based on local frequency features
Ye et al. Computer vision for hunting stability inspection of high-speed trains
Yang et al. Online pantograph-catenary contact point detection in complicated background based on multiple strategies
CN111551122A (en) Train wagon number and length measuring system and method based on laser radar
CN104006804A (en) Method for detecting offset of contact net based on observation benchmark instability compensation
CN115761487A (en) Method for quickly identifying vibration characteristics of small and medium-span bridges based on machine vision
CN112949483B (en) Non-contact rail stretching displacement real-time measurement method based on fast R-CNN
CN115035087A (en) Novel railway line image detection method and system
CN106875427A (en) A kind of locomotive hunting monitoring method

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant