CN114987564B - Portable high-speed turnout detection trolley based on binocular identification and detection method - Google Patents

Portable high-speed turnout detection trolley based on binocular identification and detection method

Info

Publication number
CN114987564B
CN114987564B · CN202210682514.4A
Authority
CN
China
Prior art keywords
image
sub
steel rail
scale
camera module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210682514.4A
Other languages
Chinese (zh)
Other versions
CN114987564A (en)
Inventor
钱瑶
王平
徐井芒
张傲南
陈嵘
马前涛
乐明静
方嘉晟
王凯
罗燕
袁钰雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202210682514.4A priority Critical patent/CN114987564B/en
Publication of CN114987564A publication Critical patent/CN114987564A/en
Application granted granted Critical
Publication of CN114987564B publication Critical patent/CN114987564B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61DBODY DETAILS OR KINDS OF RAILWAY VEHICLES
    • B61D15/00Other railway vehicles, e.g. scaffold cars; Adaptations of vehicles for use on railways
    • B61D15/08Railway inspection trolleys
    • B61D15/10Railway inspection trolleys hand or foot propelled
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61KAUXILIARY EQUIPMENT SPECIALLY ADAPTED FOR RAILWAYS, NOT OTHERWISE PROVIDED FOR
    • B61K9/00Railway vehicle profile gauges; Detecting or indicating overheating of components; Apparatus on locomotives or cars to indicate bad track sections; General design of track recording vehicles
    • B61K9/08Measuring installations for surveying permanent way

Abstract

The invention discloses a portable high-speed turnout detection trolley based on binocular identification and a detection method thereof, belonging to the technical field of rail transit detection. The trolley comprises pulleys, a first base, a second base, a base cross beam, a hand push rod, a lithium battery power supply module, a first 3D camera module, a second 3D camera module, a first encoder, a second encoder and an image information processor. The invention solves the problem that existing track detection trolleys are insufficient for detection in turnout areas: through the continuous movement of the portable trolley, the variable cross-section steel rail profile and geometric parameters of the turnout area are detected, while other information such as the steel rail profile and the steel rail light band is also acquired.

Description

Portable high-speed turnout detection trolley based on binocular identification and detection method
Technical Field
The invention belongs to the technical field of rail transit detection, and particularly relates to a portable high-speed turnout detection trolley based on binocular identification and a detection method.
Background
In China, precise measurement of turnout geometric parameters usually relies on a track geometric state measuring instrument; however, the model it uses is a track measurement model, so turnout characteristics are not fully considered.
Existing track detection trolleys used to measure turnout geometric parameters are generally static track-geometry trolleys, which cannot measure the geometric parameters of the variable cross-section steel rails in the turnout area; moreover, existing track detection trolleys can only measure track geometric parameters and cannot measure other information such as the steel rail profile and the steel rail light band. A portable high-speed turnout detection trolley that can measure both the track geometric parameters and other information such as the steel rail profile and the steel rail light band is therefore needed.
Disclosure of Invention
Aiming at the above defects in the prior art, the portable high-speed turnout detection trolley based on binocular identification and the corresponding detection method provided by the invention solve the problem that existing track detection trolleys are insufficient for turnout-area detection.
In order to achieve the aim of the invention, the invention adopts the following technical scheme:
the invention provides a portable high-speed turnout detection trolley based on binocular identification, which comprises:
pulleys for supporting the first and second bases and traveling on the high-speed rail;
a first mount for carrying the mount cross beam, the first 3D camera module, and the first encoder;
a second mount for carrying the mount cross beam, the second 3D camera module, and the second encoder;
the base cross beam is fixedly connected with the hand push rod and is used for traveling steadily under the applied force, keeping the first base and the second base advancing synchronously, and carrying the lithium battery power supply module;
the hand push rod is used for driving the base cross beam, the first base, the second base and the pulleys to move under the applied force;
the lithium battery power supply module is used for supplying power to the first 3D camera module, the second 3D camera module, the first encoder, the second encoder and the image information processor;
the first 3D camera module is used for photographing the first steel rail on the first base side from different angles to obtain a plurality of images of the first steel rail;
the second 3D camera module is used for photographing the second steel rail on the second base side from different angles to obtain a plurality of images of the second steel rail;
the first encoder is used for acquiring and encoding the images of the first steel rails to obtain a plurality of encoded first steel rail images;
the second encoder is used for acquiring and encoding the images of the second steel rails to obtain a plurality of encoded images of the second steel rails;
and the image information processor is used for respectively acquiring and processing the encoded first steel rail image and the encoded second steel rail image to obtain the three-dimensional data of the first steel rail surface and the three-dimensional data of the second steel rail surface.
The beneficial effects of the invention are as follows: the portable high-speed turnout detection trolley based on binocular identification is built from the pulleys, the first base, the second base, the base cross beam and the hand push rod, and is additionally equipped with the lithium battery power supply module, the first 3D camera module, the first encoder, the second 3D camera module, the second encoder and the image information processor. The steel rail images acquired by the 3D camera modules are encoded by the encoders and then processed by the image information processor to obtain the three-dimensional data of the first steel rail surface and the three-dimensional data of the second steel rail surface. Through the continuous movement of the portable trolley, the variable cross-section steel rail profile and geometric parameters of the turnout area are detected, while other information such as the steel rail profile and the steel rail light band is also acquired.
Further, the first 3D camera module and the second 3D camera module are binocular cameras respectively including two cameras.
The beneficial effects of adopting the further scheme are as follows: the determination of the optimal matching point is realized through binocular recognition of the binocular camera, and a basis is provided for obtaining three-dimensional data of the surface of the steel rail through processing the steel rail image acquired by the camera.
The invention also provides a detection method of the portable high-speed turnout detection trolley based on binocular identification, which comprises the following steps:
s1, powering a first 3D camera module, a second 3D camera module, a first encoder, a second encoder and an image information processor by starting a lithium battery power supply module;
S2, pushing the hand push rod to drive the base cross beam, the first base, the second base and the pulleys to travel steadily, and continuously photographing the first steel rail and the second steel rail with the first 3D camera module and the second 3D camera module respectively, to obtain a plurality of images of the first steel rail and a plurality of images of the second steel rail;
s3, encoding and processing the images of the first steel rails and the images of the second steel rails by using a first encoder and a second encoder respectively to obtain a plurality of encoded first steel rail images and second steel rail images;
and S4, respectively acquiring and processing the coded images of the first steel rail and the second steel rail by using an image information processor to obtain three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail.
The beneficial effects of the invention are as follows: the detection method of the portable high-speed turnout detection trolley based on the binocular identification is the detection method corresponding to the portable high-speed turnout detection trolley based on the binocular identification and is used for detecting and acquiring surface three-dimensional data of a high-speed turnout steel rail.
Further, the step S4 includes the steps of:
s41, respectively acquiring a plurality of encoded first steel rail images and encoded second steel rail images by using an image information processor;
S42, based on the encoded first steel rail images and second steel rail images, forming a steel rail sub-image group from the images captured at the same moment from different camera angles by the first 3D camera module and the second 3D camera module, and taking the mean of each pixel's color components as the gray value of that pixel, wherein the steel rail sub-image group comprises a first sub-image and a second sub-image of the first steel rail image captured by the first 3D camera module at that moment, and a third sub-image and a fourth sub-image of the second steel rail image captured by the second 3D camera module;
s43, preprocessing gray values of all pixel points in the steel rail sub-image group based on a frequency domain to obtain a preprocessed steel rail sub-image group;
S44, based on the Gaussian pyramid principle, taking each sub-image in the preprocessed steel rail sub-image group as layer 0 and constructing a multi-scale image with a sampling factor of 2 and a 5×5 Gaussian kernel template;
S45, based on the multi-scale image, establishing for each scale image a fixed cross window centered on the pixel being computed, and performing median filtering to obtain the bit string of each scale image;
S46, at each scale, taking any pixel point in the first sub-image or the third sub-image as a point to be matched, calculating the Hamming distance between its bit string and the bit strings of all candidate points in the corresponding second sub-image or fourth sub-image as the first matching cost, thereby obtaining a cost volume at each scale;
S47, obtaining the optimal matching point and disparity value of the multi-scale image based on the cost volume at each scale;
S48, obtaining three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail based on the optimal matching points and the disparity values of the multi-scale image.
The beneficial effects of adopting the further scheme are as follows: the image information processor acquires and processes each encoded first steel rail image and second steel rail image to obtain the three-dimensional data of the first steel rail surface and the second steel rail surface; through the continuous movement of the portable trolley, the variable cross-section steel rail profile and geometric parameters of the turnout area are detected, while other information such as the steel rail profile and the steel rail light band is also acquired.
Further, the step S43 includes the steps of:
S431, in the frequency domain, expressing each image in the steel rail sub-image group as the product of its high-frequency component and low-frequency component and taking the logarithm, to obtain the steel rail sub-image group after the high-frequency/low-frequency product has been processed;
S432, successively applying the Fourier transform, high-frequency filtering, the inverse Fourier transform and exponentiation to the steel rail sub-image group obtained in S431, to obtain the preprocessed steel rail sub-image group.
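A minimal sketch of this kind of frequency-domain (homomorphic-style) preprocessing is given below, assuming each sub-image is available as a grayscale NumPy array; the Gaussian high-emphasis transfer function and its cutoff and gain values are illustrative assumptions, since the text does not specify the exact filter:

```python
import numpy as np

def preprocess_subimage(gray, cutoff=30.0, gain_low=0.5, gain_high=1.5):
    """Homomorphic-style frequency-domain preprocessing of one grayscale
    sub-image (cf. steps S431-S432)."""
    # S431: the image is treated as the product of its low- and
    # high-frequency components; the logarithm turns the product into a sum.
    log_img = np.log1p(gray.astype(np.float64))

    # S432 (1): Fourier transform.
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))

    # S432 (2): high-frequency filtering with an assumed Gaussian
    # high-emphasis transfer function.
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    d2 = u[:, None] ** 2 + v[None, :] ** 2
    h = gain_low + (gain_high - gain_low) * (1.0 - np.exp(-d2 / (2.0 * cutoff ** 2)))

    # S432 (3): inverse Fourier transform.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h)).real

    # S432 (4): exponentiation undoes the logarithm taken in S431.
    return np.expm1(filtered)
```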
The beneficial effects of adopting the further scheme are as follows:
further, the calculation expression of the multi-scale image in the step S44 is as follows:
wherein I is n-1 (x, y) represents each sub-image at the n-1 th scale, x represents the lateral position of the window center point, y represents the longitudinal position of the window center point, I n (x+s, y+t) represents each sub-image at the n-th scale, s represents the transverse position of the corresponding position in the Gaussian kernel, t represents the corresponding longitudinal position in the Gaussian kernel,represent tensor product, G 5×5 (s, t) represents a window with a gaussian kernel of 5×5.
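A brief sketch of the Gaussian-pyramid construction described in step S44 (5×5 Gaussian kernel, sampling factor 2) is shown below; the concrete kernel weights are the usual Burt-Adelson values and are an assumption rather than a quotation from the patent:

```python
import numpy as np
from scipy.signal import convolve2d

# Standard 5x5 Gaussian kernel (outer product of [1, 4, 6, 4, 1] / 16);
# the exact weights used in the patent are not reproduced here.
_g1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
G5 = np.outer(_g1d, _g1d)

def build_pyramid(sub_image, levels=3):
    """Build a multi-scale image: layer 0 is the preprocessed sub-image,
    each further layer is smoothed with G5 and down-sampled by factor 2."""
    pyramid = [sub_image.astype(np.float64)]
    for _ in range(levels):
        smoothed = convolve2d(pyramid[-1], G5, mode="same", boundary="symm")
        pyramid.append(smoothed[::2, ::2])   # sampling factor 2
    return pyramid
```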
The beneficial effects of adopting the further scheme are as follows: the gray values of all pixel points in the steel rail sub-image group are preprocessed respectively to obtain the preprocessed steel rail sub-image group, so that a foundation is provided for constructing the multi-scale image.
Further, the step S45 includes the steps of:
S451, based on the multi-scale image, establishing for each scale image a fixed cross window centered on the pixel being computed;
S452, taking the mean difference between the RGB values of any pixel point in the cross window and the RGB values of the median pixel as the first binary output, and taking the comparison between the difference of that pixel's gray value from the gray mean of the pixels in the cross window and the adaptive linear threshold as the second binary output, performing a dual-sequence Census transformation to obtain the bit string ζ(I(p), I(q)) of each scale image, wherein I(p) represents the gray value of an arbitrary pixel point p within the cross window, I(q) represents the gray value of the window center point q being computed, I_avg represents the gray mean of the pixels in the cross window, τ_q represents the adaptive linear threshold for the neighborhood points of the current pixel point, Δ represents the mean RGB difference of an arbitrary pixel point in the cross window, i represents the i-th of the three RGB channels, I_i(p) represents the i-th channel value of an arbitrary pixel point in the cross window, and I_i(q) represents the i-th channel value of the window center point q.
The beneficial effects of adopting the further scheme are as follows: a fixed cross window centered on the pixel being computed is established for each scale image and median filtering is performed to obtain the bit string of each scale image, which provides a basis for computing the cost at each scale and obtaining the optimal matching point and disparity value of the multi-scale image.
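A simplified sketch of such a dual-sequence Census encoding over a cross window follows; the arm length, the constant threshold standing in for the adaptive linear threshold τ_q, and the use of the window center in place of the median pixel are all illustrative assumptions:

```python
import numpy as np

def dual_census_bits(rgb, gray, row, col, arm=3, tau=6.0):
    """Simplified dual-sequence Census bit string for the pixel at
    (row, col) of one scale image, using a fixed cross window with
    half-length `arm`.  The pixel is assumed to lie at least `arm`
    pixels away from the image border."""
    # Cross-shaped neighborhood: horizontal and vertical arms around the center.
    horiz = [(row, col + d) for d in range(-arm, arm + 1) if d != 0]
    vert = [(row + d, col) for d in range(-arm, arm + 1) if d != 0]
    neighbors = horiz + vert

    gray_avg = np.mean([gray[i, j] for i, j in neighbors + [(row, col)]])
    bits = []
    for i, j in neighbors:
        # First binary output: mean RGB difference against the center pixel.
        delta = np.mean(rgb[i, j].astype(np.float64) - rgb[row, col].astype(np.float64))
        bits.append(1 if delta >= 0 else 0)
        # Second binary output: gray deviation from the window mean compared
        # with the (here constant) linear threshold tau.
        bits.append(1 if abs(float(gray[i, j]) - gray_avg) >= tau else 0)
    return np.array(bits, dtype=np.uint8)
```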
Further, the calculation expression of the first matching cost in step S46 is as follows:
C(x′,y′,d)=Hamming(Str(x′,y′),Str(x′-d,y′))
wherein C(x′, y′, d) represents the matching cost between the point to be matched and the candidate point, x′ and y′ represent the lateral and longitudinal positions of the pixel point in each sub-image respectively, Hamming(·) represents computing the matching cost as the Hamming distance between two bit strings, Str(x′, y′) represents the bit string of the point to be matched in the first sub-image or the third sub-image, and Str(x′-d, y′) represents the bit string of the candidate point at disparity d from the point to be matched, taken from the second sub-image or the fourth sub-image corresponding respectively to the first sub-image or the third sub-image.
The beneficial effects of adopting the further scheme are as follows: for the images captured by the two cameras of each binocular camera at each scale, the matching cost between the bit string of the point to be matched and the bit strings of all candidate points is computed as the Hamming distance, thereby obtaining the cost volume at each scale.
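A minimal sketch of this cost computation is given below, assuming the per-pixel bit strings of the two views of one binocular pair at one scale have already been computed (for example with a Census-style transform) and are stored as H×W×B binary arrays:

```python
import numpy as np

def hamming_cost_volume(bits_left, bits_right, max_disp):
    """First matching cost C(x', y', d) = Hamming(Str_left(x', y'),
    Str_right(x' - d, y')) for one binocular pair at one scale.
    bits_left / bits_right: H x W x B arrays of 0/1 bit strings."""
    h, w, b = bits_left.shape
    # Positions without a valid candidate keep the maximum possible cost.
    cost = np.full((h, w, max_disp + 1), float(b))
    for d in range(max_disp + 1):
        if d == 0:
            diff = bits_left != bits_right
            cost[:, :, 0] = diff.sum(axis=2)
        else:
            diff = bits_left[:, d:, :] != bits_right[:, :-d, :]
            cost[:, d:, d] = diff.sum(axis=2)
    return cost  # the cost volume at this scale
```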
Further, the step S47 includes the steps of:
S471, based on the cost volume at each scale, performing cost aggregation on each scale image using a box-filter kernel method, to obtain the second matching cost of each pixel point of each scale image;
S472, based on the principle that a smaller second matching cost implies a greater similarity, computing the optimal matching point and disparity value of each pixel point at each scale with the winner-take-all (WTA) algorithm;
S473, based on the optimal matching point and disparity value of each pixel point at each scale, aggregating the costs layer by layer from the coarsest scale using a Tikhonov regularization matrix until the cost values and disparity values at the finest scale, namely the preset layer-0 first sub-image, second sub-image, third sub-image and fourth sub-image, are reached, thereby obtaining the optimal matching point and disparity value of the multi-scale image.
The beneficial effects of adopting the further scheme are as follows: cost aggregation is performed on the images at each scale using a box-filter kernel method, and an inter-layer correlation constraint is added on top of the intra-layer image correlation, which optimizes the cost result and reduces the mismatching rate, yielding the final optimal matching point and disparity value of the multi-scale image and thereby the three-dimensional data of the steel rail surface.
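A brief sketch of steps S471-S472 (box-filter cost aggregation followed by winner-take-all disparity selection) is shown below; the box size is an illustrative assumption, and the cross-scale Tikhonov-regularized aggregation of step S473 is omitted because its explicit form is not given in the text:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def aggregate_and_select(cost_volume, box_size=9):
    """S471: box-filter cost aggregation giving the second matching cost;
    S472: winner-take-all (WTA) disparity selection (a smaller aggregated
    cost means a greater similarity)."""
    # Aggregate every disparity slice of the cost volume with a box filter.
    aggregated = np.stack(
        [uniform_filter(cost_volume[:, :, d], size=box_size)
         for d in range(cost_volume.shape[2])],
        axis=2,
    )
    # WTA: for each pixel, keep the disparity with the smallest cost.
    disparity = np.argmin(aggregated, axis=2)
    return aggregated, disparity
```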
Drawings
Fig. 1 is a front view of a portable high-speed switch detection trolley based on binocular recognition in an embodiment of the present invention.
Fig. 2 is a rear view of a portable high-speed switch detection trolley based on binocular recognition in an embodiment of the present invention.
Fig. 3 is a flow chart of steps of a method for detecting a portable high-speed switch detecting trolley based on binocular identification in an embodiment of the invention.
Wherein: 1. a pulley; 2. a first base; 3. a second base; 4. a base cross beam; 5. a hand push rod; 6. a lithium battery power supply module; 7. a first 3D camera module; 8. a second 3D camera module; 9. a first encoder; 10. a second encoder; 11. an image information processor.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention. It should be understood, however, that the invention is not limited to the scope of these embodiments; for those skilled in the art, all inventions that make use of the inventive concept fall within the spirit and scope of the invention as defined in the appended claims.
Census transformation is a non-parametric image transformation that can effectively detect local structural features in an image, such as edge and corner features; the essence of the Census transformation is to encode the gray values of an image neighborhood into a binary bit stream, which records whether the gray value of each neighborhood pixel is larger or smaller than that of the center pixel.
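As a concrete illustration of this encoding (the classic single-sequence form, not the dual-sequence variant used later in this embodiment), a minimal 3×3 Census transform can be sketched as follows:

```python
import numpy as np

def census_3x3(gray):
    """Classic 3x3 Census transform: every interior pixel is encoded as an
    8-bit string recording whether each neighbor is darker than the center."""
    h, w = gray.shape
    bits = np.zeros((h - 2, w - 2, 8), dtype=np.uint8)
    center = gray[1:-1, 1:-1]
    k = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            bits[:, :, k] = (neighbor < center).astype(np.uint8)
            k += 1
    return bits
```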
as shown in fig. 1 and 2, in one embodiment of the present invention, the present invention provides a portable high-speed switch detection trolley based on binocular recognition, comprising:
a pulley 1 for supporting a first base 2 and a second base 3 and traveling on a high-speed rail;
a first mount 2 for carrying a mount beam 4, a first 3D camera module 7 and a first encoder 9;
a second mount 3 for carrying a mount beam 4, a second 3D camera module 8 and a second encoder 10;
the base cross beam 4 is fixedly connected with the hand push rod 5 and is used for traveling steadily under the applied force, keeping the first base 2 and the second base 3 advancing synchronously, and carrying the lithium battery power supply module 6;
the hand push rod 5 is used for driving the base cross beam 4, the first base 2, the second base 3 and the pulleys 1 to advance under the applied force;
a lithium battery power supply module 6 for supplying power to the first 3D camera module 7, the second 3D camera module 8, the first encoder 9, the second encoder 10, and the image information processor 11;
the first 3D camera module 7 is configured to photograph the first steel rail on the first base 2 side from different angles, obtaining a plurality of images of the first steel rail;
the second 3D camera module 8 is configured to photograph the second steel rail on the second base 3 side from different angles, obtaining a plurality of images of the second steel rail;
the first encoder 9 is used for acquiring and encoding the images of the first steel rails to obtain a plurality of encoded first steel rail images;
a second encoder 10, configured to acquire and encode images of each second rail, so as to obtain a plurality of encoded images of the second rails;
an image information processor 11, configured to acquire and process each encoded first rail image and second rail image, respectively, to obtain three-dimensional data of the first rail surface and three-dimensional data of the second rail surface;
the first and second 3D camera modules 7 and 8 are binocular cameras each including two cameras.
The portable high-speed turnout detection trolley based on binocular identification is built from the pulleys, the first base, the second base, the base cross beam and the hand push rod, and is additionally equipped with the lithium battery power supply module, the first 3D camera module, the first encoder, the second 3D camera module, the second encoder and the image information processor. The steel rail images acquired by the 3D camera modules are encoded by the encoders and then processed by the image information processor to obtain the three-dimensional data of the first steel rail surface and the three-dimensional data of the second steel rail surface. Through the continuous movement of the portable trolley, the variable cross-section steel rail profile and geometric parameters of the turnout area are detected, while other information such as the steel rail profile and the steel rail light band is also acquired.
In another embodiment of the present invention, as shown in fig. 3, the present invention provides a method for detecting a portable high-speed switch detection trolley based on binocular recognition, comprising the following steps:
s1, powering a first 3D camera module 7, a second 3D camera module 8, a first encoder 9, a second encoder 10 and an image information processor 11 by starting a lithium battery power supply module 6;
S2, pushing the hand push rod 5 to drive the base cross beam 4, the first base 2, the second base 3 and the pulleys 1 to travel steadily, and continuously photographing the first steel rail and the second steel rail with the first 3D camera module 7 and the second 3D camera module 8 respectively, to obtain a plurality of images of the first steel rail and a plurality of images of the second steel rail;
s3, encoding and processing the images of the first steel rails and the images of the second steel rails by using a first encoder 9 and a second encoder 10 respectively to obtain a plurality of encoded first steel rail images and second steel rail images;
s4, respectively acquiring and processing the image of each encoded first steel rail and the image of each encoded second steel rail by utilizing an image information processor 11 to obtain three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail;
the step S4 includes the steps of:
s41, respectively acquiring a plurality of encoded first steel rail images and encoded second steel rail images by using an image information processor 11;
S42, based on the encoded first steel rail images and second steel rail images, forming a steel rail sub-image group from the images captured at the same moment from different camera angles by the first 3D camera module 7 and the second 3D camera module 8, and taking the mean of each pixel's color components as the gray value of that pixel, wherein the steel rail sub-image group comprises a first sub-image and a second sub-image of the first steel rail image captured by the first 3D camera module 7 at that moment, and a third sub-image and a fourth sub-image of the second steel rail image captured by the second 3D camera module 8;
s43, preprocessing gray values of all pixel points in the steel rail sub-image group based on a frequency domain to obtain a preprocessed steel rail sub-image group;
the step S43 includes the steps of:
S431, in the frequency domain, expressing each image in the steel rail sub-image group as the product of its high-frequency component and low-frequency component and taking the logarithm, to obtain the steel rail sub-image group after the high-frequency/low-frequency product has been processed;
S432, successively applying the Fourier transform, high-frequency filtering, the inverse Fourier transform and exponentiation to the steel rail sub-image group obtained in S431, to obtain the preprocessed steel rail sub-image group;
S44, based on the Gaussian pyramid principle, taking each sub-image in the preprocessed steel rail sub-image group as layer 0 and constructing a multi-scale image with a sampling factor of 2 and a 5×5 Gaussian kernel template;
the calculation expression of the multi-scale image in the step S44 is as follows:
I_n(x, y) = G_5×5(s, t) ⊗ I_(n-1)(x+s, y+t)
wherein I_(n-1) represents each sub-image at the (n-1)-th scale, I_n represents each sub-image at the n-th scale, x represents the lateral position of the window center point, y represents the longitudinal position of the window center point, s represents the lateral position of the corresponding entry in the Gaussian kernel, t represents the corresponding longitudinal position in the Gaussian kernel, ⊗ represents the tensor product, and G_5×5(s, t) represents the window of the 5×5 Gaussian kernel;
S45, based on the multi-scale image, establishing for each scale image a fixed cross window centered on the pixel being computed, and performing median filtering to obtain the bit string of each scale image;
the step S45 includes the steps of:
S451, based on the multi-scale image, establishing for each scale image a fixed cross window centered on the pixel being computed;
S452, taking the mean difference between the RGB values of any pixel point in the cross window and the RGB values of the median pixel as the first binary output, and taking the comparison between the difference of that pixel's gray value from the gray mean of the pixels in the cross window and the adaptive linear threshold as the second binary output, performing a dual-sequence Census transformation to obtain the bit string ζ(I(p), I(q)) of each scale image, wherein I(p) represents the gray value of an arbitrary pixel point p within the cross window, I(q) represents the gray value of the window center point q being computed, I_avg represents the gray mean of the pixels in the cross window, τ_q represents the adaptive linear threshold for the neighborhood points of the current pixel point, Δ represents the mean RGB difference of an arbitrary pixel point in the cross window, i represents the i-th of the three RGB channels, I_i(p) represents the i-th channel value of an arbitrary pixel point in the cross window, and I_i(q) represents the i-th channel value of the window center point q;
S46, at each scale, taking any pixel point in the first sub-image or the third sub-image as a point to be matched, calculating the Hamming distance between its bit string and the bit strings of all candidate points in the corresponding second sub-image or fourth sub-image as the first matching cost, thereby obtaining a cost volume at each scale;
the calculation expression of the first matching cost in step S46 is as follows:
C(x′,y′,d)=Hamming(Str(x′,y′),Str(x′-d,y′))
wherein C(x′, y′, d) represents the matching cost between the point to be matched and the candidate point, x′ and y′ represent the lateral and longitudinal positions of the pixel point in each sub-image respectively, Hamming(·) represents computing the matching cost as the Hamming distance between two bit strings, Str(x′, y′) represents the bit string of the point to be matched in the first sub-image or the third sub-image, and Str(x′-d, y′) represents the bit string of the candidate point at disparity d from the point to be matched, taken from the second sub-image or the fourth sub-image corresponding respectively to the first sub-image or the third sub-image;
S47, obtaining the optimal matching point and disparity value of the multi-scale image based on the cost volume at each scale;
the step S47 includes the steps of:
S471, based on the cost volume at each scale, performing cost aggregation on each scale image using a box-filter kernel method, to obtain the second matching cost of each pixel point of each scale image;
S472, based on the principle that a smaller second matching cost implies a greater similarity, computing the optimal matching point and disparity value of each pixel point at each scale with the winner-take-all (WTA) algorithm;
S473, based on the optimal matching point and disparity value of each pixel point at each scale, aggregating the costs layer by layer from the coarsest scale using a Tikhonov regularization matrix until the cost values and disparity values at the finest scale, namely the preset layer-0 first sub-image, second sub-image, third sub-image and fourth sub-image, are reached, thereby obtaining the optimal matching point and disparity value of the multi-scale image;
S48, obtaining three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail based on the optimal matching points and the disparity values of the multi-scale image.
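In step S48 the disparity values are converted into three-dimensional surface coordinates by standard rectified binocular triangulation (depth Z = f·b/d); the sketch below assumes rectified images, with focal length f, baseline b and principal point (cx, cy) taken from the calibration of the binocular camera, details the patent does not spell out:

```python
import numpy as np

def disparity_to_points(disparity, f, b, cx, cy):
    """Rectified binocular triangulation: depth Z = f * b / d, then
    X = (x - cx) * Z / f and Y = (y - cy) * Z / f for every pixel with a
    positive disparity.  f (focal length in pixels), b (baseline) and
    (cx, cy) (principal point) come from camera calibration."""
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    valid = disparity > 0
    z = np.zeros((h, w), dtype=np.float64)
    z[valid] = f * b / disparity[valid]
    x3 = (xs - cx) * z / f
    y3 = (ys - cy) * z / f
    # Stack the rail-surface points as an N x 3 array of (X, Y, Z).
    return np.stack([x3[valid], y3[valid], z[valid]], axis=1)
```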
The detection method of the portable high-speed turnout detection trolley based on binocular identification is the detection method corresponding to the portable high-speed turnout detection trolley based on binocular identification, and is used for detecting and acquiring three-dimensional surface data of high-speed turnout steel rails; the portable high-speed turnout detection trolley based on binocular identification provided in this embodiment can execute the technical solution shown in the above method embodiment, and its implementation principle and beneficial effects are similar and are not repeated here.

Claims (6)

1. The detection method of the portable high-speed turnout detection trolley based on binocular identification is characterized in that the portable high-speed turnout detection trolley comprises pulleys (1) which are used for supporting a first base (2) and a second base (3) and advancing on a high-speed rail;
a first mount (2) for carrying a mount cross beam (4), a first 3D camera module (7) and a first encoder (9);
a second mount (3) for carrying a mount cross beam (4), a second 3D camera module (8) and a second encoder (10);
the base cross beam (4) is fixedly connected with the hand push rod (5) and is used for traveling steadily under the applied force, keeping the first base (2) and the second base (3) advancing synchronously, and carrying the lithium battery power supply module (6);
the hand push rod (5) is used for driving the base cross beam (4), the first base (2), the second base (3) and the pulley (1) to advance under the applied force;
a lithium battery power supply module (6) for supplying power to the first 3D camera module (7), the second 3D camera module (8), the first encoder (9), the second encoder (10) and the image information processor (11);
the first 3D camera module (7) is used for photographing the first steel rail on the first base (2) side from different angles to obtain a plurality of images of the first steel rail;
the second 3D camera module (8) is used for photographing the second steel rail on the second base (3) side from different angles to obtain a plurality of images of the second steel rail;
the first 3D camera module (7) and the second 3D camera module (8) are binocular cameras comprising two cameras;
the first encoder (9) is used for acquiring and encoding the images of the first steel rails to obtain a plurality of encoded first steel rail images;
the second encoder (10) is used for acquiring and encoding the images of the second steel rails to obtain a plurality of encoded images of the second steel rails;
the image information processor (11) is used for respectively acquiring and processing the encoded first steel rail image and the encoded second steel rail image to obtain three-dimensional data of the first steel rail surface and three-dimensional data of the second steel rail surface;
the detection method comprises the following steps:
s1, a lithium battery power supply module (6) is started to supply power to a first 3D camera module (7), a second 3D camera module (8), a first encoder (9), a second encoder (10) and an image information processor (11);
s2, pushing a push rod (5) to drive a base cross beam (4), a first base (2), a second base (3) and a pulley (1) to stably travel, and continuously shooting a first steel rail and a second steel rail by using a first 3D camera module (7) and a second 3D camera module (8) respectively to obtain a plurality of images of the first steel rail and a plurality of images of the second steel rail;
s3, encoding and processing the images of the first steel rails and the images of the second steel rails by using a first encoder (9) and a second encoder (10) respectively to obtain a plurality of encoded first steel rail images and second steel rail images;
s4, respectively acquiring and processing the image of each encoded first steel rail and the image of each encoded second steel rail by using an image information processor (11) to obtain three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail;
the step S4 includes the steps of:
s41, respectively acquiring a plurality of encoded first steel rail images and encoded second steel rail images by using an image information processor (11);
S42, based on the encoded first steel rail images and second steel rail images, forming a steel rail sub-image group from the images captured at the same moment from different camera angles by the first 3D camera module (7) and the second 3D camera module (8), and taking the mean of each pixel's color components as the gray value of that pixel, wherein the steel rail sub-image group comprises a first sub-image and a second sub-image of the first steel rail image captured by the first 3D camera module (7) at that moment, and a third sub-image and a fourth sub-image of the second steel rail image captured by the second 3D camera module (8);
s43, preprocessing gray values of all pixel points in the steel rail sub-image group based on a frequency domain to obtain a preprocessed steel rail sub-image group;
S44, based on the Gaussian pyramid principle, taking each sub-image in the preprocessed steel rail sub-image group as layer 0 and constructing a multi-scale image with a sampling factor of 2 and a 5×5 Gaussian kernel template;
S45, based on the multi-scale image, establishing for each scale image a fixed cross window centered on the pixel being computed, and performing median filtering to obtain the bit string of each scale image;
S46, at each scale, taking any pixel point in the first sub-image or the third sub-image as a point to be matched, calculating the Hamming distance between its bit string and the bit strings of all candidate points in the corresponding second sub-image or fourth sub-image as the first matching cost, thereby obtaining a cost volume at each scale;
S47, obtaining the optimal matching point and disparity value of the multi-scale image based on the cost volume at each scale;
S48, obtaining three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail based on the optimal matching points and the disparity values of the multi-scale image.
2. The method for detecting a portable high-speed switch detecting trolley based on binocular recognition according to claim 1, wherein the step S43 comprises the steps of:
S431, in the frequency domain, expressing each image in the steel rail sub-image group as the product of its high-frequency component and low-frequency component and taking the logarithm, to obtain the steel rail sub-image group after the high-frequency/low-frequency product has been processed;
S432, successively applying the Fourier transform, high-frequency filtering, the inverse Fourier transform and exponentiation to the steel rail sub-image group obtained in S431, to obtain the preprocessed steel rail sub-image group.
3. The method for detecting a portable high-speed switch detection trolley based on binocular recognition according to claim 2, wherein the calculation expression of the multi-scale image in the step S44 is as follows:
I_n(x, y) = G_5×5(s, t) ⊗ I_(n-1)(x+s, y+t)
wherein I_(n-1) represents each sub-image at the (n-1)-th scale, I_n represents each sub-image at the n-th scale, x represents the lateral position of the window center point, y represents the longitudinal position of the window center point, s represents the lateral position of the corresponding entry in the Gaussian kernel, t represents the corresponding longitudinal position in the Gaussian kernel, ⊗ represents the tensor product, and G_5×5(s, t) represents the window of the 5×5 Gaussian kernel.
4. The method for detecting a portable high-speed switch detecting trolley based on binocular recognition according to claim 3, wherein the step S45 comprises the steps of:
S451, based on the multi-scale image, establishing for each scale image a fixed cross window centered on the pixel being computed;
S452, taking the mean difference between the RGB values of any pixel point in the cross window and the RGB values of the median pixel as the first binary output, and taking the comparison between the difference of that pixel's gray value from the gray mean of the pixels in the cross window and the adaptive linear threshold as the second binary output, performing a dual-sequence Census transformation to obtain the bit string ζ(I(p), I(q)) of each scale image, wherein I(p) represents the gray value of an arbitrary pixel point p within the cross window, I(q) represents the gray value of the window center point q being computed, I_avg represents the gray mean of the pixels in the cross window, τ_q represents the adaptive linear threshold for the neighborhood points of the current pixel point, Δ represents the mean RGB difference of an arbitrary pixel point in the cross window, i represents the i-th of the three RGB channels, I_i(p) represents the i-th channel value of an arbitrary pixel point in the cross window, and I_i(q) represents the i-th channel value of the window center point q.
5. The method for detecting a portable high-speed switch detecting trolley based on binocular recognition according to claim 4, wherein the calculation expression of the first matching cost in the step S46 is as follows:
C(x′,y′,d)=Hamming(Str(x′,y′),Str(x′-d,y′))
wherein C(x′, y′, d) represents the matching cost between the point to be matched and the candidate point, x′ and y′ represent the lateral and longitudinal positions of the pixel point in each sub-image respectively, Hamming(·) represents computing the matching cost as the Hamming distance between two bit strings, Str(x′, y′) represents the bit string of the point to be matched in the first sub-image or the third sub-image, and Str(x′-d, y′) represents the bit string of the candidate point at disparity d from the point to be matched, taken from the second sub-image or the fourth sub-image corresponding respectively to the first sub-image or the third sub-image.
6. The method for detecting a portable high-speed switch detecting trolley based on binocular recognition according to claim 5, wherein the step S47 comprises the steps of:
S471, based on the cost volume at each scale, performing cost aggregation on each scale image using a box-filter kernel method, to obtain the second matching cost of each pixel point of each scale image;
S472, based on the principle that a smaller second matching cost implies a greater similarity, computing the optimal matching point and disparity value of each pixel point at each scale with the winner-take-all (WTA) algorithm;
S473, based on the optimal matching point and disparity value of each pixel point at each scale, aggregating the costs layer by layer from the coarsest scale using a Tikhonov regularization matrix until the cost values and disparity values at the finest scale, namely the preset layer-0 first sub-image, second sub-image, third sub-image and fourth sub-image, are reached, thereby obtaining the optimal matching point and disparity value of the multi-scale image.
CN202210682514.4A 2022-06-16 2022-06-16 Portable high-speed turnout detection trolley based on binocular identification and detection method Active CN114987564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210682514.4A CN114987564B (en) 2022-06-16 2022-06-16 Portable high-speed turnout detection trolley based on binocular identification and detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210682514.4A CN114987564B (en) 2022-06-16 2022-06-16 Portable high-speed turnout detection trolley based on binocular identification and detection method

Publications (2)

Publication Number Publication Date
CN114987564A (en) 2022-09-02
CN114987564B true CN114987564B (en) 2023-10-20

Family

ID=83034121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210682514.4A Active CN114987564B (en) 2022-06-16 2022-06-16 Portable high-speed turnout detection trolley based on binocular identification and detection method

Country Status (1)

Country Link
CN (1) CN114987564B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116304954B (en) * 2023-05-08 2023-07-28 西南交通大学 Mileage alignment method and system for high-frequency sampling data of high-speed railway dynamic inspection vehicle
CN117232435B (en) * 2023-11-14 2024-01-30 北京科技大学 Device and method for measuring abrasion value and reduction value of switch tongue

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105526882A (en) * 2015-12-28 2016-04-27 西南交通大学 Turnout wear detection system and detection method based on structured light measurement
CN105891217A (en) * 2016-04-27 2016-08-24 重庆大学 System and method for detecting surface defects of steel rails based on intelligent trolley
CN110293993A (en) * 2019-08-09 2019-10-01 大连维德集成电路有限公司 A kind of track switch detection device and system
CN112172862A (en) * 2020-09-04 2021-01-05 天津津航技术物理研究所 Multifunctional track detection system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105526882A (en) * 2015-12-28 2016-04-27 西南交通大学 Turnout wear detection system and detection method based on structured light measurement
CN105891217A (en) * 2016-04-27 2016-08-24 重庆大学 System and method for detecting surface defects of steel rails based on intelligent trolley
CN110293993A (en) * 2019-08-09 2019-10-01 大连维德集成电路有限公司 A kind of track switch detection device and system
CN112172862A (en) * 2020-09-04 2021-01-05 天津津航技术物理研究所 Multifunctional track detection system

Also Published As

Publication number Publication date
CN114987564A (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN114987564B (en) Portable high-speed turnout detection trolley based on binocular identification and detection method
Wang et al. Automatic laser profile recognition and fast tracking for structured light measurement using deep learning and template matching
CN107967695B (en) A kind of moving target detecting method based on depth light stream and morphological method
CN101216885A (en) Passerby face detection and tracing algorithm based on video
CN111353395A (en) Face changing video detection method based on long-term and short-term memory network
CN110458903B (en) Image processing method of coding pulse sequence
Aydin A new approach based on firefly algorithm for vision-based railway overhead inspection system
CN108288047A (en) A kind of pedestrian/vehicle checking method
CN108921076B (en) Pavement crack disease self-adaptive constant false alarm detection method based on image
CN108200432A (en) A kind of target following technology based on video compress domain
CN104517095A (en) Head division method based on depth image
CN109902565A (en) The Human bodys' response method of multiple features fusion
CN108681689A (en) Based on the frame per second enhancing gait recognition method and device for generating confrontation network
CN115131760A (en) Lightweight vehicle tracking method based on improved feature matching strategy
CN104778670A (en) Fractal-wavelet self-adaption image denoising method based on multivariate statistical model
CN106777159A (en) A kind of video clip retrieval and localization method based on content
CN110598540B (en) Method and system for extracting gait contour map in monitoring video
Liu et al. SETR-YOLOv5n: A Lightweight Low-Light Lane Curvature Detection Method Based on Fractional-Order Fusion Model
CN104063682A (en) Pedestrian detection method based on edge grading and CENTRIST characteristic
Daramola et al. Automatic Ear Recognition System using Back Propagation Neural Network.
CN112907597A (en) Railway track line detection method based on deep convolutional neural network
CN104240269A (en) Video target tracking method based on spatial constraint coding
CN113838102B (en) Optical flow determining method and system based on anisotropic dense convolution
Kuang et al. An effective skeleton extraction method based on Kinect depth image
Piniarski et al. Multi-branch classifiers for pedestrian detection from infrared night and day images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant