CN114987564A - Portable high-speed turnout detection trolley based on binocular recognition and detection method - Google Patents

Portable high-speed turnout detection trolley based on binocular recognition and detection method

Info

Publication number
CN114987564A
CN114987564A
Authority
CN
China
Prior art keywords
image
steel rail
sub
scale
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210682514.4A
Other languages
Chinese (zh)
Other versions
CN114987564B (en)
Inventor
钱瑶
王平
徐井芒
张傲南
陈嵘
马前涛
乐明静
方嘉晟
王凯
罗燕
袁钰雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202210682514.4A priority Critical patent/CN114987564B/en
Publication of CN114987564A publication Critical patent/CN114987564A/en
Application granted granted Critical
Publication of CN114987564B publication Critical patent/CN114987564B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B61 RAILWAYS
    • B61D BODY DETAILS OR KINDS OF RAILWAY VEHICLES
    • B61D15/00 Other railway vehicles, e.g. scaffold cars; Adaptations of vehicles for use on railways
    • B61D15/08 Railway inspection trolleys
    • B61D15/10 Railway inspection trolleys hand or foot propelled
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B61 RAILWAYS
    • B61K AUXILIARY EQUIPMENT SPECIALLY ADAPTED FOR RAILWAYS, NOT OTHERWISE PROVIDED FOR
    • B61K9/00 Railway vehicle profile gauges; Detecting or indicating overheating of components; Apparatus on locomotives or cars to indicate bad track sections; General design of track recording vehicles
    • B61K9/08 Measuring installations for surveying permanent way

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a portable high-speed turnout detection trolley based on binocular recognition and a corresponding detection method, belonging to the technical field of rail transit detection. The trolley comprises a pulley, a first base, a second base, a base beam, a hand push rod, a lithium battery power supply module, a first 3D camera module, a second 3D camera module, a first encoder, a second encoder and an image information processor. The method is the detection method corresponding to the trolley and is used for acquiring three-dimensional data of the surface of a high-speed turnout steel rail. The invention solves the problem that existing track detection trolleys cannot adequately inspect the turnout area: through continuous movement of the portable trolley, the profile and geometric parameters of the variable-section steel rail in the turnout area are detected, while other information such as the steel rail profile and the steel rail light band is acquired at the same time.

Description

Portable high-speed turnout detection trolley based on binocular recognition and detection method
Technical Field
The invention belongs to the technical field of rail transit detection, and particularly relates to a portable high-speed turnout detection trolley based on binocular recognition and a detection method.
Background
Precise measurement of turnout geometric parameters in China usually relies on a track geometric state measuring instrument; however, the model it uses is a plain-track measurement model, so the characteristics of turnouts are not fully considered.
Conventional turnout geometric parameter detection generally relies on a static track geometry detection trolley, but such a trolley cannot measure the geometric parameters of the variable-section steel rail in the turnout area; moreover, existing track detection trolleys measure only track geometry and cannot capture other information such as the steel rail profile and the steel rail light band. A portable high-speed turnout detection trolley that can measure track geometric parameters together with the steel rail profile, the steel rail light band and other information is therefore urgently needed.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides the portable high-speed turnout detection trolley based on binocular recognition and the detection method thereof, and solves the problem that the conventional track detection trolley is insufficient in detection of a turnout area.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
the invention provides a portable high-speed turnout detection trolley based on binocular recognition, which comprises:
a pulley for supporting the first base and the second base and for traveling on the high-speed rail;
the first base is used for bearing a base beam, a first 3D camera module and a first encoder;
the second base is used for bearing a base beam, a second 3D camera module and a second encoder;
the base beam is used for advancing smoothly under the applied force after being fixedly connected with the hand push rod, keeping the first base and the second base advancing synchronously, and carrying the lithium battery power supply module;
the hand push rod is used for driving the base beam, the first base, the second base and the pulley to advance under the applied force;
the lithium battery power supply module is used for supplying power to the first 3D camera module, the second 3D camera module, the first encoder, the second encoder and the image information processor;
the first 3D camera module is used for photographing the first steel rail on the side near the first base from different angles to obtain a plurality of images of the first steel rail;
the second 3D camera module is used for photographing the second steel rail on the side near the second base from different angles to obtain a plurality of images of the second steel rail;
the first encoder is used for acquiring and encoding images of the first steel rails to obtain a plurality of encoded first steel rail images;
the second encoder is used for acquiring and encoding the images of the second steel rails to obtain a plurality of encoded images of the second steel rails;
and the image information processor is used for respectively acquiring and processing the coded first steel rail image and the coded second steel rail image to obtain three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail.
The beneficial effects of the invention are as follows: the invention provides a portable high-speed turnout detection trolley based on binocular recognition. A common turnout detection trolley is built from the pulley, the first base, the second base, the base beam and the hand push rod; on this basis, a lithium battery power supply module, a first 3D camera module, a first encoder, a second 3D camera module, a second encoder and an image information processor are added. The steel rail images acquired by the 3D camera modules are encoded by the encoders, and the encoded steel rail images are processed by the image information processor, finally obtaining three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail. Continuous movement of the portable trolley is thereby realized, the variable-section steel rail profile and the geometric parameters of the turnout area are detected, and the steel rail profile, the steel rail light band and other information are acquired at the same time.
Further, the first 3D camera module and the second 3D camera module are each a binocular camera comprising two cameras.
The beneficial effect of adopting the further scheme is as follows: the determination of the optimal matching point is realized through binocular recognition of a binocular camera, and a foundation is provided for obtaining three-dimensional data of the surface of the steel rail through processing the steel rail image acquired by the camera.
The invention also provides a binocular recognition-based detection method of the portable high-speed turnout detection trolley, which comprises the following steps of:
s1, supplying power to the first 3D camera module, the second 3D camera module, the first encoder, the second encoder and the image information processor by starting the lithium battery power supply module;
s2, pushing the hand push rod to drive the base beam, the first base, the second base and the pulley to advance smoothly, and continuously photographing the first steel rail and the second steel rail with the first 3D camera module and the second 3D camera module respectively to obtain a plurality of images of the first steel rail and a plurality of images of the second steel rail;
s3, respectively coding and processing the images of the first steel rails and the images of the second steel rails by using a first coder and a second coder to obtain a plurality of coded first steel rail images and second steel rail images;
and S4, respectively acquiring and processing the coded image of the first steel rail and the coded image of the second steel rail by using an image information processor to obtain three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail.
The invention has the beneficial effects that: the invention provides a binocular-recognition-based detection method of a portable high-speed turnout detection trolley, which is a detection method corresponding to the binocular-recognition-based portable high-speed turnout detection trolley and is used for detecting and acquiring surface three-dimensional data of a high-speed turnout steel rail.
Further, the step S4 includes the following steps:
s41, respectively acquiring a plurality of coded first steel rail images and second steel rail images by using an image information processor;
s42, based on the first steel rail image and the second steel rail image after being coded, taking the mean value of pixel point components acquired by the first 3D camera module and the second 3D camera module respectively at the same time from steel rail sub-image groups shot from different camera viewing angles as the gray value of the pixel point, wherein the steel rail sub-image group comprises a first sub-image and a second sub-image of the first steel rail image shot by the first 3D camera module and a third sub-image and a fourth sub-image of the second steel rail image shot by the second 3D camera module at the same time;
s43, respectively preprocessing the gray value of each pixel point in the steel rail sub-image group based on the frequency domain to obtain a preprocessed steel rail sub-image group;
s44, respectively taking each sub-image in the preprocessed steel rail sub-image group as layer 0 and, based on the Gaussian pyramid principle, constructing a multi-scale image with 2 as the sampling factor and a 5×5 Gaussian kernel template;
s45, based on the multi-scale images, establishing for each scale image a fixed cross window centred on the pixel point to be calculated and performing median filtering to obtain the bit string of each scale image;
s46, taking any pixel point in the first sub-image or the third sub-image at each scale as the point to be matched, and calculating the Hamming distance between its bit string and the bit strings of all candidate points in the corresponding second sub-image or fourth sub-image as the first matching cost, obtaining the cost volume at each scale;
s47, obtaining the optimal matching point and the parallax value of the multi-scale image based on the cost volume at each scale;
and S48, obtaining three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail based on the optimal matching point and the parallax value of the multi-scale image.
The beneficial effect of adopting the further scheme is as follows: the image of the first steel rail and the image of the second steel rail after each code are obtained and processed through the image information processor respectively, three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail are obtained, continuous movement of the portable trolley is achieved, the variable cross-section steel rail profile and the geometric parameters of the turnout area are detected, and meanwhile, other information such as the steel rail profile and the steel rail light band are acquired.
Further, the step S43 includes the following steps:
S431, taking the logarithm of each image in the steel rail sub-image group, modelled in the frequency domain as the product of its high-frequency component and low-frequency component, to obtain the log-transformed steel rail sub-image group;
and S432, sequentially applying Fourier transform, high-frequency filtering, inverse Fourier transform and exponentiation to the log-transformed steel rail sub-image group to obtain the preprocessed steel rail sub-image group.
further, the computational expression of the multi-scale image in step S44 is as follows:
I_n(x, y) = Σ_{s=-2..2} Σ_{t=-2..2} G_{5×5}(s, t) · I_{n-1}(2x + s, 2y + t), i.e. I_n = down_2(G_{5×5} ⊗ I_{n-1})
wherein I_{n-1}(·) denotes each sub-image at the (n-1)th scale, I_n(·) denotes each sub-image at the nth scale, x denotes the lateral position of the window centre point, y denotes the longitudinal position of the window centre point, s denotes the lateral position of the corresponding point in the Gaussian kernel, t denotes the longitudinal position of the corresponding point in the Gaussian kernel, ⊗ denotes the tensor product, and G_{5×5}(s, t) denotes the 5×5 Gaussian kernel window.
The beneficial effect of adopting the further scheme is as follows: the preprocessed steel rail sub-image group is obtained by respectively preprocessing the gray value of each pixel point in the steel rail sub-image group, and a basis is provided for constructing a multi-scale image.
Further, the step S45 includes the following steps:
s451, based on the multi-scale images, respectively establishing a fixed cross window by the calculated pixel point centers for each scale image;
s452, taking the mean difference between the RGB value of any pixel point in the cross window and the RGB value of the median pixel as a first binary output, and taking the difference between the gray value of any pixel point and the mean gray value of the pixel in the cross window and a self-adaptive linear threshold value as a second binary output to carry out double-sequence Census transformation to obtain a bit string of each scale image:
ζ(I(p), I(q)) = ξ₁(p) & ξ₂(p)
ξ₁(p) = 1 if Δ < 0, 0 otherwise, with Δ = (1/3) Σ_{i=1..3} (I_i(p) − I_i(q)); ξ₂(p) = 1 if |I(p) − I_avg| > τ_q, 0 otherwise
wherein ζ(I(p), I(q)) denotes the bit string of each scale image, I(p) denotes the gray value of any pixel point p in the cross window, & denotes bit-string concatenation, I(q) denotes the gray value of the calculated window centre point q, I_avg denotes the mean gray value of the pixels in the cross window, τ_q denotes the linear threshold of the neighbourhood points of the current pixel point, Δ denotes the mean RGB difference between any pixel point in the cross window and the centre point, i denotes the i-th of the three RGB channels, I_i(p) denotes the i-th channel value of any pixel point in the cross window, and I_i(q) denotes the i-th channel value of the calculated window centre point q.
The beneficial effect of adopting the further scheme is as follows: and establishing a fixed cross window for median filtering by using the calculated pixel point centers of the images of all scales respectively to obtain bit strings of the images of all scales, and providing a basis for calculating the cost tickets under all scales and obtaining the optimal matching points and the parallax values of the images of multiple scales.
Further, the calculation expression of the cost volume in the step S46 is as follows:
C(x′,y′,d)=Hamming(Str(x′,y′),Str(x′-d,y′))
wherein C(x′, y′, d) denotes the matching cost between the point to be matched and the candidate point, x′ and y′ respectively denote the lateral and longitudinal positions of the pixel point in each sub-image, Hamming(·) denotes calculating the matching cost as a Hamming distance, Str(x′, y′) denotes the bit string of the point to be matched in the first sub-image or the third sub-image, and Str(x′ − d, y′) denotes the bit string of the candidate point at disparity d from the point to be matched in the corresponding second sub-image or fourth sub-image.
The beneficial effect of adopting the further scheme is as follows: and calculating the matching cost by calculating the Hamming distance of the bit strings of the point to be matched and all possible candidate distance points through calculating the images respectively shot by the binocular camera under each scale, thereby obtaining the cost volume under each scale.
Further, the step S47 includes the following steps:
s471, based on the cost volumes at each scale, respectively performing cost aggregation on the images at each scale by a box filtering kernel method to obtain the second matching cost of each pixel point in the image at each scale;
s472, based on the principle that the smaller the second matching cost is, the larger the similarity is, calculating according to a WTA algorithm to obtain the optimal matching point and the parallax value of each pixel point under each scale;
and S473, based on the optimal matching point and the parallax value of each pixel point under each scale, performing layer-by-layer cost aggregation from the layer with the coarsest scale by using a Tikhonov regularization matrix until the cost value and the parallax value of the finest scale of the first sub-image, the second sub-image, the third sub-image and the fourth sub-image of the preset 0 layer are reached, and obtaining the optimal matching point and the parallax value of the multi-scale image.
The beneficial effect of adopting the above further scheme is that: and performing cost aggregation on the images under each scale by using a box type filtering kernel method, adding interlayer correlation constraint on the basis of the correlation of the images in the layers, optimizing a cost result, reducing the mismatching rate, and obtaining the final optimal matching point and the parallax value under the multi-scale images so as to obtain the three-dimensional data of the surface of the steel rail.
Drawings
Fig. 1 is a front view of a portable high-speed turnout detection trolley based on binocular recognition in the embodiment of the invention.
Fig. 2 is a rear view of the portable high-speed turnout detection trolley based on binocular recognition in the embodiment of the invention.
Fig. 3 is a flow chart illustrating steps of a binocular recognition-based portable high-speed turnout detection trolley detection method in the embodiment of the invention.
Wherein: 1. a pulley; 2. a first base; 3. a second base; 4. a base beam; 5. a hand push rod; 6. a lithium battery power supply module; 7. a first 3D camera module; 8. a second 3D camera module; 9. a first encoder; 10. a second encoder; 11. an image information processor.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the present invention by those skilled in the art, but it should be understood that the present invention is not limited to the scope of the embodiments; to those skilled in the art, various changes will be apparent that do not depart from the spirit and scope of the invention as defined in the appended claims, and all inventions and creations made using the inventive concept are protected.
Census transformation is one of the nonparametric image transformations and can reliably detect local structural features in an image, such as edge and corner features; in essence, the Census transform encodes the gray values of pixels into a binary bit stream that records the magnitude relationship between the gray value of each neighbourhood pixel and that of the centre pixel.
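As a concrete illustration (not taken from the patent text), the classic Census transform described above can be sketched as follows; the 3×3 window size and the NumPy implementation are illustrative choices:

```python
import numpy as np

def census_transform(img, win=3):
    """Classic Census transform: encode each pixel as a bit string that
    records whether each neighbour is darker than the window centre."""
    h, w = img.shape
    r = win // 2
    out = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue  # the centre pixel itself contributes no bit
            # shifted[y, x] holds the neighbour img[y+dy, x+dx] (wrap at borders)
            shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.uint64)
    return out

patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]], dtype=np.uint8)
code = census_transform(patch)
# For the centre pixel (value 50): the four darker neighbours give 1-bits,
# the four brighter neighbours give 0-bits, i.e. the code 0b11110000.
```

Because only the ordering of gray values matters, the resulting bit strings are insensitive to global gain or offset changes between the two camera views, which is why nonparametric transforms of this kind are favoured for stereo matching.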
as shown in fig. 1 and 2, in one embodiment of the invention, the invention provides a portable high-speed turnout detection trolley based on binocular recognition, which comprises:
a pulley 1 for supporting the first base 2 and the second base 3 and traveling on the high-speed rail;
the first base 2 is used for bearing a base beam 4, a first 3D camera module 7 and a first encoder 9;
the second base 3 is used for bearing a base beam 4, a second 3D camera module 8 and a second encoder 10;
the base beam 4 is used for stably advancing according to stress after being fixedly connected with the hand push rod 5, keeping the first base 2 and the second base 3 to synchronously advance, and bearing the lithium battery power supply module 6;
the hand push rod 5 is used for driving the base beam 4, the first base 2, the second base 3 and the pulley 1 to advance under the applied force;
a lithium battery power supply module 6 for supplying power to the first 3D camera module 7, the second 3D camera module 8, the first encoder 9, the second encoder 10, and the image information processor 11;
the first 3D camera module 7 is used for photographing the first steel rail on the side near the first base 2 from different angles to obtain a plurality of images of the first steel rail;
the second 3D camera module 8 is used for photographing the second steel rail on the side near the second base 3 from different angles to obtain a plurality of images of the second steel rail;
the first encoder 9 is used for acquiring and encoding images of the first steel rails to obtain a plurality of encoded first steel rail images;
the second encoder 10 is configured to acquire and encode images of the second steel rails to obtain a plurality of encoded images of the second steel rails;
the image information processor 11 is configured to obtain and process each encoded first steel rail image and second steel rail image, respectively, to obtain three-dimensional data of a first steel rail surface and three-dimensional data of a second steel rail surface;
the first 3D camera module 7 and the second 3D camera module 8 are both binocular cameras including two cameras, respectively.
The invention provides a portable high-speed turnout detection trolley based on binocular recognition. A common turnout detection trolley is built from the pulley, the first base, the second base, the base beam and the hand push rod; on this basis, a lithium battery power supply module, a first 3D camera module, a first encoder, a second 3D camera module, a second encoder and an image information processor are added. The steel rail images acquired by the 3D camera modules are encoded by the encoders, and the encoded steel rail images are processed by the image information processor, finally obtaining three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail. Continuous movement of the portable trolley is thereby realized, the variable-section steel rail profile and the geometric parameters of the turnout area are detected, and the steel rail profile, the steel rail light band and other information are acquired at the same time.
As shown in fig. 3, in another embodiment of the present invention, the present invention provides a method for detecting a portable high-speed turnout detection trolley based on binocular recognition, which comprises the following steps:
s1, supplying power to the first 3D camera module 7, the second 3D camera module 8, the first encoder 9, the second encoder 10 and the image information processor 11 by starting the lithium battery power supply module 6;
s2, pushing the hand push rod 5 to drive the base beam 4, the first base 2, the second base 3 and the pulley 1 to advance smoothly, and continuously photographing the first steel rail and the second steel rail with the first 3D camera module 7 and the second 3D camera module 8 respectively to obtain a plurality of images of the first steel rail and a plurality of images of the second steel rail;
s3, respectively utilizing the first encoder 9 and the second encoder 10 to encode and process the images of the first steel rails and the images of the second steel rails to obtain a plurality of encoded first steel rail images and second steel rail images;
s4, respectively acquiring and processing the coded images of the first steel rail and the coded images of the second steel rail by using the image information processor 11 to obtain three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail;
the step S4 includes the following steps:
s41, respectively acquiring a plurality of coded first steel rail images and second steel rail images by using the image information processor 11;
s42, based on the coded first steel rail image and the coded second steel rail image, taking the mean value of pixel point components acquired by the first 3D camera module 7 and the second 3D camera module 8 respectively at the same time from steel rail sub-image groups shot from different camera viewing angles as the gray value of the pixel point, wherein the steel rail sub-image group comprises a first sub-image and a second sub-image of the first steel rail image shot by the first 3D camera module 7 at the same time, and a third sub-image and a fourth sub-image of the second steel rail image shot by the second 3D camera module 8;
s43, respectively preprocessing the gray value of each pixel point in the steel rail sub-image group based on the frequency domain to obtain a preprocessed steel rail sub-image group;
the step S43 includes the following steps:
s431, taking the logarithm of each image in the steel rail sub-image group, modelled in the frequency domain as the product of its high-frequency component and low-frequency component, to obtain the log-transformed steel rail sub-image group;
s432, sequentially applying Fourier transform, high-frequency filtering, inverse Fourier transform and exponentiation to the log-transformed steel rail sub-image group to obtain the preprocessed steel rail sub-image group;
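Steps S431 and S432 describe a homomorphic filtering pipeline. A minimal sketch follows; the Gaussian high-frequency emphasis filter and the parameter values `d0`, `gamma_l` and `gamma_h` are assumptions of this sketch, since the patent does not specify the filter form:

```python
import numpy as np

def homomorphic_filter(img, d0=30.0, gamma_l=0.5, gamma_h=1.5):
    """Homomorphic preprocessing: log -> FFT -> high-frequency emphasis
    -> inverse FFT -> exp, compressing illumination (low frequencies)
    and boosting reflectance detail (high frequencies)."""
    img = img.astype(np.float64)
    log_img = np.log1p(img)                       # take the logarithm
    spec = np.fft.fftshift(np.fft.fft2(log_img))  # Fourier transform, DC centred
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    d2 = (y - h / 2) ** 2 + (x - w / 2) ** 2      # squared distance from DC
    # Gaussian high-frequency emphasis transfer function (illustrative form):
    # gamma_l at DC, rising towards gamma_h at high frequencies.
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-d2 / (2.0 * d0 ** 2))) + gamma_l
    filtered = np.fft.ifft2(np.fft.ifftshift(spec * H)).real
    return np.expm1(filtered)                     # take the exponent

rail = np.random.default_rng(0).uniform(50.0, 200.0, (64, 64))
out = homomorphic_filter(rail)
```

Because the log turns the illumination-times-reflectance product into a sum, attenuating the low frequencies suppresses uneven lighting on the rail surface before the multi-scale matching stages.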
s44, respectively taking each sub-image in the preprocessed steel rail sub-image group as layer 0 and, based on the Gaussian pyramid principle, constructing a multi-scale image with 2 as the sampling factor and a 5×5 Gaussian kernel template;
the computational expression of the multi-scale image in step S44 is as follows:
I_n(x, y) = Σ_{s=-2..2} Σ_{t=-2..2} G_{5×5}(s, t) · I_{n-1}(2x + s, 2y + t), i.e. I_n = down_2(G_{5×5} ⊗ I_{n-1})
wherein I_{n-1}(·) denotes each sub-image at the (n-1)th scale, I_n(·) denotes each sub-image at the nth scale, x denotes the lateral position of the window centre point, y denotes the longitudinal position of the window centre point, s denotes the lateral position of the corresponding point in the Gaussian kernel, t denotes the longitudinal position of the corresponding point in the Gaussian kernel, ⊗ denotes the tensor product, and G_{5×5}(s, t) denotes the 5×5 Gaussian kernel window;
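The pyramid construction of step S44 can be sketched as follows; the binomial 5×5 kernel (outer product of [1, 4, 6, 4, 1]/16) is a common choice assumed here, as the patent only fixes the kernel size:

```python
import numpy as np

def gaussian_pyramid(img, levels=4):
    """Multi-scale image per step S44: level 0 is the sub-image itself; each
    higher level is smoothed with a 5x5 Gaussian kernel template and then
    downsampled with sampling factor 2."""
    g = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    kernel = np.outer(g, g)                  # 5x5 Gaussian kernel, sums to 1
    pyramid = [img.astype(np.float64)]
    for _ in range(1, levels):
        prev = np.pad(pyramid[-1], 2, mode='edge')  # replicate borders
        h, w = pyramid[-1].shape
        smoothed = np.zeros((h, w))
        for s in range(5):                   # separable loop kept explicit
            for t in range(5):
                smoothed += kernel[s, t] * prev[s:s + h, t:t + w]
        pyramid.append(smoothed[::2, ::2])   # factor-2 downsampling
    return pyramid

pyr = gaussian_pyramid(np.ones((64, 64)), levels=4)
# Shapes halve at each level: (64, 64), (32, 32), (16, 16), (8, 8)
```

Smoothing before decimation keeps the coarser levels alias-free, which is what makes the later coarse-to-fine cost aggregation of step S473 meaningful.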
s45, based on the multi-scale images, establishing for each scale image a fixed cross window centred on the pixel point to be calculated and performing median filtering to obtain the bit string of each scale image;
the step S45 includes the following steps:
s451, based on the multi-scale images, respectively establishing a fixed cross window by the calculated pixel point centers for each scale image;
s452, taking the mean difference between the RGB value of any pixel point in the cross window and the RGB value of the median pixel as a first binary output, and taking the difference between the gray value of any pixel point and the mean gray value of the pixel in the cross window and a self-adaptive linear threshold value as a second binary output to carry out double-sequence Census transformation to obtain a bit string of each scale image:
ζ(I(p), I(q)) = ξ₁(p) & ξ₂(p)
ξ₁(p) = 1 if Δ < 0, 0 otherwise, with Δ = (1/3) Σ_{i=1..3} (I_i(p) − I_i(q)); ξ₂(p) = 1 if |I(p) − I_avg| > τ_q, 0 otherwise
wherein ζ(I(p), I(q)) denotes the bit string of each scale image, I(p) denotes the gray value of any pixel point p in the cross window, & denotes bit-string concatenation, I(q) denotes the gray value of the calculated window centre point q, I_avg denotes the mean gray value of the pixels in the cross window, τ_q denotes the linear threshold of the neighbourhood points of the current pixel point, Δ denotes the mean RGB difference between any pixel point in the cross window and the centre point, i denotes the i-th of the three RGB channels, I_i(p) denotes the i-th channel value of any pixel point in the cross window, and I_i(q) denotes the i-th channel value of the calculated window centre point q;
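One hedged reading of the dual-sequence Census of step S452, operating on a single cross window flattened to a 1-D pixel list: the two piecewise binarisation rules and the fixed threshold value `tau` are assumptions of this sketch, as the patent formula images are not reproduced here.

```python
import numpy as np

def dual_census_bits(window_rgb, window_gray, tau=4.0):
    """Dual-sequence Census over one cross window: bit 1 thresholds the mean
    RGB difference between each pixel and the window centre; bit 2 compares
    the pixel's gray deviation from the window mean against threshold tau."""
    cy = len(window_gray) // 2
    center_rgb = window_rgb[cy]
    gray_mean = window_gray.mean()
    bits = []
    for p_rgb, p_gray in zip(window_rgb, window_gray):
        delta = float((p_rgb - center_rgb).mean())   # mean RGB difference
        bit1 = 1 if delta < 0 else 0
        bit2 = 1 if abs(p_gray - gray_mean) > tau else 0
        bits.extend([bit1, bit2])                    # concatenate both sequences
    return bits

window_rgb = np.array([[10, 10, 10], [20, 20, 20], [30, 30, 30],
                       [40, 40, 40], [50, 50, 50]], dtype=np.float64)
window_gray = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
bits = dual_census_bits(window_rgb, window_gray)
```

Pairing a colour comparison with a thresholded gray comparison makes the bit string less sensitive to single-channel noise than the classic Census, which is the motivation the patent gives for the dual sequence.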
s46, taking any pixel point in the first sub-image or the third sub-image at each scale as the point to be matched, and calculating the Hamming distance between its bit string and the bit strings of all candidate points in the corresponding second sub-image or fourth sub-image as the first matching cost, obtaining the cost volume at each scale;
the calculation expression of the token in step S46 is as follows:
C(x′,y′,d)=Hamming(Str(x′,y′),Str(x′-d,y′))
wherein C(x′, y′, d) denotes the matching cost between the point to be matched and the candidate point, x′ and y′ respectively denote the lateral and longitudinal positions of the pixel point in each sub-image, Hamming(·) denotes calculating the matching cost as a Hamming distance, Str(x′, y′) denotes the bit string of the point to be matched in the first sub-image or the third sub-image, and Str(x′ − d, y′) denotes the bit string of the candidate point at disparity d from the point to be matched in the corresponding second sub-image or fourth sub-image;
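The cost-volume computation of step S46 can be sketched as follows, assuming rectified images (candidates lie on the same row) and that each pixel already carries an integer Census bit string; the `max_disp` search range is an illustrative parameter:

```python
import numpy as np

def cost_volume(census_left, census_right, max_disp):
    """C(x', y', d) = Hamming(Str(x', y'), Str(x' - d, y')): bit-count of
    the XOR between the bit string of the point to be matched (left image)
    and the candidate shifted by disparity d in the right image."""
    h, w = census_left.shape
    # Positions where x' - d falls outside the image keep a sentinel cost.
    volume = np.full((h, w, max_disp), np.iinfo(np.int32).max, dtype=np.int32)
    for d in range(max_disp):
        xor = census_left[:, d:] ^ census_right[:, :w - d]
        # popcount of the XOR gives the Hamming distance
        ham = np.array([bin(v).count('1') for v in xor.ravel()],
                       dtype=np.int32).reshape(xor.shape)
        volume[:, d:, d] = ham
    return volume

left = np.array([[0b1010, 0b0110]], dtype=np.uint64)
right = np.array([[0b1010, 0b0110]], dtype=np.uint64)
cv = cost_volume(left, right, 2)
```

The XOR-and-popcount formulation is what makes Census-based matching cheap: the cost at every disparity is a handful of bitwise operations per pixel.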
s47, obtaining the optimal matching point and the parallax value of the multi-scale image based on the cost volume at each scale;
the step S47 includes the following steps:
s471, based on the cost volume at each scale, respectively performing cost aggregation on the image at each scale by a box filtering kernel method to obtain the second matching cost of each pixel point in the image at each scale;
S472, based on the principle that a smaller second matching cost indicates greater similarity, computing the optimal matching point and the parallax value of each pixel point at each scale according to the WTA (winner-takes-all) algorithm;
S473, based on the optimal matching point and the parallax value of each pixel point at each scale, performing layer-by-layer cost aggregation starting from the coarsest scale layer by using a Tikhonov regularization matrix, until the cost values and parallax values of the finest scale, namely the preset layer 0, of the first sub-image, the second sub-image, the third sub-image and the fourth sub-image are reached, obtaining the optimal matching point and the parallax value of the multi-scale image;
and S48, obtaining three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail based on the optimal matching point and the parallax value of the multi-scale image.
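Steps S471 and S472 above can be illustrated with the following Python sketch. The box-filter radius and the toy cost volume are illustrative assumptions, and the Tikhonov-regularized cross-scale aggregation of step S473 is omitted:

```python
import numpy as np

def box_filter_aggregate(cost_volume, radius=1):
    """Mean-filter (box filter) each disparity slice of an (H, W, D) cost volume."""
    h, w, d = cost_volume.shape
    out = np.empty_like(cost_volume, dtype=np.float64)
    # Edge-replicate padding so border pixels keep a full support window.
    padded = np.pad(cost_volume.astype(np.float64),
                    ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    k = 2 * radius + 1
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean(axis=(0, 1))
    return out

def wta_disparity(aggregated):
    """Winner-takes-all: the disparity with the smallest aggregated cost wins."""
    return np.argmin(aggregated, axis=2)
```

For example, a cost volume whose d = 1 slice is uniformly cheapest yields a disparity map that is 1 everywhere, both before and after box-filter aggregation.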
The invention further provides a detection method for the portable high-speed turnout detection trolley based on binocular recognition, corresponding to the trolley described above, for detecting and acquiring three-dimensional data of the surface of high-speed turnout steel rails. The trolley provided by this embodiment can execute the technical scheme shown in the method embodiment; the implementation principle and the beneficial effects are similar and are not described again here.

Claims (9)

1. A portable high-speed turnout detection trolley based on binocular recognition, characterized by comprising:
a pulley (1) for supporting the first base (2) and the second base (3) and traveling on a high-speed rail;
a first base (2) for carrying the base beam (4), the first 3D camera module (7) and the first encoder (9);
a second base (3) for carrying the base beam (4), the second 3D camera module (8) and the second encoder (10);
the base beam (4), fixedly connected with the hand push rod (5), is used for advancing stably under the applied force, keeping the first base (2) and the second base (3) advancing synchronously, and carrying the lithium battery power supply module (6);
the hand push rod (5) is used for driving the base beam (4), the first base (2), the second base (3) and the pulley (1) forward under the applied force;
the lithium battery power supply module (6) is used for supplying power to the first 3D camera module (7), the second 3D camera module (8), the first encoder (9), the second encoder (10) and the image information processor (11);
the first 3D camera module (7) is used for photographing, from different angles, the first steel rail on the side close to the first base (2), to obtain a plurality of images of the first steel rail;
the second 3D camera module (8) is used for photographing, from different angles, the second steel rail on the side close to the second base (3), to obtain a plurality of images of the second steel rail;
the first encoder (9) is used for acquiring and encoding images of the first steel rails to obtain a plurality of encoded first steel rail images;
the second encoder (10) is used for acquiring and encoding the images of the second steel rails to obtain a plurality of encoded images of the second steel rails;
and the image information processor (11) is used for respectively acquiring and processing the coded first steel rail image and the coded second steel rail image to obtain three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail.
2. The binocular recognition-based portable high-speed turnout detection trolley according to claim 1, wherein the first 3D camera module (7) and the second 3D camera module (8) are each a binocular camera comprising two cameras.
3. A detection method of the portable high-speed turnout detection trolley based on binocular recognition according to claim 1 or 2, characterized by comprising the following steps:
s1, supplying power to the first 3D camera module (7), the second 3D camera module (8), the first encoder (9), the second encoder (10) and the image information processor (11) by starting the lithium battery power supply module (6);
S2, pushing the hand push rod (5) to drive the base beam (4), the first base (2), the second base (3) and the pulley (1) to advance stably, and continuously photographing the first steel rail and the second steel rail with the first 3D camera module (7) and the second 3D camera module (8) respectively, to obtain a plurality of images of the first steel rail and a plurality of images of the second steel rail;
s3, respectively utilizing a first encoder (9) and a second encoder (10) to encode and process the images of the first steel rails and the images of the second steel rails to obtain a plurality of encoded first steel rail images and second steel rail images;
and S4, respectively acquiring and processing the coded image of the first steel rail and the coded image of the second steel rail by using an image information processor (11) to obtain three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail.
4. The detection method of the portable high-speed turnout detection trolley based on binocular recognition according to claim 3, wherein the step S4 comprises the following steps:
s41, respectively acquiring a plurality of coded first steel rail images and second steel rail images by using an image information processor (11);
S42, based on the coded first steel rail image and the coded second steel rail image, taking the mean value of the pixel components acquired at the same moment by the first 3D camera module (7) and the second 3D camera module (8) from the steel rail sub-image group shot from different camera viewing angles as the gray value of each pixel point, wherein the steel rail sub-image group comprises a first sub-image and a second sub-image of the first steel rail image shot by the first 3D camera module (7), and a third sub-image and a fourth sub-image of the second steel rail image shot by the second 3D camera module (8) at the same moment;
s43, respectively preprocessing the gray value of each pixel point in the steel rail sub-image group based on the frequency domain to obtain a preprocessed steel rail sub-image group;
S44, based on the Gaussian pyramid principle, respectively taking each sub-image in the preprocessed steel rail sub-image group as layer 0, and constructing a multi-scale image with 2 as the sampling factor and a 5×5 Gaussian kernel as the template;
S45, based on the multi-scale images, establishing for each scale image a fixed cross window centered on the pixel point under calculation, and performing median filtering to obtain a bit string of each scale image;
S46, taking any pixel point in the first sub-image or the third sub-image at each scale as a point to be matched, and computing the Hamming distance between the bit string of the point to be matched and the bit strings of all candidate points in the second sub-image or the fourth sub-image as the first matching cost, to obtain a cost volume at each scale;
S47, obtaining the optimal matching point and the parallax value of the multi-scale image based on the cost volume at each scale;
and S48, obtaining three-dimensional data of the surface of the first steel rail and three-dimensional data of the surface of the second steel rail based on the optimal matching point and the parallax value of the multi-scale image.
5. The detection method of the portable high-speed turnout detection trolley based on binocular recognition according to claim 4, wherein the step S43 comprises the following steps:
s431, respectively taking logarithm of the product of the high-frequency component and the low-frequency component of the steel rail sub-image group based on the frequency domain to obtain a steel rail sub-image group processed by the product of the high-frequency component and the low-frequency component;
and S432, respectively performing Fourier transform, high-frequency filtering, inverse Fourier transform and exponentiation on the steel rail sub-image group processed by the product of the high-frequency and low-frequency components, to obtain the preprocessed steel rail sub-image group.
6. The detection method of the portable high-speed turnout detection trolley based on binocular recognition according to claim 5, wherein the multi-scale image in step S44 is computed as follows:
I_n(x, y) = Σ_{s=−2}^{2} Σ_{t=−2}^{2} G_{5×5}(s, t) ⊗ I_{n−1}(2x + s, 2y + t)
wherein I_{n−1}(x, y) denotes each sub-image at the (n−1)-th scale, x denotes the transverse position of the window center point, y denotes the longitudinal position of the window center point, I_n(x, y) denotes each sub-image at the n-th scale, s denotes the transverse position of the corresponding point in the Gaussian kernel, t denotes the corresponding longitudinal position in the Gaussian kernel, ⊗ denotes the tensor product, and G_{5×5}(s, t) denotes a window with a 5×5 Gaussian kernel.
7. The detection method of the portable high-speed turnout detection trolley based on binocular recognition according to claim 6, wherein the step S45 comprises the following steps:
S451, based on the multi-scale images, respectively establishing for each scale image a fixed cross window centered on the pixel point under calculation;
S452, taking the mean difference between the RGB values of any pixel point in the cross window and those of the median pixel as the first binary output, and taking the comparison of the difference between the gray value of any pixel point and the mean gray value of the pixels in the cross window against an adaptive linear threshold as the second binary output, and performing double-sequence Census transformation to obtain the bit string of each scale image:
first binary output: s_1 = 0 if Δ ≥ 0, otherwise 1, with Δ = (1/3) Σ_{i=1}^{3} (I_i(p) − I_i(q))
second binary output: s_2 = 0 if I(p) − I_avg ≥ τ_q, otherwise 1; and ζ(I(p), I(q)) = s_1 & s_2
wherein ζ(I(p), I(q)) denotes the bit string of each scale image, I(p) denotes the gray value of any pixel point p in the cross window, & denotes bitwise concatenation of the two binary outputs, I(q) denotes the gray value of the calculated window center point q, I_avg denotes the mean gray value of the pixels in the cross window, τ_q denotes the adaptive linear threshold for the neighborhood points of the current pixel point, Δ denotes the mean RGB difference of any pixel point in the cross window, i denotes the i-th of the three RGB channels, I_i(p) denotes the i-th channel value of any pixel point in the cross window, and I_i(q) denotes the i-th channel value of the calculated window center point q.
8. The detection method of the portable high-speed turnout detection trolley based on binocular recognition according to claim 7, wherein the matching cost in step S46 is calculated as follows:
C(x′,y′,d)=Hamming(Str(x′,y′),Str(x′-d,y′))
wherein C(x′, y′, d) denotes the matching cost between the point to be matched and a candidate point, x′ and y′ respectively denote the transverse and longitudinal positions of the pixel point in each sub-image, Hamming(·) denotes computing the matching cost as a Hamming distance, Str(x′, y′) denotes the bit string of the point to be matched in the first sub-image or the third sub-image, and Str(x′ − d, y′) denotes the bit string of the candidate point, at disparity d from the point to be matched, in the second sub-image or the fourth sub-image respectively.
9. The detection method of the portable high-speed turnout detection trolley based on binocular recognition according to claim 8, wherein the step S47 comprises the following steps:
S471, based on the cost volume at each scale, respectively performing cost aggregation on the image of each scale by using a box-filter kernel method, to obtain a second matching cost of each pixel point in the image of each scale;
S472, based on the principle that a smaller second matching cost indicates greater similarity, computing the optimal matching point and the parallax value of each pixel point at each scale according to the WTA (winner-takes-all) algorithm;
and S473, based on the optimal matching point and the parallax value of each pixel point at each scale, performing layer-by-layer cost aggregation starting from the coarsest scale layer by using a Tikhonov regularization matrix, until the cost values and parallax values of the finest scale, namely the preset layer 0, of the first sub-image, the second sub-image, the third sub-image and the fourth sub-image are reached, obtaining the optimal matching point and the parallax value of the multi-scale image.
CN202210682514.4A 2022-06-16 2022-06-16 Portable high-speed turnout detection trolley based on binocular identification and detection method Active CN114987564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210682514.4A CN114987564B (en) 2022-06-16 2022-06-16 Portable high-speed turnout detection trolley based on binocular identification and detection method

Publications (2)

Publication Number Publication Date
CN114987564A true CN114987564A (en) 2022-09-02
CN114987564B CN114987564B (en) 2023-10-20

Family

ID=83034121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210682514.4A Active CN114987564B (en) 2022-06-16 2022-06-16 Portable high-speed turnout detection trolley based on binocular identification and detection method

Country Status (1)

Country Link
CN (1) CN114987564B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105526882A (en) * 2015-12-28 2016-04-27 西南交通大学 Turnout wear detection system and detection method based on structured light measurement
CN105891217A (en) * 2016-04-27 2016-08-24 重庆大学 System and method for detecting surface defects of steel rails based on intelligent trolley
CN110293993A (en) * 2019-08-09 2019-10-01 大连维德集成电路有限公司 A kind of track switch detection device and system
CN112172862A (en) * 2020-09-04 2021-01-05 天津津航技术物理研究所 Multifunctional track detection system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116304954A (en) * 2023-05-08 2023-06-23 西南交通大学 Mileage alignment method and system for high-frequency sampling data of high-speed railway dynamic inspection vehicle
CN116304954B (en) * 2023-05-08 2023-07-28 西南交通大学 Mileage alignment method and system for high-frequency sampling data of high-speed railway dynamic inspection vehicle
CN117232435A (en) * 2023-11-14 2023-12-15 北京科技大学 Device and method for measuring abrasion value and reduction value of switch tongue
CN117232435B (en) * 2023-11-14 2024-01-30 北京科技大学 Device and method for measuring abrasion value and reduction value of switch tongue

Also Published As

Publication number Publication date
CN114987564B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN114987564A (en) Portable high-speed turnout detection trolley based on binocular recognition and detection method
US11398037B2 (en) Method and apparatus for performing segmentation of an image
CN107967695B (en) A kind of moving target detecting method based on depth light stream and morphological method
US8983178B2 (en) Apparatus and method for performing segment-based disparity decomposition
CN109509192A (en) Merge the semantic segmentation network in Analysis On Multi-scale Features space and semantic space
CN106407903A (en) Multiple dimensioned convolution neural network-based real time human body abnormal behavior identification method
CN110991340B (en) Human body action analysis method based on image compression
CN111582210B (en) Human body behavior recognition method based on quantum neural network
CN108765506A (en) Compression method based on successively network binaryzation
CN110458903B (en) Image processing method of coding pulse sequence
CN114912487B (en) End-to-end remote heart rate detection method based on channel enhanced space-time attention network
US8666144B2 (en) Method and apparatus for determining disparity of texture
CN108200432A (en) A kind of target following technology based on video compress domain
CN109886269A (en) A kind of transit advertising board recognition methods based on attention mechanism
CN115131760A (en) Lightweight vehicle tracking method based on improved feature matching strategy
CN117094999B (en) Cross-scale defect detection method
CN117011342A (en) Attention-enhanced space-time transducer vision single-target tracking method
CN112184731A (en) Multi-view stereo depth estimation method based on antagonism training
Yang et al. Research on real-time detection method of rail corrugation based on improved ShuffleNet V2
CN113327269A (en) Unmarked cervical vertebra movement detection method
CN115482519A (en) Driver behavior identification method and device based on space-time and motion deep learning
CN112200831B (en) Dynamic template-based dense connection twin neural network target tracking method
CN114743257A (en) Method for detecting and identifying image target behaviors
CN114187550A (en) Bow net core part identification method based on improved YOLO V3 network
CN113688747B (en) Method, system, device and storage medium for detecting personnel target in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant