CN113358659A - Camera array type imaging method for automatic detection of high-speed rail box girder crack - Google Patents


Info

Publication number: CN113358659A
Authority
CN
China
Prior art date
Legal status: Granted
Application number: CN202110463929.8A
Original language: Chinese (zh)
Granted publication: CN113358659B
Inventor
朱文发
张文静
柴晓冬
李立明
范国鹏
张辉
Current Assignee: Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Application filed by Shanghai University of Engineering Science
Priority to CN202110463929.8A
Publication of CN113358659A
Application granted; publication of CN113358659B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/01 - Arrangements or apparatus for facilitating the optical investigation
    • G01N 21/84 - Systems specially adapted for particular applications
    • G01N 21/88 - Investigating the presence of flaws or contamination
    • G01N 21/8851 - Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8887 - Scan or image signal processing based on image processing techniques


Abstract

The invention discloses a camera array imaging method for the automatic detection of cracks in high-speed rail box girders, comprising the following steps: S1, a rail inspection trolley is driven into the high-speed rail box girder to be inspected for defects; S2, as the trolley moves, the cameras in the camera light source modules mounted on it photograph the inner wall of the box girder and transmit the images to a computer; S3, using MATLAB software, the received images are passed through a convolutional neural network for defect detection and classification and through a three-dimensional reconstruction network for 3-D reconstruction, and the defects identified by the convolutional neural network are finally fused into the reconstructed three-dimensional images, completing defect detection of the box girder's inner wall. The method automates crack detection for high-speed rail box girders, is fast and efficient, and can identify and detect defects over long distances and in all directions.

Description

Camera array type imaging method for automatic detection of high-speed rail box girder crack
Technical Field
The invention relates to a camera array type imaging method for automatic detection of cracks of a high-speed rail box girder, and belongs to the technical field of rail defect detection.
Background
The box girder is a key component of the elevated bridge and directly bears the train loads transmitted by the high-speed rail. With the rapid development of high-speed railways, the integrity of the box girder has become a focus of maintenance and inspection. Because the box girder is permanently exposed to a complex atmospheric environment and subject to many factors over long periods (train loads, environmental conditions, etc.), various defects develop. Under the alternating action of complex factors such as reciprocating loads (frequent train operation), environmental changes (alternating temperature and humidity) and sudden disasters (earthquakes), tiny fatigue cracks form in the high-speed rail box girder. As cracks grow and accumulate, the service performance of the box girder deteriorates continuously, and under extreme conditions fatigue fracture can occur, so that the stability and smoothness of the high-speed rail can no longer be guaranteed. Since stability and smoothness are essential preconditions for fast, safe operation and bear directly on the normal running of trains and the safety of passengers, defect detection of the high-speed rail box girder is required.
At present, detection of box girder defects relies mainly on manual flaw inspection by high-speed railway bridge and tunnel maintenance workers; if a defect is not found in time, the consequences can be disastrous. Rapid, automatic detection of high-speed rail box girder defects is therefore a core problem in the maintenance and repair of high-speed rail line infrastructure, with important scientific significance, engineering value and market prospects.
Disclosure of Invention
In view of the problems in the prior art, the invention aims to provide a camera array imaging method for the automatic detection of cracks in high-speed rail box girders that is accurate, comprehensive and not easily disturbed by external conditions, so that crack defects in the box girder can be detected efficiently, non-destructively and in real time, providing timely early warning and strong support for the safe operation of the high-speed rail.
In order to achieve the purpose, the invention adopts the following technical scheme:
a camera array type imaging method for automatic detection of cracks of a high-speed rail box girder comprises the following steps:
S1, the rail inspection trolley is driven into the high-speed rail box girder requiring defect detection; a camera support is mounted on the top of the trolley and carries a circular camera mounting part, on which a plurality of camera light source modules are arranged symmetrically left and right along the circumferential direction, each camera light source module comprising a video camera and a circular aperture arranged in a ring around the camera;
s2, along with the movement of the rail inspection trolley, a camera in a camera light source module arranged on the rail inspection trolley shoots the inner wall of the high-speed rail box girder and transmits the shot image to a computer;
and S3, detecting and classifying defects of the received images through a convolutional neural network and performing three-dimensional reconstruction through a three-dimensional reconstruction network by using MATLAB software, and finally fusing the defects distinguished by the convolutional neural network into the three-dimensional images reconstructed by the three-dimensional reconstruction network to realize the defect detection of the inner wall of the high-speed railway box girder.
In one embodiment, step S3 specifically includes the following operations:
s31, detecting and classifying the images of the inner wall of the high-speed rail box girder shot in the step S2 by adopting an SSD target detection algorithm:
s311, adopting a PBS algorithm to enhance the image in MATLAB software:
image enhancement is performed using the relation I = S × R between the input image I, the illumination image S and the reflectance R, under three constraints (color-space consistency, texture consistency and exposure consistency), and the illumination estimate is refined by the following optimization equation:
[Equation (1) appears as an image in the original and is not reproduced here.]
In equation (1), E_d is the term that keeps the illumination S as close as possible to its initial estimate; p denotes a pixel and c ∈ {r, g, b}; E_c, E_t and E_e are the perceptual bidirectional similarity constraints on color, texture and exposure, and λ denotes the corresponding weight;
S312, the processed images are augmented by translation, magnification and free rotation by 45°, then annotated to build the target detection training set;
s313, detecting and classifying the images through an SSD target detection algorithm, realizing defect detection of the inner wall of the high-speed rail box girder, and marking the image position of the defect;
S32, performing three-dimensional reconstruction of the inner-wall images of the high-speed rail box girder taken in step S2 with an image-stitching 3-D reconstruction algorithm:
s321, carrying out gray scale processing on the image by adopting a weighted average method in MATLAB software:
Image(i,j) = aR(i,j) + bG(i,j) + cB(i,j) (2);
in equation (2), a, b and c are the weights of the red, green and blue color components R, G and B;
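A minimal NumPy sketch of the weighted-average graying of Eq. (2); the function name and the default BT.601-style weights are illustrative assumptions, since the patent does not fix the values of a, b and c:

```python
import numpy as np

def to_gray(img, a=0.299, b=0.587, c=0.114):
    """Weighted-average grayscale per Eq. (2): Gray = a*R + b*G + c*B.

    The weights default to the common ITU-R BT.601 luma coefficients,
    an assumption; the patent leaves a, b, c unspecified.
    """
    r, g, bl = img[..., 0], img[..., 1], img[..., 2]
    return a * r + b * g + c * bl

# A 1x2 RGB image: pure red and pure white.
rgb = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=float)
gray = to_gray(rgb)  # red -> 0.299*255, white -> 255 (weights sum to 1)
```

Because the weights sum to 1, a neutral (white or gray) pixel keeps its value, which is the usual sanity check for such a graying step.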
s322, removing the noise of the image by adopting a Shearlet transformation-based method:
For any f in the space L²(R²) of square-integrable functions, if the generating function ψ satisfies the admissibility condition
[Equation (3) appears as an image in the original and is not reproduced here.]
then ψ_{a,h,t} is called a continuous Shearlet, and the continuous Shearlet transform can be expressed as
SH_ψ f(a, h, t) = ⟨f, ψ_{a,h,t}⟩ (4);
in equation (4), a ∈ R⁺, h ∈ R, t ∈ R²; a, h and t are the scale, shear and translation parameters respectively;
the image signal is then denoised by the Shearlet transform, which can be expressed as:
f(t) = s(t) + n(t) (5);
SH_ψ(f) = SH_ψ(s) + SH_ψ(n) (6);
in equation (5), s(t) and n(t) are the signal and the noise respectively;
in equation (6), SH_ψ(s) is the Shearlet transform of the signal and SH_ψ(n) is the Shearlet transform of the noise;
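The additivity in Eq. (6) is simply the linearity of the Shearlet transform. The sketch below illustrates the property with the 2-D FFT standing in for SH_ψ (an assumption: a real implementation would use a shearlet library; the FFT is chosen only because it is another linear transform available in NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal((16, 16))        # clean "signal" image, Eq. (5)
n = 0.1 * rng.standard_normal((16, 16))  # additive noise, Eq. (5)
f = s + n

# Any linear transform T satisfies T(f) = T(s) + T(n), which is the
# property Eq. (6) states for SH_psi. Here T is the 2-D FFT, used
# purely as an illustrative linear transform.
T = np.fft.fft2
lhs = T(f)
rhs = T(s) + T(n)
err = float(np.max(np.abs(lhs - rhs)))   # zero up to rounding
```

Denoising then proceeds by thresholding the transform coefficients, which suppresses SH_ψ(n) while largely preserving SH_ψ(s).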
s323, three-dimensional reconstruction: denoising the box girder picture based on Shearlet transformation, and then performing feature extraction and image fusion:
s3231, dividing the image into an ROI, and extracting and matching features in the ROI by adopting an SIFT algorithm:
The scale space is constructed as follows:
L(x, y, σ) = G(x, y, σ) * I(x, y) (7);
in equation (7), G(x, y, σ) is the Gaussian kernel function:
G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)) (8);
(x, y) are the coordinates of a pixel in the image, I(x, y) is the pixel value at that point, and σ is the scale-space factor;
A Gaussian pyramid is built from the scale function. The first layer of the first octave is the original image; the pyramid has o octaves of s layers each, and the scale ratio between two adjacent layers within an octave is k. On this basis, the difference between the scale-space functions of two adjacent layers in the same octave gives the difference-of-Gaussians (DoG) operator, which is used to detect extrema of points in scale space:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ) (9);
in equation (9), L(x, y, kσ) and L(x, y, σ) are the scale-space functions of two adjacent layers, and G(x, y, kσ) and G(x, y, σ) are their Gaussian functions;
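A minimal NumPy sketch of Eqs. (7)-(9): sample the Gaussian kernel of Eq. (8), smooth the image at scales σ and kσ, and take their difference to get the DoG response. Function names, kernel size and the random test image are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sampled 2-D Gaussian per Eq. (8), normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def blur(img, sigma, size=9):
    """L(x, y, sigma) = G * I of Eq. (7): 'same' 2-D convolution."""
    k = gaussian_kernel(size, sigma)
    h, w = img.shape
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

rng = np.random.default_rng(1)
I = rng.random((12, 12))            # illustrative test image
sigma, k = 1.0, np.sqrt(2)          # adjacent-layer scale ratio k
L1, L2 = blur(I, sigma), blur(I, k * sigma)
dog = L2 - L1                       # D(x, y, sigma) of Eq. (9)
```

The double loop keeps the convolution dependency-free; a production version would use a separable filter or an FFT.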
The extreme point determined by the difference of Gaussians is a point of the discrete space; the continuous extreme point is computed using a Taylor expansion:
D(x) = D + (∂D/∂x)^T x + (1/2) x^T (∂²D/∂x²) x,
which gives the extreme point
x̂ = −(∂²D/∂x²)⁻¹ (∂D/∂x);
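The sub-pixel refinement x̂ = −(∂²D/∂x²)⁻¹ ∂D/∂x can be checked in one dimension, where the derivatives reduce to central differences over three samples around the discrete maximum (a sketch with illustrative values, exact for a quadratic peak):

```python
import numpy as np

# Discrete samples of a 1-D response D with a true peak at x = 2.3;
# the Taylor refinement recovers the sub-sample offset from the three
# integer samples around the discrete maximum.
true_peak = 2.3
x = np.arange(5, dtype=float)
D = -(x - true_peak) ** 2           # quadratic response, max at 2.3

i = int(np.argmax(D))               # discrete extremum (x = 2)
g = (D[i + 1] - D[i - 1]) / 2.0     # dD/dx via central difference
h = D[i + 1] - 2 * D[i] + D[i - 1]  # d2D/dx2 via central difference
offset = -g / h                     # x_hat = -(d2D/dx2)^-1 * dD/dx
refined = i + offset                # continuous extremum location
```

In SIFT proper the same formula is applied in three dimensions (x, y, σ) with a 3×3 Hessian, but the mechanics are identical.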
Then, a main direction is generated: in order to realize that the feature points have rotation invariance, calculating the angles of the feature points, and in order to realize the rotation invariance of the feature points, calculating the angles of the feature points, wherein the directions of the feature points are calculated according to local features in a Gaussian scale image where the feature points are located, a scale space factor sigma is known, the scale is relative to a reference image of a group where the image is located, the local features are gradient amplitude and gradient amplitude of all pixels in a neighborhood region of the feature points, and the neighborhood region is defined by taking the feature points as the center of a circle and the radius of the neighborhood region is 4.5 times of the Gaussian scale;
m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) ) (10);
in equations (10), m(x, y) is the gradient magnitude at the pixel and θ(x, y) is its gradient orientation;
After the feature points are finally determined, the same feature points are sought in the two images for feature matching, expressed with the Euclidean distance D_ssd:
D_ssd = sqrt( Σ_i (A_i − B_i)² ) (11);
in equation (11), A and B are feature points of the two images respectively;
S3232, during feature matching, brute-force matching (BF) takes each feature point of image A and compares it against every feature point of image B, keeping the two points with the smallest D_ssd as the matching result. From the feature points matched by the SIFT algorithm a global homography matrix is obtained as the transformation; the same homography is applied to all regions of the image, and the pictures taken by the cameras of all camera light source modules are stitched together:
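A minimal NumPy sketch of brute-force matching with the D_ssd of Eq. (11): each descriptor of image A is compared against all descriptors of image B and the two nearest are kept, as described above. The descriptors and names here are illustrative toy values, not real SIFT output:

```python
import numpy as np

def bf_match(desc_a, desc_b):
    """Brute-force matching with the Euclidean distance of Eq. (11):
    for each descriptor of A, rank all descriptors of B by D_ssd and
    keep the two nearest (the basis for a ratio test)."""
    matches = []
    for i, a in enumerate(desc_a):
        d = np.sqrt(np.sum((desc_b - a) ** 2, axis=1))  # D_ssd to all of B
        order = np.argsort(d)
        # (index in A, best in B, 2nd best in B, best dist, 2nd dist)
        matches.append((i, int(order[0]), int(order[1]),
                        float(d[order[0]]), float(d[order[1]])))
    return matches

A = np.array([[0.0, 0.0], [1.0, 1.0]])              # toy descriptors, image A
B = np.array([[0.1, 0.0], [1.0, 1.1], [5.0, 5.0]])  # toy descriptors, image B
m = bf_match(A, B)
```

Keeping the two nearest neighbors allows the usual ratio check (reject a match when best/second-best distance is too close to 1) before estimating the homography.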
The rail inspection trolley runs along the middle of the floor of the high-speed rail box girder; at each shooting instant the positions of the trolley, the cameras and the inner surface of the box girder are fixed relative to one another, and the relation from the world coordinate system to the camera coordinate system is:
Xc=RXw+t (12);
in equation (12), R represents the rotation matrix of the camera pose relative to the world coordinate origin, t represents the translation vector of the camera position relative to the world coordinate system, X_c = (x_c, y_c, z_c)^T represents the camera coordinate system and X_w = (x_w, y_w, z_w)^T represents the world coordinate system.
Written in homogeneous coordinates:
[X_c; 1] = [R, t; 0, 1] · [X_w; 1] (13);
the inverse transform of equation (12) is:
X_w = R^T X_c − R^T t (14);
converted to matrix form:
[X_w; 1] = [R^T, −R^T t; 0, 1] · [X_c; 1] (15);
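As a check on Eqs. (12)-(15), a minimal NumPy sketch (pose values are illustrative, not calibrated camera parameters) builds the homogeneous world-to-camera matrix and its closed-form inverse and verifies that they round-trip a point:

```python
import numpy as np

def make_T(R, t):
    """4x4 homogeneous transform [R, t; 0, 1] as in Eq. (13)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative pose: 90-degree rotation about z plus a translation.
c, s = 0.0, 1.0
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)
t = np.array([1.0, 2.0, 3.0])

T = make_T(R, t)                 # X_c = R X_w + t, Eq. (12)/(13)
T_inv = make_T(R.T, -R.T @ t)    # X_w = R^T X_c - R^T t, Eq. (14)/(15)

Xw = np.array([0.5, -1.0, 2.0, 1.0])   # homogeneous world point
Xc = T @ Xw                            # world -> camera
back = T_inv @ Xc                      # camera -> world, must recover Xw
```

The closed-form inverse exploits R being a rotation (R⁻¹ = Rᵀ), which is exactly why Eq. (14) needs no matrix inversion.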
thus, the three-dimensional reconstruction of the inner surface of the high-speed rail box girder is finally realized in the MATLAB, and a three-dimensional image of the inner surface of the high-speed rail box girder is reconstructed;
and S33, fusing the defects distinguished by the S31 convolutional neural network into the three-dimensional image reconstructed by S32, and realizing the defect detection of the inner wall of the high-speed rail box girder.
In a preferred embodiment, in S33 the computer randomly selects a number of fusion points, fuses the defect object maps onto the corresponding fusion points of the three-dimensional image using an image fusion technique (for example Poisson fusion), and generates the three-dimensional defect map with the matched binary defect maps.
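A much-simplified stand-in for this fusion step (true Poisson fusion solves a Poisson equation over the blend region; here the matched binary defect map simply gates a copy of the defect patch into the target image at the chosen fusion point, and all names and values are illustrative):

```python
import numpy as np

def blend(base, patch, mask, top, left):
    """Paste `patch` into `base` where `mask` is set: a naive
    stand-in for the Poisson fusion of S33, placing a detected
    defect patch at the fusion point (top, left)."""
    out = base.copy()
    h, w = patch.shape
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = np.where(mask > 0, patch, region)
    return out

base = np.zeros((6, 6))             # stand-in for the reconstructed image
patch = np.ones((2, 2)) * 9.0       # stand-in defect patch
mask = np.array([[1, 0], [1, 1]])   # matched binary defect map
fused = blend(base, patch, mask, 2, 3)
```

Poisson fusion would instead match gradients across the mask boundary so the pasted region blends seamlessly; the gating logic above only shows where the defect pixels land.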
In one embodiment, the camera is a CCD camera.
In one embodiment, the circular aperture is a white LED lamp.
In one embodiment, the number of the camera light source modules is 8, and the 8 camera light source modules are symmetrically arranged at the left end and the right end of the camera installation part.
According to a preferred scheme, 4 camera light source modules arranged at the left end of a camera installation part are sequentially named as a first left camera light source module, a second left camera light source module, a third left camera light source module and a fourth left camera light source module from top to bottom, and 4 camera light source modules arranged at the right end of the camera installation part are sequentially named as a first right camera light source module, a second right camera light source module, a third right camera light source module and a fourth right camera light source module from top to bottom; wherein: the first left camera light source module is installed at a position 40 degrees away from the vertical direction of the camera installation part, the second left camera light source module is installed at a position 80 degrees away from the vertical direction of the camera installation part, the third left camera light source module is installed at a position 120 degrees away from the vertical direction of the camera installation part, the fourth left camera light source module is installed at a position 160 degrees away from the vertical direction of the camera installation part, and the first right camera light source module, the second right camera light source module, the third right camera light source module and the fourth right camera light source module are respectively and symmetrically arranged with the corresponding first left camera light source module, the second left camera light source module, the third left camera light source module and the fourth left camera light source module.
According to a preferred scheme, the focal length of the video cameras in the first left, second left, third left, first right, second right and third right camera light source modules is 12 mm, and the focal length of the video cameras in the fourth left and fourth right camera light source modules is 6 mm.
According to a preferred scheme, the angle between the emission center direction of the cameras in the first left and first right camera light source modules and the direction perpendicular to the horizontal plane is 85°, that of the second left and second right modules is 105°, that of the third left and third right modules is 135°, and that of the fourth left and fourth right modules is 170°.
In one embodiment, an encoder is further arranged on the rail inspection trolley.
In one embodiment, a distance sensor is further arranged on the rail inspection trolley.
In one embodiment, a mobile power supply is further arranged on the rail inspection trolley.
In one embodiment, the rail inspection trolley is provided with a self-walking power mechanism.
Compared with the prior art, the invention has the beneficial technical effects that:
the camera array type imaging method for the automatic detection of the cracks of the high-speed rail box girder, provided by the invention, can realize the automatic detection of the cracks of the high-speed rail box girder, has high detection speed and high efficiency, can carry out remote and omnibearing defect identification and detection on the high-speed rail box girder, basically has no influence of external environment on a detection result, has high detection precision, greatly improves the detection and maintenance work efficiency, and can provide timely maintenance and powerful support for the safe operation of high-speed rails; therefore, compared with the prior art, the invention has remarkable progress and application value.
Drawings
Fig. 1 is a schematic structural diagram of a camera array apparatus for automatic detection of cracks of a high-speed railway box girder according to an embodiment of the present invention;
fig. 2 is a schematic view of a use state 1 of a camera array device for automatic detection of cracks of a high-speed railway box girder according to an embodiment of the invention;
fig. 3 is a schematic view of a use state 2 of a camera array device for automatic detection of cracks of a high-speed railway box girder according to an embodiment of the invention;
the numbers in the figures are as follows: 1. a rail inspection trolley; 11. a track of the rail inspection trolley; 2. a camera support; 21. a camera mounting section; 3. a camera light source module; 3-CL1, a first left camera light source module; 3-CL2, a second left camera light source module; 3-CL3, a third left camera light source module; 3-CL4, a fourth left camera light source module; 3-CR1, a first right camera light source module; 3-CR2, a second right camera light source module; 3-CR3, third right camera light source module; 3-CR4, fourth right camera light source module; 31. a camera; 32. a circular aperture; 4. an encoder; 5. a distance sensor; 6. and a self-walking power mechanism.
Detailed Description
The technical solution of the present invention will be further clearly and completely described below with reference to the accompanying drawings and examples.
Examples
Please refer to fig. 1 to fig. 3: the camera array device for automatic detection of high-speed rail box girder cracks provided by this embodiment comprises a rail inspection trolley 1; a camera support 2 is mounted on the top of the trolley 1 and carries a circular camera mounting part 21, on which a plurality of camera light source modules 3 are arranged symmetrically left and right along the circumferential direction; each camera light source module 3 comprises a video camera 31 and a circular aperture 32 arranged in a ring around the camera 31. In the invention, as shown in fig. 3, the symmetrical array layout of the camera light source modules 3 allows their combined field of view to cover the entire inner wall of the high-speed rail box girder 7; and as shown in fig. 1, the arrangement of cameras 31 and circular apertures 32 means that cameras and apertures alternate across the whole array, so that the circular apertures 32 not only supply light in the dark but also eliminate shadows, making the captured pictures clearer and more useful. In addition, making the camera mounting part 21 that carries the camera light source modules 3 circular reduces the number of modules required, saving energy, and matches the arched cross-section of the box girder 7, so that even the corners of its inner wall are covered and the photographs are more complete.
In this embodiment, the camera 31 is a CCD camera. The circular aperture 32 is a white LED lamp.
In order to better and more comprehensively acquire the image of the inner wall of the high-speed railway box girder and more accurately detect the defects on the surface of the high-speed railway box girder, in this embodiment, the number of the camera light source modules 3 is 8, and the 8 camera light source modules 3 are symmetrically arranged at the left end and the right end of the camera installation part 21. For convenience of description, in the present invention, the 4 camera light source modules disposed at the left end of the camera mounting part 21 are sequentially named as a first left camera light source module 3-CL1, a second left camera light source module 3-CL2, a third left camera light source module 3-CL3 and a fourth left camera light source module 3-CL4 from top to bottom, and the 4 camera light source modules disposed at the right end of the camera mounting part 21 are sequentially named as a first right camera light source module 3-CR1, a second right camera light source module 3-CR2, a third right camera light source module 3-CR3 and a fourth right camera light source module 3-CR4 from top to bottom; wherein: the first left camera light source module 3-CL1 is mounted on the camera mounting portion 21 at 40 DEG from the vertical direction, the second left camera light source module 3-CL2 is mounted on the camera mounting portion 21 at 80 DEG from the vertical direction, the third left camera light source module 3-CL3 is mounted on the camera mounting portion 21 at 120 DEG from the vertical direction, the fourth left camera light source module 3-CL4 is mounted on the camera mounting portion 21 at 160 DEG from the vertical direction, the first right camera light source module 3-CR1, the second right camera light source module 3-CR2, the third right camera light source module 3-CR3, and the fourth right camera light source module 3-CR4 are symmetrically disposed with respect to the corresponding first left camera light source module 3-CL1, the 
second left camera light source module 3-CL2, the third left camera light source module 3-CL3, and the fourth left camera light source module 3-CL4, respectively; the above values allow a deviation of 5%.
Furthermore, the focal length of the video camera 31 in the first left camera light source module 3-CL1, the second left camera light source module 3-CL2, the third left camera light source module 3-CL3, the first right camera light source module 3-CR1, the second right camera light source module 3-CR2, the third right camera light source module 3-CR3 is 12mm, the field angle of the corresponding video camera 31 is 23 °, the focal length of the video camera 31 in the fourth left camera light source module 3-CL4, and the fourth right camera light source module 3-CR4 is 6mm, and the field angle of the corresponding video camera 31 is 40 °; the above values allow a deviation of 5%.
Further, the emission center directions of the video cameras 31 in the first left camera light source module 3-CL1 and the first right camera light source module 3-CR1 are both at an angle of 85 ° to the vertical horizontal plane direction, the emission center directions of the video cameras 31 in the second left camera light source module 3-CL2 and the second right camera light source module 3-CR2 are both at an angle of 105 ° to the vertical horizontal plane direction, the emission center directions of the video cameras 31 in the third left camera light source module 3-CL3 and the third right camera light source module 3-CR3 are both at an angle of 135 ° to the vertical horizontal plane direction, and the emission center directions of the video cameras 31 in the fourth left camera light source module 3-CL4 and the fourth right camera light source module 3-CR4 are both at an angle of 170 ° to the vertical horizontal plane direction. Thus, when the rail inspection trolley 1 runs, the high-speed rail box girder can be photographed in the whole area so as to detect defects.
In addition, as shown in fig. 1, an encoder 4 is further arranged on the rail inspection trolley 1 to realize real-time positioning of detected defects, clarify defect positions and facilitate maintenance of constructors.
In addition, the rail inspection trolley is further provided with a distance sensor 5, which can measure the distance ahead of the trolley 1.
In addition, a mobile power supply (not shown) is arranged on the rail inspection trolley 1 to realize mobile power supply of the device.
In addition, the rail inspection trolley 1 is provided with a self-walking power mechanism 6 to realize the automatic walking of the rail inspection trolley 1, and the self-walking power mechanism 6 can be realized by adopting the prior art.
The camera array imaging method for automatically detecting the high-speed rail box girder by adopting the camera array device comprises the following steps of:
s1, as shown in the figures 2 and 3, the rail inspection trolley 1 is driven into the high-speed rail box girder 7 needing defect detection;
s2, along with the movement of the rail inspection trolley 1, the camera 31 in the camera light source module 3 arranged on the rail inspection trolley 1 shoots the inner wall of the high-speed rail box girder 7 and transmits the shot image to the computer;
s3, the computer adopts MATLAB software to respectively detect and classify the defects of the received images through a convolutional neural network and carry out three-dimensional reconstruction through a three-dimensional reconstruction network, and finally, the defects distinguished by the convolutional neural network are fused into the three-dimensional images reconstructed by the three-dimensional reconstruction network, so that the defect detection of the inner wall of the high-speed railway box girder is realized, and the method specifically comprises the following operations:
s31, detecting and classifying the images of the inner wall of the high-speed rail box girder shot in the step S2 by adopting an SSD target detection algorithm:
s311, adopting a PBS algorithm to enhance the image in MATLAB software:
image enhancement is performed using the relation I = S × R between the input image I, the illumination image S and the reflectance R, under three constraints (color-space consistency, texture consistency and exposure consistency), and the illumination estimate is refined by the following optimization equation:
Figure BDA0003038036670000091
in the formula (1), EdIs to bring S as close as possible to S,
Figure BDA0003038036670000092
p represents a pixel, c ∈ { r, g, b }; ec,Et,EeIn order to carry out perceptual bidirectional similarity constraint of color, texture and exposure, lambda is a weighted value;
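The relation I = S × R above can be illustrated with a minimal sketch. The full PBS optimization is not reproduced; the per-pixel illumination estimator (max of the r, g, b channels) is a common Retinex-style assumption, not the patent's implementation, and all names are illustrative.

```python
# Illustration of the Retinex relation I = S x R used by the enhancement step.
# Assumption: the initial illumination estimate at a pixel is the maximum of
# its r, g, b channels; reflectance is then R = I / S channel-wise.

def estimate_illumination(pixel):
    """Initial illumination estimate: the max of the r, g, b channels."""
    return max(pixel)

def decompose(image):
    """Split each pixel of an RGB image into illumination S and reflectance R."""
    S, R = [], []
    for row in image:
        s_row, r_row = [], []
        for px in row:
            s = estimate_illumination(px)
            s_row.append(s)
            # reflectance channel-wise: R = I / S (guard against zero)
            r_row.append(tuple(c / s if s else 0.0 for c in px))
        S.append(s_row)
        R.append(r_row)
    return S, R

image = [[(0.2, 0.4, 0.8), (0.1, 0.1, 0.1)]]
S, R = decompose(image)
```

Multiplying S and R back recovers the input image, which is the invariant the three PBS constraints are built on.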
s312, marking the processed image after translation, amplification and 45-degree free rotation to configure a target detection training set; this part belongs to the known technology, and is not described in detail herein;
s313, detecting and classifying the images through an SSD target detection algorithm, realizing defect detection of the inner wall of the high-speed rail box girder, and marking the image position of the defect; this part belongs to the known technology, and is not described in detail herein;
s32, performing three-dimensional reconstruction on the image of the inner wall of the high-speed rail box girder shot in the step S2 by adopting an image splicing 3d reconstruction algorithm:
s321, carrying out gray scale processing on the image by adopting a weighted average method in MATLAB software:
Image(i,j) = aR(i,j) + bG(i,j) + cB(i,j) (2);
in the formula (2), a, b and c are the weights of the red, green and blue color components respectively;
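A short sketch of the weighted-average graying of formula (2); the patent does not fix a, b and c, so the common luminance weights are assumed here as defaults.

```python
# Weighted-average grayscale conversion of formula (2):
#   Image(i, j) = a*R(i, j) + b*G(i, j) + c*B(i, j).
# The default weights are the usual luminance weights (an assumption; the
# patent leaves a, b, c unspecified).

def to_gray(rgb_image, a=0.299, b=0.587, c=0.114):
    return [[a * r + b * g + c * bl for (r, g, bl) in row] for row in rgb_image]

img = [[(255, 255, 255), (0, 0, 0)],
       [(255, 0, 0), (0, 255, 0)]]
gray = to_gray(img)
```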
s322, removing the noise of the image by adopting a Shearlet transformation-based method:
for any f ∈ L²(R²) in the space of square-integrable functions, define

ψ_{a,h,t}(x) = a^(−3/4) ψ(A_a⁻¹ S_h⁻¹ (x − t)) (3);

then ψ_{a,h,t} is called a continuous Shearlet, and the continuous Shearlet transform can be expressed as:

SH_ψ f(a,h,t) = ⟨f, ψ_{a,h,t}⟩ (4);

in the formula (4), a ∈ R⁺, h ∈ R, t ∈ R²; a, h and t are respectively the scale parameter, the shear parameter and the translation parameter, A_a is the anisotropic scaling matrix and S_h is the shear matrix;
then the image signal is subjected to Shearlet transform denoising, which can be expressed as:
f(t)=s(t)+n(t) (5);
SH_ψ(f) = SH_ψ(s) + SH_ψ(n) (6);
in the formula (5), s(t) and n(t) are respectively the signal and the noise;
in the formula (6), SH_ψ(s) is the Shearlet transform of the signal and SH_ψ(n) is the Shearlet transform of the noise;
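Formulas (5) and (6) rely only on the linearity of the Shearlet transform. A full Shearlet implementation is beyond a short example, so a stand-in linear transform (a 3-point circular moving average) is used below to demonstrate the same identity, SH(s + n) = SH(s) + SH(n).

```python
# Linearity check: any linear transform T satisfies T(s + n) = T(s) + T(n),
# which is what lets formula (6) split the transformed signal from the
# transformed noise. A 3-point circular moving average stands in for the
# Shearlet transform here.

def linear_transform(signal):
    """3-point circular moving average as a stand-in linear transform."""
    n = len(signal)
    return [(signal[i - 1] + signal[i] + signal[(i + 1) % n]) / 3.0
            for i in range(n)]

s = [1.0, 2.0, 3.0, 4.0]               # clean signal s(t)
noise = [0.1, -0.2, 0.05, 0.0]         # noise n(t)
f = [a + b for a, b in zip(s, noise)]  # f(t) = s(t) + n(t), formula (5)

lhs = linear_transform(f)
rhs = [a + b for a, b in zip(linear_transform(s), linear_transform(noise))]
```

Denoising then works by shrinking the noise part in the transform domain before inverting, which the linearity above makes well defined.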
s323, three-dimensional reconstruction: denoising the box girder picture based on Shearlet transformation, and then performing feature extraction and image fusion:
s3231, dividing the image into an ROI, and extracting and matching features in the ROI by adopting an SIFT algorithm:
the construction of the scale space is defined as follows:
L(x,y,σ) = G(x,y,σ) * I(x,y) (7);
in equation (7), G(x,y,σ) is the Gaussian kernel function:

G(x,y,σ) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)) (8);

(x,y) is the coordinate of a pixel in the image, I(x,y) is the pixel value at that point, and σ is the scale space factor;
a Gaussian pyramid is established according to the scale function: the first layer of the first octave is the original image, the pyramid has o octaves with s layers each, and the scale ratio between two adjacent layers in the same octave is k. On the basis of the Gaussian pyramid, the DOG (difference of Gaussians) operator is obtained from the difference between the scale space functions of two adjacent layers in the same octave and is used to detect extrema of points in scale space, namely:

D(x,y,σ) = (G(x,y,kσ) − G(x,y,σ)) * I(x,y) = L(x,y,kσ) − L(x,y,σ) (9);

in the formula (9), L(x,y,kσ) and L(x,y,σ) represent the scale space functions of two adjacent layers, and G(x,y,kσ) and G(x,y,σ) represent the Gaussian functions of two adjacent layers;
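The DoG operator of formula (9) can be sketched in one dimension (the 2-D case is identical per axis); the impulse input, kernel radius and scale ratio k = 1.6 below are illustrative choices, not values fixed by the patent.

```python
# 1-D sketch of the difference of Gaussians: D = L(kσ) − L(σ), where L is
# the signal smoothed by a normalized Gaussian kernel.
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]          # normalize so the kernel sums to 1

def smooth(signal, sigma, radius=6):
    kern = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the borders
            acc += w * signal[idx]
        out.append(acc)
    return out

signal = [0.0] * 10 + [1.0] + [0.0] * 10   # an impulse (a crack-like spike)
k = 1.6                                    # scale ratio between adjacent layers
sigma = 1.0
L1 = smooth(signal, sigma)
L2 = smooth(signal, k * sigma)
dog = [a - b for a, b in zip(L2, L1)]      # formula (9) in 1-D
```

The DoG response is strongest (most negative at the center, positive on the flanks) exactly around the spike, which is why its extrema are used as candidate feature points.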
the extreme point determined by the difference of Gaussians is a point in a discrete space; a continuous extreme point is calculated using the Taylor expansion:

D(x) = D + (∂D/∂x)ᵀ x + (1/2) xᵀ (∂²D/∂x²) x;

obtaining the extreme point

x̂ = −(∂²D/∂x²)⁻¹ (∂D/∂x);
because the gray value changes abruptly along the edge contour of an object in the gray-scale map, such edge points can be mistaken for feature points, so a Hessian matrix is used to remove the edge influence and obtain more accurate feature points;
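The sub-pixel refinement above reduces, in one dimension, to fitting the quadratic Taylor model D(x) = D0 + g·x + 0.5·h·x² and taking the extremum at x̂ = −g/h. A quick numeric check (the values of g and h are arbitrary):

```python
# Numeric check of the continuous-extremum step of the Taylor expansion.
# In 1-D the extremum of D(x) = D0 + g*x + 0.5*h*x^2 is at x_hat = -g/h,
# the scalar analogue of x_hat = -(d2D/dx2)^(-1) * dD/dx.

def taylor_extremum_1d(g, h):
    """Extremum offset of D(x) = D0 + g x + 0.5 h x^2."""
    return -g / h

def d(x, d0, g, h):
    return d0 + g * x + 0.5 * h * x * x

g, h, d0 = 0.3, 2.0, 1.0
x_hat = taylor_extremum_1d(g, h)
# the model value at x_hat is lower than at nearby offsets (h > 0: a minimum)
```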
then a main direction is generated: to give the feature point rotation invariance, its angle is calculated. The direction of a feature point is computed from local features in the Gaussian-scale image in which the feature point lies; the scale space factor σ is known, and the scale is relative to the reference image of the octave containing the image. The local features are the gradient magnitudes and gradient directions of all pixels in a neighborhood of the feature point, where the neighborhood is a circle centered on the feature point with a radius of 4.5 times the Gaussian scale:
m(x,y) = sqrt((L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²);
θ(x,y) = arctan((L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y))) (10);
in the formula (10), m(x,y) is the gradient magnitude at the pixel point and θ(x,y) is the gradient direction at that point;
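The gradient magnitude and direction of formula (10) can be sketched with central differences on a small grayscale patch; the patch values are illustrative.

```python
# Gradient magnitude m(x, y) and direction theta(x, y) of formula (10),
# computed with central differences on a tiny grayscale patch L.
import math

def grad_mag_dir(L, x, y):
    dx = L[y][x + 1] - L[y][x - 1]
    dy = L[y + 1][x] - L[y - 1][x]
    m = math.hypot(dx, dy)        # sqrt(dx^2 + dy^2)
    theta = math.atan2(dy, dx)    # atan2 avoids division by zero at dx = 0
    return m, theta

L = [[0, 0, 0],
     [0, 1, 2],   # intensity ramps toward the lower right
     [0, 2, 4]]
m, theta = grad_mag_dir(L, 1, 1)  # dx = 2, dy = 2
```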
after the feature points are finally determined, the same feature points are found in the two images for feature matching; the matching is measured by the Euclidean distance D_ssd:

D_ssd = sqrt(Σ_i (A_i − B_i)²) (11);

in formula (11), A and B are feature points of the two images respectively;
s3232, when matching the features, brute-force matching (BF) selects a feature point from image A and compares it with every feature point in image B, finally taking the two points with the smallest D_ssd as the matching result; a homography matrix is then obtained from the feature points matched by the SIFT algorithm and used as a global transformation matrix, the same homography transformation is applied to all areas of the image, and the pictures shot by the cameras in all the camera light source modules are spliced:
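The brute-force step can be sketched as follows; descriptors are short plain vectors here (real SIFT descriptors are 128-dimensional), and the sample values are illustrative.

```python
# Brute-force matching with the Euclidean distance D_ssd of formula (11):
# a descriptor from image A is scored against every descriptor of image B
# and the two smallest distances are kept (the pair a ratio test would use).

def d_ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def best_two_matches(query, candidates):
    scored = sorted((d_ssd(query, c), i) for i, c in enumerate(candidates))
    return scored[:2]   # (distance, index) of the two closest descriptors

A_feat = (1.0, 0.0)
B_feats = [(5.0, 5.0), (1.1, 0.0), (0.0, 0.0), (9.0, 1.0)]
top2 = best_two_matches(A_feat, B_feats)
```

Keeping the two nearest candidates rather than one is what allows rejecting ambiguous matches before estimating the homography.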
the rail inspection trolley runs along the middle of the floor of the high-speed rail box girder; at each shooting moment, the positions of the rail inspection trolley, the camera and the inner surface of the high-speed rail box girder are fixed relative to one another, and the relation from the world coordinate system to the camera coordinate system is:
X_c = R X_w + t (12);
in equation (12), R represents the rotation matrix of the camera position relative to the origin of the world coordinate system, t represents the translation vector of the camera position relative to the world coordinate system, X_c = (x_c, y_c, z_c)ᵀ is the camera coordinate vector, and X_w = (x_w, y_w, z_w)ᵀ is the world coordinate vector;
written in homogeneous coordinates:

[X_c; 1] = [R, t; 0ᵀ, 1][X_w; 1] (13);

the inverse transform of equation (12) is:

X_w = Rᵀ X_c − Rᵀ t (14);

converted to matrix form:

[X_w; 1] = [Rᵀ, −Rᵀt; 0ᵀ, 1][X_c; 1] (15);
thus, the three-dimensional reconstruction of the inner surface of the high-speed rail box girder is finally realized in MATLAB, and a three-dimensional image of the inner surface of the high-speed rail box girder is reconstructed;
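The world-to-camera relation and its inverse can be checked numerically; a 90° rotation about the z-axis keeps the arithmetic exact, and the pure-Python 3×3 helpers keep the sketch dependency-free.

```python
# Check of formulas (12) and (14): X_c = R*X_w + t and X_w = R^T*X_c - R^T*t.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

R = [[0, -1, 0],   # rotation by 90 degrees about the z-axis
     [1,  0, 0],
     [0,  0, 1]]
t = [1.0, 2.0, 3.0]

X_w = [4.0, 5.0, 6.0]
X_c = [a + b for a, b in zip(mat_vec(R, X_w), t)]           # formula (12)

Rt = transpose(R)
Rt_t = mat_vec(Rt, t)
X_w_back = [a - b for a, b in zip(mat_vec(Rt, X_c), Rt_t)]  # formula (14)
```

Recovering X_w exactly confirms that the two formulas are mutual inverses, which is what lets the stitched image coordinates be mapped back onto the girder surface.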
s33, the defects distinguished by the convolutional neural network in S31 are fused into the three-dimensional image reconstructed in S32 to realize defect detection of the inner wall of the high-speed rail box girder: first, several fusion points are randomly selected by the computer; then, based on an image fusion technique (such as Poisson fusion), the randomly selected defect object images are fused onto the three-dimensional image at those fusion points, and the three-dimensional defect image is generated using the matched defect binary map.
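The bookkeeping of step S33 can be sketched as below. Poisson fusion, which the text names as one option, is replaced here by a simple binary-mask paste to keep the example short; the random fusion point and all names are illustrative.

```python
# Sketch of the fusion step S33: a defect binary map is pasted onto the
# reconstructed surface image at a randomly chosen fusion point.
import random

def fuse(surface, defect_mask, defect_value=255):
    """Paste defect_mask onto a copy of surface at a random fusion point."""
    h, w = len(surface), len(surface[0])
    mh, mw = len(defect_mask), len(defect_mask[0])
    top = random.randrange(h - mh + 1)
    left = random.randrange(w - mw + 1)
    out = [row[:] for row in surface]
    for i in range(mh):
        for j in range(mw):
            if defect_mask[i][j]:          # only where the binary map is set
                out[top + i][left + j] = defect_value
    return out, (top, left)

random.seed(0)                 # deterministic fusion point for the check below
surface = [[0] * 6 for _ in range(6)]
mask = [[1, 0],
        [1, 1]]                # a tiny L-shaped "crack" binary map
fused, (top, left) = fuse(surface, mask)
```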
In conclusion, the method realizes automatic detection of cracks in the high-speed rail box girder with high detection speed and efficiency. It can perform remote, omnidirectional defect identification and detection on the high-speed rail box girder; the detection result is essentially unaffected by the external environment, and the detection precision is high. It greatly improves the efficiency of inspection and maintenance work and provides timely maintenance and powerful support for the safe operation of high-speed rail, and therefore represents a notable advance and application value over the prior art.
It is finally necessary to point out here: the above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (10)

1. A camera array type imaging method for automatic detection of cracks of a high-speed rail box girder is characterized by comprising the following steps:
s1, enabling the rail inspection trolley to run into a high-speed rail box girder needing defect detection, wherein a camera support is arranged at the top end of the rail inspection trolley, a circular camera mounting part is arranged on the camera support, a plurality of camera light source modules are symmetrically arranged on the camera mounting part along the left and right direction of the circumference direction, and each camera light source module comprises a video camera and a circular aperture which is annularly arranged outside the video camera;
s2, along with the movement of the rail inspection trolley, a camera in a camera light source module arranged on the rail inspection trolley shoots the inner wall of the high-speed rail box girder and transmits the shot image to a computer;
and S3, detecting and classifying defects of the received images through a convolutional neural network and performing three-dimensional reconstruction through a three-dimensional reconstruction network by using MATLAB software, and finally fusing the defects distinguished by the convolutional neural network into the three-dimensional images reconstructed by the three-dimensional reconstruction network to realize the defect detection of the inner wall of the high-speed railway box girder.
2. The camera array imaging method for the automatic detection of the crack of the high-speed railway box girder according to claim 1, wherein the step S3 specifically comprises the following operations:
s31, detecting and classifying the images of the inner wall of the high-speed rail box girder shot in the step S2 by adopting an SSD target detection algorithm:
s311, adopting a PBS algorithm to enhance the image in MATLAB software:
carrying out image enhancement under 3 constraints (color space consistency, texture consistency and exposure consistency) according to the Retinex relation I = S × R between the input image I and the illumination image S, and optimizing the illumination estimate with an optimization equation:

E(S) = E_d(S) + λ(E_c(S) + E_t(S) + E_e(S)) (1);

in the formula (1), E_d keeps the optimized illumination S as close as possible to the initial illumination estimate Ŝ; p represents a pixel and c ∈ {r, g, b}; E_c, E_t and E_e are the perceptual bidirectional similarity constraints on color, texture and exposure; λ is a weight;
s312, marking the processed image after translation, amplification and 45-degree free rotation to configure a target detection training set;
s313, detecting and classifying the images through an SSD target detection algorithm, realizing defect detection of the inner wall of the high-speed rail box girder, and marking the image position of the defect;
s32, performing three-dimensional reconstruction on the image of the inner wall of the high-speed rail box girder shot in the step S2 by adopting an image splicing 3d reconstruction algorithm:
s321, carrying out gray scale processing on the image by adopting a weighted average method in MATLAB software:
Image(i,j) = aR(i,j) + bG(i,j) + cB(i,j) (2);
in the formula (2), a, b and c are the weights of the red, green and blue color components respectively;
s322, removing the noise of the image by adopting a Shearlet transformation-based method:
for any f ∈ L²(R²) in the space of square-integrable functions, define

ψ_{a,h,t}(x) = a^(−3/4) ψ(A_a⁻¹ S_h⁻¹ (x − t)) (3);

then ψ_{a,h,t} is called a continuous Shearlet, and the continuous Shearlet transform can be expressed as:

SH_ψ f(a,h,t) = ⟨f, ψ_{a,h,t}⟩ (4);

in the formula (4), a ∈ R⁺, h ∈ R, t ∈ R²; a, h and t are respectively the scale parameter, the shear parameter and the translation parameter, A_a is the anisotropic scaling matrix and S_h is the shear matrix;
then the image signal is subjected to Shearlet transform denoising, which can be expressed as:
f(t)=s(t)+n(t) (5);
SH_ψ(f) = SH_ψ(s) + SH_ψ(n) (6);
in the formula (5), s(t) and n(t) are respectively the signal and the noise;
in the formula (6), SH_ψ(s) is the Shearlet transform of the signal and SH_ψ(n) is the Shearlet transform of the noise;
s323, three-dimensional reconstruction: denoising the box girder picture based on Shearlet transformation, and then performing feature extraction and image fusion:
s3231, dividing the image into an ROI, and extracting and matching features in the ROI by adopting an SIFT algorithm:
the construction of the scale space is defined as follows:
L(x,y,σ) = G(x,y,σ) * I(x,y) (7);
in equation (7), G(x,y,σ) is the Gaussian kernel function:

G(x,y,σ) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)) (8);

(x,y) is the coordinate of a pixel in the image, I(x,y) is the pixel value at that point, and σ is the scale space factor;
a Gaussian pyramid is established according to the scale function: the first layer of the first octave is the original image, the pyramid has o octaves with s layers each, and the scale ratio between two adjacent layers in the same octave is k. On the basis of the Gaussian pyramid, the DOG (difference of Gaussians) operator is obtained from the difference between the scale space functions of two adjacent layers in the same octave and is used to detect extrema of points in scale space, namely:

D(x,y,σ) = (G(x,y,kσ) − G(x,y,σ)) * I(x,y) = L(x,y,kσ) − L(x,y,σ) (9);

in the formula (9), L(x,y,kσ) and L(x,y,σ) represent the scale space functions of two adjacent layers, and G(x,y,kσ) and G(x,y,σ) represent the Gaussian functions of two adjacent layers;
the extreme point determined by the difference of Gaussians is a point in a discrete space; a continuous extreme point is calculated using the Taylor expansion:

D(x) = D + (∂D/∂x)ᵀ x + (1/2) xᵀ (∂²D/∂x²) x;

obtaining the extreme point

x̂ = −(∂²D/∂x²)⁻¹ (∂D/∂x);
Then a main direction is generated: to give the feature point rotation invariance, its angle is calculated. The direction of a feature point is computed from local features in the Gaussian-scale image in which the feature point lies; the scale space factor σ is known, and the scale is relative to the reference image of the octave containing the image. The local features are the gradient magnitudes and gradient directions of all pixels in a neighborhood of the feature point, where the neighborhood is a circle centered on the feature point with a radius of 4.5 times the Gaussian scale;
m(x,y) = sqrt((L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²);
θ(x,y) = arctan((L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y))) (10);
in the formula (10), m(x,y) is the gradient magnitude at the pixel point and θ(x,y) is the gradient direction at that point;
after the feature points are finally determined, the same feature points are found in the two images for feature matching; the matching is measured by the Euclidean distance D_ssd:

D_ssd = sqrt(Σ_i (A_i − B_i)²) (11);

in formula (11), A and B are feature points of the two images respectively;
s3232, when matching the features, brute-force matching (BF) selects a feature point from image A and compares it with every feature point in image B, finally taking the two points with the smallest D_ssd as the matching result; a homography matrix is then obtained from the feature points matched by the SIFT algorithm and used as a global transformation matrix, the same homography transformation is applied to all areas of the image, and the pictures shot by the cameras in all the camera light source modules are spliced:
the rail inspection trolley runs along the middle of the floor of the high-speed rail box girder; at each shooting moment, the positions of the rail inspection trolley, the camera and the inner surface of the high-speed rail box girder are fixed relative to one another, and the relation from the world coordinate system to the camera coordinate system is:
X_c = R X_w + t (12);
in equation (12), R represents the rotation matrix of the camera position relative to the origin of the world coordinate system, t represents the translation vector of the camera position relative to the world coordinate system, X_c = (x_c, y_c, z_c)ᵀ is the camera coordinate vector, and X_w = (x_w, y_w, z_w)ᵀ is the world coordinate vector;
written in homogeneous coordinates:

[X_c; 1] = [R, t; 0ᵀ, 1][X_w; 1] (13);

the inverse transform of equation (12) is:

X_w = Rᵀ X_c − Rᵀ t (14);

converted to matrix form:

[X_w; 1] = [Rᵀ, −Rᵀt; 0ᵀ, 1][X_c; 1] (15);
thus, the three-dimensional reconstruction of the inner surface of the high-speed rail box girder is finally realized in MATLAB, and a three-dimensional image of the inner surface of the high-speed rail box girder is reconstructed;
and S33, fusing the defects distinguished by the S31 convolutional neural network into the three-dimensional image reconstructed by S32, and realizing the defect detection of the inner wall of the high-speed rail box girder.
3. The camera array type imaging method for the automatic detection of the cracks of the high-speed rail box girder as claimed in claim 1, wherein: the camera is a CCD camera.
4. The camera array type imaging method for the automatic detection of the cracks of the high-speed rail box girder as claimed in claim 1, wherein: the circular aperture is a white LED lamp.
5. The camera array type imaging method for the automatic detection of the cracks of the high-speed rail box girder as claimed in claim 1, wherein: the number of the camera light source modules is 8, and the 8 camera light source modules are arranged at the left end and the right end of the camera installation part in a bilateral symmetry mode.
6. The camera array type imaging method for the automatic detection of the cracks of the high-speed rail box girder according to claim 5, wherein: the 4 camera light source modules arranged at the left end of the camera installation part are sequentially named as a first left camera light source module, a second left camera light source module, a third left camera light source module and a fourth left camera light source module from top to bottom, and the 4 camera light source modules arranged at the right end of the camera installation part are sequentially named as a first right camera light source module, a second right camera light source module, a third right camera light source module and a fourth right camera light source module from top to bottom; wherein: the first left camera light source module is installed at a position 40 degrees away from the vertical direction of the camera installation part, the second left camera light source module is installed at a position 80 degrees away from the vertical direction of the camera installation part, the third left camera light source module is installed at a position 120 degrees away from the vertical direction of the camera installation part, the fourth left camera light source module is installed at a position 160 degrees away from the vertical direction of the camera installation part, and the first right camera light source module, the second right camera light source module, the third right camera light source module and the fourth right camera light source module are respectively and symmetrically arranged with the corresponding first left camera light source module, the second left camera light source module, the third left camera light source module and the fourth left camera light source module.
7. The camera array type imaging method for the automatic detection of the cracks of the high-speed rail box girder according to claim 6, wherein: the angles between the emission center directions of the cameras in the first left camera light source module and the first right camera light source module and the vertical horizontal plane are 85 degrees, the angles between the emission center directions of the cameras in the second left camera light source module and the second right camera light source module and the vertical horizontal plane are 105 degrees, the angles between the emission center directions of the cameras in the third left camera light source module and the third right camera light source module and the vertical horizontal plane are 135 degrees, and the angles between the emission center directions of the cameras in the fourth left camera light source module and the fourth right camera light source module and the vertical horizontal plane are 170 degrees.
8. The camera array type imaging method for the automatic detection of the cracks of the high-speed rail box girder as claimed in claim 1, wherein: and an encoder is also arranged on the rail inspection trolley.
9. The camera array type imaging method for the automatic detection of the cracks of the high-speed rail box girder as claimed in claim 1, wherein: and a distance sensor is also arranged on the rail inspection trolley.
10. The camera array type imaging method for the automatic detection of the cracks of the high-speed rail box girder as claimed in claim 1, wherein: the rail inspection trolley is provided with a self-walking power mechanism.
CN202110463929.8A 2021-04-25 2021-04-25 Camera array type imaging method for automatic detection of high-speed rail box girder crack Active CN113358659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110463929.8A CN113358659B (en) 2021-04-25 2021-04-25 Camera array type imaging method for automatic detection of high-speed rail box girder crack

Publications (2)

Publication Number Publication Date
CN113358659A true CN113358659A (en) 2021-09-07
CN113358659B CN113358659B (en) 2022-07-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant