CN111390913B - Automatic detection method for bridge bottom structure - Google Patents

Automatic detection method for bridge bottom structure

Info

Publication number
CN111390913B
Authority
CN
China
Prior art keywords
detection
section
point cloud
robot
cross
Legal status
Active
Application number
CN202010275584.9A
Other languages
Chinese (zh)
Other versions
CN111390913A (en)
Inventor
张德津
曹民
胡超
王新林
卢毅
Current Assignee
Wuhan Optical Valley Excellence Technology Co ltd
Original Assignee
Wuhan Optical Valley Excellence Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Optical Valley Excellence Technology Co ltd filed Critical Wuhan Optical Valley Excellence Technology Co ltd
Priority to CN202010275584.9A
Publication of CN111390913A
Application granted
Publication of CN111390913B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 10/047 Optimisation of routes or paths, e.g. travelling salesman problem

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Mechanical Engineering (AREA)
  • Economics (AREA)
  • Robotics (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the invention provides an automatic detection method for a bridge bottom structure, which comprises: obtaining a three-dimensional point cloud cross section of the area to be detected; dividing the three-dimensional point cloud cross section into a plurality of detection areas and obtaining the detection areas and their sequence, the sequence being used to allocate tasks to the robots; and, based on the detection areas, obtaining the path plan of the robot corresponding to each detection area. By using the three-dimensional point cloud cross section for the task allocation and path planning of the robot cluster and adopting a multi-robot cooperative detection mode, the method shortens the automatic detection time of the bridge bottom structure and improves detection efficiency.

Description

Automatic detection method for bridge bottom structure
Technical Field
The invention relates to the technical field of bridge detection, in particular to an automatic detection method for a bridge bottom structure.
Background
Bridges play a considerable role in China's economic development. At present the total number of bridges in China is close to 900,000, a large number of them have reached the maintenance period, and the demand for bridge inspection grows year by year. Bridge inspection comprises items such as bridge deck detection, pier detection and beam bottom detection. Beam bottom detection means inspecting the underside of the bridge deck; because of this special location it receives particular attention in the industry. According to national standards, beam bottom detection must find defects such as concrete cracks, concrete seepage, rubber pad slippage and rubber pad cracks.
Two methods are commonly used for beam bottom detection. In the first, a special bridge inspection vehicle carries inspection personnel to the beam bottom to perform the detection task; once the vehicle behaves abnormally, for example shaking or fracturing, the life safety of the personnel is seriously threatened.
The other method uses a bridge detection robot for unmanned beam bottom detection. Its main components are a beam bottom platform and a beam bottom detection robot: rails are laid on the platform, and the robot moves back and forth along them to acquire beam bottom data. The bridge detection robot follows a line-to-plane detection strategy, a cycle of platform travel, platform positioning, reciprocating robot detection and further platform travel; the robot detects a number of bridge cross sections, which are spliced into the detection result for the whole planar beam bottom area. However, scanning with a single robot is inefficient for the increasingly common wide bridges with four or more lanes.
Disclosure of Invention
Embodiments of the present invention provide an automated detection method for a bridge substructure that overcomes, or at least partially solves, the above-mentioned problems.
The embodiment of the invention provides an automatic detection method for a bridge bottom structure, which comprises the following steps: acquiring a three-dimensional point cloud cross section of a region to be detected; dividing the cross section of the three-dimensional point cloud into a plurality of detection areas, and acquiring the plurality of detection areas and a sequence of the plurality of detection areas, wherein the sequence is used for carrying out task allocation on the robot; and acquiring the path plan of the robot corresponding to any one detection area in the plurality of detection areas based on the plurality of detection areas.
In some embodiments, the obtaining a path plan of the robot corresponding to any detection area in the plurality of detection areas based on the plurality of detection areas includes: dividing the cross-section point cloud of any detection area into a plurality of detection units; acquiring sensor attitude parameters of the robot corresponding to the detection units based on the detection units; and acquiring the path plan of the robot corresponding to any detection area based on the sensor attitude parameters of the robot corresponding to the detection units.
In some embodiments, the dividing the cross-sectional point cloud of any of the detection areas into a plurality of detection units includes: dividing the cross-section point cloud of any detection area into a plurality of sections; based on the plurality of segments, the plurality of detection units are obtained.
In some embodiments, said obtaining the plurality of detection units based on the plurality of segments comprises: if the length of any section is not more than the length of a preset detection unit, stopping dividing the any section, and taking the any section as the detection unit; and if the length of any section is greater than the length of the preset detection unit, averagely dividing the any section until the length of a single section is not greater than the length of the preset detection unit to obtain the detection unit.
In some embodiments, the segmenting the cross-sectional point cloud of any of the detection regions into a plurality of segments includes: segmentation is performed using an elevation gradient-based segmentation method.
In some embodiments, the dividing of the three-dimensional point cloud cross section into a plurality of detection areas and the acquiring of the plurality of detection areas and their sequence, the sequence being used for task assignment to the robots, includes: dividing the three-dimensional point cloud cross section into a plurality of detection areas, and acquiring the plurality of detection areas and their sequence with the shortest total detection time of the multiple robots as the objective function.
In some embodiments, the obtaining a cross section of a three-dimensional point cloud of a region under test comprises: acquiring three-dimensional point cloud information of the area to be detected; and acquiring the three-dimensional point cloud cross section of the area to be detected based on the three-dimensional point cloud information of the area to be detected.
According to the automatic detection method for the bridge bottom structure, disclosed by the embodiment of the invention, the task allocation and the path planning of the robot cluster are carried out by utilizing the three-dimensional point cloud cross section, and the multi-robot cooperative detection operation mode is adopted, so that the automatic detection time of the bridge bottom structure is shortened, and the detection efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of an automated detection method for a bridge bottom structure according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method for automatically detecting a bridge substructure according to an embodiment of the present invention;
FIG. 3 is a flow chart of the segmentation of the detection unit of the automatic detection method for the bridge bottom structure according to the embodiment of the present invention;
FIG. 4 is a task allocation flowchart of an automated detection method for a bridge substructure according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a boundary state of a robot according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a scanning method of a laser radar according to an embodiment of the present invention;
fig. 7 is a schematic view of a box girder according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The automatic detection method for the bridge bottom structure according to the embodiment of the invention is described below with reference to fig. 1.
As shown in fig. 1, the method for automatically detecting a bridge bottom structure of the embodiment of the invention includes steps S100-S300.
And S100, acquiring a three-dimensional point cloud cross section of the area to be measured.
It can be understood that bridge inspection comprises bridge deck detection, pier detection, beam bottom detection and the like; beam bottom detection monitors the underside of the bridge, where the special field environment makes detection difficult. The embodiment of the invention performs three-dimensional measurement of the bridge bottom structure and constructs a three-dimensional point cloud cross section of the bridge bottom, which reproduces the bottom environment as faithfully as possible and provides an accurate basis for robot task allocation and path planning.
And S200, dividing the cross section of the three-dimensional point cloud into a plurality of detection areas, and acquiring a plurality of detection areas and a sequence of the detection areas, wherein the sequence is used for task allocation of the robot.
It can be understood that existing bridge inspection vehicles carry high operational risk and considerable safety hazards, and that detecting the bridge substructure with robots is the trend of future development. A detection robot can carry various detection sensors, execute specific movements under external control, and adjust the sensor attitude according to the detection requirements. Because the bridge bottom structure is complex and the bridge is wide, the embodiment of the invention adopts a multi-robot cooperative detection mode: the three-dimensional point cloud cross section of the bridge bottom is divided into several detection areas that multiple robots detect simultaneously, which automates the detection, shortens the detection time and improves detection efficiency.
And step S300, acquiring a path plan of the robot corresponding to any detection area in the plurality of detection areas based on the plurality of detection areas.
It can be understood that one robot correspondingly detects one detection area, the detection area is divided into a plurality of detection units according to the working principle and the working range of the sensor arranged on the robot, a plurality of sensor attitude parameters of the robot are obtained, and the plurality of sensor attitude parameters are integrated to form a path plan corresponding to the robot in any detection area.
According to the automatic detection method for the bridge bottom structure, provided by the embodiment of the invention, the task distribution and the path planning of the robot cluster are carried out by utilizing the three-dimensional point cloud cross section, and the multi-robot cooperative detection operation mode is adopted, so that the automatic detection time of the bridge bottom structure is shortened, and the detection efficiency is improved.
In some embodiments, the step S300 of obtaining a path plan of the robot corresponding to any detection area of the plurality of detection areas based on the plurality of detection areas includes steps S310-S330.
Step S310, dividing the cross-section point cloud of any detection area into a plurality of detection units.
It can be understood that, due to the limitation of the detection range of the sensor mounted on the robot, the embodiment of the present invention divides the cross-sectional point cloud of the detection area into a plurality of detection units, and the detection units are data ranges of single coverage detection of the sensor, and are used for dispersing a continuous coverage detection range in one detection area into a plurality of basic detection ranges.
Step S320 is to acquire sensor attitude parameters of the robot corresponding to the plurality of detection units based on the plurality of detection units.
It can be understood that the detection unit is a data range of single coverage detection of the sensor, and the continuous coverage measurement range is dispersed into a plurality of detection units by using the detection unit, so that the sensor has a unique posture during coverage detection in each detection unit, and the posture of the sensor in the whole detection area is dispersed into the postures of the sensors in the plurality of detection units.
And S330, acquiring a path plan of the robot corresponding to any detection area based on the sensor attitude parameters of the robot corresponding to the plurality of detection units.
It can be understood that after the calculation of the attitude parameters of the sensors corresponding to all the detection units in the detection area is completed, the attitude parameters of the sensors are sequentially connected according to the theoretical positions, and the robot path planning of the automatic detection of the detection area is completed.
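As a minimal sketch of this last step, the per-unit sensor poses can simply be ordered by their theoretical positions and concatenated into the robot's path (the pose dictionary keys are illustrative, not taken from the patent):

```python
def plan_detection_path(unit_poses):
    """Order the per-unit sensor poses by the position of their optimal measurement point
    and return them as the robot's detection path for its area."""
    return sorted(unit_poses, key=lambda pose: pose["x_o"])

# Pose records such as {"x_o": 0.375, "y_o": -0.2, "theta_deg": 60.0} are assumed here.
```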
According to the automatic detection method for the bridge bottom structure provided by the embodiment of the invention, the sensor attitude over the full length of a detection area is dispersed into sensor attitudes for the individual detection units, and the attitude parameters are solved under the condition that the sensor directly faces each detection unit, so that the robot attitude adjustment problem is solved quickly and effectively.
In some embodiments, the step S310 of dividing the cross-sectional point cloud of any detection area into a plurality of detection units includes steps S311 and S312.
Step S311, a cross-sectional point cloud of any detection area is divided into a plurality of sections.
It can be understood that the cross-section point cloud of any detection area is divided into several sections based on the fact that the cross-section shape of the bridge bottom structure is composed of straight-line sections; this lays the foundation for the calculation of the sensor attitude parameters.
Step S312, a plurality of detection units are acquired based on the plurality of segments.
It will be understood that if a segmented segment meets the requirements of the detection unit, the segment is taken as the detection unit, and if a segmented segment does not meet the requirements of the detection unit, the segment needs to be processed to obtain the detection unit.
The automatic detection method for the bridge bottom structure provided by the embodiment of the invention adopts a strategy of first segmenting and then dividing into detection units, which lays the foundation for the calculation of the sensor attitude parameters.
In some embodiments, the step S312 of obtaining a plurality of detection units based on the plurality of segments includes: if the length of any section is not more than the length of the preset detection unit, stopping dividing any section, and taking any section as the detection unit; and if the length of any section is greater than the length of the preset detection unit, averagely dividing any section until the length of the single section is not greater than the length of the preset detection unit to obtain the detection unit.
As shown in fig. 3, the detection unit segmentation process of the automatic detection method for a bridge bottom structure according to the embodiment of the present invention is as follows: if the length of any section is not more than the length of the preset detection unit, stopping dividing any section, and taking any section as the detection unit; and if the length of any section is greater than the length of the preset detection unit, averagely dividing any section until the length of the single section is not greater than the length of the preset detection unit to obtain the detection unit.
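One possible reading of this procedure is sketched below, assuming "averagely dividing" means repeatedly splitting a segment into two equal halves until each piece is no longer than the preset detection unit length (function and variable names are illustrative):

```python
def split_into_detection_units(seg_start, seg_end, l_pre):
    """Recursively halve a straight segment [seg_start, seg_end] (positions in m along the
    cross section) until every piece is no longer than the preset detection unit length."""
    length = seg_end - seg_start
    if length <= l_pre:
        return [(seg_start, seg_end)]           # the segment itself becomes a detection unit
    mid = seg_start + length / 2.0              # equal ("average") division
    return (split_into_detection_units(seg_start, mid, l_pre)
            + split_into_detection_units(mid, seg_end, l_pre))

# Example: a 1.0 m segment with l_pre = 0.4 m is split into four 0.25 m detection units.
units = split_into_detection_units(0.0, 1.0, 0.4)
```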
The preset detection unit length is calculated based on parameters of the bridge automatic detection sensor. The embodiment of the invention provides a preset detection unit length solving method capable of meeting the high-efficiency detection requirement by comprehensively considering the types of the sensors.
The method for solving the length of the preset detection unit provided by the embodiment of the invention is described below with reference to table 1.
The sensors adopted in the embodiment of the invention comprise a high-resolution white light camera, an infrared camera, a ground penetrating radar and an ultrasonic flaw detector. The high-resolution white light camera and the infrared camera are area-array cameras and acquire data of one plane at a time. The ground penetrating radar and the ultrasonic flaw detector are short-range means whose sensor must be close to the area to be measured; both are used as fixed-point detection means and only measure a number of sampling points within a section.
TABLE 1 Sensor parameter table

Name | High-resolution white light camera | Infrared camera | Ground penetrating radar | Ultrasonic flaw detection
Sensor parameters | m1×n1 | m2×n2 | / | /
Measurement accuracy | 0.2 (mm) | 1 (mm) | / | /
Field of view | 0.2m1×0.2n1 (mm) | m2×n2 (mm) | / | /
Optimal ranging | 0.2m1/tan(EOF1/2) (mm) | m2/tan(EOF2/2) (mm) | ≤50 (mm) | Direct contact
where m1×n1 is the number of pixels of the photosensitive array of the high-resolution white light camera, m2×n2 the number of pixels of the photosensitive array of the infrared camera, EOF1 the field angle of the high-resolution white light camera and EOF2 the field angle of the infrared camera; both field angles are determined by the lens parameters of the respective sensor. In the embodiment of the invention the measurement accuracy of the high-resolution white light camera is 0.2 mm and that of the infrared camera is 1 mm. To improve detection efficiency, the embodiment of the invention proposes a "white light and infrared common measurement point" method, in which the optimal ranging of the white light camera is made equal to the optimal ranging of the infrared camera. The optimal ranging of the white light camera is determined by its measurement accuracy, the number of transverse pixels of its photosensitive array and its field angle, and that of the infrared camera is determined likewise; the calculation formula is as follows:
0.2m1/tan(EOF1/2)=m2/tan(EOF2/2)
By choosing suitable lenses, the two sensors, though differing in parameters, obtain the same optimal ranging. In this way the most frequently used visual detection sensors share the same detection strategy and path planning, which greatly improves the reliability and maintainability of the detection method.
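A hedged sketch of this "common measurement point" condition: given the white light camera's parameters, it solves for the infrared field angle that yields the same optimal ranging (the numerical values are assumptions for illustration, not the patent's):

```python
import math

def required_infrared_fov(m1, m2, eof1_deg, accuracy_mm=0.2):
    """Solve 0.2*m1/tan(EOF1/2) = m2/tan(EOF2/2) for EOF2: the infrared field angle that
    puts both cameras at the same optimal ranging (the 'common measurement point')."""
    common_range_mm = accuracy_mm * m1 / math.tan(math.radians(eof1_deg) / 2)
    eof2_deg = 2 * math.degrees(math.atan(m2 / common_range_mm))
    return common_range_mm, eof2_deg

# Illustrative numbers only: a 2048-pixel white-light row, a 512-pixel infrared row and a
# 45 degree white-light field angle give a shared optimal ranging of about 0.99 m.
common_range_mm, eof2_deg = required_infrared_fov(2048, 512, 45.0)
```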
The preset detection unit length is computed as the minimum side length of the image frame projected onto the y-z plane minus the portion that overlaps the adjacent image; the calculation formula is as follows:
lpre=min(m)(1-γ)
wherein lpre is the length of the preset detection unit, m is the side length of the image frame projected on the y-z plane, and γ is the coincidence proportion of the image and the adjacent image, which is an artificial preset value, and is preset to be 0.1-0.2 in the embodiment of the present invention.
The preset detection unit length lpre is therefore smaller than the minimum frame size of the sensors used for coverage measurement. Because planar image stitching requires a certain overlapping area between two adjacent images for registration, when sensors are selected with the white light and infrared common measurement point method, lpre equals the common coverage frame minus the part repeated in the adjacent frame.
After the preset detection unit length lpre and the straight-line segmentation jointly determine the division into detection units, the sensor attitude of each detection unit is solved. Let the coordinates of the two end points of a detection unit be (x1, y1) and (x2, y2), and let l be the object distance of the camera, which is determined by the focal length. The optimal position of the camera relative to the object to be measured follows from the attitude of the detection unit and the optimal ranging of the vision sensor: the abscissa of the optimal measurement point is the average of the abscissas of the two end points minus the object distance times the cosine of the tilt angle, the ordinate of the optimal measurement point is the average of the ordinates of the two end points minus the object distance times the sine of the tilt angle, and the tangent of the tilt angle is the ratio of the difference of the abscissas of the two end points to the difference of their ordinates. The calculation formulas are as follows:

x_o = (x1 + x2)/2 - l·cosθ

y_o = (y1 + y2)/2 - l·sinθ

tanθ = (x2 - x1)/(y2 - y1)

where θ is the tilt angle of the sensor; at this tilt angle the sensor looks squarely at the detection unit, so the image data are not seriously distorted. The optimal measurement point o and the measurement angle are computed for each detection unit in turn, and the detection attitudes of the several detection units assigned to a single robot are then integrated into that robot's detection path.
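A short sketch of the preset detection unit length and the per-unit sensor pose, following the formulas as stated above (the coordinate conventions and the sample numbers are illustrative assumptions):

```python
import math

def preset_unit_length(frame_side_lengths_m, gamma=0.2):
    """l_pre = min(m) * (1 - gamma): the smallest frame side projected onto the y-z plane,
    minus the share gamma that must overlap the neighbouring image."""
    return min(frame_side_lengths_m) * (1 - gamma)

def sensor_pose_for_unit(p1, p2, object_distance):
    """Optimal measurement point and tilt angle for a detection unit with end points
    p1 = (x1, y1) and p2 = (x2, y2), following
    x_o = (x1+x2)/2 - l*cos(theta), y_o = (y1+y2)/2 - l*sin(theta),
    tan(theta) = (x2-x1)/(y2-y1)."""
    (x1, y1), (x2, y2) = p1, p2
    theta = math.atan2(x2 - x1, y2 - y1)          # sensor tilt angle
    x_o = (x1 + x2) / 2 - object_distance * math.cos(theta)
    y_o = (y1 + y2) / 2 - object_distance * math.sin(theta)
    return (x_o, y_o), math.degrees(theta)

# Illustrative: 0.5 m x 0.5 m frames with 20 % overlap give l_pre = 0.4 m; the end points
# and the 0.5 m object distance below are assumed values.
l_pre = preset_unit_length([0.5, 0.5], gamma=0.2)
pose, theta_deg = sensor_pose_for_unit((0.0, 0.0), (0.2165, 0.125), 0.5)
```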
According to the automatic detection method for the bridge bottom structure, provided by the embodiment of the invention, the detection area is divided into a plurality of detection units with the length not larger than the length of the preset detection unit, so that the single detection range of the sensor is ensured to be in the detection working range, and a foundation is laid for the calculation of the attitude parameters of the sensor.
In some embodiments, the step S311 of segmenting the cross-sectional point cloud of any detection region into a plurality of segments includes: segmentation is performed using an elevation gradient-based segmentation method.
It can be understood that, because random noise exists in the laser point cloud of the bridge bottom structure, the cross-section point cloud is first subjected to Gaussian down-sampling with a reduction ratio s of 0.8, which suppresses part of the random noise. Gradient calculation is then performed on the cross-section point cloud: the grey gradient of a pixel p(x, y) in the x direction is determined from p and its neighbour in the x-axis direction, and the grey gradient in the y direction from p and its neighbour in the y-axis direction. The calculation formulas are as follows:

g_x = i(x+1, y) - i(x, y)

g_y = i(x, y+1) - i(x, y)
where i(x, y) is the grey value of pixel p(x, y), i(x+1, y) is the grey value of the pixel p'(x+1, y) adjacent to p in the x-axis direction, g_x is the grey gradient in the x direction and g_y the grey gradient in the y direction. The square of the normalized gradient at pixel p equals the sum of the squares of its x-direction and y-direction gradients, and the gradient angle of p is given by the ratio of its x-direction gradient to its y-direction gradient; the calculation formulas are as follows:

g_p^2 = g_x^2 + g_y^2

tan(α_p) = g_x / g_y
where g_p is the normalized gradient at pixel p and α_p is the gradient angle of p. For the cross-section point cloud of the bridge bottom structure, positions with larger gradient angles are more likely to be turning points. A number of turning points are selected over the whole cross-section point cloud, and the point cloud is divided at these turning points into several point cloud line segments. The slope of each point cloud line segment is then computed from the coordinates (x, y) of the points it contains, where x is the abscissa and y the ordinate of a point p of the segment. Adjacent line segments with similar slopes are merged and the turning points at their joints are eliminated; through segment-by-segment merging the cross-section point cloud of the bridge bottom structure is finally divided into several straight-line segments.
According to the automatic detection method for the bridge bottom structure, provided by the embodiment of the invention, the point cloud of the cross section is subjected to Gaussian down-sampling, so that part of random noise in the point cloud of the cross section is inhibited, and then the point cloud of the cross section is segmented by a segmentation method based on the elevation gradient, so that the segmentation accuracy is ensured to the greatest extent.
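The gradient-based segmentation can be sketched on a simplified 1-D elevation profile z(y) of the cross section; the 23° turning-point and 10° merge thresholds match the recommended values given later in table 7, while the smoothing parameter and the least-squares slope fit are illustrative choices (NumPy/SciPy assumed):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def segment_profile(y, z, turn_angle_deg=23.0, merge_slope_deg=10.0, sigma=2.0):
    """Split a cross-section profile z(y) into straight segments:
    1) Gaussian smoothing to suppress random noise,
    2) turning points where the local slope angle changes sharply,
    3) merging of adjacent pieces whose fitted slopes are similar."""
    y = np.asarray(y, dtype=float)
    z_s = gaussian_filter1d(np.asarray(z, dtype=float), sigma)
    slope_angle = np.degrees(np.arctan(np.gradient(z_s, y)))         # elevation gradient angle
    turns = np.where(np.abs(np.diff(slope_angle)) >= turn_angle_deg)[0] + 1
    cuts = [0] + [int(t) for t in turns] + [len(y) - 1]

    def fitted_angle(i0, i1):                                          # piece slope by least squares
        return np.degrees(np.arctan(np.polyfit(y[i0:i1 + 1], z_s[i0:i1 + 1], 1)[0]))

    pieces = [[cuts[i], cuts[i + 1]] for i in range(len(cuts) - 1) if cuts[i + 1] > cuts[i]]
    merged = [pieces[0]]
    for piece in pieces[1:]:
        if abs(fitted_angle(*merged[-1]) - fitted_angle(*piece)) <= merge_slope_deg:
            merged[-1][1] = piece[1]                                   # similar slope: drop the turning point
        else:
            merged.append(piece)
    return merged                                                      # list of [start_index, end_index] segments
```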
In some embodiments, step S200 of dividing the three-dimensional point cloud cross section into a plurality of detection areas and acquiring the detection areas and their sequence, the sequence being used for task allocation to the robots, includes: dividing the three-dimensional point cloud cross section into a plurality of detection areas, and acquiring the detection areas and their sequence with the shortest total detection time of the multiple robots as the objective function.
The following describes the automatic detection means for the bottom of the bridge provided by the embodiment of the invention with reference to table 2.
The automatic detection means for the bridge bottom fall into two categories: coverage detection and fixed-point detection. In coverage detection the sensor acquires data over the full detection length; the white light camera and the infrared camera in the embodiment of the invention acquire data over the full detection length and therefore belong to coverage detection, whose cost is expressed as a detection time t_a (s/m). In fixed-point detection the sensor only measures a number of specific sampling points; deep detection means such as the ground penetrating radar only need to sample several points to complete the section detection task, and the fixed-point detection time is the time required for a single-point measurement. If m robots carry out a detection cooperatively, the total detection time (the objective function) is the maximum of the individual robots' detection times, and each robot's detection time is the sum of the time needed for coverage detection and the time needed for fixed-point detection in its section; the calculation formula is as follows:

t_o = max(t_1, ..., t_m), with t_i = t_cover(i) + t_fixed(i)

where t_o is the total detection time (the robots work synchronously, so it equals the maximum of the individual detection times), t_i is the detection time required by the i-th robot, t_cover(i) is the time required for coverage detection in the i-th robot's detection section, t_fixed(i) is the time required for fixed-point detection in that section, and m is the number of robots. Detection time corresponds directly to detection range; combined with the running time t and the detection costs, each robot's running interval can be calculated, which is not detailed here. The task allocation method below uses the robot running time t as its variable.
TABLE 2 detection means cost table
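A minimal sketch of the total-time objective defined above; the per-metre coverage cost and per-point fixed-point cost used here are assumed values for illustration, not the ones in table 2:

```python
def robot_detection_time(section_length_m, n_fixed_points, t_cover_per_m, t_fixed_per_point):
    """t_i = coverage detection time over the robot's section + fixed-point detection time."""
    return section_length_m * t_cover_per_m + n_fixed_points * t_fixed_per_point

def total_detection_time(sections):
    """t_o = max_i t_i: the robots work in parallel, so the slowest one sets the total time."""
    return max(robot_detection_time(*s) for s in sections)

# Illustrative only: three robots with ~6 m sections and three fixed points each,
# 15 s/m coverage cost and 30 s per fixed point (assumed values).
t_o = total_detection_time([(6.05, 3, 15.0, 30.0), (6.30, 3, 15.0, 30.0), (6.05, 3, 15.0, 30.0)])
```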
Since the total detection task is fixed, the sum of the detection times of all robots is a constant no matter how the tasks are assigned, and t_o therefore reaches its minimum when every robot needs the same detection time. However, the fixed-point detection time t_fixed(i) is an isolated, indivisible cost: when the robots are assigned tasks sequentially with equal detection times and a fixed-point detection task falls on the boundary between two robots, the two robots end up with different detection times because the fixed-point task can belong to only one of them. When fixed-point tasks are frequent, it becomes harder to give every robot the same detection time. The total detection length is therefore divided using the flow shown in fig. 4 to complete the task allocation.
To avoid deadlock during task allocation, two thresholds are set: a threshold a on the time difference degree between the robots and a threshold b on the total number of iterations. The time difference degree equals the ratio of the difference between the longest and the shortest robot detection times to the longest time; the calculation formula is as follows:

a = ( max(t_i) - min(t_i) ) / max(t_i)

where max(t_i) is the longest and min(t_i) the shortest robot detection time. When the time difference degree of a task allocation scheme falls below the threshold, the iterative allocation algorithm terminates and the current scheme is output as the final task allocation.
In addition, a threshold b on the total number of iterations is set to resolve deadlock in the iterative optimization, and the tasks of the single robot that takes the longest time are redistributed. As shown in fig. 5, if the robot taking the longest time is numbered n and its two neighbours are numbered n-1 and n+1, the task boundaries of robot n can be in three states: (a) both boundaries fall on fixed-point tasks, (b) a single boundary falls on a fixed-point task, and (c) both boundaries fall on coverage tasks.
The task allocation for robot n is changed according to these three states as follows:
In state (a), the task time of robot n after the change of allocation equals the smaller of the two values obtained by subtracting from robot n's task time, before the change, the fixed-point task cost at the junction with each of its two neighbouring robots; the calculation formula is as follows:

t'_n = min( t_n - t_fix(n-1,n), t_n - t_fix(n,n+1) )

In state (b), the task time of robot n after the change equals the average of the total task times of robot n-1 and robot n before the change; the calculation formula is as follows:

t'_n = ( t_(n-1) + t_n ) / 2

In state (c), the task time of robot n after the change equals the average of the total task times of robot n-1, robot n and robot n+1 before the change; the calculation formula is as follows:

t'_n = ( t_(n-1) + t_n + t_(n+1) ) / 3

where t_fix(n-1,n) is the fixed-point task cost at the junction of robot n and robot n-1, t_fix(n,n+1) is the fixed-point task cost at the junction of robot n and robot n+1, t_n is the task time of robot n before the change of allocation, t'_n is the task time of robot n after the change, and t_(n-1) and t_(n+1) are the task times of robots n-1 and n+1 before the change. Varying the task boundaries of robot n and its neighbours in this way produces a new overall task allocation; global optimization is then carried out with the flow shown in fig. 4, finally yielding the task allocation that minimizes the total detection time of the multiple robots as the objective function.
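The stopping criterion and the three boundary-state updates can be sketched as follows; the sketch assumes an interior robot n and, for state (b), follows the formula above of averaging with the adjacent robot n-1 (how the freed time is redistributed to the neighbours is left out):

```python
def difference_degree(times):
    """Time difference degree a = (max(t_i) - min(t_i)) / max(t_i); iteration stops once
    this falls below the preset threshold."""
    return (max(times) - min(times)) / max(times)

def retarget_longest(t, n, state, fix_left=0.0, fix_right=0.0):
    """New target task time t'_n for the slowest (interior) robot n, by boundary state:
    'a' = fixed-point tasks at both boundaries, 'b' = at one boundary, 'c' = at neither.
    fix_left / fix_right are the fixed-point costs at the (n-1, n) and (n, n+1) junctions."""
    if state == "a":
        return min(t[n] - fix_left, t[n] - fix_right)
    if state == "b":
        return (t[n - 1] + t[n]) / 2            # average with the adjacent robot n-1
    return (t[n - 1] + t[n] + t[n + 1]) / 3     # state 'c': average over n-1, n, n+1

# Illustrative use (times in seconds, threshold and values assumed):
t = [183.0, 188.0, 183.0]
if difference_degree(t) >= 0.02:
    new_target = retarget_longest(t, 1, "c")
```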
According to the automatic detection method for the bridge bottom structure, provided by the embodiment of the invention, two typical detection requirements in the bridge detection process are comprehensively considered, the shortest total detection time of multiple robots is taken as an objective function to perform task allocation on the robots, the automatic detection time of the bridge bottom structure is shortened to the greatest extent, and the problem of task allocation of multi-robot cooperative detection is solved.
In some embodiments, the step S100 of obtaining the cross section of the three-dimensional point cloud of the region to be measured includes the steps S110 and S120.
And step S110, obtaining three-dimensional point cloud information of the area to be detected.
It is to be understood that the embodiment of the invention does not restrict the type of measurement device or its installation; the embodiment is described with laser radars mounted on the robots as an example. The area to be measured is divided equally into several measurement sub-areas, and the bridge bottom structure in each sub-area is measured by one of several laser radars. The scanning direction of the laser radar is perpendicular to its direction of travel, and the radar scans with its measuring range as the radius, obtaining section-based three-dimensional point cloud information.
And step S120, acquiring a three-dimensional point cloud cross section of the area to be detected based on the three-dimensional point cloud information of the area to be detected.
It can be understood that laser radar scanning yields section-based three-dimensional point cloud information covering the whole bridge deck width; dimension reduction is applied to this point cloud and the cross-section point cloud of the bridge bottom along the laser radar travel route is extracted. Splicing the cross-section point clouds of the several laser radars in sequence forms the picture of the bridge bottom area to be detected.
Fig. 6 illustrates the scanning mode of the laser radar. As shown in fig. 6, the laser radar has a certain field angle and scans at equal angular steps within it, acquiring one data point every 2°. In the x-z plane the laser radar is the coordinate origin (0, 0), and the bridge bottom structure directly above the radar is at (0, z_m). Within each scanning period, the four points whose x coordinates lie nearest to the target point are taken, namely (x_(m-2), z_(m-2)), (x_(m-1), z_(m-1)), (x_(m+1), z_(m+1)) and (x_(m+2), z_(m+2)), and the coordinates of the point m to be measured are interpolated from these four points.
After each scanning period of the laser radar, the coordinates of the point m to be measured are computed; all m-point coordinates within the measuring range of each laser radar are combined with the y coordinate of each m point from the encoder to build the local y-z plane projection point cloud of that laser radar. Splicing the y-z plane point clouds of the several laser radars in sequence yields the y-z plane point cloud along the full length of the laser radar travel track for the task.
According to the automatic detection method for the bridge bottom structure, the three-dimensional point cloud cross section of the area to be detected is constructed, forming an understanding of the area to be detected and providing the basis for subsequent robot task allocation and path planning.
Another method for automatically detecting a bridge bottom structure according to an embodiment of the present invention is described below with reference to fig. 2, where the method includes: the method comprises the steps of obtaining three-dimensional measurement information of a beam bottom, obtaining task scheduling of the robot according to the three-dimensional measurement information of the beam bottom and a robot task scheduling method, obtaining path planning of a single robot according to the three-dimensional measurement information of the beam bottom and a sensor attitude parameter calculation method, and finally obtaining robot cluster detection planning according to the task scheduling of the robot and the path planning of the single robot.
The embodiment of the invention is illustrated with the automatic detection of a box girder bridge. As shown in fig. 7, the bottom surface of the box girder is uneven, with a crenellated "great wall" shape, and each girder is abstracted into three sections: a bottom horizontal straight section a of length 0.4 m, an inclined section b of length 1 m, and a top horizontal section c of length 0.4 m. The whole area to be measured consists of 10 box girders with a total length of 18.4 m. The inspection task comprises a full-coverage visual inspection and fixed-point radar detection at the 9 positions (a_1 to a_9) shown in fig. 7. Table 3 gives the cost of the actual detection means; the time costs are obtained from laboratory experiments before testing starts and need not be adjusted on site.
TABLE 3 cost table of actual detection means
Three inspection robots (numbered R1 to R3) cooperatively detect the area to be measured. Before detection starts, the three robots move to positions that divide the total detection length into three roughly equal parts, as shown in table 4.
TABLE 4 Scan task partition table

Robot number | R1 | R2 | R3
Responsible area (unit: m) | 0-6.7 | 6.3-12.6 | 12.1-18.4
The three robots perform three-dimensional scanning at the same running speed. It should be noted that their scanning ranges overlap to some extent, which helps the splicing of the three point clouds. The three robots carry out three-dimensional mapping along the line with their onboard laser radars; applying the task allocation method described above gives the allocation results shown in table 5.
TABLE 5 Multi-robot task allocation

Robot number | R1 | R2 | R3
Responsible area (unit: m) | 0-6.05 | 6.05-12.35 | 12.35-18.4
Coverage detection time | 93 | 98 | 93
Fixed-point detection time | 90 | 90 | 90
Total time | 183 | 188 | 183
After the task allocation of the three robots is completed, the paths within each robot's responsible range are planned in parallel. Robot R1 is taken below as an example of the path planning procedure.
The parameters of the sensors mounted on the robot in this embodiment are shown in table 6.
TABLE 6 Visual sensor group parameter table

Name | High-resolution white light camera | Infrared camera
Sensor parameters | 2048×2048 | 512×512
Measurement accuracy | 0.2 (mm) | 1 (mm)
Field of view | 512×512 (mm) | 512×512 (mm)
Optimal ranging | 500 (mm) | 500 (mm)
The white light camera is an 8-megapixel-class CMOS camera and the infrared camera a 2.5-megapixel medium-wave infrared camera. By selecting matching lenses, the optimal ranging of the two cameras is controlled to 1.2 m.
After three-dimensional measurement and task allocation are finished, the detection area assigned to the single robot R1 is automatically divided into a number of straight-line segments. The method adopted in the embodiment of the invention merges adjacent line segments with similar slopes, eliminates the turning points at their joints, and through segment-by-segment merging finally divides the detection area into a number of straight-line segments. Recommended threshold values for the box girder structure are given in table 7:
TABLE 7 Straight-line extraction recommended thresholds

Item | Threshold
Turning point angle threshold | ≥23°
Segment fusion threshold | ≤10°
After each straight-line segment has been extracted, segments that are too long are further equally divided according to the preset detection unit length. In the embodiment of the invention the fields of view of the visual sensor group are 0.5 m × 0.5 m rectangles; considering the image stitching required in post-processing, the image overlap ratio γ is set to 0.2, so each side of an image must overlap the adjacent image by 0.1 m. On this basis each straight-line segment of the lower surface of the box girder is further subdivided; the detection unit segmentation results of robot R1 are shown in table 8.
TABLE 8 Robot detection unit segmentation results
It should be noted that the detection units of the 0-0.4 m segment and the 0.4-0.9 m segment in table 8 still contain an overlapping region across the boundary between the two straight-line segments.
After the segmentation method for each segment is determined, the start coordinates of each detection unit are obtained, and the optimal measurement point of the sensor is derived from them together with the optimal ranging of the vision sensor. The 0.35-0.6 m unit in table 8 is taken as an example; the other detection units are computed in the same way. Let the 0.35 point be the origin a with coordinates (0, 0) and the 0.6 point be point b, whose coordinates follow from the geometry of the inclined segment. Substituting the end-point coordinates into the pose formulas given above, the theoretical position of the image sensor is (0.375, -0.2) and the attitude angle is 60°.
and after the sensor attitude parameters corresponding to all the detection units are calculated by a similar method, connecting the theoretical positions in sequence, and finishing the automatic detection robot path planning.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (4)

1. An automatic detection method for a bridge bottom structure is characterized by comprising the following steps:
acquiring a three-dimensional point cloud cross section of a region to be detected;
dividing the cross section of the three-dimensional point cloud into a plurality of detection areas, and acquiring the plurality of detection areas and a sequence of the plurality of detection areas, wherein the sequence is used for carrying out task allocation on the robot;
acquiring a path plan of the robot corresponding to any one of the detection areas based on the detection areas;
wherein, the obtaining of the path plan of the robot corresponding to any detection area in the plurality of detection areas based on the plurality of detection areas comprises:
dividing the cross-section point cloud of any detection area into a plurality of detection units,
acquiring sensor attitude parameters of the robots corresponding to the detection units based on the detection units,
acquiring a path plan of the robot corresponding to any detection area based on sensor attitude parameters of the robot corresponding to the detection units;
wherein the dividing of the cross-section point cloud of any detection area into a plurality of detection units comprises:
dividing the cross-section point cloud of any detection area into a plurality of sections,
acquiring the plurality of detection units based on the plurality of segments;
wherein the obtaining the plurality of detection units based on the plurality of segments comprises:
stopping dividing any one of the plurality of sections if the length of the section is not greater than the length of a preset detection unit, wherein the section serves as the detection unit,
and if the length of any section is greater than the length of the preset detection unit, averagely dividing the any section until the length of a single section is not greater than the length of the preset detection unit to obtain the detection unit.
2. The automatic detection method for the bridge bottom structure according to claim 1, wherein the step of segmenting the cross-sectional point cloud of any detection area into a plurality of segments comprises:
segmentation is performed using an elevation gradient-based segmentation method.
3. The method of claim 1, wherein the dividing the cross section of the three-dimensional point cloud into a plurality of detection areas, and obtaining the plurality of detection areas and a sequence of the plurality of detection areas, the sequence being used for task assignment to a robot, comprises:
dividing the cross section of the three-dimensional point cloud into a plurality of detection areas, and acquiring the plurality of detection areas and a sequence of the plurality of detection areas by taking the shortest total detection time of multiple robots as a target function.
4. The automatic detection method for the bridge bottom structure according to any one of claims 1 to 3, wherein the obtaining of the cross section of the three-dimensional point cloud of the area to be detected comprises:
acquiring three-dimensional point cloud information of the area to be detected;
and acquiring the three-dimensional point cloud cross section of the area to be detected based on the three-dimensional point cloud information of the area to be detected.
CN202010275584.9A 2020-04-09 2020-04-09 Automatic detection method for bridge bottom structure Active CN111390913B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010275584.9A CN111390913B (en) 2020-04-09 2020-04-09 Automatic detection method for bridge bottom structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010275584.9A CN111390913B (en) 2020-04-09 2020-04-09 Automatic detection method for bridge bottom structure

Publications (2)

Publication Number Publication Date
CN111390913A CN111390913A (en) 2020-07-10
CN111390913B true CN111390913B (en) 2021-07-20

Family

ID=71425143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010275584.9A Active CN111390913B (en) 2020-04-09 2020-04-09 Automatic detection method for bridge bottom structure

Country Status (1)

Country Link
CN (1) CN111390913B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113075638B (en) * 2021-04-30 2021-10-12 深圳安德空间技术有限公司 Multi-source data synchronous acquisition and fusion method and system for underground space exploration
CN115690098B (en) * 2022-12-16 2023-04-07 中科海拓(无锡)科技有限公司 Method for detecting breakage and loss of iron wire

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108286457A (en) * 2017-12-04 2018-07-17 山东康威通信技术股份有限公司 Electric tunnel inspection robot walking safety guarantee dispatching method and system
CN108731736A (en) * 2018-06-04 2018-11-02 山东大学 Automatic for bridge tunnel Structural defect non-destructive testing diagnosis climbs wall radar photoelectricity robot system
KR20190095167A (en) * 2018-02-05 2019-08-14 이철희 Apparatus and method for focusing in camera
CN110703802A (en) * 2019-11-04 2020-01-17 中国科学院自动化研究所 Automatic bridge detection method and system based on multi-unmanned aerial vehicle cooperative operation
CN110780307A (en) * 2019-05-29 2020-02-11 武汉星源云意科技有限公司 Method for obtaining road cross section based on storage battery car-mounted laser point cloud mobile measurement system


Also Published As

Publication number Publication date
CN111390913A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
Kim et al. SLAM-driven robotic mapping and registration of 3D point clouds
Valença et al. Assessment of cracks on concrete bridges using image processing supported by laser scanning survey
Peel et al. Localisation of a mobile robot for bridge bearing inspection
Kim et al. Automated dimensional quality assessment of precast concrete panels using terrestrial laser scanning
Ribeiro et al. Remote inspection of RC structures using unmanned aerial vehicles and heuristic image processing
CN111390913B (en) Automatic detection method for bridge bottom structure
JP2017053819A (en) Crack detection method and detection program of concrete
US20210266461A1 (en) Defect detection system using a camera equipped uav for building facades on complex asset geometry with optimal automatic obstacle deconflicted flightpath
CN112880599B (en) Roadbed flatness detection system based on four-foot robot and working method
CN107292926B (en) Crusing robot movement locus verticality measuring method based on more image sequences
US11506565B2 (en) Four-dimensional crane rail measurement
CN102788572A (en) Method, device and system for measuring attitude of lifting hook of engineering machinery
CN111721279A (en) Tail end path navigation method suitable for power transmission inspection work
McCormick et al. Practical in situ applications of DIC for large structures
CN105800464A (en) Positioning method based on automatic lifting hook system
Cheng et al. Vision-based trajectory monitoring for assembly alignment of precast concrete bridge components
CN104048645A (en) Integral orientation method of ground scanning point cloud through linear fitting
CN114964007A (en) Visual measurement and surface defect detection method for weld size
Jiang et al. Bridge coating inspection based on two-stage automatic method and collision-tolerant unmanned aerial system
Ioli et al. UAV photogrammetry for metric evaluation of concrete bridge cracks
Kersten et al. Potentials of autonomous UAS and automated image analysis for structural health monitoring
CN110458785A (en) A kind of magnetic levitation ball levitation gap detection method based on image sensing
CN115562284A (en) Method for realizing automatic inspection by high-speed rail box girder inspection robot
Hagiwara et al. Non-contact detection of degradation of in-service steel sheet piles due to buckling phenomena by using digital image analysis with Hough transform
Yan et al. A novel building post-construction quality assessment robot: Design and prototyping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 430223 No.6, 4th Road, Wuda Science Park, Donghu high tech Zone, Wuhan City, Hubei Province

Applicant after: Wuhan Optical Valley excellence Technology Co.,Ltd.

Address before: 430223 No.6, 4th Road, Wuda Science Park, Donghu high tech Zone, Wuhan City, Hubei Province

Applicant before: Wuhan Wuda excellence Technology Co.,Ltd.

Address after: 430223 No.6, 4th Road, Wuda Science Park, Donghu high tech Zone, Wuhan City, Hubei Province

Applicant after: Wuhan Wuda excellence Technology Co.,Ltd.

Address before: 430223 No.6, 4th Road, Wuda Science Park, Donghu high tech Zone, Wuhan City, Hubei Province

Applicant before: WUHAN WUDA ZOYON SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant