CN115797587B - Inspection robot positioning and mapping method fusing line-scan vehicle bottom image features - Google Patents

Info

Publication number
CN115797587B
CN115797587B (application CN202310080519.4A; earlier publication CN115797587A)
Authority
CN
China
Prior art keywords: point cloud, frame, map, vehicle bottom, pose
Prior art date
Legal status
Active
Application number
CN202310080519.4A
Other languages
Chinese (zh)
Other versions
CN115797587A
Inventor
Zhang Muhua
Ma Lei
Shen Kai
Sun Yongkui
Current Assignee
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202310080519.4A
Publication of CN115797587A
Application granted
Publication of CN115797587B
Legal status: Active

Classifications

    • Y General tagging of new technological developments; cross-sectional technologies spanning several sections of the IPC
    • Y02 Technologies or applications for mitigation or adaptation against climate change
    • Y02T Climate change mitigation technologies related to transportation
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The invention discloses a positioning and mapping method for an inspection robot that fuses line-scan vehicle bottom image features. While the robot moves, pose estimation is performed continuously to build a vehicle bottom point cloud frame sequence, a feature point cloud frame sequence, a line-scan image frame sequence, and a pose graph. After the robot stops, the line-scan vehicle bottom images inside each region of interest (ROI) are stitched into a whole bogie-area image, in which bogie pattern features are identified and matched. A trajectory optimization constraint is constructed from the poses of the bogie pattern features, and the pose graph is optimized under this constraint. The accurate parking position of the train is then obtained, the stop-point poses of the inspection robot are calculated, the inspection task is started, and the robot finally returns to the inspection origin. The method frees the train bottom inspection robot from carrying a dedicated, costly high-speed, high-precision time-of-flight laser point ranging sensor for positioning, which reduces the hardware cost of robot positioning and mapping while improving its accuracy and robustness.

Description

Inspection robot positioning and mapping method fusing line-scan vehicle bottom image features
Technical Field
The invention relates to an inspection robot positioning and mapping method that fuses line-scan vehicle bottom image features, and belongs to the technical field of autonomous positioning and mapping for inspection robots.
Background
As robotics and artificial intelligence fault detection technologies mature, train bottom intelligent inspection systems built around a mobile robot as the core actuator have drawn increasing industry attention in recent years. Such systems can replace manual inspection in locomotive or rolling stock depots and greatly reduce the workload of maintenance workers. Using an autonomous mobile robot that travels in the underground maintenance trench and carries a lidar, a line-scan camera, a collaborative manipulator, and a 3D camera, the system can precisely sample train bottom images, image information of key underbody components, and three-dimensional point cloud information, analyze them on an algorithm server array, and produce maintenance recommendations for the train. To improve inspection quality and guarantee operational safety, the robot must have strong autonomous positioning and mapping capabilities, including: 1. accurate self pose estimation over runs of hundreds of meters in the underground maintenance trench, together with accurate construction of a three-dimensional vehicle bottom point cloud map; 2. because trains stop at different positions after entering the depot, the inspection navigation points, which are referenced to the bogie positions, must be determined from the actual parking position of the train, so the robot must be able to detect that parking position accurately during inspection.
In the prior art, simultaneous localization and mapping (SLAM) is the main approach to autonomous positioning and mapping of a robot in the underground maintenance trench. Two schemes dominate the choice of a concrete SLAM solution: a two-dimensional lidar scheme based on a prior map, and a three-dimensional lidar odometry scheme that uses no prior map.
The two-dimensional lidar scheme based on a prior map generally achieves higher positioning accuracy, but it cannot build a map from three-dimensional point clouds and therefore cannot reconstruct train bottom features; moreover, because a two-dimensional lidar carries limited scanning information, the positioning algorithm easily degenerates and fails under extreme conditions with sparse, repetitive features. The three-dimensional lidar odometry scheme without a prior map cannot perform loop closure detection and global trajectory optimization from prior information, so it struggles to overcome the effect of accumulated long-distance drift on positioning accuracy and on the accuracy of the vehicle bottom three-dimensional point cloud map.
At present, industry detection of the train parking position by a robot in the underground maintenance trench mainly uses two methods: wheel/axle center detection based on a point laser ranging sensor, and vehicle bottom map point cloud registration. The former computes the center coordinates of the wheel and axle from their circular surfaces as seen by the laser point ranging sensor, thereby detecting the train parking position; the latter registers a bogie template point cloud against the full vehicle bottom point cloud map built during inspection to obtain the true bogie position, thereby detecting the train parking position.
For the laser point ranging scheme, ensuring reliable wheel-axle search requires extremely high ranging accuracy and data update rate, so the inspection robot must carry an expensive imported high-speed, high-precision time-of-flight laser point ranging sensor, often costing tens of thousands of yuan. For the bogie point cloud registration scheme, the algorithm operates on three-dimensional structural data, so its time and space complexity are high, its storage demand is large, and its computation is slow; moreover, on complex point cloud structures it risks detection failure from mismatches and local convergence.
Disclosure of Invention
Aimed at the prior art problems that robot pose estimation and mapping cannot achieve both three-dimensional reconstruction and accurate positioning, and that train parking position detection cannot be simultaneously low-cost, efficient, and robust, the invention provides an inspection robot positioning and mapping method fusing line-scan vehicle bottom image features that offers high precision and robustness.
The technical solution provided by the invention to solve the above technical problems is as follows: a positioning and mapping method for an inspection robot fusing line-scan vehicle bottom image features, comprising the following steps:
S1, the inspection robot travels at a constant speed in the underground maintenance trench along a straight track parallel to the train, from the inspection origin to the end of the trench, and the prior approximate position interval of each train bogie is set as a region of interest (ROI) of the line-scan vehicle bottom image;
S2, pose estimation is obtained by a direct registration method or by a feature extraction and pose residual optimization method;
S3, according to the pose estimation, a line-scan vehicle bottom image frame sequence and a point cloud vehicle bottom map frame sequence or a point cloud feature map frame sequence are constructed, a pose estimation sequence is formed at the same time, and a pose graph is built from the pose estimation sequence;
S4, the line-scan vehicle bottom images inside each ROI are stitched into a whole bogie-area image, in which bogie pattern features are identified and matched;
S5, a trajectory optimization constraint is constructed from the difference between the pose difference, in the map coordinate system, of the bogie feature components of two ROIs and the prior true bogie-to-bogie distance of the train;
S6, the pose graph is optimized under the trajectory optimization constraint, the pose of every frame in each frame sequence is adjusted, and the point cloud vehicle bottom map, the point cloud feature map, and the line-scan vehicle bottom image frame sequence are thereby optimized;
S7, bogie pattern features are identified and matched again in the whole bogie images after pose optimization, yielding the accurate parking position of the train;
and S8, the stop-point poses of the inspection robot are calculated from the accurate parking position of the train, the inspection task is started, and the robot finally returns to the inspection origin.
A further technical solution is that the inspection robot in step S1 moves at a constant speed.
A further technical solution is that the specific process of the direct registration method in step S2 is as follows:
S21, the current three-dimensional lidar scan is taken as the kth frame scan;
S22, a coarse transformation between the current frame's scan point cloud and the previous frame's scan point cloud is computed in the radar coordinate system;
and S23, with the coarse transformation as the initial estimate, the current scan point cloud is registered against the current global point cloud vehicle bottom map formed by the point cloud parts of the existing point cloud vehicle bottom map frame sequence, yielding the pose estimation.
A further technical solution is that the specific process of the feature extraction and pose residual optimization method in step S2 is as follows:
Step S201, the current three-dimensional lidar scan is taken as the kth frame scan;
Step S202, surface and line features are extracted from the scan point cloud to generate a feature point set in the radar coordinate system;
Step S203, for each point in the feature point set, several nearest feature points are searched in a sub-map near the previous frame's pose within the current global point cloud feature map formed by the feature point cloud parts of the existing feature point cloud map frame sequence, and surface-surface and line-line residuals are constructed and optimized to obtain the pose estimation.
A further technical solution is that the specific process of step S3 is as follows:
S31, the scan point cloud is transformed into the map coordinate system and bound with the pose estimation to generate the kth frame point cloud vehicle bottom map frame, or the feature point set is transformed into the map coordinate system and bound with the pose estimation to generate the kth frame point cloud feature map frame;
S32, the kth frame line-scan vehicle bottom image is bound with the pose estimation to generate the kth frame line-scan vehicle bottom image frame;
S33, the line-scan vehicle bottom image frames form a line-scan vehicle bottom image frame sequence, the point cloud vehicle bottom map frames form a point cloud vehicle bottom map frame sequence, and the point cloud feature map frames form a point cloud feature map frame sequence, so that a pose estimation sequence is also formed;
and S34, a pose graph is constructed from the pose estimation sequence.
A further technical solution is that the specific process of step S4 is as follows: for the sub-sequence of line-scan vehicle bottom image frames inside each ROI, the per-frame line-scan vehicle bottom images are stitched into a complete image of each bogie area of the train, and a pattern recognition and matching algorithm searches each complete bogie image for the specific bogie pattern features.
A further technical solution is that in step S4, for the ith complete bogie image, the line-scan line number corresponding to the center of the recognition result box in the complete bogie-area image is computed, and the estimated translation in the map coordinate system corresponding to that box center is then obtained from the pose estimation recorded for each line during frame sequence construction.
A further technical solution is that the specific process of step S6 is as follows:
S61, the global trajectory in the pose graph is optimized under the trajectory optimization constraint to obtain an optimized pose estimation sequence;
S62, according to the optimized pose estimation sequence and while recording the original pose estimation, the corresponding pose estimation in every frame of each type in the frame sequences is adjusted, and the coordinates of every point of the point cloud data contained in each frame are transformed to the new pose estimation;
and S63, a more accurate global vehicle bottom point cloud map is obtained from the optimized point cloud vehicle bottom map frame sequence, or a more accurate global point cloud feature map is obtained from the optimized feature point cloud map frame sequence, realizing accurate three-dimensional vehicle bottom point cloud mapping and feature point cloud mapping.
A further technical solution is that the specific process of step S7 is as follows: the pose estimation in the map coordinate system corresponding to the center of each pattern feature recognition result box on each bogie image of the train is obtained from the pose-optimized line-scan vehicle bottom image frame sequence, which is equivalent to obtaining the accurate parking position of the train.
The invention has the following beneficial effects: the train bottom inspection robot no longer needs to carry a dedicated, costly high-speed, high-precision time-of-flight laser point ranging sensor for positioning, which reduces the hardware cost of robot positioning and mapping and improves its accuracy and robustness.
Drawings
FIG. 1 is a schematic view of an inspection robot;
FIG. 2 is a flow chart of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The robot carries a three-dimensional lidar and a line-scan camera oriented perpendicular to the train bottom. The three-dimensional lidar continuously outputs three-dimensional point cloud scan data at a constant frequency; the line-scan camera is driven by pulses whose frequency is matched to the robot's speed and continuously outputs train bottom image data with a fixed number of lines.
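To make the speed matching concrete, the sketch below computes the required line trigger rate; the line pitch value is an illustrative assumption, not a figure from the patent.

```python
# Minimal sketch of the speed-matched line trigger: to obtain square pixels,
# the camera must expose one line each time the robot advances one line pitch.
robot_speed_m_s = 0.5    # constant patrol speed used in the embodiment (step S1)
line_pitch_m = 0.5e-3    # assumed along-track sampling distance per scan line

trigger_hz = robot_speed_m_s / line_pitch_m  # required pulse frequency
print(f"line trigger frequency: {trigger_hz:.0f} Hz")  # -> 1000 Hz
```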
The algorithm is divided into two stages:
Vehicle bottom data acquisition stage: the inspection robot travels at a constant speed in the underground maintenance trench along a straight track parallel to the train, from the inspection origin to the end of the trench. The prior approximate position interval of each train bogie is set as a region of interest (ROI) of the line-scan image. Using the pose estimation provided by a common lidar odometry method, either direct registration or feature extraction with pose residual optimization, the point cloud feature map frames (only when the feature-based lidar odometry is used), the point cloud vehicle bottom map frames, and, inside the ROIs, the line-scan vehicle bottom image frames bound to the pose estimation are continuously constructed to form the line-scan images. Because only the last line of each line-scan image is bound to a pose estimation, the pose estimation of every other line is interpolated with a uniform motion model.
Off-line optimization calculation stage: the robot starts off-line calculation after driving to the end of the underground maintenance trench and stopping. The line-scan images inside each ROI are stitched; bogie feature components are identified and matched in the stitched images by a pattern recognition neural network or by traditional template matching; their pixel positions in the images are computed; and their poses in the map coordinate system are solved. The difference between the pose difference, in the map coordinate system, of the bogie feature components of two ROIs and the prior true bogie-to-bogie distance of the train is exactly the discrepancy between the lidar odometry pose estimation and the true distance. Under this difference constraint, the SLAM global trajectory can be optimized by methods including but not limited to pose graph optimization; the pose estimation of every frame in each frame sequence is then adjusted, improving the accuracy of the feature point cloud map (only when the feature-based lidar odometry is used), the point cloud vehicle bottom map, and the line-scan vehicle bottom images; and the poses of the train bogie feature components in the map coordinate system can be computed again in the optimized line-scan images, realizing detection of the true parking position of the train.
As shown in FIG. 2, the method specifically includes the following steps:
S1, the inspection robot travels at a constant linear speed of 0.5 m/s in the underground maintenance trench along a straight track parallel to the train, from the inspection origin to the end of the trench, and the prior approximate position interval of each train bogie is set as a region of interest (ROI) of the line-scan vehicle bottom image;
S2, pose estimation is obtained by a direct registration method or by a feature extraction and pose residual optimization method;
the specific process of the direct registration method is as follows:
s21, setting the current three-dimensional laser radar scanning as the kth frame scanning;
s22, calculating a rough transformation relation between the scanning point cloud of the current frame in the radar coordinate system and the scanning point cloud of the previous frame;
and S23, taking the rough transformation relation as an initial estimation, and registering the current scanning point cloud in a current global point cloud vehicle bottom map formed by point cloud parts in a point cloud vehicle bottom map frame sequence existing in the current scanning point cloud to obtain a pose estimation.
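As an illustration only (not the claimed implementation), the sketch below shows steps S21 to S23 with Open3D's point-to-point ICP standing in for the registration routine; the 0.2 m correspondence distance is an assumed parameter.

```python
import numpy as np
import open3d as o3d

def estimate_pose_direct(scan_k: np.ndarray,      # (N, 3) current lidar scan
                         global_map: np.ndarray,  # (M, 3) map built so far
                         coarse_T: np.ndarray) -> np.ndarray:
    """Return a 4x4 pose estimate of frame k in map coordinates (step S23)."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scan_k))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(global_map))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt,
        max_correspondence_distance=0.2,  # assumed nearest-neighbour radius [m]
        init=coarse_T,                    # step S22: coarse frame-to-frame guess
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    return result.transformation
```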
The specific process of the feature extraction and pose residual optimization method is as follows:
Step S201, the current three-dimensional lidar scan is taken as the kth frame scan;
Step S202, surface and line features are extracted from the scan point cloud to generate a feature point set in the radar coordinate system;
Step S203, for each point in the feature point set, several nearest feature points are searched in a sub-map near the previous frame's pose within the current global point cloud feature map formed by the feature point cloud parts of the existing feature point cloud map frame sequence, and surface-surface and line-line residuals are constructed and optimized to obtain the pose estimation.
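For illustration, a sketch of the two residual types named in step S203 is given below; the patent does not specify the exact weighting or feature classification, so the neighbourhood fits here are assumptions.

```python
import numpy as np

def plane_residual(p: np.ndarray, neighbours: np.ndarray) -> float:
    """Surface-surface residual: signed distance of point p to the plane
    fitted to its nearest map neighbours."""
    centroid = neighbours.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbours - centroid)
    normal = vt[-1]              # direction of least variance = plane normal
    return float(normal @ (p - centroid))

def line_residual(p: np.ndarray, neighbours: np.ndarray) -> float:
    """Line-line residual: distance of point p from the line fitted to its
    nearest map neighbours."""
    centroid = neighbours.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbours - centroid)
    direction = vt[0]            # direction of largest variance = edge line
    return float(np.linalg.norm(np.cross(p - centroid, direction)))
```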
S3, according to the pose estimation, the line-scan vehicle bottom image frame sequence and the point cloud vehicle bottom map frame sequence or point cloud feature map frame sequence are constructed, the pose estimation sequence is formed at the same time, and the pose graph is built from the pose estimation sequence;
Step S31, the scan point cloud p_k is transformed into the map coordinate system,

p_k^map = T_k · p_k ,

where T_k denotes the pose estimation of the kth frame, and p_k^map is bound with the pose estimation to generate the kth frame point cloud vehicle bottom map frame

P_k = ( p_k^map , T_k ) ;

or the feature point set f_k is transformed into the map coordinate system,

f_k^map = T_k · f_k ,

and f_k^map is bound with the pose estimation to generate the kth frame point cloud feature map frame

F_k = ( f_k^map , T_k ) ;
step S32, setting the approximate position interval of the prior train bogie as a region of interest (ROI) of a line scanning image, and if the current pose is estimated to be in the ROI, namely setting a k frame line scanning vehicle bottom image S k Binding with pose estimation to generate a kth frame line scanning vehicle bottom image frame;
Figure SMS_5
due to s k The pose estimation only can represent the pose estimation during the image acquisition of the nth line, and the uniform speed running of the robot is considered, so that the vehicle bottom image s of the k frame line scanning vehicle is obtained k Image s of ith line among inner 1 to n-1 lines i k Corresponding pose estimation is carried out by using a uniform motion model for compensation; i.e. within the ROIThe pose estimation of each line scanning image can be recorded;
s33, forming a scanning vehicle bottom image frame sequence by using a k frame line scanning vehicle bottom image frame, forming a point cloud vehicle bottom map frame sequence by using a k frame point cloud vehicle bottom map frame, and forming a point cloud characteristic map frame sequence by using a k frame point cloud characteristic map frame, so as to form a pose estimation sequence;
s34, constructing a pose graph by using a pose estimation sequence;
s4, splicing the images of the inner line scanning vehicle bottom in the ROI to form a whole bogie area map, and identifying and matching bogie mode characteristics in the whole bogie area map;
splicing the frame line scanning vehicle bottom image sub-sequence in each ROI to form a complete image of each bogie area of the train, and searching for specific bogie mode characteristics by using a mode identification matching algorithm on each complete image of the bogie; calculating the number of line scanning lines corresponding to the identification result frame selection center in the complete bogie area image for the ith bogie complete image, and further obtaining the estimation translation on a map coordinate system corresponding to the identification result frame selection center through the pose estimation corresponding to each line recorded in the frame sequence construction process;
s5, constructing a track optimization constraint according to the difference between the pose difference of the bogie characteristic components in the two ROI (region of interest) in the map coordinate system and the distance between the prior real bogies of the train;
wherein the precise distance between the bogies of the individual cars is known a priori, defined as t, for trains of the same type b The distance between the estimated translation of the ith complete image of the bogie and the estimated translation of the ith-1 complete image of the bogie is t b The relationship (including but not limited to the relationship of difference, absolute value of difference, ratio, absolute value of ratio, etc.) of (a) can be used as the constraint of track optimization;
s6, optimizing the pose graph through track optimization constraint, adjusting the pose of each frame in each frame sequence, and completing point cloud vehicle bottom map optimization, point cloud characteristic map optimization and line scanning vehicle bottom image frame sequence optimization;
s61, optimizing a global track in the pose graph by using track optimization constraints to obtain an optimized pose estimation sequence;
s62, according to the optimized pose estimation sequence, on the premise of recording original pose estimation, adjusting corresponding pose estimation in each type of frame in the frame sequence, and simultaneously transforming the coordinates of each point of point cloud data contained in each type of frame to new pose estimation;
the point cloud part of the kth frame point cloud underbody map frame comprises the following components:
Figure SMS_6
if the pose estimation is based on the feature method, for the feature point cloud part of the k frame point cloud feature map frame, the following steps are carried out:
Figure SMS_7
s63, obtaining a more accurate global vehicle bottom point cloud map according to the optimized point cloud vehicle bottom map frame sequence or obtaining a more accurate global point cloud feature map according to the optimized feature point cloud map frame sequence, so that accurate three-dimensional vehicle bottom point cloud map construction and feature point cloud map construction are realized;
s7, identifying and matching bogie mode characteristics again in the bogie overall image after the pose estimation is optimized, and obtaining the accurate parking position of the train;
acquiring pose estimation in a map coordinate system corresponding to a frame selection center of a mode feature identification result on each bogie image of the train in a line scanning vehicle bottom image frame sequence after the pose is optimized, namely acquiring an accurate parking position of the train;
and S8, calculating the parking point pose of the inspection robot according to the accurate parking position of the train, starting inspection operation and finally returning to the inspection original point.
Examples
As shown in FIG. 1, a wheeled differential-drive train inspection robot 2 runs along a straight track parallel to the train in the underground maintenance trench 1. It carries Livox MID-70 solid-state three-dimensional lidars 3 and 4, whose scan directions are symmetric about the body Y axis and parallel to the body XY plane, and a line-scan camera device 5 whose scan direction is perpendicular to the body XY plane. With reference to FIG. 2, the steps of the positioning and mapping method are as follows:
step 1, vehicle bottom data acquisition stage:
calculating pose estimation: the inspection robot runs at a constant speed in the underground maintenance trench along a linear track parallel to the train from the inspection origin to the tail end of the underground maintenance trench. And calculating by using a feature extraction and pose residual error optimization method of a Loam livox algorithm. And setting the current three-dimensional laser radar scanning as the kth frame scanning, extracting the facial line characteristics in the scanning point cloud, generating a characteristic point set in a radar coordinate system, searching a plurality of nearest characteristic points from a sub-map which is close to the upper position in the current global point cloud characteristic map formed by the characteristic point cloud part in the existing characteristic point cloud map frame sequence for each point in the characteristic point set, and constructing and optimizing a surface-surface residual error and a line-line residual error to obtain the pose estimation.
Constructing the frame sequences: after the current pose estimation is obtained, the scan point cloud is transformed into the map coordinate system, and p_k^map is bound with the current pose estimation to generate the kth frame point cloud vehicle bottom map frame;
the prior approximate position interval of the train bogie is set as a region of interest (ROI) of the line-scan image, and if the current pose estimation lies inside the ROI, the kth frame line-scan vehicle bottom image is bound with the current pose estimation to generate the kth frame line-scan vehicle bottom image frame;
the feature point set is transformed into the map coordinate system, and f_k^map is bound with the current pose estimation to generate the kth frame point cloud feature map frame;
the frames collected while the robot travels at a constant speed to the end of the underground maintenance trench form the frame sequences, and the pose graph is constructed from the pose estimation sequence.
Step 2, the off-line optimization calculation stage:
Stitching the line-scan images and identifying and matching bogie features: for the line-scan vehicle bottom image frame sub-sequence inside each ROI, the per-frame line-scan vehicle bottom images are stitched into a complete image of each bogie area of the train, and the YOLOv5 algorithm searches each complete bogie image for the specific bogie pattern features. For the ith complete bogie image, the line-scan line number corresponding to the center of the recognition result box in the complete bogie-area image is computed, and the estimated translation of the box center in the map coordinate system is then obtained from the pose estimation recorded for each line during frame sequence construction.
Constructing the trajectory optimization constraint: for trains of the same model, the precise distance between the bogies of a single car is known a priori and is defined as t_b; the difference between t_b and the distance from the estimated translation of the ith complete bogie image to the estimated translation of the (i-1)th complete bogie image then serves as the trajectory optimization constraint;
Optimizing the global trajectory and adjusting the pose of every frame: the global trajectory is optimized under the trajectory optimization constraint with a pose graph optimization method, yielding an optimized pose estimation sequence that is more accurate and closer to the ground truth. According to the optimized pose estimation sequence and while recording the original pose estimation, the corresponding pose estimation in every frame of each type in the frame sequences is adjusted, and the coordinates of every point of the point cloud data contained in each frame are transformed to the new pose estimation;
obtaining a more accurate global vehicle bottom point cloud map according to the optimized point cloud vehicle bottom map frame sequence; according to the optimized characteristic point cloud map frame sequence, a more accurate global point cloud characteristic map can be obtained, and accurate three-dimensional vehicle bottom point cloud map construction and characteristic point cloud map construction are achieved. Meanwhile, when the robot returns, the pose estimation method is consistent with the previous method, so that the more accurate feature point cloud map can obviously improve the pose estimation quality and the return positioning precision.
Detecting the real parking position of the train: the same operation as the step 1 is executed in the line scanning vehicle bottom image frame sequence after the position is optimized, so that the position estimation in the map coordinate system corresponding to the frame selection center of the mode feature identification result on each bogie image of the train can be obtained, and the accurate parking position of the train is obtained equivalently due to the fact that the transformation relation between the bogie mode feature and the train is known in a priori.
And 3, calculating the parking point pose of the inspection robot according to the accurate parking position of the train, starting inspection operation and finally returning to the inspection original point.
Although the present invention has been described with reference to the above embodiments, it should be understood that the present invention is not limited to the above embodiments, and those skilled in the art can make various changes and modifications without departing from the scope of the present invention.

Claims (9)

1. A positioning and mapping method for an inspection robot fusing line-scan vehicle bottom image features, characterized by comprising the following steps:
S1, the inspection robot travels at a constant speed in the underground maintenance trench along a straight track parallel to the train, from the inspection origin to the end of the trench, and the prior approximate position interval of each train bogie is set as a region of interest (ROI) of the line-scan vehicle bottom image;
S2, pose estimation is obtained by a direct registration method or by a feature extraction and pose residual optimization method;
S3, according to the pose estimation, a line-scan vehicle bottom image frame sequence and a point cloud vehicle bottom map frame sequence or a point cloud feature map frame sequence are constructed, a pose estimation sequence is formed at the same time, and a pose graph is built from the pose estimation sequence;
S4, the line-scan vehicle bottom images inside each ROI are stitched into a whole bogie-area image, in which bogie pattern features are identified and matched;
S5, a trajectory optimization constraint is constructed from the difference between the pose difference, in the map coordinate system, of the bogie feature components of two ROIs and the prior true bogie-to-bogie distance of the train;
S6, the pose graph is optimized under the trajectory optimization constraint, the pose of every frame in each frame sequence is adjusted, and the point cloud vehicle bottom map, the point cloud feature map, and the line-scan vehicle bottom image frame sequence are thereby optimized;
S7, bogie pattern features are identified and matched again in the whole bogie images after pose optimization, yielding the accurate parking position of the train;
and S8, the stop-point poses of the inspection robot are calculated from the accurate parking position of the train, the inspection task is started, and the robot finally returns to the inspection origin.
2. The inspection robot positioning and mapping method fusing line-scan vehicle bottom image features according to claim 1, wherein the inspection robot moves at a constant speed in step S1.
3. The inspection robot positioning and mapping method fusing line-scan vehicle bottom image features according to claim 1, wherein the specific process of the direct registration method in step S2 is as follows:
S21, the current three-dimensional lidar scan is taken as the kth frame scan;
S22, a coarse transformation between the current frame's scan point cloud and the previous frame's scan point cloud is computed in the radar coordinate system;
and S23, with the coarse transformation as the initial estimate, the current scan point cloud is registered against the current global point cloud vehicle bottom map formed by the point cloud parts of the existing point cloud vehicle bottom map frame sequence, yielding the pose estimation.
4. The inspection robot positioning and mapping method fusing line-scan vehicle bottom image features according to claim 1, wherein the specific process of the feature extraction and pose residual optimization method in step S2 is as follows:
Step S201, the current three-dimensional lidar scan is taken as the kth frame scan;
Step S202, surface and line features are extracted from the scan point cloud to generate a feature point set in the radar coordinate system;
Step S203, for each point in the feature point set, several nearest feature points are searched in a sub-map near the previous frame's pose within the current global point cloud feature map formed by the feature point cloud parts of the existing feature point cloud map frame sequence, and surface-surface and line-line residuals are constructed and optimized to obtain the pose estimation.
5. The inspection robot positioning and mapping method fusing line-scan vehicle bottom image features according to claim 3 or 4, wherein the specific process of step S3 is as follows:
S31, the scan point cloud is transformed into the map coordinate system and bound with the pose estimation to generate the kth frame point cloud vehicle bottom map frame, or the feature point set is transformed into the map coordinate system and bound with the pose estimation to generate the kth frame point cloud feature map frame;
S32, the kth frame line-scan vehicle bottom image is bound with the pose estimation to generate the kth frame line-scan vehicle bottom image frame;
S33, the line-scan vehicle bottom image frames form a line-scan vehicle bottom image frame sequence, the point cloud vehicle bottom map frames form a point cloud vehicle bottom map frame sequence, and the point cloud feature map frames form a point cloud feature map frame sequence, so that a pose estimation sequence is also formed;
and S34, a pose graph is constructed from the pose estimation sequence.
6. The inspection robot positioning and mapping method fusing line-scan vehicle bottom image features according to claim 1, wherein the specific process of step S4 is as follows: for the sub-sequence of line-scan vehicle bottom image frames inside each ROI, the per-frame line-scan vehicle bottom images are stitched into a complete image of each bogie area of the train, and a pattern recognition and matching algorithm searches each complete bogie image for the specific bogie pattern features.
7. The inspection robot positioning and mapping method fusing line-scan vehicle bottom image features according to claim 6, wherein in step S4, for the ith complete bogie image, the line-scan line number corresponding to the center of the recognition result box in the complete bogie-area image is computed, and the estimated translation in the map coordinate system corresponding to that box center can then be obtained from the pose estimation recorded for each line during frame sequence construction.
8. The inspection robot positioning and mapping method fusing line-scan vehicle bottom image features according to claim 1, wherein the specific process of step S6 is as follows:
S61, the global trajectory in the pose graph is optimized under the trajectory optimization constraint to obtain an optimized pose estimation sequence;
S62, according to the optimized pose estimation sequence and while recording the original pose estimation, the corresponding pose estimation in every frame of each type in the frame sequences is adjusted, and the coordinates of every point of the point cloud data contained in each frame are transformed to the new pose estimation;
and S63, a more accurate global vehicle bottom point cloud map is obtained from the optimized point cloud vehicle bottom map frame sequence, or a more accurate global point cloud feature map is obtained from the optimized feature point cloud map frame sequence, realizing accurate three-dimensional vehicle bottom point cloud mapping and feature point cloud mapping.
9. The inspection robot positioning and mapping method fusing line-scan vehicle bottom image features according to claim 1, wherein the specific process of step S7 is as follows: the pose estimation in the map coordinate system corresponding to the center of each pattern feature recognition result box on each bogie image of the train is obtained from the pose-optimized line-scan vehicle bottom image frame sequence, which is equivalent to obtaining the accurate parking position of the train.
CN202310080519.4A 2023-02-08 2023-02-08 Inspection robot positioning and mapping method fusing line-scan vehicle bottom image features Active CN115797587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310080519.4A CN115797587B (en) 2023-02-08 2023-02-08 Inspection robot positioning and mapping method fusing line-scan vehicle bottom image features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310080519.4A CN115797587B (en) 2023-02-08 2023-02-08 Inspection robot positioning and mapping method fusing line-scan vehicle bottom image features

Publications (2)

Publication Number Publication Date
CN115797587A (en) 2023-03-14
CN115797587B (en) 2023-04-07

Family

ID=85430438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310080519.4A Active CN115797587B (en) 2023-02-08 2023-02-08 Inspection robot positioning and mapping method fusing line-scan vehicle bottom image features

Country Status (1)

Country Link
CN (1) CN115797587B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109738213A (en) * 2019-02-03 2019-05-10 北京新联铁集团股份有限公司 Rail transit rolling stock inspection pose detection system and its method
CN113448333A (en) * 2021-06-25 2021-09-28 北京铁道工程机电技术研究所股份有限公司 Bottom routing inspection positioning method and device based on sensor combination and electronic equipment
CN114379607A (en) * 2022-01-26 2022-04-22 株洲时代电子技术有限公司 Comprehensive railway inspection method
CN114862944A (en) * 2022-05-07 2022-08-05 杭州海康机器人技术有限公司 Vehicle pose detection method and device and electronic equipment
CN115511958A (en) * 2022-08-25 2022-12-23 成都唐源电气股份有限公司 Auxiliary positioning method for vehicle bottom inspection robot
CN115446834A (en) * 2022-09-01 2022-12-09 西南交通大学 Single-axis weight positioning method of vehicle bottom inspection robot based on occupied grid registration

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Linfeng. Analysis of the functions and applications of metro vehicle bottom inspection robots. Urban Mass Transit, 2022, Vol. 2022 (S01): 1-5. *
Wang Jian. Design research on intelligent train bottom inspection robots for railway locomotives and rolling stock. Engineering and Technological Research, 2018, Vol. 2018 (8): 210-211. *

Also Published As

Publication number Publication date
CN115797587A (en) 2023-03-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant