CN115511958A - Auxiliary positioning method for vehicle bottom inspection robot - Google Patents
- Publication number: CN115511958A (application CN202211028373.0A)
- Authority: CN (China)
- Prior art keywords: vehicle bottom, data, inspection robot, waveform, vehicle
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/74 — Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06T5/70 — Denoising; Smoothing
- G06V10/82 — Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention discloses an auxiliary positioning method for a vehicle bottom inspection robot, in the technical field of robot auxiliary positioning. Vehicle bottom data subjected to noise reduction are compared with the constructed data features of a designated part to determine that the vehicle bottom data include data of that part; a position-adjustment signal for the designated part is then returned to the vehicle bottom inspection robot, and the spatial coordinate of the designated part at the vehicle bottom is calculated from the mapping between the vehicle bottom data and the spatial coordinates provided by the robot; finally, the position-adjustment offset of the robot is determined from the spatial coordinate of the robot's current position and the spatial coordinate of the designated part at the vehicle bottom. The method does not rely on recognition of fixed landmark markers, is flexible and convenient, has relatively low maintenance cost, is not affected by the external environment, and suits the practical conditions of subway vehicle-bottom inspection.
Description
Technical Field
The invention relates to the technical field of auxiliary positioning of robots, in particular to an auxiliary positioning method of a vehicle bottom inspection robot.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
After arriving at the subway-bottom maintenance gallery, the subway bottom inspection robot travels along the planned path in the set inspection direction; after passing a specified working position, such as a bogie or a through passage, the robot needs auxiliary positioning to reconfirm the coordinate point of that working position.
At present, existing auxiliary positioning technology is mainly based on landmark markers: several markers are installed at feature points in the environment of the AGV robot, and the robot calculates its own pose from sensor measurements of these markers. The method depends on the markers in the environment, and the actual positioning accuracy depends mainly on accurate recognition of the markers and fast, accurate extraction of environmental position information.
The prior art has several defects: installation and maintenance costs are high, and sensor detection has blind spots when the external environment changes dynamically; moreover, in the subway-train-bottom inspection scenario, the spatial constraints of the train make installing landmark marker devices at the vehicle bottom impractical for the actual business.
Disclosure of Invention
The invention aims to solve the above problems with an auxiliary positioning method for a vehicle bottom inspection robot that analyzes laser rangefinder data and/or line-scan camera images, sends a position-adjustment signal to the inspection robot, and provides a position-adjustment offset so that the robot can find the optimal working position.
The technical scheme of the invention is as follows:
an auxiliary positioning method for a vehicle bottom inspection robot comprises the following steps:
comparing the vehicle bottom data subjected to noise reduction with the constructed data characteristics of the designated parts, and determining that the vehicle bottom data comprises the data of the designated parts;
returning a position adjusting signal of the designated part to the vehicle bottom inspection robot, and calculating the space coordinate of the designated part positioned at the bottom of the vehicle according to the mapping relation between the vehicle bottom data and the space coordinate provided by the vehicle bottom inspection robot;
and determining the position-adjustment offset of the vehicle bottom inspection robot according to the spatial coordinate of the robot's current position and the spatial coordinate of the designated part at the vehicle bottom.
Further, the vehicle bottom data comprises: vehicle bottom waveform data are obtained through a laser sensor; the noise reduction processing includes: carrying out noise reduction processing on vehicle bottom waveform data by using a clustering algorithm or a filtering algorithm;
or/and
the vehicle bottom data comprises: acquiring vehicle bottom image data through a line scanning module; the noise reduction processing includes: and carrying out noise reduction processing on the vehicle bottom image data acquired by the line scanning module by using a filtering algorithm.
Further, the data characteristics of the specified parts are constructed by a direct method;
the direct process construction includes:
constructing a feature vector B of a designated part; the feature vector B is a waveform feature vector B1 and/or an image feature vector B2.
Further, the data characteristics of the specified parts are constructed through deep learning modeling;
the deep learning modeling construction comprises the following steps:
carrying out noise reduction processing on a large amount of data corresponding to a specified part, wherein the data is waveform data or/and image data;
constructing a discrimination model of the specified part by using a deep learning model according to the data subjected to noise reduction processing; the discrimination model is a waveform discrimination model or/and an image discrimination model.
Further, the comparison adopts a direct method for comparison;
the direct method comparison comprises the following steps:
extracting a feature vector A of the vehicle bottom data; the feature vector A is a waveform feature vector A1 and/or an image feature vector A2; the types of feature vector A and feature vector B are kept consistent;
calculating the cosine similarity of the characteristic vector A and the characteristic vector B;
if the cosine similarity is greater than or equal to the threshold value s0, the vehicle bottom data comprises data of specified parts; and if the cosine similarity is smaller than the threshold value s0, the vehicle bottom data does not include the data of the specified parts, and new vehicle bottom data needs to be acquired again.
Further, the cosine similarity calculation formula is as follows:
furthermore, the comparison adopts a deep learning discrimination method;
the deep learning discrimination comparison comprises the following steps:
judging and predicting vehicle bottom data by using a judging model, and outputting the probability P that the vehicle bottom data comprises data of specified parts;
if the probability P is larger than or equal to the threshold value P0, the vehicle bottom data comprises appointed part data; if the probability P is smaller than the threshold value P0, the vehicle bottom data do not comprise the data of the designated parts, and new vehicle bottom data need to be obtained again.
Further, the mapping relationship includes:
the vehicle bottom inspection robot provides K space coordinates corresponding to the abscissa of the vehicle bottom data in the real working environment in unit time;
Let the vector formed by the K spatial coordinates be:
vp = [p_0, p_1, …, p_K]
where p_0 and p_K are the spatial coordinates corresponding to the abscissas of the start point and end point of the vehicle bottom data;
the sequence number of the k-th spatial coordinate in the vehicle bottom data is:
n(p_k) = k · N / K
where N is the number of abscissas in the vehicle bottom data acquired per unit time;
the vector formed by the sequence numbers in the vehicle bottom data corresponding to the K spatial coordinates is:
n(vp) = [n(p_0), n(p_1), …, n(p_K)].
further, when the vehicle bottom data comprises vehicle bottom waveform data;
the mapping relation comprises:
the vehicle bottom inspection robot provides K spatial coordinates corresponding to the abscissa of a data point of vehicle bottom waveform data in a real working environment in unit time;
Let the vector formed by the K spatial coordinates be:
vp = [p_0, p_1, …, p_K]
where p_0 and p_K are the spatial coordinates corresponding to the abscissas of the first and last data points of the vehicle bottom waveform data;
the sequence number of the k-th spatial coordinate in the vehicle bottom waveform data is:
n(p_k) = k · N / K
where N is the number of data points in the vehicle bottom waveform data acquired per unit time;
the vector formed by the sequence numbers in the vehicle bottom waveform data corresponding to the K spatial coordinates is:
n(vp) = [n(p_0), n(p_1), …, n(p_K)]
Calculating the spatial coordinate of the designated part at the vehicle bottom comprises:
acquiring waveform breakpoints in vehicle bottom waveform data by using a clustering algorithm, a Jenks natural breakpoint algorithm or a kernel density estimation algorithm; the waveform breakpoints form a plurality of waveforms;
calculating the range (peak-to-peak difference) of each waveform and comparing it with a threshold interval; if the range falls within the threshold interval, the waveform is judged to be that of a designated part at the vehicle bottom, and its waveform data are extracted;
fitting the waveform data with a polynomial function, obtaining the minimum of the polynomial via a gradient descent algorithm, and obtaining the abscissa n* of the data point in the vehicle bottom waveform data corresponding to that minimum;
finding the relative position of the abscissa n* among the sequence numbers of the vehicle bottom waveform data corresponding to the K spatial coordinates;
if the abscissa n* lies between n(p_k) and n(p_{k+1}), the spatial coordinate corresponding to n* is:
p(n*) = p_k + (n* − n(p_k)) / (n(p_{k+1}) − n(p_k)) × (p_{k+1} − p_k).
further, when the vehicle bottom data comprises vehicle bottom image data;
the mapping relationship comprises:
the vehicle bottom inspection robot provides K space coordinates corresponding to the abscissa of the vehicle bottom image data in the real working environment in unit time;
Let the vector formed by the K spatial coordinates be:
vp = [p_0, p_1, …, p_K]
where p_0 and p_K are the spatial coordinates corresponding to the leftmost and rightmost abscissas of the vehicle bottom image data;
the sequence number of the k-th spatial coordinate in the vehicle bottom image data is:
n(p_k) = k · N / K
where N is the number of abscissas in the vehicle bottom image data acquired per unit time;
the vector formed by the sequence numbers in the vehicle bottom image data corresponding to the K spatial coordinates is:
n(vp) = [n(p_0), n(p_1), …, n(p_K)]
Calculating, from the vehicle bottom data, the spatial coordinate of the designated part at the vehicle bottom comprises:
extracting the designated-part image region from the vehicle bottom image data with an image segmentation algorithm;
obtaining the abscissa n* of the center point of the designated-part image region in the vehicle bottom image data;
finding the relative position of the abscissa n* among the sequence numbers of the vehicle bottom image data corresponding to the K spatial coordinates;
if the abscissa n* lies between n(p_k) and n(p_{k+1}), the spatial coordinate corresponding to n* is:
p(n*) = p_k + (n* − n(p_k)) / (n(p_{k+1}) − n(p_k)) × (p_{k+1} − p_k).
Further, determining the position-adjustment offset of the vehicle bottom inspection robot comprises:
Δp = p(n*) − p(A)
where Δp is the position-adjustment offset of the vehicle bottom inspection robot, and p(A) is the spatial coordinate of the center of the robot, provided by the robot itself.
When Δp < 0, the robot must retreat by a distance of |Δp|;
when Δp > 0, the robot must advance by a distance of |Δp|;
when Δp = 0, the robot does not need to move.
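The offset and movement rule above can be sketched as follows; this is an illustrative fragment, and the function names and numeric values are invented for the example, not taken from the patent:

```python
def position_offset(p_part: float, p_robot: float) -> float:
    """Offset as the part's spatial coordinate minus the robot-center coordinate."""
    return p_part - p_robot

def move_command(delta_p: float) -> str:
    # delta_p < 0: retreat by |delta_p|; delta_p > 0: advance; delta_p == 0: stay.
    if delta_p < 0:
        return f"retreat {abs(delta_p):.3f}"
    if delta_p > 0:
        return f"advance {delta_p:.3f}"
    return "stay"
```

With, say, a part at coordinate 12.5 and the robot center at 10.0, the offset is 2.5 and the robot advances by 2.5.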
Compared with the prior art, the invention has the beneficial effects that:
An auxiliary positioning method for a vehicle bottom inspection robot comprises: comparing the noise-reduced vehicle bottom data with the constructed data features of a designated part to determine that the vehicle bottom data include data of that part; returning a position-adjustment signal for the designated part to the robot and calculating the spatial coordinate of the part at the vehicle bottom from the mapping between the vehicle bottom data and the spatial coordinates provided by the robot; and determining the position-adjustment offset of the robot from the spatial coordinate of its current position and the spatial coordinate of the designated part. By analyzing laser rangefinder data and/or line-scan camera images, the method sends a position-adjustment signal to the inspection robot and provides a position-adjustment offset so that the robot finds the optimal working position without relying on recognition of fixed landmark markers, which makes it flexible and convenient. Multiple vehicle-bottom parts can be recognized simultaneously and their spatial coordinates output. Because it relies only on a laser distance sensor and a line-scan camera, maintenance cost is relatively low; and the scheme suffers little interference when the external environment changes dynamically, which suits the practical conditions of subway vehicle-bottom inspection.
Drawings
FIG. 1 is a flow chart of an auxiliary positioning method for a vehicle bottom inspection robot;
FIG. 2 is a schematic diagram of laser sensor data acquisition;
FIG. 3 is a schematic diagram of vehicle bottom waveform data acquired by a laser sensor;
FIG. 4 is a schematic diagram of line scan module data acquisition;
FIG. 5 is a schematic view of a scanning area of a through passage at the bottom of a subway car scanned by a line scanning module;
FIG. 6 is a schematic diagram of data denoising of vehicle bottom waveform data;
FIG. 7 is a schematic diagram of data denoising of vehicle bottom image data;
FIG. 8 is a flow chart for obtaining spatial coordinates of designated parts based on vehicle bottom waveform data;
FIG. 9 is a diagram illustrating a waveform break point obtained in the first embodiment;
FIG. 10 is a diagram illustrating waveforms corresponding to the specified components obtained in the first embodiment;
FIG. 11 is a flow chart for obtaining spatial coordinates of designated parts based on vehicle bottom image data;
fig. 12 is a schematic diagram of calculation of the position adjustment offset of the vehicle bottom inspection robot.
Detailed Description
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The features and properties of the present invention are described in further detail below with reference to examples.
Example one
At present, existing auxiliary positioning technology is mainly based on landmark markers: several markers are installed at feature points in the environment of the AGV robot, and the robot calculates its own pose from sensor measurements of these markers. The method depends on the markers in the environment, and the actual positioning accuracy depends mainly on accurate recognition of the markers and fast, accurate extraction of environmental position information.
The prior art has several defects: installation and maintenance costs are high, and sensor detection has blind spots when the external environment changes dynamically; moreover, in the subway-train-bottom inspection scenario, the spatial constraints of the train make installing landmark marker devices at the vehicle bottom impractical for the actual business.
In this embodiment, it should be noted that the vehicle bottom inspection robot is composed of a working mechanical arm, an inspection trolley and various sensors.
In order to solve the above problems, the present embodiment provides an auxiliary positioning method for a vehicle bottom inspection robot, which sends a position adjustment signal to the inspection robot by analyzing data of a laser range finder or/and an image of a line camera, and simultaneously gives a position adjustment offset of the inspection robot, so that the inspection robot finds an optimal operation position.
Referring to fig. 1, an auxiliary positioning method for a vehicle bottom inspection robot specifically includes:
comparing the vehicle bottom data subjected to noise reduction processing with the data characteristics of the constructed specified parts, and determining that the vehicle bottom data comprises the data of the specified parts; namely, whether the vehicle bottom inspection robot reaches the operation position corresponding to the designated part positioned at the vehicle bottom is judged;
returning a position-adjustment signal for the designated part to the vehicle bottom inspection robot, and calculating the spatial coordinate of the designated part at the vehicle bottom according to the mapping between the vehicle bottom data and the spatial coordinates provided by the robot; the adjustment signal is, for example, a return value of 1 for a through passage, a return value of 2 for a bogie axle, and so on;
determining the position adjustment offset of the vehicle bottom inspection robot according to the spatial coordinate of the position of the vehicle bottom inspection robot and the spatial coordinate of the designated part at the vehicle bottom; preferably, the spatial coordinate of the position of the vehicle bottom inspection robot is directly given by the inspection trolley, namely the spatial coordinate of the position of the center of the inspection trolley is given by the inspection trolley.
In this embodiment, specifically, the vehicle bottom data includes: vehicle bottom waveform data acquired by a laser sensor is shown in fig. 3; the noise reduction processing includes: carrying out noise reduction processing on vehicle bottom waveform data by using a clustering algorithm or a filtering algorithm; the noise reduction processing for the vehicle bottom waveform data is shown in fig. 6;
or/and
the vehicle bottom data comprises: acquiring vehicle bottom image data through a line scanning module; the noise reduction processing includes: carrying out noise reduction processing on the vehicle bottom image data acquired by the line scanning module by using a filtering algorithm; preferably, a Gaussian filtering algorithm is adopted for noise reduction; the noise reduction processing for the vehicle bottom image data is illustrated in fig. 7, taking the vehicle bottom image data as an example.
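As an illustration of the Gaussian-filtering noise reduction preferred above, the following Python sketch smooths a synthetic waveform with a normalized Gaussian kernel; the signal, sigma, and kernel radius are illustrative assumptions, not values from the patent:

```python
import numpy as np

def gaussian_smooth(signal: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Convolve a 1-D signal with a normalized Gaussian kernel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    # mode="same" keeps the output the same length as the input
    return np.convolve(signal, kernel, mode="same")

# synthetic noisy step, standing in for laser-sensor vehicle bottom waveform data
rng = np.random.default_rng(0)
raw = np.concatenate([np.zeros(100), np.ones(100)]) + rng.normal(0.0, 0.3, 200)
smooth = gaussian_smooth(raw)
```

The same 1-D kernel can be applied row-wise (or as a 2-D kernel) to the line-scan image data.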
Specifically, the selection of vehicle bottom waveform data or vehicle bottom image data as vehicle bottom data mainly depends on the types of specified parts; for example, the bogie axle and the like adopt vehicle bottom waveform data, and the through passage adopts vehicle bottom image data.
As shown in fig. 2, the direction of the laser beam of the laser sensor is perpendicular to the upper surface of the inspection trolley, and the laser point is positioned on the bottom surface of the subway trolley; in the process of travelling of the inspection trolley, a laser point moving track is formed at the bottom of the subway train, and the laser sensor acquires corresponding train bottom waveform data;
as shown in fig. 4, the scanning direction of the line scanning module is perpendicular to the upper surface of the inspection trolley, and the scanning area is located on the bottom surface of the subway train; in the process of advancing the inspection trolley, the scanning area of the line scanning module is a rectangular area; taking the identification of the subway bottom through passage as an example, the scanning area is shown in fig. 5.
In this embodiment, specifically, the data characteristics of the specified component are constructed by a direct method;
the direct process construction includes: constructing a feature vector B of a designated part; the characteristic vector B is a waveform characteristic vector B 1 Or/and image feature vector B 2 ;
Namely, the waveform characteristic vector B of a specified part (such as a bogie axle and the like) is constructed by utilizing prior knowledge 1 Features such as distance of adjacent inflection points, amplitude, etc.;
using prior knowledge, constructing image characteristic vector B of image data of appointed parts (such as through passage, etc.) 2 Or image feature matrix B 3 For example, a gray level co-occurrence matrix reflecting textural features; when the image feature matrix is the image feature matrix, the image feature matrix is required to be converted into an image feature vector; the image feature matrix B needs to be processed 3 Conversion into image feature vector B 3 。
In this embodiment, specifically, the comparison is performed by a direct method;
the direct method comparison comprises the following steps:
extracting a feature vector A of the vehicle bottom data; the feature vector A is a waveform feature vector A1 and/or an image feature vector A2; the types of feature vector A and feature vector B are kept consistent — that is, if the vehicle bottom data are waveform data and the feature vector B is the waveform feature vector B1, then the feature vector A is the waveform feature vector A1; further, the feature vector A is extracted in the same way as the feature vector B; when an image feature matrix A3 is extracted, it must likewise be reshaped into an image feature vector;
Calculating the cosine similarity of the characteristic vector A and the characteristic vector B;
if the cosine similarity is greater than or equal to the threshold s0, the vehicle bottom data include data of the designated part: when the vehicle bottom data are waveform data, the waveform data include the part's waveform data; when the vehicle bottom data include image data, the image data include the part's image data — in either case the inspection robot has preliminarily reached the working position. If the cosine similarity is smaller than the threshold s0, the vehicle bottom data do not include the designated part's data, and new vehicle bottom data must be acquired;
The cosine similarity is calculated as follows.
Let the feature vectors be A = (a_1, a_2, …, a_p) and B = (b_1, b_2, …, b_p);
the cosine similarity between feature vector A and feature vector B is:
cos(A, B) = Σ_{i=1}^{p} a_i b_i / ( √(Σ_{i=1}^{p} a_i²) · √(Σ_{i=1}^{p} b_i²) ).
For an image feature matrix B3 and an image feature matrix A3, A3 and B3 are first reconstructed into vectors, and then the cosine similarity is calculated.
Let the image feature matrices A3 and B3 be (taking 3 × p matrices as an example):
A3 = [ a_1 … a_p ; a_{p+1} … a_{2p} ; a_{2p+1} … a_{3p} ]
B3 = [ b_1 … b_p ; b_{p+1} … b_{2p} ; b_{2p+1} … b_{3p} ]
Then the image feature matrices A3 and B3 are first reconstructed as vectors:
A3 = (a_1, a_2, …, a_p, a_{p+1}, …, a_{2p}, a_{2p+1}, …, a_{3p})
B3 = (b_1, b_2, …, b_p, b_{p+1}, …, b_{2p}, b_{2p+1}, …, b_{3p})
and the cosine similarity of the reconstructed vectors A3 and B3 is:
cos(A3, B3) = Σ_{i=1}^{3p} a_i b_i / ( √(Σ_{i=1}^{3p} a_i²) · √(Σ_{i=1}^{3p} b_i²) ).
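The comparison step — flattening an image feature matrix into a vector and computing cosine similarity — can be sketched as follows; the sample matrices are invented for illustration:

```python
import math

def cosine_similarity(a, b):
    """cos(A, B) = sum(a_i * b_i) / (||A|| * ||B||)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def flatten(matrix):
    """Reconstruct an image feature matrix (list of rows) as one vector."""
    return [v for row in matrix for v in row]

# illustrative 3 x 2 feature matrices; B3 is exactly 2 * A3, so similarity is 1
A3 = [[1, 2], [3, 4], [5, 6]]
B3 = [[2, 4], [6, 8], [10, 12]]
s = cosine_similarity(flatten(A3), flatten(B3))
```

Cosine similarity is scale-invariant, so proportional feature vectors score 1 regardless of magnitude; the threshold s0 then decides whether the designated part is present.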
in this embodiment, specifically, when the underbody data includes underbody waveform data; please refer to fig. 10;
the mapping relation comprises:
The vehicle bottom inspection robot provides, per unit time, K spatial coordinates corresponding to the abscissas of data points of the vehicle bottom waveform data in the real working environment; that is, the inspection trolley provides the spatial coordinates (here one-dimensional) of the vehicle bottom waveform data in the real working environment. In this embodiment, assume the spatial-coordinate acquisition frequency K = 50, i.e. the trolley provides 50 spatial coordinates corresponding to data-point abscissas per second, and the number of data points in the waveform data acquired per second is N = 1000;
let the vector that K space coordinates constitute be:
vp = [p_0, p_1, …, p_K]
where p_0 and p_K are the spatial coordinates corresponding to the abscissas of the first and last data points of the vehicle bottom waveform data;
the sequence number of the k-th spatial coordinate in the vehicle bottom waveform data is:
n(p_k) = k · N / K
where N is the number of data points in the vehicle bottom waveform data acquired per unit time;
the vector formed by the sequence numbers in the vehicle bottom waveform data corresponding to the K spatial coordinates is:
n(vp) = [n(p_0), n(p_1), …, n(p_K)]
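With the embodiment values K = 50 and N = 1000, the sequence numbers n(vp) can be sketched as below; the evenly spread mapping n(p_k) = k · N / K is an assumption used for this illustration:

```python
# Embodiment values: K spatial coordinates and N waveform data points per unit time
K = 50
N = 1000

def sequence_number(k: int) -> int:
    """Index of the k-th spatial coordinate within the N waveform samples,
    assuming the coordinates are spread evenly: n(p_k) = k * N / K."""
    return k * N // K

# sequence numbers n(vp) for p_0 .. p_K
n_vp = [sequence_number(k) for k in range(K + 1)]
```

Each second, the 50 provided coordinates thus land every 20 samples of the 1000-point waveform.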
Referring to fig. 8, calculating the spatial coordinate of the designated part at the vehicle bottom comprises:
acquiring waveform breakpoints in vehicle bottom waveform data by using a clustering algorithm, a Jenks natural breakpoint algorithm or a kernel density estimation algorithm; the waveform breakpoints form a plurality of waveforms; preferably, two adjacent waveform breakpoints constitute one waveform;
fig. 9 shows an example of waveform breakpoints, where a, b, c, d are four waveform breakpoints, and then the four waveform breakpoints constitute three-segment waveforms ab, bc, cd;
calculating the range of each waveform and comparing it with the threshold values; if the range falls within the threshold interval, the waveform is judged to be the waveform corresponding to a designated component located at the vehicle bottom, and the waveform data of that waveform is extracted;
as shown in fig. 10, the range Δab of waveform ab equals the maximum minus the minimum between points a and b, the range Δbc of waveform bc equals the maximum minus the minimum between points b and c, and the range Δcd of waveform cd equals the maximum minus the minimum between points c and d; a waveform whose range Δ satisfies Δ_1 ≤ Δ ≤ Δ_2 is taken as a designated component; here, waveform ab and waveform cd are the waveforms corresponding to the designated component.
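The range test above can be sketched as follows; the breakpoint indices, waveform values, and thresholds are illustrative assumptions, not values from the patent:

```python
# Hedged sketch of the range (max - min) test: adjacent breakpoints form a
# waveform segment, and a segment whose range falls in [delta1, delta2] is
# judged to correspond to a designated component.

def select_component_segments(y, breakpoints, delta1, delta2):
    segments = []
    for lo, hi in zip(breakpoints, breakpoints[1:]):  # adjacent breakpoints form one waveform
        seg = y[lo:hi + 1]
        rng = max(seg) - min(seg)  # the "range" of this waveform
        if delta1 <= rng <= delta2:
            segments.append((lo, hi))
    return segments

# Three segments analogous to ab, bc, cd in fig. 9; the flat middle segment fails the test.
y = [0, 5, 0, 1, 1, 1, 0, 6, 0]
print(select_component_segments(y, [0, 2, 5, 8], 3, 10))  # [(0, 2), (5, 8)]
```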
Fitting the waveform data with a polynomial function, obtaining the minimum of the polynomial function with a gradient descent algorithm, and obtaining the abscissa x_m of the data point corresponding to that minimum in the vehicle bottom waveform data;
finding the relative position of the abscissa x_m among the sequence numbers of the vehicle bottom waveform data corresponding to the K spatial coordinates;
supposing the abscissa x_m lies between n(p_k) and n(p_{k+1}), the spatial coordinate corresponding to x_m is:
p(x_m) = p_k + ((x_m − n(p_k)) / (n(p_{k+1}) − n(p_k))) · (p_{k+1} − p_k)
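The fit-descend-interpolate steps above can be sketched as follows. The quadratic test data, learning rate, linear interpolation between bracketing coordinates, and the example coordinate values are all assumptions for illustration:

```python
import numpy as np

# Illustrative sketch: fit a polynomial to an extracted waveform segment,
# locate its minimum by gradient descent on the fitted polynomial, then
# linearly interpolate the spatial coordinate between the two bracketing
# spatial coordinates p_k and p_{k+1}.

xs = np.arange(10.0)
ys = (xs - 4.2) ** 2 + 1.0            # waveform segment with a minimum near x = 4.2
coeffs = np.polyfit(xs, ys, deg=2)    # polynomial fit of the waveform data
deriv = np.polyder(coeffs)

x_m = 0.0                             # gradient descent toward the polynomial minimum
for _ in range(2000):
    x_m -= 0.05 * np.polyval(deriv, x_m)

# Map abscissa x_m to a spatial coordinate: suppose x_m lies between sequence
# numbers n(p_k) = 0 and n(p_{k+1}) = 20, with p_k = 0.0 m and p_{k+1} = 0.1 m.
n_k, n_k1, p_k, p_k1 = 0, 20, 0.0, 0.1
p_xm = p_k + (x_m - n_k) / (n_k1 - n_k) * (p_k1 - p_k)
print(round(x_m, 2), round(p_xm, 3))  # minimum near 4.2, coordinate near 0.021
```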
in this embodiment, specifically, when the vehicle bottom data includes vehicle bottom image data:
the mapping relation comprises:
the vehicle bottom inspection robot provides, per unit time, K spatial coordinates corresponding to the abscissas of the vehicle bottom image data in the real working environment; that is, the inspection trolley provides the spatial coordinates (here, one-dimensional coordinates) corresponding to the vehicle bottom image data in the real working environment. Assume in this embodiment that the spatial coordinates are acquired at a frequency K = 50, that is, the inspection trolley provides, every second, 50 spatial coordinates corresponding to abscissas in the vehicle bottom image data, and that the number of abscissas in the vehicle bottom image data collected per second is N = 1000;
let the vector formed by the K spatial coordinates be:
vp = [p_0, p_1, …, p_K]
wherein p_0 and p_K are respectively the spatial coordinates corresponding to the leftmost and rightmost abscissas of the vehicle bottom image data;
the sequence number of the k-th spatial coordinate in the vehicle bottom image data is:
n(p_k) = k · N / K
wherein N is the number of abscissas in the vehicle bottom image data acquired in unit time;
the vector formed by the sequence numbers of the vehicle bottom image data corresponding to the K spatial coordinates is:
n(vp) = [n(p_0), n(p_1), …, n(p_K)];
referring to fig. 11, the calculating, according to the vehicle bottom data, of the spatial coordinates of the designated components located at the vehicle bottom includes:
extracting an appointed part image area from the vehicle bottom image data by using an image segmentation algorithm;
acquiring the abscissa x_c corresponding to the central point of the image area of the designated component in the vehicle bottom image data;
finding the relative position of the abscissa x_c among the sequence numbers of the vehicle bottom image data corresponding to the K spatial coordinates;
supposing the abscissa x_c lies between n(p_k) and n(p_{k+1}), the spatial coordinate corresponding to x_c is:
p(x_c) = p_k + ((x_c − n(p_k)) / (n(p_{k+1}) − n(p_k))) · (p_{k+1} − p_k)
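The image branch above can be sketched as follows; a hand-written binary mask stands in for the output of the image segmentation algorithm, and all coordinate values are assumptions:

```python
# Minimal sketch: take the center abscissa of a segmented component region,
# then interpolate its spatial coordinate exactly as in the waveform case.

def center_abscissa(mask):
    """Mean column index of all foreground pixels in a binary mask."""
    cols = [x for row in mask for x, v in enumerate(row) if v]
    return sum(cols) / len(cols)

def to_spatial(x_c, n_k, n_k1, p_k, p_k1):
    """Linear interpolation of the spatial coordinate between p_k and p_{k+1}."""
    return p_k + (x_c - n_k) / (n_k1 - n_k) * (p_k1 - p_k)

mask = [[0, 1, 1, 0],
        [0, 1, 1, 0]]                  # component occupies columns 1-2
x_c = center_abscissa(mask)            # center abscissa 1.5
print(to_spatial(x_c, n_k=0, n_k1=20, p_k=0.0, p_k1=0.1))
```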
in this embodiment, specifically, as shown in fig. 12, the determining of the position adjustment offset of the vehicle bottom inspection robot includes:
Δp = p(B) − p(A)
wherein Δp is the position adjustment offset of the vehicle bottom inspection robot, p(A) is the spatial coordinate of the center of the vehicle bottom inspection robot obtained by the robot (that is, the spatial coordinate of the center of the inspection trolley is acquired), and p(B) is the spatial coordinate of the designated component located at the vehicle bottom;
When Δp < 0, the vehicle bottom inspection robot needs to retreat by the distance |Δp|;
when Δp > 0, the vehicle bottom inspection robot needs to advance by the distance Δp;
when Δp = 0, the vehicle bottom inspection robot does not need to move.
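The adjustment rule above can be sketched as follows; reconstructing the offset as Δp = p(component) − p(robot) is an assumption consistent with the advance/retreat cases, and the numeric coordinates are illustrative:

```python
# Sketch of the position-adjustment decision: negative offset -> retreat by
# |delta_p|, positive offset -> advance by delta_p, zero -> no movement.

def adjustment(p_component: float, p_robot: float) -> str:
    delta_p = p_component - p_robot
    if delta_p < 0:
        return f"retreat {abs(delta_p):.3f}"
    if delta_p > 0:
        return f"advance {delta_p:.3f}"
    return "hold"

print(adjustment(1.25, 1.40))  # robot is past the component -> retreat 0.150
print(adjustment(1.25, 1.10))  # robot is short of it -> advance 0.150
```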
Example two
The second embodiment is a further improvement on the first embodiment; identical parts are not repeated here. Referring to figs. 1-3, in this embodiment the data features of the designated components are constructed by deep learning modeling;
the deep learning modeling construction comprises the following steps:
performing noise reduction processing on a large amount of data corresponding to a designated component, the data being waveform data or/and image data; that is, a large amount of existing waveform data or/and image data corresponding to the designated component should first be collected, and the noise reduction processing described above can be applied directly;
constructing a discrimination model of the designated component with a deep learning model from the noise-reduced data; the discrimination model is a waveform discrimination model or/and an image discrimination model; preferably, the deep learning model is, for example, a ResNet-series network, an SENet network, or a DCL network.
In this embodiment, the comparison is performed by a deep learning discrimination method;
the deep learning discrimination comparison comprises the following steps:
using the discrimination model to predict on the vehicle bottom data and outputting the probability P that the vehicle bottom data includes data of the designated component; that is, the vehicle bottom data is predicted with the waveform discrimination model or the image discrimination model, which outputs the probability P that the vehicle bottom data includes the waveform or the image of the designated component;
if the probability P is greater than or equal to the threshold P0, the vehicle bottom data includes designated-component data; if the probability P is smaller than the threshold P0, the vehicle bottom data does not include designated-component data, and new vehicle bottom data needs to be acquired.
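The threshold decision above can be sketched as follows; the stub probabilities and the threshold P0 = 0.8 are assumptions standing in for the output of a trained discrimination model:

```python
# Hedged sketch of the discrimination step: a probability P from the
# waveform/image discrimination model is compared with threshold P0;
# below P0, new vehicle bottom data must be acquired.

def needs_new_data(probability: float, p0: float = 0.8) -> bool:
    """True when the vehicle bottom data is judged NOT to contain the component."""
    return probability < p0

samples = [0.95, 0.42, 0.80]                 # stub model outputs for three frames
print([needs_new_data(p) for p in samples])  # [False, True, False]
```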
The above embodiments only express specific implementations of the present application, and their description is relatively specific and detailed, but it is not to be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several changes and modifications without departing from the technical idea of the present application, all of which fall within the protection scope of the present application.
The background section is provided to present the context of the invention in general, and work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present invention.
Claims (10)
1. An auxiliary positioning method for a vehicle bottom inspection robot, characterized by comprising:
comparing the vehicle bottom data subjected to noise reduction processing with the data characteristics of the constructed specified parts, and determining that the vehicle bottom data comprises the data of the specified parts;
returning a position adjusting signal of the designated part to the vehicle bottom inspection robot, and calculating the space coordinate of the designated part positioned at the bottom of the vehicle according to the mapping relation between the vehicle bottom data and the space coordinate provided by the vehicle bottom inspection robot;
and determining the position adjustment offset of the vehicle bottom inspection robot according to the space coordinate of the position of the vehicle bottom inspection robot and the space coordinate of the appointed part positioned at the vehicle bottom.
2. The auxiliary positioning method for the vehicle bottom inspection robot according to claim 1, wherein the vehicle bottom data comprises: vehicle bottom waveform data acquired through a laser sensor; the noise reduction processing comprises: performing noise reduction processing on the vehicle bottom waveform data with a clustering algorithm or a filtering algorithm;
or/and
the vehicle bottom data comprises: vehicle bottom image data acquired through a line scanning module; the noise reduction processing comprises: performing noise reduction processing on the vehicle bottom image data acquired by the line scanning module with a filtering algorithm.
3. The vehicle bottom inspection robot auxiliary positioning method according to claim 2, characterized in that the data characteristics of the specified parts are constructed by a direct method;
the direct method construction comprises:
constructing a feature vector B of a designated component; the feature vector B is a waveform feature vector B_1 or/and an image feature vector B_2.
4. The vehicle bottom inspection robot auxiliary positioning method according to claim 2, characterized in that the data features of the designated parts are constructed by deep learning modeling;
the deep learning modeling construction comprises the following steps:
carrying out noise reduction processing on a large amount of data corresponding to a specified part, wherein the data is waveform data or/and image data;
constructing a discrimination model of the specified part by using a deep learning model according to the data after the noise reduction; the discrimination model is a waveform discrimination model or/and an image discrimination model.
5. The auxiliary positioning method for the vehicle bottom inspection robot according to claim 3, characterized in that the comparison is performed by a direct method;
the direct method comparison comprises the following steps:
extracting a feature vector A of the vehicle bottom data; the feature vector A is a waveform feature vector A_1 or/and an image feature vector A_2; the types of the feature vector A and the feature vector B are kept consistent;
calculating the cosine similarity of the characteristic vector A and the characteristic vector B;
if the cosine similarity is greater than or equal to the threshold value s0, the vehicle bottom data comprises data of specified parts; and if the cosine similarity is smaller than the threshold value s0, the vehicle bottom data does not include the data of the specified parts, and new vehicle bottom data needs to be acquired again.
6. The vehicle bottom inspection robot auxiliary positioning method according to claim 4, characterized in that the comparison is performed by a deep learning discrimination method;
the deep learning discrimination comparison comprises the following steps:
judging and predicting vehicle bottom data by using a judging model, and outputting the probability P that the vehicle bottom data comprises data of specified parts;
if the probability P is larger than or equal to the threshold value P0, the vehicle bottom data comprises appointed part data; and if the probability P is smaller than the threshold P0, the vehicle bottom data does not comprise the data of the specified parts, and new vehicle bottom data needs to be acquired again.
7. The vehicle bottom inspection robot auxiliary positioning method according to claim 2, wherein the mapping relationship comprises:
the vehicle bottom inspection robot provides K spatial coordinates corresponding to the abscissa of vehicle bottom data in a real working environment in unit time;
let the vector formed by the K spatial coordinates be:
vp = [p_0, p_1, …, p_K]
wherein p_0 and p_K are respectively the spatial coordinates corresponding to the abscissas of the starting point and the end point of the vehicle bottom data;
the sequence number of the k-th spatial coordinate in the vehicle bottom data is:
n(p_k) = k · N / K
wherein N is the number of abscissas in the vehicle bottom data acquired in unit time;
the vector formed by the sequence numbers of the vehicle bottom data corresponding to the K spatial coordinates is:
n(vp) = [n(p_0), n(p_1), …, n(p_K)].
8. The auxiliary positioning method for the vehicle bottom inspection robot according to claim 7, wherein, when the vehicle bottom data comprises vehicle bottom waveform data, the calculating of the spatial coordinates of the designated component located at the vehicle bottom comprises:
acquiring waveform breakpoints in vehicle bottom waveform data by using a clustering algorithm, a Jenks natural breakpoint algorithm or a kernel density estimation algorithm; the waveform breakpoints form a plurality of waveforms;
calculating the range of each waveform and comparing it with the threshold values; if the range falls within the threshold interval, the waveform is judged to be the waveform corresponding to a designated component located at the vehicle bottom, and the waveform data of that waveform is extracted;
fitting the waveform data with a polynomial function, obtaining the minimum of the polynomial function with a gradient descent algorithm, and obtaining the abscissa x_m of the data point corresponding to that minimum in the vehicle bottom waveform data;
finding the relative position of the abscissa x_m among the sequence numbers of the vehicle bottom waveform data corresponding to the K spatial coordinates;
supposing the abscissa x_m lies between n(p_k) and n(p_{k+1}), the spatial coordinate corresponding to x_m is:
p(x_m) = p_k + ((x_m − n(p_k)) / (n(p_{k+1}) − n(p_k))) · (p_{k+1} − p_k)
9. The auxiliary positioning method for the vehicle bottom inspection robot according to claim 7, wherein, when the vehicle bottom data comprises vehicle bottom image data, the calculating, according to the vehicle bottom data, of the spatial coordinates of the designated component located at the vehicle bottom comprises:
extracting an appointed part image area from the vehicle bottom image data by using an image segmentation algorithm;
acquiring the abscissa x_c corresponding to the central point of the image area of the designated component in the vehicle bottom image data;
finding the relative position of the abscissa x_c among the sequence numbers of the vehicle bottom image data corresponding to the K spatial coordinates;
supposing the abscissa x_c lies between n(p_k) and n(p_{k+1}), the spatial coordinate corresponding to x_c is:
p(x_c) = p_k + ((x_c − n(p_k)) / (n(p_{k+1}) − n(p_k))) · (p_{k+1} − p_k)
10. The auxiliary positioning method for the vehicle bottom inspection robot according to claim 8 or 9, wherein the determining of the position adjustment offset of the vehicle bottom inspection robot comprises:
Δp = p(B) − p(A)
wherein Δp is the position adjustment offset of the vehicle bottom inspection robot, p(A) is the spatial coordinate of the center of the vehicle bottom inspection robot obtained by the robot, and p(B) is the spatial coordinate of the designated component located at the vehicle bottom;
when Δp < 0, the vehicle bottom inspection robot needs to retreat by the distance |Δp|;
when Δp > 0, the vehicle bottom inspection robot needs to advance by the distance Δp;
when Δp = 0, the vehicle bottom inspection robot does not need to move.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211028373.0A CN115511958A (en) | 2022-08-25 | 2022-08-25 | Auxiliary positioning method for vehicle bottom inspection robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115511958A true CN115511958A (en) | 2022-12-23 |
Family
ID=84502122
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211028373.0A Pending CN115511958A (en) | 2022-08-25 | 2022-08-25 | Auxiliary positioning method for vehicle bottom inspection robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115511958A (en) |
Cited By (2)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115797587A (en) * | 2023-02-08 | 2023-03-14 | 西南交通大学 | Inspection robot positioning and drawing method capable of fusing line scanning vehicle bottom image characteristics |
CN115797587B (en) * | 2023-02-08 | 2023-04-07 | 西南交通大学 | Inspection robot positioning and drawing method capable of fusing line scanning vehicle bottom image characteristics |

2022
- 2022-08-25 CN CN202211028373.0A patent/CN115511958A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109459750B (en) | Front multi-vehicle tracking method integrating millimeter wave radar and deep learning vision | |
CN113870123B (en) | Automatic detection method for contact net leading height and pulling value based on vehicle-mounted mobile laser point cloud | |
CN106997688B (en) | Parking lot parking space detection method based on multi-sensor information fusion | |
CN109633676A (en) | A kind of method and system based on the laser radar obstruction detection direction of motion | |
CN110378957B (en) | Torpedo tank car visual identification and positioning method and system for metallurgical operation | |
CN103324913A (en) | Pedestrian event detection method based on shape features and trajectory analysis | |
CN110736999B (en) | Railway turnout detection method based on laser radar | |
CN111832410B (en) | Forward train detection method based on fusion of vision and laser radar | |
CN114488194A (en) | Method for detecting and identifying targets under structured road of intelligent driving vehicle | |
CN111539436B (en) | Rail fastener positioning method based on straight template matching | |
CN115511958A (en) | Auxiliary positioning method for vehicle bottom inspection robot | |
CN112215125A (en) | Water level identification method based on YOLOv3 | |
CN110619328A (en) | Intelligent ship water gauge reading identification method based on image processing and deep learning | |
CN113310450A (en) | Contact net dropper detection method based on point cloud training model | |
CN111882664A (en) | Multi-window accumulated difference crack extraction method | |
CN110954005B (en) | Medium-low speed maglev train suspension gap detection method based on image processing | |
CN110176022B (en) | Tunnel panoramic monitoring system and method based on video detection | |
CN112036422B (en) | Track management method, system and computer readable medium based on multi-sensor information fusion | |
CN115294541A (en) | Local feature enhanced Transformer road crack detection method | |
CN115542338B (en) | Laser radar data learning method based on point cloud spatial distribution mapping | |
CN114119355B (en) | Method and system for early warning of blocking dropping risk of shield tunnel | |
CN113091693B (en) | Monocular vision long-range distance measurement method based on image super-resolution technology | |
CN115014359A (en) | Unmanned aerial vehicle path planning and positioning navigation system | |
CN111473944B (en) | PIV data correction method and device for observing complex wall surface in flow field | |
JP2022074331A5 (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||