CN112009528A - Train positioning method based on contact net and accessory thereof - Google Patents


Info

Publication number
CN112009528A
Authority
CN
China
Prior art keywords
image
train
point
overhead line
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010916150.2A
Other languages
Chinese (zh)
Inventor
李杨龙 (Li Yanglong)
王诗琦 (Wang Shiqi)
刘凯腾 (Liu Kaiteng)
廖进 (Liao Jin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202010916150.2A
Publication of CN112009528A
Legal status: Pending (current)

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B61 RAILWAYS
    • B61L GUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L25/00 Recording or indicating positions or identities of vehicles or trains or setting of track apparatus
    • B61L25/02 Indicating or recording positions or identities of vehicles or trains
    • B61L25/026 Relative localisation, e.g. using odometer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a train positioning method based on an overhead contact network and its accessory devices, comprising the following steps: S1, clearly image the contact suspension, additional suspension, supporting devices, positioning devices and other surroundings of the contact network; S2, acquire clear images covering all contact network accessory devices along the whole line; S3, preprocess the high-definition images and determine the regions where the accessory devices are located; S4, extract features of the accessory-device regions using HOG features; S5, select 10% of the accessory-device region samples as verification samples and extract SIFT features of the images in the sample library; and S6, record the video information transmitted in real time by the 4C detection device with an on-board computer, repeat steps S1-S4 to obtain images of the contact network accessory-device regions, and extract SIFT features of the accessory-device region image samples for comparison with the detection samples of step S5.

Description

Train positioning method based on contact net and accessory thereof
Technical Field
The invention belongs to the technical field of train positioning, and particularly relates to a train positioning method based on an overhead contact network and its accessory devices.
Background
In rail transit operation, determining the current position of a train is essential for maintaining stable operation. However, as the required train positioning accuracy continues to rise, current positioning methods and equipment show significant shortcomings.
In wheel-axle counter odometry, distance-measurement error grows with the running distance of the train because of wheel-diameter error, wheel hunting, slipping and similar causes, so positions derived from the travelled distance become inaccurate. In methods that obtain the train position from ground-based active RFID tags, the large transmission radius of the signal leads to large positioning error. GPS positioning depends on signal strength and performs extremely poorly when the train runs in tunnels or underground. Positioning by track circuits requires substantial construction and maintenance costs.
Disclosure of Invention
To overcome the above shortcomings of the prior art, the invention aims to provide a train positioning method based on the contact network and its accessory devices, solving the low positioning accuracy of existing train positioning.
To achieve this purpose, the invention adopts the following technical scheme:
A train positioning method based on a contact network and its accessory devices comprises the following steps:
S1, acquiring high-definition video of the contact network equipment in real time with the catenary suspension-state (4C) detection device, clearly imaging the contact suspension, additional suspension, supporting devices and positioning devices of the contact network;
S2, extracting the high-definition video acquired by the 4C device frame by frame, and acquiring clear images covering all contact network accessory devices along the whole line, including contact wires, droppers, messenger cables, connecting parts and insulators;
S3, preprocessing the high-definition images and determining the contact network accessory-device regions;
S4, extracting features of the regions where the accessory devices are located using HOG features;
S5, selecting 10% of the contact network accessory-device region samples as verification samples, associating them with their positions in the high-definition video shot in real time by the 4C detection device to obtain a detection sample library in which images and position information correspond one to one, and extracting SIFT (scale-invariant feature transform) features of the sample-library images as the reference for the image-recognition stage;
and S6, during real-time train operation, recording the video information transmitted in real time by the 4C detection device with the on-board computer, repeating S1-S4 to obtain images of the contact network accessory-device regions, extracting SIFT features of the accessory-device region image samples, and comparing them with the detection samples of S5 to position the train.
Preferably, in S2, extracting the high-definition video acquired by the 4C device frame by frame and acquiring clear images covering all contact network accessory devices along the whole line, including contact wires, droppers, messenger cables, connecting parts and insulators, comprises:
sampling high-definition pictures containing contact network accessory devices according to the sampling frequency of the high-speed industrial camera and the train speed;
treating the train running speed as a discrete quantity, the time interval Δt_i for crossing adjacent support pillars is:
Δt_i = L / V
where f_hs is the sampling frequency of the high-speed industrial camera, V is the real-time running speed of the train, and L is the standard contact network pillar spacing;
calculating the number N_i of high-definition images acquired in real time while the train passes between adjacent pillars:
N_i = Δt_i · f_hs
selecting from the N_i images 5 high-definition pictures at even intervals, the sequence numbers n_i of the selected pictures being:
n_i = 0.2N_i, 0.4N_i, 0.6N_i, 0.8N_i, N_i
so that all images captured while the train crosses the interval between adjacent pillars are uniformly sampled 5 times.
Preferably, in S3, preprocessing the high-definition images comprises:
S3.1, processing the color picture with the grayscale transformation formula:
g(x, y) = 0.299R(x, y) + 0.587G(x, y) + 0.114B(x, y)
where R(x, y) is the red component, G(x, y) the green component and B(x, y) the blue component; g(x, y) is the gray image, and x and y are the horizontal and vertical coordinates of the transformed pixel;
S3.2, strengthening the edge contours of the accessory devices with the Sobel operator;
the Sobel operator comprises two 3x3 matrices, one horizontal and one vertical, which are convolved with the image plane to obtain the horizontal and vertical luminance-difference approximations G_x and G_y:
G_x = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A
G_y = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * A
where A is the gray image; the magnitude and direction of the gradient in the image are then:
G = √(G_x² + G_y²)
θ = arctan(G_y / G_x)
setting a threshold Gmax: if the gradient G at a point exceeds Gmax, the point is judged a boundary point and set white, otherwise it is set black:
A₁(x, y) = 255 if G(x, y) > Gmax, otherwise 0
S3.3, performing line detection with the Hough transform to locate the accessory-device region: collinear points in image space are mapped to parameter space, the line parameters are detected from local peaks, and the result is mapped back to image space to obtain the line-detection result;
if the sinusoids share a common intersection (ρ, θ), the corresponding points are judged collinear, the line in polar coordinates being:
ρ = x cos θ + y sin θ;
where ρ and θ are the polar radius and polar angle of the intersection of the sinusoids;
the procedure for detecting the lines along the accessory-device edges comprises:
traversing all pixels of the gray image A₁ in turn and testing whether the Hough peak count is met, and if so, incrementing the counter of every line region passing through that pixel by 1; to obtain all line regions through a pixel, solving ρ from the formula for each possible θ in turn to obtain several (ρ, θ) pairs, among which the lines meeting the requirement are the edges of the contact network pillars.
Preferably, in S4, extracting features of the regions where the accessory devices are located using HOG features comprises:
S4.1, normalizing the image of the contact network accessory-device region:
I(x, y) = I(x, y)^γ
where I(x, y) is the pixel value at position (x, y) and γ is the compression coefficient, taken as 0.5;
S4.2, computing the gradient and direction of the image: at position (x, y), computing the horizontal gradient G_1x and vertical gradient G_1y, and from them the gradient direction value of each pixel:
G_1x(x, y) = I(x+1, y) − I(x, y)
G_1y(x, y) = I(x, y+1) − I(x, y)
the gradient magnitude G_1(x, y) and gradient direction α(x, y) at pixel (x, y) being:
G_1(x, y) = √(G_1x(x, y)² + G_1y(x, y)²)
α(x, y) = arctan(G_1y(x, y) / G_1x(x, y))
S4.3, dividing cells and blocks: dividing the image into several cells, building a gradient-direction histogram for each cell, normalizing the histograms within each block of cells, and collecting the HOG features.
Preferably, in S5, selecting 10% of the contact network accessory-device region samples as verification samples, associating them with their positions in the high-definition video shot in real time by the 4C detection device to obtain a detection sample library in which images and position information correspond one to one, and extracting SIFT features of the sample-library images as the reference for the image-recognition stage, comprises:
S5.1, defining the scale space L(x, y, σ) of an image as the convolution of a variable-scale Gaussian G(x, y, σ) with the input image:
L(x, y, σ) = G(x, y, σ) * I(x, y)
where
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
(x, y) are spatial coordinates and σ is the scale coordinate, whose size determines the smoothness of the image;
detecting stable key points in scale space with the difference-of-Gaussian scale space:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)
where D is the difference of two adjacent scales separated by the multiplicative factor k;
S5.2, detecting spatial extreme points: searching for scale-space extrema by comparing each sample point with all of its neighbors; if it is larger or smaller than all of them, it is a scale-space extreme point;
S5.3, removing pixels whose local curvatures are markedly asymmetric;
S5.4, matching the direction information of the stable key points.
Preferably, matching the stable key-point direction information in S5.4 comprises:
assigning directions by computing the gradient at each extreme point;
for any key point, the gradient magnitude m(x, y) and direction θ(x, y) being:
m(x, y) = √( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )
computing the gradient directions of all points in the neighborhood centered on the key point yields a gradient-direction histogram, from which the main direction of the current key point is determined.
Preferably, in S6, during real-time train operation, recording the video information transmitted in real time by the 4C detection device with the on-board computer, repeating S1-S4 to obtain images of the contact network accessory-device regions, extracting SIFT features of the accessory-device region image samples and comparing them with the detection samples of S5 to position the train, comprises:
S6.1, comparing SIFT feature values by computing the Euclidean distance between the 128-dimensional key-point descriptors of the two sets of feature points;
S6.2, the smaller the Euclidean distance, the higher the similarity; when the Euclidean distance is below a set threshold, judging the match successful, further judging whether the train has reached a contact network positioning point, and positioning the train.
The train positioning method based on the contact network and its accessory devices has the following beneficial effects:
positioning with the train's catenary suspension-state detection device and the features of the contact network accessory devices on both sides of the line is simple and convenient, low in cost, and little affected by the operating environment; it also resolves the large positioning errors, long positioning intervals and large blind areas of traditional methods, achieves a high recognition rate and speed for contact network pillar features, and enables fast, accurate positioning of the train.
Drawings
Fig. 1 is a schematic block diagram of the train positioning method based on a contact network and its accessory devices.
Detailed Description
The following description of embodiments is provided to help those skilled in the art understand the invention, but the invention is not limited to these embodiments. Modifications apparent to those skilled in the art that do not depart from the spirit and scope of the invention as defined in the appended claims fall within its protection, and everything produced using the inventive concept is protected.
According to an embodiment of the application, referring to fig. 1, the train positioning method based on the contact network and its accessory devices comprises:
S1, acquiring high-definition video of the contact network equipment in real time with the catenary suspension-state (4C) detection device, clearly imaging the contact suspension, additional suspension, supporting devices and positioning devices of the contact network;
S2, extracting the high-definition video acquired by the 4C device frame by frame, and acquiring clear images covering all contact network accessory devices along the whole line, including contact wires, droppers, messenger cables, connecting parts and insulators;
S3, preprocessing the high-definition images and determining the contact network accessory-device regions;
S4, extracting features of the regions where the accessory devices are located using HOG features;
S5, selecting 10% of the contact network accessory-device region samples as verification samples, associating them with their positions in the high-definition video shot in real time by the 4C detection device to obtain a detection sample library in which images and position information correspond one to one, and extracting SIFT features of the sample-library images as the reference for the image-recognition stage;
and S6, during real-time train operation, recording the video information transmitted in real time by the 4C detection device with the on-board computer, repeating S1-S4 to obtain images of the contact network accessory-device regions, extracting SIFT features of the accessory-device region image samples, and comparing them with the detection samples of S5 to position the train.
The above steps will be described in detail below according to one embodiment of the present application.
Step S1: acquire high-definition video of the contact network equipment in real time with the catenary suspension-state detection device (4C), clearly imaging the contact suspension, additional suspension, supporting devices, positioning devices and other surroundings of the contact network.
Step S2: extract the high-definition video acquired by the 4C device frame by frame and acquire clear images covering all contact network accessory devices along the whole line, including contact wires, droppers, messenger cables, connecting parts and insulators. Specifically:
sample high-definition pictures containing contact network accessory devices, taking the sampling frequency of the high-speed industrial camera and the train speed into account.
Let f_hs denote the sampling frequency of the high-speed industrial camera, V the real-time running speed of the train, and L the standard contact network pillar spacing. Ignoring other influence factors and treating the train speed as a discrete quantity, the time interval for crossing adjacent pillars is:
Δt_i = L / V    (1)
The number of high-definition images acquired in real time while the train passes between adjacent pillars is:
N_i = Δt_i · f_hs    (2)
From the N_i images, select 5 high-definition pictures at even intervals; the sequence numbers of the selected pictures are:
n_i = 0.2N_i, 0.4N_i, 0.6N_i, 0.8N_i, N_i    (3)
All images captured while the train crosses the interval between adjacent pillars are thus uniformly sampled 5 times, which yields high-definition contact network images with the best detection viewing angle and greatly reduces the time spent on detection and recognition.
Step S3: preprocess the high-definition images and determine the contact network accessory-device regions, improving the accuracy and speed of the subsequent processing. Specifically:
S3.1, convert the color pictures acquired by the 4C device into gray images with a grayscale transformation, filtering out irrelevant information:
g(x, y) = 0.299R(x, y) + 0.587G(x, y) + 0.114B(x, y)    (4)
where R(x, y) is the red component, G(x, y) the green component, B(x, y) the blue component, and g(x, y) the gray image.
S3.2, strengthen the edge contours of the accessory devices with the Sobel operator to ease extraction of the accessory-device regions.
The operator comprises two 3x3 matrices, one horizontal and one vertical, which are convolved with the image plane to obtain the horizontal and vertical luminance-difference approximations G_x and G_y:
G_x = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A    (5)
G_y = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * A    (6)
where A is the gray image. The magnitude and direction of the gradient in the image are:
G = √(G_x² + G_y²)    (7)
θ = arctan(G_y / G_x)    (8)
A suitable threshold Gmax is set: if the gradient G at a point exceeds Gmax, the point is considered a boundary point and set white, otherwise it is set black:
A₁(x, y) = 255 if G(x, y) > Gmax, otherwise 0    (9)
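A minimal OpenCV/NumPy sketch of the preprocessing in equations (4)-(9) might look as follows; the threshold value g_max = 100 is an assumed illustrative choice.

```python
# Minimal sketch of the preprocessing of Eqs. (4)-(9) with OpenCV/NumPy.
# The default threshold g_max = 100 is an assumed illustrative value.
import cv2
import numpy as np

def edge_map(bgr: np.ndarray, g_max: float = 100.0) -> np.ndarray:
    # Eq. (4): weighted grayscale conversion (cv2 uses the same
    # 0.299/0.587/0.114 weights)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Eqs. (5)-(6): horizontal and vertical Sobel responses Gx, Gy
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    # Eq. (7): gradient magnitude (Eq. (8), the direction, is not needed here)
    g = np.sqrt(gx ** 2 + gy ** 2)
    # Eq. (9): points with G > Gmax become white boundary points, the rest black
    return np.where(g > g_max, 255, 0).astype(np.uint8)
```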
and S3.3, realizing straight line detection by adopting Hough transformation, positioning an accessory device area, mapping collinear points in an image space to a parameter space, and mapping the collinear points to the image space after detecting straight line parameters by using local peak values to obtain an image straight line detection result.
A plurality of points are arranged on the plane, and straight lines passing through each point respectively correspond to a sine curve on the polar coordinate. If the sinusoids have a common intersection point (p, θ), the lines are collinear, corresponding to the equation of the line in polar coordinates expressed as:
ρ=x cos θ+y sin θ (10)
the process of detecting the straight line where the edge of the accessory device is located is as follows:
sequentially traversing gray level image A1And (4) judging whether the Hough peak number is met or not by all the pixels in the pixel group. If so, the counter for all the straight regions passing through the pixel is incremented by 1. In order to obtain all straight line regions passing through a certain pixel, sequentially solving rho values by using all possible values of theta (the value range of theta is-90 degrees) according to a formula, thereby obtaining a plurality of groups of rho and theta, and obtaining straight lines meeting requirements, namely the edges of the contact net stand columns.
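The line detection of step S3.3 could be sketched with OpenCV's accumulator-based Hough transform as below; the vote threshold and the near-vertical angle tolerance are assumptions for illustration, not parameters from the patent.

```python
# Sketch of the Hough line detection of step S3.3 with OpenCV's
# accumulator-based transform. The vote threshold and the near-vertical
# angle tolerance are illustrative assumptions.
import cv2
import numpy as np

def detect_pillar_edges(binary_edges: np.ndarray, votes: int = 200):
    """Return (rho, theta) pairs of detected lines.

    theta is the normal angle in radians over [0, pi), matching Eq. (10)
    rho = x*cos(theta) + y*sin(theta); theta near 0 or pi means a
    near-vertical line, the expected orientation of a pillar edge.
    """
    lines = cv2.HoughLines(binary_edges, rho=1, theta=np.pi / 180,
                           threshold=votes)
    if lines is None:
        return []
    return [(r, t) for [[r, t]] in lines
            if t < np.deg2rad(10) or t > np.deg2rad(170)]  # ~10 deg tolerance
```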
Step S4: for the preprocessed accessory-device region images, extract features of the regions where the accessory devices are located using HOG feature extraction. Specifically:
S4.1, normalize the image of the contact network accessory-device region:
I(x, y) = I(x, y)^γ    (11)
where I(x, y) is the pixel value at position (x, y) and γ is the compression coefficient, typically 0.5.
S4.2, compute the gradient and direction of the image:
at position (x, y), compute the horizontal gradient G_1x and vertical gradient G_1y, and from them the gradient direction value of each pixel:
G_1x(x, y) = I(x+1, y) − I(x, y)    (12)
G_1y(x, y) = I(x, y+1) − I(x, y)    (13)
The gradient magnitude G_1(x, y) and gradient direction α(x, y) at pixel (x, y) are:
G_1(x, y) = √(G_1x(x, y)² + G_1y(x, y)²)    (14)
α(x, y) = arctan(G_1y(x, y) / G_1x(x, y))    (15)
S4.3, divide cells and blocks: divide the image into several cells, build a gradient-direction histogram for each cell, normalize the histograms within each block of cells, and then collect the HOG features.
Step S5: select 10% of the contact network accessory-device region samples as verification samples and associate them with their positions in the high-definition video shot in real time by the 4C detection device, obtaining a detection sample library in which images and position information correspond one to one; extract SIFT features of the sample-library images as the reference for the subsequent image-recognition stage.
The specific steps for extracting the SIFT features are:
S5.1, detect extrema in scale space:
The scale space of an image is defined as L(x, y, σ), the convolution of a variable-scale Gaussian G(x, y, σ) with the input image:
L(x, y, σ) = G(x, y, σ) * I(x, y)    (16)
where
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
(x, y) are spatial coordinates and σ is the scale coordinate, whose size determines the smoothness of the image.
To detect stable key points in scale space effectively, the difference-of-Gaussian scale space is used:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)    (17)
D is the difference of two adjacent scales separated by the multiplicative factor k.
S5.2, detect spatial extreme points: to find scale-space extrema, compare each sample point with all of its neighbors to see whether it is larger or smaller than its neighbors in both the image domain and the scale domain.
S5.3, remove poor feature points: discard pixels whose local curvatures are very asymmetric.
S5.4, match the direction information of the stable key points:
Directions are assigned by computing the gradient at each extreme point. For any key point, the gradient magnitude and direction are:
m(x, y) = √( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )    (18)
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )    (19)
The main direction of the current key point is determined by computing the gradient directions of all points in the neighborhood centered on the key point to obtain a gradient-direction histogram.
S5.5, describe the key points: block the pixel region around each key point, compute the gradient histogram within each block, and generate a distinctive descriptor.
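Step S5 can be sketched with OpenCV's built-in SIFT, whose implementation covers equations (16)-(19): the DoG pyramid, extremum detection, orientation assignment and the 128-dimensional descriptors.

```python
# Sketch of the SIFT extraction of step S5 with OpenCV (>= 4.4), whose
# implementation covers Eqs. (16)-(19): DoG pyramid, extremum detection,
# orientation assignment and 128-D descriptors.
import cv2

def sift_descriptors(gray):
    sift = cv2.SIFT_create()
    # keypoints carry (x, y), scale and dominant orientation;
    # descriptors is an N x 128 float32 array
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors
```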
Step S6: during real-time train operation, record the video information transmitted in real time by the 4C detection device with the on-board computer, repeat steps S1, S2, S3 and S4 to obtain images of the contact network accessory-device regions, extract SIFT features of the accessory-device region image samples, and compare them with the detection samples of step S5. The detection procedure is:
S6.1, compare SIFT feature values by computing the Euclidean distance between the 128-dimensional key-point descriptors of the two sets of feature points;
S6.2, the smaller the Euclidean distance, the higher the similarity; when the Euclidean distance is below a set threshold, the match is judged successful, and whether the train has reached a contact network positioning point can then be decided, positioning the train.
Based on image processing and recognition algorithms, the method recognizes the railway contact network and its accessory devices and accurately positions the train: the catenary suspension-state detection device (4C) produces high-definition images of the contact network equipment; the extracted images are preprocessed, i.e. the original color images are grayed; the HOG feature-description method locates and extracts feature regions such as contact network pillar number plates in the images; finally, SIFT feature-value comparison performs the recognition, determines the position of the corresponding accessory device, and accurately positions the current running position of the train.
Positioning with the train's catenary suspension-state detection device and the features of the contact network accessory devices on both sides of the line is simple and convenient, low in cost, and little affected by the operating environment; it also resolves the large positioning errors, long positioning intervals and large blind areas of traditional methods, achieves a high recognition rate and speed for contact network pillar features, and enables fast, accurate positioning of the train.
While the embodiments of the invention have been described in detail with reference to the accompanying drawings, they do not limit the scope of the invention. Those skilled in the art may make various modifications and changes without inventive effort within the scope of the appended claims.

Claims (7)

1. A train positioning method based on a contact network and accessory devices thereof, characterized by comprising the following steps:
S1, acquiring high-definition video of the contact network equipment in real time with the catenary suspension-state (4C) detection device, clearly imaging the contact suspension, additional suspension, supporting devices and positioning devices of the contact network;
S2, extracting the high-definition video acquired by the 4C device frame by frame, and acquiring clear images covering all contact network accessory devices along the whole line, including contact wires, droppers, messenger cables, connecting parts and insulators;
S3, preprocessing the high-definition images and determining the contact network accessory-device regions;
S4, extracting features of the regions where the accessory devices are located using HOG features;
S5, selecting 10% of the contact network accessory-device region samples as verification samples, associating them with their positions in the high-definition video shot in real time by the 4C detection device to obtain a detection sample library in which images and position information correspond one to one, and extracting SIFT (scale-invariant feature transform) features of the sample-library images as the reference for the image-recognition stage;
and S6, during real-time train operation, recording the video information transmitted in real time by the 4C detection device with the on-board computer, repeating S1-S4 to obtain images of the contact network accessory-device regions, extracting SIFT features of the accessory-device region image samples, and comparing them with the detection samples of S5 to position the train.
2. The train positioning method based on the contact network and accessory devices thereof according to claim 1, wherein in S2, extracting the high-definition video acquired by the 4C device frame by frame and acquiring clear images covering all contact network accessory devices along the whole line, including contact wires, droppers, messenger cables, connecting parts and insulators, comprises:
sampling high-definition pictures containing contact network accessory devices according to the sampling frequency of the high-speed industrial camera and the train speed;
treating the train running speed as a discrete quantity, the time interval Δt_i for crossing adjacent support pillars being:
Δt_i = L / V
where f_hs is the sampling frequency of the high-speed industrial camera, V is the real-time running speed of the train, and L is the standard contact network pillar spacing;
calculating the number N_i of high-definition images acquired in real time while the train passes between adjacent pillars:
N_i = Δt_i · f_hs
selecting from the N_i images 5 high-definition pictures at even intervals, the sequence numbers n_i of the selected pictures being:
n_i = 0.2N_i, 0.4N_i, 0.6N_i, 0.8N_i, N_i
so that all images captured while the train crosses the interval between adjacent pillars are uniformly sampled 5 times.
3. The train positioning method based on the contact network and accessory devices thereof according to claim 1, wherein in S3, preprocessing the high-definition images comprises:
S3.1, processing the color picture with the grayscale transformation formula:
g(x, y) = 0.299R(x, y) + 0.587G(x, y) + 0.114B(x, y)
where R(x, y) is the red component, G(x, y) the green component and B(x, y) the blue component; g(x, y) is the gray image, and x and y are the horizontal and vertical coordinates of the transformed pixel;
S3.2, strengthening the edge contours of the accessory devices with the Sobel operator;
the Sobel operator comprises two 3x3 matrices, one horizontal and one vertical, which are convolved with the image plane to obtain the horizontal and vertical luminance-difference approximations G_x and G_y:
G_x = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A
G_y = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * A
where A is the gray image; the magnitude and direction of the gradient in the image are then:
G = √(G_x² + G_y²)
θ = arctan(G_y / G_x)
setting a threshold Gmax: if the gradient G at a point exceeds Gmax, the point is judged a boundary point and set white, otherwise it is set black:
A₁(x, y) = 255 if G(x, y) > Gmax, otherwise 0
S3.3, performing line detection with the Hough transform to locate the accessory-device region: collinear points in image space are mapped to parameter space, the line parameters are detected from local peaks, and the result is mapped back to image space to obtain the line-detection result;
if the sinusoids share a common intersection (ρ, θ), the corresponding points are judged collinear, the line in polar coordinates being:
ρ = x cos θ + y sin θ;
where ρ and θ are the polar radius and polar angle of the intersection of the sinusoids;
the procedure for detecting the lines along the accessory-device edges comprises:
traversing all pixels of the gray image A₁ in turn and testing whether the Hough peak count is met, and if so, incrementing the counter of every line region passing through that pixel by 1; to obtain all line regions through a pixel, solving ρ from the formula for each possible θ in turn to obtain several (ρ, θ) pairs, among which the lines meeting the requirement are the edges of the contact network pillars.
4. The train positioning method based on the contact network and accessory devices thereof according to claim 1, wherein in S4, extracting features of the regions where the accessory devices are located using HOG features comprises:
S4.1, normalizing the image of the contact network accessory-device region:
I(x, y) = I(x, y)^γ
where I(x, y) is the pixel value at position (x, y) and γ is the compression coefficient, taken as 0.5;
S4.2, computing the gradient and direction of the image: at position (x, y), computing the horizontal gradient G_1x and vertical gradient G_1y, and from them the gradient direction value of each pixel:
G_1x(x, y) = I(x+1, y) − I(x, y)
G_1y(x, y) = I(x, y+1) − I(x, y)
the gradient magnitude G_1(x, y) and gradient direction α(x, y) at pixel (x, y) being:
G_1(x, y) = √(G_1x(x, y)² + G_1y(x, y)²)
α(x, y) = arctan(G_1y(x, y) / G_1x(x, y))
S4.3, dividing cells and blocks: dividing the image into several cells, building a gradient-direction histogram for each cell, normalizing the histograms within each block of cells, and collecting the HOG features.
5. The train positioning method based on the contact network and accessory devices thereof according to claim 1, wherein in S5, selecting 10% of the contact network accessory-device region samples as verification samples, associating them with their positions in the high-definition video shot in real time by the 4C detection device to obtain a detection sample library in which images and position information correspond one to one, and extracting SIFT features of the sample-library images as the reference for the image-recognition stage, comprises:
S5.1, defining the scale space L(x, y, σ) of an image as the convolution of a variable-scale Gaussian G(x, y, σ) with the input image:
L(x, y, σ) = G(x, y, σ) * I(x, y)
where
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
(x, y) are spatial coordinates and σ is the scale coordinate, whose size determines the smoothness of the image;
detecting stable key points in scale space with the difference-of-Gaussian scale space:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)
where D is the difference of two adjacent scales separated by the multiplicative factor k;
S5.2, detecting spatial extreme points: searching for scale-space extrema by comparing each sample point with all of its neighbors; if it is larger or smaller than all of them, it is a scale-space extreme point;
S5.3, removing pixels whose local curvatures are markedly asymmetric;
S5.4, matching the direction information of the stable key points.
6. The train positioning method based on the contact network and accessory devices thereof according to claim 5, wherein matching the stable key-point direction information in S5.4 comprises:
assigning directions by computing the gradient at each extreme point;
for any key point, the gradient magnitude m(x, y) and direction θ(x, y) being:
m(x, y) = √( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )
computing the gradient directions of all points in the neighborhood centered on the key point to obtain a gradient-direction histogram, from which the main direction of the current key point is determined.
7. The train positioning method based on the contact network and accessory devices thereof according to claim 1, wherein in S6, during real-time train operation, recording the video information transmitted in real time by the 4C detection device with the on-board computer, repeating S1-S4 to obtain images of the contact network accessory-device regions, extracting SIFT features of the accessory-device region image samples and comparing them with the detection samples of S5 to position the train, comprises:
S6.1, comparing SIFT feature values by computing the Euclidean distance between the 128-dimensional key-point descriptors of the two sets of feature points;
S6.2, the smaller the Euclidean distance, the higher the similarity; when the Euclidean distance is below a set threshold, judging the match successful, further judging whether the train has reached a contact network positioning point, and positioning the train.
CN202010916150.2A 2020-09-03 2020-09-03 Train positioning method based on contact net and accessory thereof Pending CN112009528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010916150.2A CN112009528A (en) 2020-09-03 2020-09-03 Train positioning method based on contact net and accessory thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010916150.2A CN112009528A (en) 2020-09-03 2020-09-03 Train positioning method based on contact net and accessory thereof

Publications (1)

Publication Number Publication Date
CN112009528A true CN112009528A (en) 2020-12-01

Family

ID=73515746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010916150.2A Pending CN112009528A (en) 2020-09-03 2020-09-03 Train positioning method based on contact net and accessory thereof

Country Status (1)

Country Link
CN (1) CN112009528A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114898320A (en) * 2022-05-30 2022-08-12 西南交通大学 YOLO v 5-based train positioning method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101484346A (en) * 2006-06-30 2009-07-15 通用电气公司 System and method of navigation with captured images
CN202614278U (en) * 2012-04-27 2012-12-19 苏州艾特光视电子技术有限公司 Overhead lines image dynamic acquisition system
CN105930839A (en) * 2015-11-11 2016-09-07 湖南华宏铁路高新科技开发有限公司 Electric railway overhead line system pole number intelligent identification method
CN206327389U (en) * 2016-12-27 2017-07-14 上海铁路局科学技术研究所 A kind of railway positioning system based on contact net bar image recognition
CN107067009A (en) * 2017-01-13 2017-08-18 重庆三峡学院 A kind of real-time bar recognition methods
CN107399338A (en) * 2016-05-18 2017-11-28 北京华兴致远科技发展有限公司 Train contact network detection means and method
CN107563419A (en) * 2017-08-22 2018-01-09 交控科技股份有限公司 The train locating method that images match and Quick Response Code are combined
CN111591321A (en) * 2020-07-27 2020-08-28 成都中轨轨道设备有限公司 Continuous recognition and correction device and method for contents of track pole number plate

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101484346A (en) * 2006-06-30 2009-07-15 通用电气公司 System and method of navigation with captured images
CN202614278U (en) * 2012-04-27 2012-12-19 苏州艾特光视电子技术有限公司 Overhead lines image dynamic acquisition system
CN105930839A (en) * 2015-11-11 2016-09-07 湖南华宏铁路高新科技开发有限公司 Electric railway overhead line system pole number intelligent identification method
CN107399338A (en) * 2016-05-18 2017-11-28 北京华兴致远科技发展有限公司 Train contact network detection means and method
CN206327389U (en) * 2016-12-27 2017-07-14 上海铁路局科学技术研究所 A kind of railway positioning system based on contact net bar image recognition
CN107067009A (en) * 2017-01-13 2017-08-18 重庆三峡学院 A kind of real-time bar recognition methods
CN107563419A (en) * 2017-08-22 2018-01-09 交控科技股份有限公司 The train locating method that images match and Quick Response Code are combined
CN111591321A (en) * 2020-07-27 2020-08-28 成都中轨轨道设备有限公司 Continuous recognition and correction device and method for contents of track pole number plate

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Emilio Maggio et al.: "Video Tracking: Theory and Practice" (《视频跟踪:理论与实践》), 31 May 2017 *
Xu Guangzhu et al.: "Practical Object Detection and Tracking: Algorithm Principles and Applications" (《实用性目标检测与跟踪算法原理及应用》), 30 April 2015 *
Liu Yang: "Digital Image Object Recognition: Theory and Practice" (《数字图像物体识别理论详解与实战》), 31 January 2018 *
Gan Shengfeng et al.: "Machine Vision Surface Defect Detection Technology and Its Application in the Steel Industry" (《机器视觉表面缺陷检测技术及其在钢铁工业中的应用》), 30 June 2017 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114898320A (en) * 2022-05-30 2022-08-12 西南交通大学 YOLO v 5-based train positioning method and system

Similar Documents

Publication Publication Date Title
Bilal et al. Real-time lane detection and tracking for advanced driver assistance systems
CN106407924A (en) Binocular road identifying and detecting method based on pavement characteristics
CN109801267B (en) Inspection target defect detection method based on feature point detection and SVM classifier
CN103324913B (en) A kind of pedestrian event detection method of Shape-based interpolation characteristic sum trajectory analysis
CN103914687A (en) Rectangular-target identification method based on multiple channels and multiple threshold values
CN105373135A (en) Method and system for guiding airplane docking and identifying airplane type based on machine vision
CN109635733B (en) Parking lot and vehicle target detection method based on visual saliency and queue correction
CN108764234B (en) Liquid level meter reading identification method based on inspection robot
CN108198417B (en) A kind of road cruising inspection system based on unmanned plane
CN115184380B (en) Method for detecting abnormity of welding spots of printed circuit board based on machine vision
Azad et al. A novel and robust method for automatic license plate recognition system based on pattern recognition
CN105891220A (en) Pavement marker line defect detecting device and detecting method thereof
CN108573280B (en) Method for unmanned ship to autonomously pass through bridge
CN110348307B (en) Path edge identification method and system for crane metal structure climbing robot
CN117593290A (en) Bolt loosening detection method and equipment for train 360-degree dynamic image monitoring system
CN117078717A (en) Road vehicle track extraction method based on unmanned plane monocular camera
CN112009528A (en) Train positioning method based on contact net and accessory thereof
Wu et al. Design and implementation of vehicle speed estimation using road marking-based perspective transformation
CN112053407B (en) Automatic lane line detection method based on AI technology in traffic law enforcement image
Pan et al. An efficient method for skew correction of license plate
CN108389177A (en) A kind of vehicle bumper damage testing method and traffic security early warning method of traffic control
Cai et al. Robust road lane detection from shape and color feature fusion for vehicle self-localization
CN108734158B (en) Real-time train number identification method and device
CN114299406B (en) Optical fiber cable line inspection method based on unmanned aerial vehicle aerial photography
CN111583341B (en) Cloud deck camera shift detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201201)