CN111856441B - Train positioning method based on vision and millimeter wave radar fusion

Info

Publication number
CN111856441B
CN111856441B
Authority
CN
China
Prior art keywords
train
image
key position
speed
features
Prior art date
Legal status
Active
Application number
CN202010517233.4A
Other languages
Chinese (zh)
Other versions
CN111856441A (en)
Inventor
余贵珍
王章宇
周彬
徐少清
付子昂
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN202010517233.4A
Publication of CN111856441A
Application granted
Publication of CN111856441B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02 - Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/50 - Systems of measurement based on relative movement of target
    • G01S13/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S13/589 - Velocity or trajectory determination systems; Sense-of-movement determination systems measuring the velocity vector
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/91 - Radar or analogous systems specially adapted for specific applications for traffic control
    • G01S13/92 - Radar or analogous systems specially adapted for specific applications for traffic control for velocity measurement
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a train positioning method based on the fusion of vision and millimeter wave radar. Image features are extracted visually by fusing global and local features: global features are extracted through deep learning, and local features are extracted through straight-line detection and line feature descriptors, so that image features are extracted more fully. When searching for a key position, global feature similarity and local feature similarity are measured separately, making key position detection more accurate. The train speed is measured by the millimeter wave radar without depending on the vehicle signal system: it is back-calculated from the obstacles detected by the radar. The invention positions the train with on-board sensors alone, requiring no road-side equipment, which effectively reduces positioning cost, and by fusing the two data sources it overcomes the difficulty that the train cannot be positioned in tunnels and similar environments.

Description

Train positioning method based on vision and millimeter wave radar fusion
Technical Field
The invention relates to the technical field of unmanned autonomous environment sensing and positioning of rail trains, in particular to a train positioning method based on vision and millimeter wave radar fusion.
Background
With the rapid growth of Chinese cities, urbanization is accelerating, urban populations and per-capita motor vehicle ownership are rising quickly, and traffic congestion is increasingly serious. Urban rail transit, with its large passenger capacity, high transport efficiency and low energy consumption, has become a necessary choice for alleviating urban traffic congestion.
An unmanned train system does not rely on a driver during operation and can effectively improve operating efficiency and safety. Such a system needs accurate train positions in real time: accurate position information provides an important guarantee for vehicle dispatching and speed control.
At present, trains are positioned by installing wayside equipment such as transponders along the line, a method that suffers from long deployment periods and high cost. In recent years, some researchers have performed real-time train positioning with on-board sensors such as BeiDou or GPS receivers. These work well when satellite signals are good, but cannot meet practical requirements when the signals are blocked or missing.
Disclosure of Invention
In view of the above, the invention provides a train positioning method based on the fusion of vision and millimeter wave radar, which realizes autonomous train positioning on lines that include tunnels and similar environments by fusing visual image data and millimeter wave radar data.
The invention provides a train positioning method based on the fusion of vision and millimeter wave radar, which comprises the following steps:
S1: A camera and a millimeter wave radar sensor are installed at the head of the train, and synchronous data of the whole line are collected through them; the camera frame and the millimeter wave radar frame whose acquisition times are closest are taken as synchronous data;
S2: A key position in the line is selected, a key position image is acquired from the collected synchronous data, features of the key position image are extracted, and a key position visual data feature library is established;
S3: During the running of the train, the camera is used to acquire images in real time, and features of the images acquired in real time are extracted;
S4: Similarity measurement is performed between the features of the images acquired in real time and the features of the images in the key position visual data feature library, to judge whether the train has reached the key position corresponding to an image in the library; if yes, the train position is calibrated; if not, the millimeter wave radar sensor is used to measure the train speed in real time, the measured speed is integrated, and the train position within the interval between two key positions is predicted.
In a possible implementation manner, in the method for positioning a train based on integration of vision and millimeter wave radar provided by the invention, step S2 is to select a key position in a line, obtain a key position image from the acquired synchronous data, perform feature extraction on the key position image, and establish a key position vision data feature library, and specifically includes:
s21: selecting the position of each station in the line as a key position, and acquiring a key position image from the acquired synchronous data;
s22: carrying out global feature extraction on each acquired frame of key position image, wherein the global feature extraction is realized through a convolutional neural network; scaling each frame of key position image to the same size, reducing the size of a feature map of each frame of key position image by using a convolution operation, extracting features by using a reverse residual error network, and obtaining 1280-dimensional high-dimensional vectors by using an average pooling layer; l2 regularization is performed on 1280-dimensional high-dimensional vectors:
Figure BDA0002530553500000021
where d=1280, represents the dimension of the high-dimensional vector; (W) 1 ,W 2 ,…W d ) Representing global features of the key position image for the regularized high-dimensional vector; q=1, 2,. -%, d;
s23: extracting local features of each acquired frame of key position image, wherein the local features comprise linear features and corresponding linear descriptors; smoothing each frame of key position image, obtaining a gradient image of each frame of key position image by utilizing a Sobel operator, obtaining maximum pixel values in a transverse area and a longitudinal area of each frame of key position image, wherein the position of the maximum pixel value is an anchor point, connecting all the anchor points to obtain an edge image of each frame of key position image, and performing straight line fitting on the edge image by using a least square method to detect a straight line; carrying out feature description on each detected straight line by using a local strip descriptor, and representing the local feature of each frame of key position image by using all straight line descriptors in each frame of key position image;
s24: the global features and the local features of the extracted key position images of each frame are stored; corresponding the high-dimensional vector extracted by each station and the corresponding station number, and constructing a global feature library of all stations; and constructing a local feature library of all the extracted linear descriptors by adopting a visual word bag model.
In a possible implementation manner, in the method for positioning a train based on integration of vision and millimeter wave radar provided by the invention, step S3, during the running process of the train, acquires images in real time by using a camera, and performs feature extraction on the images acquired in real time, specifically includes:
s31: global feature extraction is carried out on the images acquired in real time, and the global feature extraction is realized through a convolutional neural network; scaling all the images acquired in real time to the same size, reducing the size of a feature map of each image acquired in real time by using one convolution operation, extracting features by using a reverse residual error network, and obtaining 1280-dimensional high-dimensional vectors by using an average pooling layer; l2 regularization is performed on 1280-dimensional high-dimensional vectors:
Figure BDA0002530553500000031
where d=1280, represents the dimension of the high-dimensional vector; (W) 1 ',W 2 ',…W d ') is a regularized high-dimensional vector, and represents global features of the image acquired in real time; q=1, 2,. -%, d;
s32: extracting local features of the image acquired in real time, wherein the local features comprise linear features and corresponding linear descriptors; smoothing the image acquired in real time, acquiring a gradient image of the image acquired in real time by utilizing a Sobel operator, acquiring maximum pixel values in a transverse area and a longitudinal area of the image acquired in real time, wherein the position of the maximum pixel value is an anchor point, connecting all the anchor points to obtain an edge image of the image acquired in real time, and performing straight line fitting on the edge image by utilizing a least square method to detect a straight line; and carrying out feature description on each detected straight line by using a local strip descriptor, and representing the local features of the image acquired in real time by using all straight line descriptors in the image acquired in real time.
In a possible implementation manner, in the train positioning method based on the fusion of vision and millimeter wave radar provided by the invention, step S4, which measures similarity between the features of the image acquired in real time and the features of the images in the key position visual data feature library, judges whether the train has reached the key position corresponding to an image in the library, calibrates the train position if yes, and otherwise measures the train speed in real time with the millimeter wave radar sensor, integrates the measured speed and predicts the train position within the interval between two key positions, specifically comprises:
s41: the method comprises the steps of sequentially carrying out similarity measurement on global features of images acquired in real time and global features of all images in a constructed global feature library, wherein the calculation process is as follows:
Figure BDA0002530553500000041
where i represents the i-th image acquired in real time,
Figure BDA0002530553500000042
representing the global features of image i->
Figure BDA0002530553500000043
Representing global features of the jth image in the global feature library, D (iJ) represents the global feature +.>
Figure BDA0002530553500000044
Global feature +.>
Figure BDA0002530553500000045
Similarity of (2); carrying out 2 Represents the L2 paradigm;
s42: judging global features of image i
Figure BDA0002530553500000046
Global feature +.>
Figure BDA0002530553500000047
Whether the similarity of (2) is smaller than a first set threshold; if not, measuring the speed of the train in real time by utilizing a millimeter wave radar sensor, integrating the measured speed, and predicting the position of the train in the two key position intervals; if yes, go to step S43 and step S44;
s43: local features w for image i i And sequentially carrying out similarity measurement on the local features of all the images in the constructed local feature library, wherein the calculation process is as follows:
Figure BDA0002530553500000048
wherein ,s(wi -w j ) Representing local features w of an image i i Local feature w of jth image in local feature library j Similarity, w j Representing the local features of the j-th image in the local feature library,
Figure BDA0002530553500000049
representing local features corresponding to the kth line in image i,/->
Figure BDA0002530553500000051
Representing the kth line in the jth image in the local feature libraryCorresponding local features, wherein N represents the number of straight lines in the image i;
s44: judging the local feature w of the image i i Local feature w of jth image in local feature library j Whether the similarity of (2) is smaller than a second set threshold; if yes, calibrating the train position; if not, the millimeter wave radar sensor is utilized to measure the speed of the train in real time, the measured speed is integrated, and the position of the train in the two key position intervals is predicted.
In a possible implementation manner, in the train positioning method based on the fusion of vision and millimeter wave radar provided by the invention, measuring the train speed in real time with the millimeter wave radar sensor in step S4, integrating the measured speed and predicting the train position within the interval between two key positions specifically includes:

Acquiring real-time data of the forward obstacles detected by the millimeter wave radar sensor:

$$R = \{ (x_p, y_p, v_p) \mid p = 1, 2, \ldots, n \} \tag{5}$$

where $x_p$ denotes the lateral distance of the p-th forward obstacle, $y_p$ its longitudinal distance, $v_p$ its speed relative to the train, and n the number of forward obstacles;

Extracting the real-time speed of the train: the speeds $v_1, v_2, \ldots, v_m$ of all forward obstacles relative to the train are counted, the number of forward obstacles at each speed is tallied, and the speed with the largest count is taken as the speed of stationary obstacles relative to the train:

$$\operatorname{num}(v_s) = \max\{\operatorname{num}(v_1), \operatorname{num}(v_2), \ldots, \operatorname{num}(v_m)\} \tag{6}$$

where $v_s$ denotes the speed of stationary obstacles relative to the train, $\operatorname{num}(v_q)$ denotes the number of forward obstacles at speed $v_q$, m denotes the number of distinct speed values contained in the current frame of millimeter wave radar data, and m ≤ n; the train speed is $-v_s$;

Integrating the train speed to realize train position estimation:

$$p_t = p_{r-1} + \int_{t_{r-1}}^{t} v(\tau)\, d\tau, \quad r = 1, 2, \ldots, l \tag{7}$$

where $p_t$ denotes the train position at time t, $p_{r-1}$ denotes the last key position passed before time t, l denotes the total number of key positions, v(t) denotes the real-time train speed, and the integral term is the speed integral from the last key position to time t.
According to the train positioning method based on the fusion of vision and millimeter wave radar provided by the invention, on-board sensors collect image data of the whole line, a feature library is built from features extracted at key positions, and the library is queried with features extracted from the image acquired at the current moment to judge whether the current train position is a key position or an interval position; if it is an interval position, the interval position is estimated from the train speed. Image features are extracted visually by fusing global and local features: global features are extracted through deep learning, and local features are extracted through straight-line detection and line feature descriptors, so that image features are extracted more fully. When searching for a key position, global feature similarity and local feature similarity are measured separately, making key position detection more accurate. The train speed is measured by the millimeter wave radar without depending on the vehicle signal system: it is back-calculated from the obstacles detected by the radar. The invention positions the train with on-board sensors alone, requiring no road-side equipment, which effectively reduces positioning cost, and by fusing the two data sources it overcomes the difficulty that the train cannot be positioned in tunnels and similar environments.
Drawings
FIG. 1 is a flow chart of a train positioning method based on vision and millimeter wave radar fusion provided by the invention;
fig. 2 is a diagram of a train positioning structure based on integration of vision and millimeter wave radar in embodiment 1 of the present invention;
FIG. 3 is an exemplary diagram of image feature extraction by fusing global features and local features in embodiment 1 of the present invention;
FIG. 4 is a diagram showing an example of global feature extraction of an image in embodiment 1 of the present invention;
fig. 5 is a diagram showing an example of image local feature extraction in embodiment 1 of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is apparent that the described embodiments are merely examples and are not intended to limit the present invention.
The invention provides a train positioning method based on the fusion of vision and millimeter wave radar, shown in figure 1, which comprises the following steps:
S1: A camera and a millimeter wave radar sensor are installed at the head of the train, and synchronous data of the whole line are collected through them; the camera frame and the millimeter wave radar frame whose acquisition times are closest are taken as synchronous data;
Specifically, the camera must be mounted so that it shoots the area ahead of the train, and its position is fixed after installation; the millimeter wave radar sensor must likewise face the track area ahead of the train. Collecting synchronous data means collecting camera and radar data synchronously: from the timestamps of the acquired data, the camera frame and the radar frame with the nearest timestamps are selected as synchronous data;
s2: selecting a key position in a line, acquiring a key position image from the acquired synchronous data, extracting features of the key position image, and establishing a key position visual data feature library;
specifically, the key position image refers to an area image with obvious characteristics in the line; the key position image feature extraction comprises extracting global features and local features of a key position image, wherein the global feature extraction is realized through deep learning, and the local feature extraction is realized through detecting straight lines in the key position image and carrying out feature description on the detected straight lines; the key position visual data feature library is established by storing the extracted features so as to facilitate the subsequent inquiry;
s3: in the running process of the train, the camera is utilized to collect images in real time, and the characteristics of the images collected in real time are extracted;
specifically, extracting features of the image acquired in real time also includes extracting global features and local features in the image, which are similar to the feature extraction in step S2, and are not described herein;
s4: performing similarity measurement on the features of the images acquired in real time and the features of the images in the visual data feature library of the key positions, and judging whether the train reaches the key position corresponding to the images in the visual data feature library of the key positions; if yes; calibrating the train position; if not, measuring the speed of the train in real time by utilizing a millimeter wave radar sensor, integrating the measured speed, and predicting the position of the train in the two key position intervals;
specifically, the speed of the train is measured through a millimeter wave radar sensor, forward obstacle detection is mainly carried out through the millimeter wave radar sensor, the relative speed of the obstacle relative to the train is obtained, and the speed of the static obstacle relative to the train is extracted on the basis, so that the speed of the train is reversely calculated; the speed of the train is integrated, accumulated distance information can be obtained, and the current position of the train can be calculated by combining the last key position information.
The following describes in detail the implementation of the train positioning method based on the integration of vision and millimeter wave radar according to the present invention by means of a specific embodiment.
Example 1:
as shown in fig. 2, the architecture diagram of train positioning based on integration of vision and millimeter wave radar is provided for realizing real-time positioning of trains. Selecting a subway train to realize the positioning of the subway train in a line, wherein the line comprises 16 stations, and the specific implementation method comprises the following steps:
the first step: mounting camera and millimeter wave radar sensor
An industrial camera and a millimeter wave radar sensor are installed at the windshield of a subway train, with their view angle facing the running direction of the train, so that forward environmental data of the train can be acquired.
The second step: collecting synchronous data
Synchronous data acquisition is carried out with the installed camera and millimeter wave radar sensor; the acquired data are the video data and millimeter wave radar data of the line to be positioned. The synchronous data of the whole subway line are collected through the installed camera and millimeter wave radar sensor; when collecting synchronous data, the camera frame and the radar frame with the nearest timestamps are selected as synchronous data according to the timestamps of the acquired data.
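As an illustration, this nearest-timestamp pairing can be sketched as follows; the (timestamp, data) tuple layout is an assumption for illustration only, not a data format prescribed by the patent.

```python
# Minimal sketch of the nearest-timestamp synchronisation rule.
def synchronize(camera_frames, radar_frames):
    """camera_frames, radar_frames: lists of (timestamp, data) tuples."""
    pairs = []
    for t_cam, image in camera_frames:
        # Pick the radar frame whose timestamp is closest to the camera frame.
        t_rad, radar = min(radar_frames, key=lambda frame: abs(frame[0] - t_cam))
        pairs.append((t_cam, image, radar))
    return pairs
```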
The third step: establishing the key position visual data feature library
A key position visual data feature library is established from the acquired synchronous data. Before establishing the library, the key positions in the route must be selected: the position where the train arrives and stops at each station (i.e., each station position) is selected as a key position, and the images taken when the train arrives and stops are stored, so that key position images of all stations along the whole line are acquired. Key position image features are then extracted from these images and the library is built; as shown in fig. 3, the extracted features include global features and local features. The building process is as follows:
(1) And carrying out global feature extraction on each acquired frame of key position image, wherein the global feature extraction is realized through a convolutional neural network, and the extraction process is shown in fig. 4.
Firstly, scaling each frame of key position image to 224 x 224;
then, reducing the feature map size of each frame of key position image by using a convolution operation;
then, 7 reverse residual error networks are utilized for feature extraction;
thereafter, a 1280-dimensional high-dimensional vector is obtained by averaging the pooling layer, and the 1280-dimensional high-dimensional vector can be utilized to represent global features of the key position image.
(2) Because the distribution of the 1280-dimensional vector is uneven, L2 regularization can be applied to it to obtain a more evenly distributed vector:

$$W_q = \frac{w_q}{\sqrt{\sum_{k=1}^{d} w_k^2}}, \quad q = 1, 2, \ldots, d \tag{1}$$

where d = 1280 is the dimension of the vector, $(w_1, w_2, \ldots, w_d)$ is the vector before regularization, and $(W_1, W_2, \ldots, W_d)$ is the regularized vector, which represents the global features of the key position image.
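The single initial convolution, inverted residual blocks and 1280-dimensional average-pooled output described above match the MobileNetV2 architecture; the sketch below (assuming torchvision 0.13 or later) therefore uses torchvision's mobilenet_v2 as an assumed stand-in for the backbone, which the patent does not name, and applies the L2 regularization of formula (1). In practice the weights would be trained rather than left at their random initialization.

```python
import torch
from torchvision import models, transforms

# Assumed backbone: MobileNetV2 (one initial convolution, inverted residual
# blocks, 1280-channel output); weights=None keeps this a structural sketch.
backbone = models.mobilenet_v2(weights=None).features.eval()
preprocess = transforms.Compose([transforms.Resize((224, 224)),
                                 transforms.ToTensor()])

def global_feature(pil_image):
    x = preprocess(pil_image).unsqueeze(0)        # 1 x 3 x 224 x 224
    with torch.no_grad():
        fmap = backbone(x)                        # 1 x 1280 x 7 x 7 feature map
        vec = fmap.mean(dim=(2, 3)).squeeze(0)    # average pooling -> 1280-d vector
    return vec / vec.norm(p=2)                    # L2 regularization, formula (1)
```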
(3) And extracting local features of the acquired key position images of each frame, wherein the local features comprise linear features and corresponding linear descriptors.
The straight line feature extraction process is as shown in fig. 5:
firstly, smoothing each frame of key position image; in this embodiment, a gaussian kernel function of 5*5 is applied to smooth the key position image shown in a graph a in fig. 5, and the smoothed image is shown in a graph b in fig. 5;
then, acquiring a gradient map of each frame of key position image; in the embodiment, a Sobel operator is utilized to acquire a gradient map of a key position image, and the acquired gradient map is shown as a c map in fig. 5;
next, acquiring anchor point images of each frame of key position images; in the graph c of fig. 5, the maximum pixel value in the transverse and longitudinal three-neighborhood region of the key position image is obtained, the position of the maximum pixel value is an anchor point, and the anchor point graph is shown as the graph d of fig. 5;
then, obtaining an edge map of each frame of key position image, and connecting all anchor points in the d map in fig. 5 to obtain an edge map of the key position image, as shown in the e map in fig. 5;
finally, straight line detection is performed, and a least square method is applied to perform straight line fitting on the edge graph shown in the e graph in fig. 5, so that a straight line is detected, as shown in f in fig. 5.
The detected straight lines are then described. In this embodiment, a local band descriptor (LBD) is applied to describe each detected straight line, and the local features of each frame of key position image are represented by all the line descriptors in that frame.
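A minimal sketch of the smoothing, gradient and anchor-extraction stages above is given below using OpenCV; the gradient threshold is an illustrative assumption, and the anchor-linking and band-descriptor computations of figs. 5(d)-(f) are only indicated, not implemented.

```python
import cv2
import numpy as np

def detect_anchors(gray, grad_thresh=36.0):
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)     # 5x5 Gaussian smoothing (fig. 5b)
    gx = cv2.Sobel(smooth, cv2.CV_64F, 1, 0)       # Sobel gradients
    gy = cv2.Sobel(smooth, cv2.CV_64F, 0, 1)
    mag = np.hypot(gx, gy)                         # gradient map (fig. 5c)
    # Anchor: pixel whose gradient magnitude peaks in its transverse
    # (left/right) or longitudinal (up/down) three-neighbourhood (fig. 5d).
    m = mag[1:-1, 1:-1]
    horiz = (m >= mag[1:-1, :-2]) & (m >= mag[1:-1, 2:])
    vert = (m >= mag[:-2, 1:-1]) & (m >= mag[2:, 1:-1])
    anchors = np.zeros(mag.shape, dtype=bool)
    anchors[1:-1, 1:-1] = (m > grad_thresh) & (horiz | vert)
    return np.argwhere(anchors)                    # (row, col) anchor points

def fit_line(chain):
    # Least-squares fit y = a*x + b over one connected chain of anchor points
    # (the chaining step of fig. 5e is assumed to have been done elsewhere).
    ys, xs = chain[:, 0].astype(float), chain[:, 1].astype(float)
    a, b = np.polyfit(xs, ys, 1)
    return a, b
```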
(4) The extracted global and local features of each frame of key position image are stored; the vector extracted at each station is associated with its station number to construct a global feature library of all stations, and a local feature library is constructed from all extracted line descriptors using a visual bag-of-words model.
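Under stated assumptions, the two libraries of step (4) can be sketched as follows: global vectors are keyed by station number, and the visual bag-of-words model is approximated with a k-means vocabulary over all line descriptors (a vocabulary size of 64 is an illustrative choice; the patent does not specify one).

```python
import numpy as np
from sklearn.cluster import KMeans

def build_libraries(stations, n_words=64):
    """stations: {station_id: (global_vec, line_descriptor_matrix)}."""
    global_lib = {sid: g for sid, (g, _) in stations.items()}
    all_desc = np.vstack([d for _, d in stations.values()])
    vocab = KMeans(n_clusters=n_words, n_init=10).fit(all_desc)   # visual words
    local_lib = {sid: np.bincount(vocab.predict(d), minlength=n_words)
                 for sid, (_, d) in stations.items()}             # BoW histograms
    return global_lib, vocab, local_lib
```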
The fourth step: key position detection
When the train runs on the line, the camera collects forward image data in real time, and feature extraction is carried out on each frame collected in real time; the extracted features comprise the global features and local features of the image. After the features of each frame are extracted, similarity measurement is carried out between the global features of the image and the built global feature library, and between the local features of the image and the built local feature library, so as to judge whether the train has reached a key position.
(1) Global feature extraction is performed on the image acquired in real time through a convolutional neural network: all images acquired in real time are scaled to the same size, the feature map size of each is reduced with one convolution operation, features are extracted with the inverted residual network, and a 1280-dimensional vector is obtained through the average pooling layer; L2 regularization is then performed on the vector:

$$W'_q = \frac{w'_q}{\sqrt{\sum_{k=1}^{d} (w'_k)^2}}, \quad q = 1, 2, \ldots, d \tag{2}$$

where d = 1280 is the dimension of the vector and $(W'_1, W'_2, \ldots, W'_d)$ is the regularized vector, which represents the global features of the image acquired in real time. The implementation is the same as in (1) and (2) of the third step and is not repeated here.
(2) Local features, comprising line features and corresponding line descriptors, are extracted from the image acquired in real time: the image is smoothed, its gradient map is obtained with the Sobel operator, and the maximum pixel values in its transverse and longitudinal neighborhoods are found; the positions of these maxima are anchor points, all anchor points are connected to obtain the edge map, and straight lines are detected by least-squares line fitting on the edge map; each detected straight line is described with a local band descriptor, and the local features of the image are represented by all the line descriptors in it. The process is the same as the local feature extraction for key position images in (3) of the third step and is not repeated here.
(3) Similarity measurement is carried out in turn between the global features of the image acquired in real time and the global features of all images in the constructed global feature library:

$$D(i, j) = \left\| W^i - W^j \right\|_2 \tag{3}$$

where i denotes the i-th image acquired in real time, $W^i$ denotes its global features, $W^j$ denotes the global features of the j-th image in the global feature library, $D(i, j)$ denotes the similarity between $W^i$ and $W^j$, and $\|\cdot\|_2$ denotes the L2 norm;
if the global feature of the image i
Figure BDA0002530553500000119
Global feature +.>
Figure BDA0002530553500000116
If the similarity of the two is smaller than the first set threshold, the current position of the train is considered to be a certain key position, and further local feature similarity measurement is needed.
(4) Similarity measurement is carried out in turn between the local features $w_i$ of image i and the local features of all images in the constructed local feature library:

$$s(w_i, w_j) = \frac{1}{N} \sum_{k=1}^{N} \left\| w_i^k - w_j^k \right\|_2 \tag{4}$$

where $s(w_i, w_j)$ denotes the similarity between the local features $w_i$ of image i and the local features $w_j$ of the j-th image in the local feature library, $w_i^k$ denotes the local feature corresponding to the k-th straight line in image i, $w_j^k$ denotes the local feature corresponding to the k-th straight line of the j-th image in the local feature library, and N denotes the number of straight lines in image i.
If the similarity $s(w_i, w_j)$ between the local features $w_i$ of image i and the local features $w_j$ of the j-th image in the local feature library is smaller than the second set threshold, the current train position is determined to be that key position.
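The two-stage check of (3) and (4) can be summarised by the sketch below; both thresholds are illustrative assumptions, and the line descriptors of two images are assumed to be matched one-to-one by index, which simplifies formula (4).

```python
import numpy as np

def global_distance(w_i, w_j):
    return np.linalg.norm(w_i - w_j)            # D(i, j), formula (3)

def local_distance(lines_i, lines_j):
    # Mean L2 distance over matched line descriptors, formula (4).
    n = min(len(lines_i), len(lines_j))
    return np.mean([np.linalg.norm(lines_i[k] - lines_j[k]) for k in range(n)])

def match_key_position(feat, lines, feature_lib, t1=0.8, t2=0.5):
    """feature_lib: {station_id: (global_vec, line_descriptors)}."""
    for station_id, (g_vec, g_lines) in feature_lib.items():
        if global_distance(feat, g_vec) < t1 and \
           local_distance(lines, g_lines) < t2:
            return station_id                   # train is at this key position
    return None                                 # interval: fall back to radar speed
```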
The fifth step: interval position estimation
If the train has not reached a key position, its position within the key position interval needs to be estimated. The interval position estimation process is as follows:
(1) The real-time data of the forward obstacles detected by the millimeter wave radar sensor are acquired. The millimeter wave radar data are:

$$R = \{ (x_p, y_p, v_p) \mid p = 1, 2, \ldots, n \} \tag{5}$$

where $x_p$ denotes the lateral distance of the p-th forward obstacle, $y_p$ its longitudinal distance, $v_p$ its speed relative to the train, and n the number of forward obstacles;
(2) The real-time speed of the train is extracted. The speeds $v_1, v_2, \ldots, v_m$ of all forward obstacles relative to the train are counted, the number of forward obstacles at each speed is tallied, and the speed with the largest count is taken as the speed of stationary obstacles relative to the train:

$$\operatorname{num}(v_s) = \max\{\operatorname{num}(v_1), \operatorname{num}(v_2), \ldots, \operatorname{num}(v_m)\} \tag{6}$$

where $v_s$ denotes the speed of stationary obstacles relative to the train, $\operatorname{num}(v_q)$ denotes the number of forward obstacles at speed $v_q$, m denotes the number of distinct speed values contained in the current frame of millimeter wave radar data, and m ≤ n. Since most obstacles along a train line are stationary, most of the obstacles measured by the millimeter wave radar sensor are stationary; the speed the radar measures is the relative speed between the train and the obstacle, so when most obstacles share a certain speed value, that value is the speed of the train relative to stationary obstacles. With the stationary-obstacle speed $v_s$ obtained above, the train speed is $-v_s$.
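A minimal sketch of this reverse speed calculation follows; the relative speeds are binned to 0.1 m/s before counting, an assumption the patent does not specify, and the most frequent value is taken as $v_s$ per formula (6).

```python
from collections import Counter

def train_speed(radar_targets, bin_size=0.1):
    """radar_targets: iterable of (x_p, y_p, v_p) tuples from one radar frame."""
    binned = [round(v / bin_size) * bin_size for _, _, v in radar_targets]
    v_s, _count = Counter(binned).most_common(1)[0]  # num(v_s) = max num(v_q), formula (6)
    return -v_s                                      # train speed is -v_s
```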
(3) The train position is estimated by integrating the train speed:

$$p_t = p_{r-1} + \int_{t_{r-1}}^{t} v(\tau)\, d\tau, \quad r = 1, 2, \ldots, l \tag{7}$$

where $p_t$ denotes the train position at time t, $p_{r-1}$ denotes the last key position passed before time t, l denotes the total number of key positions, v(t) denotes the real-time train speed, and the integral term is the speed integral from the last key position to time t.
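The speed integral of formula (7) can be accumulated numerically, for example with the trapezoidal rule over timestamped speed samples collected since the last key position; the sample layout below is an assumption for illustration.

```python
def train_position(p_last_key, speed_samples):
    """speed_samples: [(t, v), ...] since the last key position, sorted by t."""
    position = p_last_key
    for (t0, v0), (t1, v1) in zip(speed_samples, speed_samples[1:]):
        position += 0.5 * (v0 + v1) * (t1 - t0)  # trapezoidal step of the speed integral
    return position
```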
According to the train positioning method based on the fusion of vision and millimeter wave radar provided by the invention, on-board sensors collect image data of the whole line, a feature library is built from features extracted at key positions, and the library is queried with features extracted from the image acquired at the current moment to judge whether the current train position is a key position or an interval position; if it is an interval position, the interval position is estimated from the train speed. Image features are extracted visually by fusing global and local features: global features are extracted through deep learning, and local features are extracted through straight-line detection and line feature descriptors, so that image features are extracted more fully. When searching for a key position, global feature similarity and local feature similarity are measured separately, making key position detection more accurate. The train speed is measured by the millimeter wave radar without depending on the vehicle signal system: it is back-calculated from the obstacles detected by the radar. The invention positions the train with on-board sensors alone, requiring no road-side equipment, which effectively reduces positioning cost, and by fusing the two data sources it overcomes the difficulty that the train cannot be positioned in tunnels and similar environments.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (4)

1. A train positioning method based on vision and millimeter wave radar fusion, characterized by comprising the following steps:
S1: A camera and a millimeter wave radar sensor are installed at the head of the train, and synchronous data of the whole line are collected through them; the camera frame and the millimeter wave radar frame whose acquisition times are closest are taken as synchronous data;
S2: A key position in the line is selected, a key position image is acquired from the collected synchronous data, features of the key position image are extracted, and a key position visual data feature library is established;
S3: During the running of the train, the camera is used to acquire images in real time, and features of the images acquired in real time are extracted;
S4: Similarity measurement is performed between the features of the images acquired in real time and the features of the images in the key position visual data feature library, to judge whether the train has reached the key position corresponding to an image in the library; if yes, the train position is calibrated; if not, the millimeter wave radar sensor is used to measure the train speed in real time, the measured speed is integrated, and the train position within the interval between two key positions is predicted;
step S2, selecting a key position in a line, acquiring a key position image from the acquired synchronous data, extracting features of the key position image, and establishing a key position visual data feature library, wherein the method specifically comprises the following steps:
s21: selecting the position of each station in the line as a key position, and acquiring a key position image from the acquired synchronous data;
s22: carrying out global feature extraction on each acquired frame of key position image, wherein the global feature extraction is realized through a convolutional neural network; scaling each frame of key position image to the same size, reducing the size of a feature map of each frame of key position image by using a convolution operation, extracting features by using a reverse residual error network, and obtaining 1280-dimensional high-dimensional vectors by using an average pooling layer; l2 regularization is performed on 1280-dimensional high-dimensional vectors:
Figure FDA0004102614920000011
where d=1280, represents the dimension of the high-dimensional vector; (W) 1 ,W 2 ,…W d ) Representing global features of the key position image for the regularized high-dimensional vector; q=1, 2,. -%, d;
s23: extracting local features of each acquired frame of key position image, wherein the local features comprise linear features and corresponding linear descriptors; smoothing each frame of key position image, obtaining a gradient image of each frame of key position image by utilizing a Sobel operator, obtaining maximum pixel values in a transverse area and a longitudinal area of each frame of key position image, wherein the position of the maximum pixel value is an anchor point, connecting all the anchor points to obtain an edge image of each frame of key position image, and performing straight line fitting on the edge image by using a least square method to detect a straight line; carrying out feature description on each detected straight line by using a local strip descriptor, and representing the local feature of each frame of key position image by using all straight line descriptors in each frame of key position image;
s24: the global features and the local features of the extracted key position images of each frame are stored; corresponding the high-dimensional vector extracted by each station and the corresponding station number, and constructing a global feature library of all stations; and constructing a local feature library of all the extracted linear descriptors by adopting a visual word bag model.
2. The train positioning method based on vision and millimeter wave radar fusion according to claim 1, wherein step S3, during the running of the train, acquires images in real time by using a camera and performs feature extraction on the images acquired in real time, specifically comprises:
s31: global feature extraction is carried out on the images acquired in real time, and the global feature extraction is realized through a convolutional neural network; scaling all the images acquired in real time to the same size, reducing the size of a feature map of each image acquired in real time by using one convolution operation, extracting features by using a reverse residual error network, and obtaining 1280-dimensional high-dimensional vectors by using an average pooling layer; l2 regularization is performed on 1280-dimensional high-dimensional vectors:
Figure FDA0004102614920000021
where d=1280, represents the dimension of the high-dimensional vector; (W) 1 ',W′ 2 ,…W′ d ) Representing global features of the image acquired in real time for the regularized high-dimensional vector; q=1, 2,. -%, d;
s32: extracting local features of the image acquired in real time, wherein the local features comprise linear features and corresponding linear descriptors; smoothing the image acquired in real time, acquiring a gradient image of the image acquired in real time by utilizing a Sobel operator, acquiring maximum pixel values in a transverse area and a longitudinal area of the image acquired in real time, wherein the position of the maximum pixel value is an anchor point, connecting all the anchor points to obtain an edge image of the image acquired in real time, and performing straight line fitting on the edge image by utilizing a least square method to detect a straight line; and carrying out feature description on each detected straight line by using a local strip descriptor, and representing the local features of the image acquired in real time by using all straight line descriptors in the image acquired in real time.
3. The train positioning method based on vision and millimeter wave radar fusion according to claim 1, wherein step S4, which measures similarity between the features of the image acquired in real time and the features of the images in the key position visual data feature library, judges whether the train has reached the key position corresponding to an image in the library, calibrates the train position if yes, and otherwise measures the train speed in real time with the millimeter wave radar sensor, integrates the measured speed and predicts the train position within the interval between two key positions, specifically comprises:
s41: the method comprises the steps of sequentially carrying out similarity measurement on global features of images acquired in real time and global features of all images in a constructed global feature library, wherein the calculation process is as follows:
Figure FDA0004102614920000031
where i represents the i-th image acquired in real time,
Figure FDA0004102614920000032
representing the global features of image i->
Figure FDA0004102614920000033
Representing the global feature of the jth image in the global feature library, D (i, j) representing the global feature +.>
Figure FDA0004102614920000034
Global feature +.>
Figure FDA0004102614920000035
Similarity of (2); I.I 2 Represents the L2 paradigm;
s42: judging global features of image i
Figure FDA0004102614920000039
Global feature +.>
Figure FDA0004102614920000036
Whether the similarity of (2) is smaller than a first set threshold; if not, measuring the speed of the train in real time by utilizing a millimeter wave radar sensor, integrating the measured speed, and predicting the position of the train in the two key position intervals; if yes, go to step S43 and step S44;
s43: local features w for image i i And sequentially carrying out similarity measurement on the local features of all the images in the constructed local feature library, wherein the calculation process is as follows:
Figure FDA0004102614920000037
wherein ,s(wi -w j ) Representing local features w of an image i i Local feature w of jth image in local feature library j Similarity, w j Representing the local features of the j-th image in the local feature library,
Figure FDA0004102614920000038
representing local features corresponding to the kth line in image i,/->
Figure FDA0004102614920000041
Representing local features corresponding to the kth straight line in the jth image in the local feature library, and N represents the number of straight lines in the image i;
s44: judging the local feature w of the image i i Local feature w of jth image in local feature library j Whether the similarity of (2) is smaller than a second set threshold; if yes, calibrating the train position; if not, the millimeter wave radar sensor is utilized to measure the speed of the train in real time, the measured speed is integrated, and the position of the train in the two key position intervals is predicted.
4. The train positioning method based on vision and millimeter wave radar fusion according to claim 1, wherein measuring the train speed in real time with the millimeter wave radar sensor in step S4, integrating the measured speed and predicting the train position within the interval between two key positions specifically comprises:

acquiring real-time data of the forward obstacles detected by the millimeter wave radar sensor:

$$R = \{ (x_p, y_p, v_p) \mid p = 1, 2, \ldots, n \} \tag{5}$$

where $x_p$ denotes the lateral distance of the p-th forward obstacle, $y_p$ its longitudinal distance, $v_p$ its speed relative to the train, and n the number of forward obstacles;

extracting the real-time speed of the train: counting the speeds $v_1, v_2, \ldots, v_m$ of all forward obstacles relative to the train, tallying the number of forward obstacles at each speed, and taking the speed with the largest count as the speed of stationary obstacles relative to the train:

$$\operatorname{num}(v_s) = \max\{\operatorname{num}(v_1), \operatorname{num}(v_2), \ldots, \operatorname{num}(v_m)\} \tag{6}$$

where $v_s$ denotes the speed of stationary obstacles relative to the train, $\operatorname{num}(v_q)$ denotes the number of forward obstacles at speed $v_q$, m denotes the number of distinct speed values contained in the current frame of millimeter wave radar data, and m ≤ n; the train speed is $-v_s$;

integrating the train speed to realize train position estimation:

$$p_t = p_{r-1} + \int_{t_{r-1}}^{t} v(\tau)\, d\tau, \quad r = 1, 2, \ldots, l \tag{7}$$

where $p_t$ denotes the train position at time t, $p_{r-1}$ denotes the last key position passed before time t, l denotes the total number of key positions, v(t) denotes the real-time train speed, and the integral term is the speed integral from the last key position to time t.
CN202010517233.4A (priority and filing date 2020-06-09) - Train positioning method based on vision and millimeter wave radar fusion - Active - CN111856441B

Priority Applications (1)

CN202010517233.4A: Train positioning method based on vision and millimeter wave radar fusion (CN111856441B)

Applications Claiming Priority (1)

CN202010517233.4A: Train positioning method based on vision and millimeter wave radar fusion (CN111856441B)

Publications (2)

Publication Number - Publication Date
CN111856441A - 2020-10-30
CN111856441B - 2023-04-25

Family

ID=72987309

Family Applications (1)

CN202010517233.4A (Active): Train positioning method based on vision and millimeter wave radar fusion (CN111856441B)

Country Status (1)

CN: CN111856441B

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528763A (en) * 2020-11-24 2021-03-19 浙江大华汽车技术有限公司 Target detection method, electronic device and computer storage medium
CN113189583B (en) * 2021-04-26 2022-07-01 天津大学 Time-space synchronization millimeter wave radar and visual information fusion method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107545239A (en) * 2017-07-06 2018-01-05 南京理工大学 A kind of deck detection method matched based on Car license recognition with vehicle characteristics
CN108983219A (en) * 2018-08-17 2018-12-11 北京航空航天大学 A kind of image information of traffic scene and the fusion method and system of radar information
CN109031304A (en) * 2018-06-06 2018-12-18 上海国际汽车城(集团)有限公司 Vehicle positioning method in view-based access control model and the tunnel of millimetre-wave radar map feature
CN109947097A (en) * 2019-03-06 2019-06-28 东南大学 A kind of the robot localization method and navigation application of view-based access control model and laser fusion
CN110135485A (en) * 2019-05-05 2019-08-16 浙江大学 The object identification and localization method and system that monocular camera is merged with millimetre-wave radar
CN110398731A (en) * 2019-07-11 2019-11-01 北京埃福瑞科技有限公司 Train speed's measuring system and method
CN110415297A (en) * 2019-07-12 2019-11-05 北京三快在线科技有限公司 Localization method, device and unmanned equipment
CN110587597A (en) * 2019-08-01 2019-12-20 深圳市银星智能科技股份有限公司 SLAM closed loop detection method and detection system based on laser radar

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018080609A2 (en) * 2016-07-29 2018-05-03 Remote Sensing Solutions, Inc. Mobile radar for visualizing topography


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party

Vladimir Lekic et al., "Automotive radar and camera fusion using Generative Adversarial Networks", Computer Vision and Image Understanding, vol. 184, July 2019, pp. 1-8. *
Zhao Xiang et al., "Lane-level positioning method based on vision and millimeter wave radar", Journal of Shanghai Jiao Tong University, January 2018, pp. 33-38. *

Also Published As

Publication number - Publication date
CN111856441A - 2020-10-30


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant