CN117291964A - Distance measurement method and device based on adjacent rail train characteristics - Google Patents
Distance measurement method and device based on adjacent rail train characteristics
- Publication number: CN117291964A
- Application number: CN202311100544.0A
- Authority: CN
- Country: China
- Prior art keywords: feature, train, target, adjacent rail, vehicle
- Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Classifications
- G06T7/50 — Image analysis; depth or shape recovery
- G01S17/08 — Lidar systems determining position data of a target, for measuring distance only
- G01S17/931 — Lidar systems specially adapted for anti-collision purposes of land vehicles
- G06T7/70 — Determining position or orientation of objects or cameras
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
- G06V10/44 — Local feature extraction (edges, contours, corners); connectivity analysis
- G06T2207/20081 — Indexing scheme for image analysis: training; learning
- Y02T10/40 — Engine management systems (climate change mitigation technologies in transportation)
Abstract
The invention provides a distance measurement method and device based on adjacent rail train characteristics, belonging to the field of rail transit. The method comprises: obtaining target features of the adjacent rail train; determining the depth value from each feature corner point to the own vehicle based on the internal reference matrix of the vehicle-mounted shooting device, the coordinates of each feature corner point of the target feature in the pixel reference coordinate system of the adjacent rail train image, and the geometric constraint conditions of the target feature; projecting the side image into a non-perspective image by inverse perspective transformation; and determining the depth value from the target position where the adjacent rail train occupies the switch to the own vehicle, based on the feet of perpendiculars of different feature corner points on the lower-edge straight line. The distance measurement method and device based on adjacent rail train characteristics provided by the embodiments of the invention require no change to the existing active train collision-avoidance system, can serve as a heterogeneous supplement to lidar scanning ranging, and provide more accurate depth values for the collision-avoidance control of the train operation control system.
Description
Technical Field
The invention relates to the technical field of rail transit, in particular to a distance measuring method and device based on adjacent rail train characteristics.
Background
As driver-assistance and driverless systems are gradually rolled out in the rail transit industry, clearance detection of the track area ahead of a train is a basic prerequisite for the safe operation of rail transit trains.
An adjacent rail train present in the switch (turnout) junction area ahead of the train is one of the most common and most dangerous scenarios in automatic train operation. Protecting against this scenario requires the train to comprehensively and accurately identify the body of the adjacent rail train, measure the distance to the point where the adjacent rail train occupies the switch, and provide this information, on the basis of a safe distance, to the train operation control system.
The existing active train collision avoidance system adopts a mode of high-precision positioning and laser radar scanning to realize the detection of adjacent rail trains in front turnout junction areas: firstly, acquiring train forward scene information and train motion information through train-mounted sensors such as a laser radar, a camera and an IMU, and matching with an existing line map to realize positioning; and then, scanning and ranging key nodes (such as turnout junction areas) of a forward area of the train through a laser radar to judge whether an adjacent rail train exists or not and determine the distance from the adjacent rail train to the train.
The prior art performs detection solely by combining high-precision positioning with radar scanning; the ranging capability of the camera sensor is not fully exploited, and high-precision positioning and radar scanning alone cannot guarantee the accuracy and reliability of ranging.
Disclosure of Invention
The invention provides a distance measurement method, a distance measurement device and electronic equipment based on adjacent rail train characteristics, to overcome the prior-art shortcoming that ranging based on radar scanning alone is insufficiently accurate.
In a first aspect, the invention provides a distance measurement method based on adjacent rail train characteristics, which comprises the following steps:
performing feature recognition on the side images of the adjacent rail trains acquired by the own-vehicle shooting device to acquire target features of the adjacent rail trains; the target feature is a feature of known dimensions;
determining depth values from each feature corner point to the vehicle based on an internal reference matrix of the vehicle-mounted shooting device, coordinates of each feature corner point of the target feature in a pixel reference coordinate system in the adjacent rail train image and geometric constraint conditions of the target feature;
projecting the side image into a non-perspective image based on inverse perspective transformation, and determining a lower edge straight line of the adjacent rail train in the non-perspective image;
determining the depth value from the target position where the adjacent rail train occupies the switch to the own vehicle, based on the feet of perpendiculars of different feature corner points on the lower-edge straight line and the depth values from those feature corner points to the own vehicle; wherein the target position is located on the lower-edge straight line.
in one embodiment, the determining the depth value from each feature corner to the host vehicle based on the internal reference matrix of the vehicle-mounted photographing device, the coordinates of each feature corner of the target feature in the pixel reference coordinate system in the adjacent rail train image, and the geometric constraint condition of the target feature includes:
transforming coordinates of each feature corner of the target feature in a pixel reference coordinate system in the adjacent rail train image into a three-dimensional space coordinate expression in the reference coordinate system of the vehicle-mounted shooting device based on the internal reference matrix of the vehicle-mounted shooting device;
constructing a target equation set based on the geometric constraint condition of the target feature and each three-dimensional space coordinate expression;
solving the target equation set to obtain depth values from each characteristic corner point to the vehicle;
in one embodiment, the determining the depth value from the target position of the adjacent rail train occupying the switch to the own vehicle includes:
determining a linear relation between pixel coordinates and depth values, based on the pixel coordinates, in the non-perspective image, of the feet of perpendiculars of different feature corner points on the lower-edge straight line and the depth values from those feature corner points to the own vehicle;
determining a depth value from the target position to the vehicle by utilizing a fixed ratio point algorithm based on the pixel coordinates of the target position and the linear relation;
in one embodiment, the feature recognition is performed on the side image of the adjacent rail train acquired by the own vehicle shooting device to obtain the target feature of the adjacent rail train, including:
inputting the side image of the adjacent rail train into a feature recognition model to obtain target features of the adjacent rail train and the confidence coefficient thereof, which are output by the feature recognition model;
taking the target features with the confidence coefficient higher than a confidence coefficient threshold value as effective target features;
in one embodiment, the feature recognition model is a YOLO network model or an ERFNet network model;
in one embodiment, the camera is a monocular vision camera;
in one embodiment, the target feature comprises at least one of a train head, a train head front window, train double headlamps, a train passenger-compartment window, a train passenger-compartment door, a train wheel, and a train track.
In a second aspect, the present invention further provides a ranging device based on characteristics of an adjacent rail train, including:
the feature recognition module is used for carrying out feature recognition on the side images of the adjacent rail train, which are acquired by the own vehicle shooting device, so as to obtain target features of the adjacent rail train; the target feature is a feature of known dimensions;
the depth determining module is used for determining depth values from each characteristic corner point to the vehicle based on an internal reference matrix of the vehicle-mounted shooting device, coordinates of each characteristic corner point of the target characteristic in a pixel reference coordinate system in the adjacent rail train image and geometric constraint conditions of the target characteristic;
the image projection module is used for projecting the side image into a non-perspective image based on inverse perspective transformation and determining the lower straight line of the adjacent rail train in the non-perspective image;
the turnout determining module is used for determining the depth value from the target position where the adjacent rail train occupies the switch to the own vehicle, based on the feet of perpendiculars of different feature corner points on the lower-edge straight line and the depth values from those feature corner points to the own vehicle; wherein the target position is located on the lower-edge straight line.
In a third aspect, the present invention also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the steps of the distance measurement method based on adjacent rail train characteristics according to any one of the first aspect.
In a fourth aspect, the present invention also provides a processor readable storage medium having stored thereon a computer program for causing a processor to carry out the steps of the method for ranging based on characteristics of an adjacent rail train as described in any one of the first aspects above when executed.
According to the distance measurement method and device based on adjacent rail train features provided by the invention, the depth value of each feature corner point is determined from the internal reference matrix of the vehicle-mounted shooting device, the coordinates of each feature corner point of the target feature in the pixel reference coordinate system, and the geometric constraint conditions of the target feature, and the depth value of the target position is determined by projecting the side image into a non-perspective image. The existing active train collision-avoidance system therefore need not be changed, the method and device can serve as a heterogeneous supplement to lidar scanning ranging, and a more accurate depth value is provided for the collision-avoidance control of the train operation control system.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a ranging method based on characteristics of a neighboring rail train according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of feature recognition results provided by an embodiment of the present invention;
FIG. 3 is a perspective projection schematic view of a target feature provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of an inverse perspective transformation provided by an embodiment of the present invention;
fig. 5 is a schematic diagram of a ranging device based on characteristics of a neighboring rail train according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an entity structure of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, the ranging method based on the characteristics of the adjacent rail train provided by the embodiment of the invention includes:
step 110, carrying out feature recognition on the side image of the adjacent rail train obtained by the own-vehicle shooting device to obtain target features of the adjacent rail train; the target features are features of known dimensions;
step 120, determining depth values from each feature corner point to the vehicle based on an internal reference matrix of the vehicle-mounted shooting device, coordinates of each feature corner point of the target feature in a pixel reference coordinate system in the adjacent rail train image and geometric constraint conditions of the target feature;
step 130, projecting the side image into a non-perspective image based on inverse perspective transformation, and determining the lower edge straight line of the adjacent rail train in the non-perspective image;
step 140, determining the depth value from the target position where the adjacent rail train occupies the switch to the own vehicle, based on the feet of perpendiculars of different feature corner points on the lower-edge straight line and the depth values from those feature corner points to the own vehicle; wherein the target position is located on the lower-edge straight line.
In step 110, feature recognition is performed on the side image of the adjacent rail train acquired by the own-vehicle shooting device; the depth value from the target position to the own vehicle is then determined from the target features of the adjacent rail train, and this determination can be processed by the on-board CPU computing device of the own vehicle.
After the vehicle-mounted shooting device captures the side image of the adjacent rail train, the image is fed as a data stream into the on-board GPU computing equipment of the train, where the feature recognition network identifies the image data to obtain the target features of the adjacent rail train; the target features are then sent to the CPU computing device for calculation.
In step 120, the CPU computing device determines a depth value from the feature corner to the host vehicle according to the internal reference matrix of the vehicle-mounted photographing device, the coordinates of the corner of the target feature in the pixel reference coordinate system, and the geometric constraint condition of the target feature.
In step 130, the CPU computing device projects the side image of the adjacent rail train into a non-perspective image through inverse perspective transformation, and determines the lower-edge straight line of the adjacent rail train in the non-perspective image.
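A plane-to-plane homography underlies the inverse perspective transformation used in step 130. As a minimal, library-free sketch of the idea (in practice OpenCV's getPerspectiveTransform/warpPerspective would typically be used), the mapping can be estimated from four ground-plane point correspondences by the direct linear transformation (DLT); all point values below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ [x, y, 1]^T
    from four (or more) point correspondences via the DLT algorithm."""
    A = []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, X * x, X * y, X])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, Y * x, Y * y, Y])
    # The homography is the right null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply homography H to a 2D point, with homogeneous normalization."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With the four correspondences chosen as image points of known ground-plane positions, `warp_point` maps any pixel on the track plane into the non-perspective (bird's-eye) view.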
In step 140, the depth value from the target position where the adjacent rail train occupies the switch to the own vehicle is calculated from the feet of perpendiculars of the different feature corner points on the lower-edge straight line and the depth values from those feature corner points to the own vehicle.
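In the non-perspective image, depth varies linearly along the lower-edge straight line, so the depth at the target position follows from the fixed-ratio (section formula) point computation between two perpendicular feet with known depths. A minimal sketch; the pixel coordinates and depths in the usage note are illustrative:

```python
def depth_at_target(u_target, u1, z1, u2, z2):
    """Interpolate the depth at pixel coordinate u_target on the lower-edge
    line, given two perpendicular-foot pixels u1 and u2 whose feature corner
    depths z1 and z2 are known. Valid because depth is linear in pixel
    position along the line after inverse perspective projection."""
    t = (u_target - u1) / (u2 - u1)  # fixed-ratio parameter along the line
    return z1 + t * (z2 - z1)
```

For example, with perpendicular feet at pixels 100 and 200 whose corners lie 8 m and 10 m from the own vehicle, a target position at pixel 150 would be 9 m away; the formula also extrapolates beyond the two feet.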
The vehicle-mounted shooting device can be arranged at a position with wide visual field at the front end of the train and convenient maintenance. The CPU computing equipment can acquire the pre-calibrated internal parameters of the vehicle-mounted shooting device and the position and the angle of the vehicle-mounted shooting device relative to the train in advance, so that the CPU computing equipment can determine the depth value from the characteristic corner point to the train.
The scale of the target feature may be target size information, angle information of the target feature shape, or the like. For example, the information of the width, height and the like of the passenger room of the train can be acquired by using tools such as a tape measure, a laser range finder, a level meter and the like.
It should be noted that, the geometric constraint condition of the target feature may be parallel, perpendicular, or the like.
It should be noted that the feature corner points may be points having obvious geometric relationships, such as four vertices of a quadrilateral, and specifically, in the present invention, four vertices of a rectangular passenger compartment window may be used.
The lower-edge straight line of the adjacent rail train is the straight line along the lower edge of the adjacent rail train as captured by the own-vehicle shooting device; this line and the target position where the adjacent rail train occupies the switch lie on the same horizontal straight line.
According to the distance measurement method based on adjacent rail train features provided by the embodiment of the invention, the depth value of each feature corner point is determined from the internal reference matrix of the vehicle-mounted shooting device, the coordinates of each feature corner point of the target feature in the pixel reference coordinate system, and the geometric constraint conditions of the target feature, and the depth value of the target position is determined by projecting the side image into a non-perspective image. The existing active train collision-avoidance system therefore need not be changed, the method can serve as a heterogeneous supplement to lidar scanning ranging, and a more accurate depth value is provided for the collision-avoidance control of the train operation control system.
In one embodiment, the target feature comprises at least one of a train head, a train head front window, train double headlamps, a train passenger-compartment window, a train passenger-compartment door, a train wheel, and a train track.
It will be appreciated that, when the dimensions of the target features are collected, the quantities measured may include, for example, the width and height of the train head, the width and height of the train head front window, the spacing of the train double headlamps, the width and height of the train passenger-compartment door, the train wheel diameter, and the track gauge.
According to the distance measurement method based on adjacent rail train features provided by the embodiment of the invention, at least one of the train head, the train head front window and the like is determined as the target feature of the adjacent rail train. The dimensions of such features are convenient to collect, the collected scale information is more accurate than that of irregularly shaped features, and this helps improve the accuracy of subsequent target feature recognition.
In one embodiment, the camera is a monocular vision camera.
It can be understood that ranging the adjacent rail train with a monocular vision shooting device provides a heterogeneous supplement to the lidar scanning ranging result, so that the depth value obtained by the train operation control system for the target position where the adjacent rail train occupies the switch is more accurate, which in turn benefits the collision-avoidance control of the train operation control system.
Fig. 2 is a schematic diagram of a feature recognition result provided by an embodiment of the present invention. As shown in fig. 2, the distance measurement method based on the characteristics of the adjacent rail train provided by the embodiment of the invention performs characteristic recognition on the side image of the adjacent rail train obtained by the own-vehicle shooting device to obtain the target characteristics of the adjacent rail train, and includes:
inputting the side image of the adjacent rail train into the feature recognition model to obtain the target feature of the adjacent rail train and the confidence coefficient thereof, which are output by the feature recognition model;
and taking the target characteristics with the confidence higher than the confidence threshold as effective target characteristics.
After the feature recognition network identifies the side image of the adjacent rail train, it outputs a confidence for each target feature. The CPU computing device compares each confidence with a preset confidence threshold and excludes target features whose confidence is below the threshold from the subsequent depth value calculation.
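The thresholding described above amounts to a simple filter over the recognition network's outputs. A minimal sketch; the detection record structure and the 0.5 threshold are illustrative assumptions:

```python
def filter_detections(detections, threshold=0.5):
    """Keep only target features whose confidence meets the threshold;
    the rest are excluded from the depth value calculation."""
    return [d for d in detections if d["confidence"] >= threshold]

# Illustrative recognition outputs for a side image of an adjacent rail train.
detections = [
    {"label": "passenger_window", "confidence": 0.91},
    {"label": "train_head", "confidence": 0.32},
]
valid = filter_detections(detections)  # only the window survives
```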
According to the distance measurement method based on the adjacent rail train features, the feature recognition network is used for recognizing and outputting the target feature confidence coefficient, and the target feature with the target feature confidence coefficient higher than the confidence coefficient threshold value participates in subsequent depth value calculation, so that accuracy of the subsequent depth value calculation is ensured.
In one embodiment, the feature recognition model is a YOLO network model or an ERFNet network model.
It will be appreciated that the features to be recognized vary with the usage scenario. For example, when the own-vehicle shooting device frequently captures images of the train head, the feature recognition network needs to recognize the train head and the train head front window; in that case, a large number of annotated images of such target features (train head, train head front window, etc.) can be fed into the feature recognition network for training, improving the accuracy of recognizing target features in the head area.
It should be noted that the YOLO network model or the ERFNet network model is deployed in the GPU computing device of the present train. The YOLO network model and the ERFNet network model need to be trained before use, for example, the acquired images of the target features are marked, and the marking types can be rectangular frame marking, semantic segmentation marking and the like. Inputting a large number of marked target features into a YOLO network model or an ERFNet network model for training to obtain a feature recognition network.
It will be appreciated that the choice between the YOLO network model and the ERFNet network model may be made according to the hardware conditions, training duration, and the like of the present train. For example, the selection can be made according to hardware conditions: a train with good hardware and a fast GPU computing device may use the YOLO network model, and the ERFNet network model may be selected otherwise. The selection may also be made according to the training duration: when a longer training period is acceptable, the ERFNet network model may be selected, and the YOLO network model otherwise. The invention is not limited herein.
According to the distance measurement method based on the adjacent rail train characteristics, provided by the embodiment of the invention, the YOLO network model or the ERFNet network model is selected to be used according to different train hardware conditions and training time of different use scenes, so that the target characteristic detection model can be suitable for various train types.
In one embodiment, determining a depth value from each feature corner to the host vehicle based on an internal reference matrix of the vehicle-mounted photographing device, coordinates of each feature corner of the target feature in a pixel reference coordinate system in the adjacent rail train image, and geometric constraint conditions of the target feature includes:
transforming coordinates of each characteristic corner point of the target characteristic in a pixel reference coordinate system in the adjacent rail train image into a three-dimensional space coordinate expression in the reference coordinate system of the vehicle-mounted shooting device based on an internal reference matrix of the vehicle-mounted shooting device;
constructing a target equation set based on the geometric constraint condition of the target feature and each three-dimensional space coordinate expression;
and solving the target equation set to obtain the depth value from each characteristic corner point to the vehicle.
After the CPU computing device obtains the target features of the adjacent rail train, a pixel reference coordinate system is determined from the adjacent rail train image, and the two-dimensional coordinates of each corner point of the target feature are determined in that reference system. The internal reference matrix of the vehicle-mounted shooting device is determined from its calibrated intrinsic parameters, the reference coordinate system of the shooting device is constructed, and the two-dimensional coordinates of each corner point are converted, using the internal reference matrix, into three-dimensional space coordinate expressions in the reference coordinate system of the shooting device. The CPU computing device then constructs a target equation set from the preset geometric constraint conditions of the target feature combined with the three-dimensional space coordinate expressions, and solves the equation set to obtain the depth value from each feature corner point to the own vehicle.
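The conversion from pixel coordinates to a camera-frame expression parameterized by depth uses only the intrinsic (internal reference) matrix of the pinhole model. A minimal sketch; the focal lengths and principal point in K are placeholder values, not calibration data from the patent:

```python
import numpy as np

# Placeholder internal reference matrix of the vehicle-mounted shooting device.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])

def backproject(u, v, Z, K):
    """Camera-frame coordinates [X, Y, Z] of pixel (u, v) at depth Z:
    X = Z*(u - cx)/fx, Y = Z*(v - cy)/fy."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([Z * (u - cx) / fx, Z * (v - cy) / fy, Z])
```

Each corner point of the target feature thus yields a 3D coordinate expression that is linear in its unknown depth Z, which is what the geometric constraints are imposed on.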
Fig. 3 is a perspective projection schematic view of a target feature according to an embodiment of the present invention; here the target feature is a passenger compartment window. As shown in fig. 3, points A', B', C', D' are the four corner points of the target feature, and according to the pixel reference system, the two-dimensional coordinates of the four points A', B', C', D' are determined as follows:
[u_i, v_i]^T (i ∈ {A, B, C, D})
According to the internal reference matrix of the vehicle-mounted shooting device, the two-dimensional coordinates of each corner point are converted into three-dimensional space coordinate expressions in the reference coordinate system of the vehicle-mounted shooting device:
[X_i, Y_i, Z_i]^T (i ∈ {A, B, C, D}),
which are further rewritten as expressions with the depth values Z_i (i ∈ {A, B, C, D}) as the variables:
According to the geometric constraint conditions of the target feature (the known height H, the known width L, and the perpendicularity of adjacent sides), the geometric constraint expressions are obtained:
constructing a target equation set according to the geometric constraint expression:
Specifically, equation set (4) can be expanded as:
Solving the target equation set yields the depth values Z_i (i ∈ {A, B, C, D}) from each feature corner point to the host vehicle.
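The construction and solution of the target equation set can be sketched numerically as follows (a hedged NumPy sketch under the rectangle constraints stated above: side lengths equal to the known width L and height H, with adjacent sides perpendicular; the function name and the sample values in the usage example are illustrative, not the patent's implementation):

```python
import numpy as np

def solve_corner_depths(K, pixels, L, H, z0=50.0, iters=50):
    """Solve the target equation set for the four corner depths
    Z_A..Z_D of a rectangle of known width L and height H, given the
    corners' pixel coordinates (ordered A, B, C, D around the
    rectangle) and the internal reference matrix K."""
    K_inv = np.linalg.inv(K)
    rays = np.array([K_inv @ np.array([u, v, 1.0]) for u, v in pixels])

    def residuals(Z):
        P = Z[:, None] * rays              # P_i = Z_i * K^-1 [u_i, v_i, 1]^T
        return np.array([
            np.linalg.norm(P[0] - P[1]) - L,   # |AB| = L
            np.linalg.norm(P[2] - P[3]) - L,   # |CD| = L
            np.linalg.norm(P[1] - P[2]) - H,   # |BC| = H
            np.linalg.norm(P[3] - P[0]) - H,   # |DA| = H
            np.dot(P[0] - P[1], P[2] - P[1]),  # AB perpendicular to BC
        ])

    Z = np.full(4, z0)
    for _ in range(iters):                 # Gauss-Newton iterations
        r = residuals(Z)
        J = np.zeros((r.size, 4))          # forward-difference Jacobian
        for j in range(4):
            dZ = Z.copy()
            dZ[j] += 1e-6
            J[:, j] = (residuals(dZ) - r) / 1e-6
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        Z = Z + step
        if np.linalg.norm(step) < 1e-10:
            break
    return Z
```

For example, with hypothetical intrinsics fx = fy = 1000, cx = 640, cy = 360 and a fronto-parallel 2 m x 1 m window whose corners project to (630, 355), (650, 355), (650, 365), (630, 365), the solver recovers a depth of about 100 m for every corner.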
According to the distance measurement method based on adjacent rail train features provided by the embodiment of the present invention, the depth value from each feature corner point to the host vehicle is obtained by jointly solving the equations formed from the internal reference matrix of the vehicle-mounted shooting device, the coordinates of each feature corner point in the pixel reference coordinate system, and the preset geometric constraint conditions of the target feature. The depth values of the feature corner points of the adjacent rail train can therefore be determined without changing the existing active train anti-collision system, the depth measurement flow is simplified, the train operation control system obtains more accurate depth results, and train collision avoidance is better controlled.
In one embodiment, determining the depth value from the target position of the adjacent rail train occupying the switch to the host vehicle comprises:
determining a linear relation between pixel coordinates and depth values based on pixel coordinates of vertical points of different feature angular points on a lower edge straight line in a non-perspective image and depth values of the different feature angular points to the vehicle;
and determining the depth value from the target position to the vehicle by using a fixed ratio point dividing algorithm based on the pixel coordinates of the target position and the linear relation.
Fig. 4 is a schematic diagram of an inverse perspective transformation provided by an embodiment of the present invention. As shown in fig. 4, A and B are target feature corner points and C is the target position; the line segment A'C is the lower edge straight line of the adjacent rail train; the points A', B' are the perpendicular feet of the target feature corner points A and B on that straight line; and the target position C, where the adjacent rail train occupies the switch, also lies on the lower edge straight line.
It will be appreciated that the perpendicular feet A' and B' have the same depth values as the target feature corner points A and B, respectively.
The pixel coordinates of the points A', B' and C are determined by constructing a pixel coordinate system in the non-perspective image, and the pixel distances between them are determined from these coordinates. Because the relationship between pixel distance and depth value is linear, the depth value of point C is calculated using the fixed ratio point dividing algorithm. For example, in fig. 4, the pixel coordinate of point A' is (0, 176) and its depth value is 88 m; the pixel coordinate of point B' is (0, 202) and its depth value is 101 m; the pixel coordinate of point C is (0, 278); and from the linear relationship between the pixel coordinates and depth values of A' and B', the depth value of point C is 139 m.
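The fixed ratio point dividing step reduces to linear interpolation along the lower edge straight line; a minimal sketch (the function name is illustrative) reproduces the fig. 4 numbers:

```python
def depth_by_fixed_ratio(v_a, z_a, v_b, z_b, v_c):
    """Depth along the lower edge straight line is linear in the pixel
    coordinate, so the depth at C follows from the fixed-ratio
    division (section) formula applied to the feet A' and B'."""
    t = (v_c - v_a) / (v_b - v_a)      # ratio in which C divides A'B'
    return z_a + t * (z_b - z_a)

# Fig. 4 example: A' at pixel 176 (88 m), B' at 202 (101 m), C at 278.
depth_c = depth_by_fixed_ratio(176, 88.0, 202, 101.0, 278)  # about 139 m
```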
According to the distance measurement method based on adjacent rail train features provided by the embodiment of the present invention, the linear relationship between the pixel coordinate of a point on the lower edge straight line of the adjacent rail train and its depth value is determined from the depths of the target feature corner points and the pixel coordinates of their perpendicular feet on that line in the non-perspective image, and the depth value of the target position where the adjacent rail train occupies the switch is then determined. The depth value of the target position can thus be obtained through an image transformation and simple calculation, which greatly improves the efficiency of detecting the depth of the target position.
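The inverse perspective transformation that produces the non-perspective image can be modeled as a planar homography of the ground plane. The following is a hedged sketch estimating such a homography from four point correspondences by the direct linear transform (all coordinates and function names are hypothetical; OpenCV's getPerspectiveTransform provides an equivalent routine):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    (four or more correspondences) by the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp_point(Hm, pt):
    """Apply homography Hm to a 2-D point (homogeneous normalization)."""
    p = Hm @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Warping every pixel of the side image with such a homography yields the non-perspective image, in which the lower edge of the adjacent rail train appears as a straight line whose depth varies linearly along it.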
It should be noted that the distance measurement method based on the characteristics of the adjacent rail train can also be applied to the depth value estimation of other common targets, such as pedestrians, vehicles at level crossings, and the like.
It should be noted that the distance measurement method based on the characteristics of the adjacent rail train can also be applied to roadside monitoring equipment.
On the other hand, fig. 5 is a schematic diagram of a ranging device based on characteristics of an adjacent rail train according to an embodiment of the present invention, where, as shown in fig. 5, the ranging device based on characteristics of an adjacent rail train includes:
the feature recognition module 510 is configured to perform feature recognition on the side image of the adjacent rail train acquired by the own-vehicle shooting device, so as to obtain a target feature of the adjacent rail train; the target features are features of known dimensions;
the depth determining module 520 is configured to determine a depth value from each feature corner to the host vehicle based on an internal reference matrix of the vehicle-mounted photographing device, coordinates of each feature corner of the target feature in a pixel reference coordinate system in the adjacent rail train image, and geometric constraint conditions of the target feature;
an image projection module 530, configured to project the side image into a non-perspective image based on inverse perspective transformation, and determine a lower edge straight line of the adjacent rail train in the non-perspective image;
the turnout determining module 540 is configured to determine a depth value from a target position of the adjacent rail train occupying the turnout to the host vehicle based on a vertical point of the different feature corner points on the lower edge straight line and a depth value from the different feature corner points to the host vehicle; wherein the target position is located on the lower straight line.
In one embodiment, the feature recognition module 510 is specifically configured to:
inputting the side image of the adjacent rail train into the feature recognition model to obtain the target feature of the adjacent rail train and the confidence coefficient thereof, which are output by the feature recognition model;
and taking the target characteristics with the confidence higher than the confidence threshold as effective target characteristics.
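The confidence screening performed by the feature recognition module 510 amounts to thresholding the recognition model's detections; a minimal sketch (the dictionary keys and threshold are hypothetical):

```python
def valid_target_features(detections, conf_threshold=0.5):
    """Keep only target features whose confidence exceeds the
    confidence threshold; the rest are discarded as unreliable."""
    return [d for d in detections if d["confidence"] > conf_threshold]

detections = [
    {"feature": "carriage_window", "confidence": 0.92},
    {"feature": "train_wheel", "confidence": 0.31},
]
valid = valid_target_features(detections)  # keeps only the window
```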
In one embodiment, the depth determination module 520 is specifically configured to:
transforming coordinates of each characteristic corner point of the target characteristic in a pixel reference coordinate system in the adjacent rail train image into a three-dimensional space coordinate expression in the reference coordinate system of the vehicle-mounted shooting device based on an internal reference matrix of the vehicle-mounted shooting device;
constructing a target equation set based on the geometric constraint condition of the target feature and each three-dimensional space coordinate expression;
and solving the target equation set to obtain the depth value from each characteristic corner point to the vehicle.
In one embodiment, the switch determination module 540 is specifically configured to:
determining a linear relation between pixel coordinates and depth values based on pixel coordinates of vertical points of different feature angular points on a lower edge straight line in a non-perspective image and depth values of the different feature angular points to the vehicle;
and determining the depth value from the target position to the vehicle by using a fixed ratio point dividing algorithm based on the pixel coordinates of the target position and the linear relation.
In one embodiment, the feature recognition model is a YOLO network model or an ERFNet network model.
In one embodiment, the camera is a monocular vision camera.
In one embodiment, the target feature comprises at least one of a train head, a train head front window, a train double head lamp, a train room window, a train room door, a train wheel, a train track.
The distance measuring device based on the adjacent rail train features and the distance measuring method based on the adjacent rail train features, which are described above, can be correspondingly referred to each other, and the same technical effects can be achieved, and are not described in detail herein.
According to the distance measuring device based on adjacent rail train features, the depth value of each feature corner point is determined by jointly solving the equations formed from the internal reference matrix of the vehicle-mounted shooting device, the coordinates of each feature corner point of the target feature in the pixel reference system, and the geometric constraint conditions of the target feature, and the depth value of the target position is determined by projecting the side image into a non-perspective image. The existing active train anti-collision system therefore does not need to be changed, the device can serve as a heterogeneous supplement to laser radar scanning distance measurement, and the train operation control system is provided with more accurate depth values for train anti-collision control.
Fig. 6 is a schematic physical structure of an electronic device according to an embodiment of the present invention, as shown in fig. 6, the electronic device may include: processor 610, communication interface (Communications Interface) 620, memory 630, and communication bus 640, wherein processor 610, communication interface 620, and memory 630 communicate with each other via communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform a ranging method based on characteristics of the neighboring rail train, the method comprising:
performing feature recognition on the side images of the adjacent rail trains acquired by the own-vehicle shooting device to acquire target features of the adjacent rail trains; the target features are features of known dimensions;
determining depth values from each feature corner to the vehicle based on an internal reference matrix of the vehicle-mounted shooting device, coordinates of each feature corner of the target feature in a pixel reference coordinate system in the adjacent rail train image and geometric constraint conditions of the target feature;
projecting the side image into a non-perspective image based on inverse perspective transformation, and determining the lower edge straight line of the adjacent rail train in the non-perspective image;
determining the depth value from the target position of the adjacent rail train occupying the switch to the own vehicle based on the vertical points of different characteristic angular points on the lower edge straight line and the depth values from the different characteristic angular points to the own vehicle; wherein the target position is located on the lower straight line.
Further, the logic instructions in the memory 630 may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially, or in the part contributing to the prior art, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In still another aspect, the present invention further provides a processor readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the method for ranging based on characteristics of a neighboring rail train provided by the above methods, the method comprising:
performing feature recognition on the side images of the adjacent rail trains acquired by the own-vehicle shooting device to acquire target features of the adjacent rail trains; the target features are features of known dimensions;
determining depth values from each feature corner to the vehicle based on an internal reference matrix of the vehicle-mounted shooting device, coordinates of each feature corner of the target feature in a pixel reference coordinate system in the adjacent rail train image and geometric constraint conditions of the target feature;
projecting the side image into a non-perspective image based on inverse perspective transformation, and determining the lower edge straight line of the adjacent rail train in the non-perspective image;
determining the depth value from the target position of the adjacent rail train occupying the switch to the own vehicle based on the vertical points of different characteristic angular points on the lower edge straight line and the depth values from the different characteristic angular points to the own vehicle; wherein the target position is located on the lower straight line.
In another aspect, the present application further provides a computer program product, where the computer program product includes a computer program, where the computer program may be stored on a non-transitory computer readable storage medium, and when the computer program is executed by a processor, the computer is capable of executing the ranging method based on the characteristics of the neighboring rail train provided in the foregoing embodiments, where the ranging method includes:
performing feature recognition on the side images of the adjacent rail trains acquired by the own-vehicle shooting device to acquire target features of the adjacent rail trains; the target features are features of known dimensions;
determining depth values from each feature corner to the vehicle based on an internal reference matrix of the vehicle-mounted shooting device, coordinates of each feature corner of the target feature in a pixel reference coordinate system in the adjacent rail train image and geometric constraint conditions of the target feature;
projecting the side image into a non-perspective image based on inverse perspective transformation, and determining the lower edge straight line of the adjacent rail train in the non-perspective image;
determining the depth value from the target position of the adjacent rail train occupying the switch to the own vehicle based on the vertical points of different characteristic angular points on the lower edge straight line and the depth values from the different characteristic angular points to the own vehicle; wherein the target position is located on the lower straight line.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A distance measurement method based on characteristics of a neighboring rail train, comprising:
performing feature recognition on the side images of the adjacent rail trains acquired by the own-vehicle shooting device to acquire target features of the adjacent rail trains; the target feature is a feature of known dimensions;
determining depth values from each feature corner point to the vehicle based on an internal reference matrix of the vehicle-mounted shooting device, coordinates of each feature corner point of the target feature in a pixel reference coordinate system in the adjacent rail train image and geometric constraint conditions of the target feature;
projecting the side image into a non-perspective image based on inverse perspective transformation, and determining a lower edge straight line of the adjacent rail train in the non-perspective image;
determining the depth value from the target position of the adjacent rail train occupying the switch to the own vehicle based on the vertical points of different characteristic angular points on the lower edge straight line and the depth values from the different characteristic angular points to the own vehicle; wherein the target position is located on the lower straight line.
2. The method for ranging based on the neighboring rail train features according to claim 1, wherein determining the depth value from each feature corner to the host vehicle based on the internal reference matrix of the vehicle-mounted photographing device, the coordinates of each feature corner of the target feature in the pixel reference coordinate system in the neighboring rail train image, and the geometric constraint condition of the target feature comprises:
transforming coordinates of each feature corner of the target feature in a pixel reference coordinate system in the adjacent rail train image into a three-dimensional space coordinate expression in the reference coordinate system of the vehicle-mounted shooting device based on the internal reference matrix of the vehicle-mounted shooting device;
constructing a target equation set based on the geometric constraint condition of the target feature and each three-dimensional space coordinate expression;
and solving the target equation set to obtain the depth value from each characteristic corner point to the vehicle.
3. The method for measuring distance based on characteristics of adjacent rail trains according to claim 1, wherein determining a depth value from a target position of an adjacent rail train occupying a switch to a host vehicle comprises:
determining a linear relation between pixel coordinates and depth values based on pixel coordinates of vertical points of different feature angular points on the lower edge straight line in the non-perspective image and the depth values of the different feature angular points to the vehicle;
and determining the depth value from the target position to the host vehicle by using the fixed ratio point dividing algorithm based on the pixel coordinates of the target position and the linear relationship.
4. The method for measuring distance based on the characteristics of the adjacent rail train according to claim 1, wherein the step of performing characteristic recognition on the side image of the adjacent rail train obtained by the own-vehicle photographing device to obtain the target characteristics of the adjacent rail train comprises the following steps:
inputting the side image of the adjacent rail train into a feature recognition model to obtain target features of the adjacent rail train and the confidence coefficient thereof, which are output by the feature recognition model;
and taking the target features with the confidence degrees higher than the confidence degree threshold value as effective target features.
5. The method for ranging based on characteristics of a neighboring rail train according to claim 4, wherein the characteristic recognition model is a YOLO network model or an ERFNet network model.
6. The distance measurement method based on adjacent rail train features according to claim 1, wherein the photographing device is a monocular vision photographing device.
7. The adjacent rail train feature based ranging method of any one of claims 1 to 6, wherein the target feature comprises at least one of a train head, a train head front window, a train double head lamp, a train room window, a train room door, train wheels, a train track.
8. A distance measurement device based on adjacent rail train characteristics, comprising:
the feature recognition module is used for carrying out feature recognition on the side images of the adjacent rail train, which are acquired by the own vehicle shooting device, so as to obtain target features of the adjacent rail train; the target feature is a feature of known dimensions;
the depth determining module is used for determining depth values from each characteristic corner point to the vehicle based on an internal reference matrix of the vehicle-mounted shooting device, coordinates of each characteristic corner point of the target characteristic in a pixel reference coordinate system in the adjacent rail train image and geometric constraint conditions of the target characteristic;
the image projection module is used for projecting the side image into a non-perspective image based on inverse perspective transformation and determining the lower straight line of the adjacent rail train in the non-perspective image;
the turnout determining module is used for determining the depth value from the target position of the adjacent rail train occupying the turnout to the own vehicle based on the perpendicular feet of different feature corner points on the lower edge straight line and the depth values from the different feature corner points to the own vehicle; wherein the target position is located on the lower edge straight line.
9. An electronic device comprising a processor and a memory storing a computer program, characterized in that the processor implements the steps of the neighboring rail train feature-based ranging method of any of claims 1 to 7 when executing the computer program.
10. A processor readable storage medium having stored thereon a computer program for causing a processor to perform the steps of the neighboring rail train feature based ranging method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311100544.0A CN117291964A (en) | 2023-08-29 | 2023-08-29 | Distance measurement method and device based on adjacent rail train characteristics |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311100544.0A CN117291964A (en) | 2023-08-29 | 2023-08-29 | Distance measurement method and device based on adjacent rail train characteristics |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117291964A true CN117291964A (en) | 2023-12-26 |
Family
ID=89250795
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311100544.0A Pending CN117291964A (en) | 2023-08-29 | 2023-08-29 | Distance measurement method and device based on adjacent rail train characteristics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117291964A (en) |
2023-08-29: Application CN202311100544.0A filed in China (CN); legal status: Pending.
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |