CN116402994A - Railway danger monitoring method based on laser radar and video image fusion - Google Patents
- Publication number: CN116402994A
- Application number: CN202310055422.8A
- Authority: CN (China)
- Prior art keywords: point cloud, laser, point, obstacle, calibration
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06V10/26: Segmentation of patterns in the image field; clustering-based techniques; detection of occlusion
- G06T7/60: Analysis of geometric attributes
- G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06V10/20: Image preprocessing
- G06V10/30: Noise filtering
- G06V10/762: Recognition or understanding using pattern recognition or machine learning, using clustering
- G06V10/764: Recognition or understanding using pattern recognition or machine learning, using classification
- G06V10/774: Generating sets of training patterns; bootstrap methods
- G06V10/82: Recognition or understanding using neural networks
- G06T2207/10016: Video; image sequence
- G06T2207/10028: Range image; depth image; 3D point clouds
- G06T2207/20081: Training; learning
- G06T2207/20084: Artificial neural networks [ANN]
- Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change
Abstract
The invention discloses a railway danger monitoring method based on laser and video image fusion, which comprises the following steps: performing joint calibration of the laser radar and the camera; acquiring, at the same moment, a laser single-frame point cloud containing obstacles and camera image data, and preprocessing each respectively; segmenting the obstacles in the single-frame point cloud and in the image data respectively, the segmentation result of the image data being a mask image; projecting the obstacle point cloud data from the laser coordinate system to the image coordinate system, deleting the point clouds outside the mask image, and enhancing the point cloud data inside the mask image; classifying the enhanced obstacle point cloud data, and outputting the classification information together with the enhanced obstacle point cloud. By integrating the laser radar and the video system and using the video image to correct and enhance the laser point cloud data, the invention increases the system's resistance to external interference, can intelligently detect intruders within the clearance limits of a line in various complex and severe operating environments, and provides classified alarm output.
Description
Technical Field
The invention relates to the technical field of rail transit safety, in particular to a railway danger monitoring method based on laser and video image fusion.
Background
Railway perimeter safety monitoring, aimed at the safety of high-speed rail and passenger trains, is an important development focus in the current railway safety field. Perimeter-related safety incidents are sudden and accidental, and the traditional inspection and management mode can no longer meet current railway operation safety requirements. Advanced technical means are therefore urgently needed to discover potential safety hazards in the railway perimeter environment comprehensively and promptly, reduce the workload and difficulty of manual hazard inspection along the line, and provide a strong guarantee for the safe operation of trains.
Perimeter safety monitoring based on non-contact obstacle detection is currently the more advanced approach and relies mainly on two types of sensor: laser sensors and visual sensors, both of which detect the occlusion produced while an object intrudes. The products currently used for railway perimeter intrusion detection fall into three main categories (vibration optical fiber, electronic fence and infrared correlation products), which suffer from high false-alarm rates, inability to judge a target's behavior after intrusion, and inability to also monitor the track line itself. In recent years, laser radar technology and intelligent video technology have each been applied to danger monitoring, but no single technology resolves these existing problems.
Dual-electric-grid systems identify foreign-object intrusion by combining physical protection with grid-break detection, but they can only be applied in scenes such as road-over-railway crossings and bridges, and they suffer from high construction cost, difficult maintenance and possible missed detection of thrown objects. In addition, most such systems have been in service for more than ten years, and some areas can only be maintained by interrupting traffic, so the maintenance cost is extremely high.
There are also devices and applications of fusion monitoring technology combining laser radar and video. For example, patent publication No. CN109164443A discloses a method for detecting foreign objects on a railway line based on laser radar and image analysis: the laser radar monitors whether a moving target exists in the monitoring range, and a high-definition pan-tilt camera then further identifies and processes the moving target. This approach has the following disadvantages:
(1) There is no linkage between the laser radar and the high-definition pan-tilt camera; the conclusion is obtained simply by combining the independent results of the two devices;
(2) Foreign-object recognition relies solely on the real-time images shot by the high-definition pan-tilt camera; as mentioned above, in complex environments (such as rain, snow or fog) and at night, the recognition accuracy is inevitably greatly degraded, and recognition may even fail entirely;
(3) Relying on one-sided foreign-object identification, with no corroborating check, inevitably leads to high false-alarm and missed-alarm rates.
Disclosure of Invention
In order to solve the above technical problems, the invention provides the following technical scheme:
A railway danger monitoring method based on laser and video image fusion comprises the following steps:
S1, performing joint calibration on the laser radar and the camera;
S2, acquiring, at the same moment, a laser single-frame point cloud containing obstacles and camera image data, and preprocessing each respectively;
S3, segmenting the obstacles in the single-frame point cloud and in the image data respectively; wherein the segmentation result of the image data is a mask image;
S4, projecting the obstacle point cloud data from the laser coordinate system to the image coordinate system, deleting the point clouds outside the mask image, and enhancing the point cloud data inside the mask image;
S5, inputting the enhanced obstacle point cloud data into a trained neural network model for classification, and outputting the classification information and the enhanced obstacle point cloud.
In some preferred embodiments, step S1 further comprises:
S101, installing the laser radar and the camera close to each other so that they monitor the same defense area from the closest possible angle; calibrating the camera by using the track slab image;
S102, acquiring laser single-frame point cloud data, selecting at least three points representing a track slab as calibration points, and taking the plane determined by the calibration points as a first reference plane;
S103, fitting within the point cloud cluster of the first reference plane to obtain a calibration reference plane parallel to the first reference plane; if no calibration reference plane parallel to the first reference plane can be obtained from the fitting, judging that the calibration has failed and reselecting the calibration points;
S104, extracting the point cloud cluster of the steel rail and fitting it to obtain a calibration straight line formed by the highest points of the steel rail; setting the plane which is parallel to the calibration reference plane and coincides with the calibration straight line as the calibration plane;
S105, calibrating the laser by using the calibration straight line and the calibration plane.
In some preferred embodiments, step S105 further comprises:
rotating the calibration plane so that the plane formed by the laser's built-in X and Y axes is parallel to the calibration plane, with the built-in X axis parallel to the calibration straight line and the built-in Y axis perpendicular to the calibration straight line;
rotating the calibration plane so that the laser's Z axis is perpendicular to the calibration plane;
moving the laser's origin to the intersection point of the rotated XYZ axes, and establishing a new user coordinate system from that origin.
In some preferred embodiments, the method for fitting point cloud data comprises:
randomly selecting a number of point data in the point cloud cluster, and constructing a target parameter model satisfied by all the selected point data;
checking the remaining, unselected point data in the point cloud cluster against the target parameter model, and counting the points that satisfy the model;
if the number of points satisfying the model is greater than a preset value, storing the target parameter model; if the number of points satisfying the model is smaller than the preset value, reconstructing the target parameter model;
repeating the above steps to obtain a plurality of target parameter models, and selecting the model satisfied by the most points as the fitting model.
In some preferred embodiments, the preprocessing of the laser single-frame point cloud in step S2 includes the steps of:
S201, taking the laser as the origin, filtering the acquired single-frame point cloud with a radius adaptive filtering method;
S202, using an obstacle-free point cloud recorded in clear weather as the standard point cloud, and segmenting the suspected obstacles in the filtered single frame by means of a segmentation algorithm and the standard point cloud;
S203, setting a judgment threshold, filtering the particle-noise point clouds with compressed point counts out of the segmented suspected-obstacle point clouds, and removing rain and fog noise interference; the judgment threshold comprises minimum length, width and height values of a suspected obstacle and a point duty ratio.
In some preferred embodiments, the radius adaptive filtering method includes the steps of:
setting an initial search circle radius R, and setting the minimum number K of neighbor points that the neighborhood of each laser point must contain within the search circle radius R;
calculating the X-direction distance L_Xi between each point in the single-frame point cloud and the origin, and calculating the neighbor threshold radius coefficient λ from L_min and the adjustment coefficient α, wherein L_min is the X-direction distance of the point closest to the origin in the single-frame point cloud, and α is an adjustment coefficient with α ∈ (0, 1);
calculating an adaptive radius threshold R′ = RλL_Xi for each point in the single-frame point cloud, then searching for neighboring points within the circle of radius R′, retaining the points inside the circle and deleting the points outside it.
In some preferred embodiments, the method for segmenting the obstacle in the single-frame point cloud in step S3 includes:
S301, rasterizing the single-frame point cloud, calculating the point cloud characteristic values of each grid, and marking grids whose characteristic-value variance is greater than a first preset threshold as multi-obstacle grids;
S302, traversing all grids, performing region-growing clustering over the neighborhood grids of each multi-obstacle grid, and judging whether its obstacles extend into other grids; if so, marking those grids as obstacle grids;
S303, extracting the obstacle point clouds in the multi-obstacle grids;
S304, traversing all obstacle grids, performing region-growing clustering within each grid, and extracting the obstacle point clouds in the obstacle grids;
S305, judging whether the characteristic values of an obstacle point cloud satisfy a second preset threshold, and if so, outputting the obstacle point cloud;
the characteristic values comprise the height difference, center of gravity and variance of the point cloud within a grid.
In some preferred embodiments, the rasterizing method in step S301 includes: taking the laser as the center, establishing a spherical coordinate system in which the coordinates of any point P are P(ρ, θ, φ), where ρ is the radial distance, θ is the azimuth angle, and φ is the polar angle; the grid is divided by Δρ, Δθ and Δφ.
In some preferred embodiments, the method for region growing clustering comprises:
traversing all grids in sequence, and marking a grid as a seed grid if the number of points in the grid is greater than 5 and the height difference between the points is greater than a height-difference threshold;
taking the seed grid as the center, sequentially traversing the 8-neighborhood grids around it, and searching for and marking other seed grids;
repeating the above two steps until no seed grid remains among the neighborhood grids, completing the clustering of a single obstacle.
In some preferred embodiments, the method for enhancing the point cloud data in the mask image in step S4 includes:
S401, randomly sampling, without repetition, a plurality of virtual points in the mask image;
S402, searching for the nearest point cloud projection point of each virtual point, and assigning that projection point's depth information to the virtual point;
S403, back-projecting the virtual points in the mask image into the laser coordinate system to obtain the enhanced point cloud data.
Advantageous effects
1. The invention integrates a laser radar and a video system; supported by big data and machine-learning classification algorithms, it can intelligently detect intruders within the clearance limits of a line in various complex and severe operating environments, and provides classified alarm output;
2. the filtering and clustering of the laser point cloud data under various conditions prevents obstacles far from the laser radar from being split into multiple categories because of point cloud sparseness, and filters out the noise that rain, fog and other fine objects introduce into the laser point cloud in extreme weather, which improves the efficiency and accuracy of obstacle point cloud segmentation, matching and classification, raises the monitoring precision, and reduces both the occupation of computing resources and the time consumed by the monitoring flow;
3. the video image is used to correct and enhance the laser point cloud data, which improves the system's resistance to external interference, truly realizes radar-video integration with zero missed alarms and a low false-alarm rate, and safeguards the running safety of the railway at all times.
Drawings
FIG. 1 is a schematic flow chart of a method according to a preferred embodiment of the invention;
FIG. 2 is a laser radar coordinate system before joint calibration in a preferred embodiment of the present invention;
FIG. 3 is a laser radar coordinate system after joint calibration in a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of enhancing an otherwise sparse laser point cloud using planar sampling points based on image data in accordance with another preferred embodiment of the present invention;
Detailed Description
The present invention will be further described with reference to the accompanying drawings, in order to make its objects, technical solutions and advantages more apparent. In the description of the present invention, it should be understood that terms such as "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used merely to facilitate and simplify the description of the invention, do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and should therefore not be construed as limiting the present invention.
As shown in fig. 1, the invention provides a railway danger monitoring method based on laser and video image fusion, which comprises the following steps:
s1, performing joint calibration on a laser radar and a camera; it should be understood that the joint calibration refers to a part of the laser radar and camera interconnection system calibration, and the system calibration further comprises: 1. a monitoring area is defined; 2. calculating a track plane equation; 3. and calibrating internal parameters of the camera. The determination of these three calibration parameters belongs to the common technology in the art, so the present invention will not be described in detail. The purpose is to enable the laser radar and the camera to obtain the point cloud and image data with the same magnification or reduction multiple for the same area at the same time. Specifically, the coordinate systems of the laser radar and the camera are made to be consistent, and the target area is enlarged or reduced by the same magnification. In some preferred embodiments, a method for joint calibration is provided, comprising the steps of:
S101, installing the laser radar and the camera close to each other so that they monitor the same defense area from the closest possible angle, and calibrating the camera using the track slab image to obtain the camera's magnification parameter and coordinate-system parameters. It will be appreciated that in track construction the track slabs are necessarily laid in a plane parallel to the rail surface, and from solid geometry three points in space determine a plane; therefore a first reference plane is determined from at least three characteristic points of the track slabs and used in the subsequent steps as the reference for determining the plane in which the rail surface lies.
S102, acquiring laser single-frame point cloud data, as shown in FIG. 2, selecting at least three points representing a track slab as calibration points, and taking the plane determined by the calibration points as the first reference plane. In order to reduce the amount of data processed, a core area may also be delimited based on the railway perimeter range determined by the railway authorities; the core area is the spatial envelope extending outward from the railway itself. The point cloud data within this envelope is segmented out, and the remaining point cloud data of the non-core area is deleted to reduce the subsequent processing load, as in the sketch below. The segmentation of the core area is not the focus of this application and is therefore not described in detail; it can be handled by those skilled in the art with the image segmentation methods of the art.
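A core-area crop can be as simple as a box filter applied in the calibrated rail frame; the bounds used here are illustrative placeholders, not values given by the patent:

```python
import numpy as np

def crop_core_area(points: np.ndarray, x_range=(0.0, 200.0),
                   y_range=(-10.0, 10.0), z_range=(-2.0, 6.0)) -> np.ndarray:
    """Keep only the points inside the core-area envelope around the
    railway (all bounds expressed in the rail coordinate system)."""
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] <= x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] <= y_range[1]) &
         (points[:, 2] >= z_range[0]) & (points[:, 2] <= z_range[1]))
    return points[m]
```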
S103, fitting is carried out in a point cloud cluster of a first reference surface, and a calibration reference surface parallel to the first reference surface is obtained; if the calibration reference plane parallel to the first reference plane cannot be obtained after fitting, judging that the calibration fails, and reselecting a plurality of calibration points; in some preferred embodiments, a preferred fitting method is provided, comprising the steps of:
randomly selecting a plurality of point data in the point cloud cluster, and constructing a target parameter model which is satisfied by all the selected point data; wherein, the specific number of the randomly selected point data is smaller, preferably 1% -2% of the total point number. The target parametric model refers to a parametric model (such as a plane or a straight line parametric model) conforming to a fitting target, in some preferred embodiments, a computer may design a preliminary model according to the fitting target, then estimate parameters of the model through selected point data, thereby obtaining a target parametric model, for example, designing a plane or a straight line equation or an equation set with parameters, and then calculate the parameters through the point data, thereby obtaining the target plane or straight line parametric model.
Counting the points meeting the model by using other unselected point data in the target parameter model check point cloud cluster; in this step, if there are enough points to fit the model, it is reasonable to say that the model is used for fitting.
If the number of points meeting the model is larger than a preset value, storing the target parameter model; if the number of points meeting the model is smaller than a preset value, reconstructing a target parameter model; the preset value can be set by a person skilled in the art according to practical situations, and is preferably not lower than 40% of the total points. In some preferred embodiments, in order to reduce the calculation steps of the target parameter model, the model satisfying the number of points of the model in the step is directly discarded without performing the subsequent steps, and the target parameter model is reconstructed and executed from the step a, so as to speed up the whole fitting. In other preferred embodiments, since the parameters of the target parametric model are estimated only once when the model is built, there is a certain space for improvement in the rationality, so that the target parametric model is estimated again by taking into account all the point data satisfying the model, so that the target parametric model is more reasonable and efficient.
Repeating the steps to obtain a plurality of target parameter models, and selecting the model with the most points meeting the model as the fitting model. The previous steps are repeatedly executed for fixed times, the models generated each time are inspected, the models with fewer points are discarded, and the models with more points are used for replacing the original models, so that the target parameter model is obtained in an iterative mode.
It should be appreciated that in some cases, the calibration reference surface may not be fitted due to improper selection of the calibration points, and at this time, the calibration is determined to fail, and the calibration points need to be re-selected. If the fitting is successful and the subsequent calibration is successful, the selected calibration point is proper, and the calibration point used in the calibration can be selected and stored, so that the recalibration can be conveniently performed when the calibration is performed next time (for example, when the position of the laser radar is changed), and the efficiency and the accuracy of the subsequent calibration are improved.
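For illustration, this sampling-and-consensus fitting (in effect a RANSAC plane fit) can be sketched in Python as follows; the iteration count and distance threshold are illustrative assumptions, and the 0.4 inlier ratio reflects the 40% preset value suggested above:

```python
import numpy as np

def ransac_plane(points: np.ndarray, n_iters: int = 100,
                 dist_thresh: float = 0.02, min_inlier_ratio: float = 0.4):
    """Fit a plane a*x + b*y + c*z + d = 0 to a point cloud cluster by
    repeated random sampling; returns (normal, d, inlier_mask) for the
    model satisfied by the most points, or None (calibration failed)."""
    best = None
    n = len(points)
    rng = np.random.default_rng()
    for _ in range(n_iters):
        # Randomly select 3 points and build the candidate parametric model.
        sample = points[rng.choice(n, size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Check the remaining points against the model and count inliers.
        inliers = np.abs(points @ normal + d) < dist_thresh
        # Discard the model immediately if too few points satisfy it.
        if inliers.sum() < min_inlier_ratio * n:
            continue
        if best is None or inliers.sum() > best[2].sum():
            best = (normal, d, inliers)      # keep the best model so far
    return best          # None means the calibration points must be reselected
```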
S104, extracting the point cloud cluster of the steel rail and fitting it to obtain a calibration straight line formed by the highest points of the steel rail; setting the plane which is parallel to the calibration reference plane and coincides with the calibration straight line as the calibration plane;
S105, calibrating the laser using the calibration straight line and the calibration plane. It should be understood that calibrating the laser radar essentially means aligning the three axes of the XYZ coordinate system built into the laser radar device with the actual positional relationships of the monitored area. Once the calibration straight line and the calibration plane are obtained, the positional relationship between the laser radar and the monitoring area is preliminarily established, and one skilled in the art can complete the final coordinate-system calibration by rotation or translation. In some preferred embodiments, a preferred calibration method is provided, comprising the steps of:
rotating the calibration plane so that the plane formed by the laser's built-in X and Y axes is parallel to the calibration plane, with the built-in X axis parallel to the calibration straight line and the built-in Y axis perpendicular to the calibration straight line;
rotating the calibration plane so that the laser's Z axis is perpendicular to the calibration plane;
moving the laser's origin to the intersection point of the rotated XYZ axes, and establishing a new user coordinate system from that origin. The calibrated result is shown in fig. 3; the point cloud imaging then conforms to the rail coordinate system, and the subsequent perimeter monitoring is carried out on this basis.
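The rotations and translation described above amount to building a rigid transform from the fitted calibration line and calibration plane. The following sketch is one possible realization under that reading, not the patent's prescribed computation:

```python
import numpy as np

def rail_frame_transform(line_dir: np.ndarray, plane_normal: np.ndarray,
                         origin: np.ndarray) -> np.ndarray:
    """Build a 4x4 transform whose X axis follows the calibration line,
    whose Z axis is the calibration-plane normal, and whose origin sits
    at the given point on the rail (all inputs in lidar coordinates)."""
    x = line_dir / np.linalg.norm(line_dir)           # along the rail head
    z = plane_normal / np.linalg.norm(plane_normal)   # plane normal
    z -= (z @ x) * x                                  # enforce orthogonality
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                                # right-handed frame
    T = np.eye(4)
    T[:3, :3] = np.stack([x, y, z])   # rows are the new axes
    T[:3, 3] = -T[:3, :3] @ origin    # move the origin onto the rail
    return T

# Applying T to homogeneous lidar points expresses them in the new user
# (rail) coordinate system, as in fig. 3.
```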
S2, acquiring, at the same moment, a laser single-frame point cloud containing obstacles and camera image data, and preprocessing each respectively. The preprocessing in this step removes noise from the point cloud data and the image data. Noise processing for image data is a mature field, and any noise-reduction method commonly used in the art can be adopted. For laser radar monitoring, noise contamination has two main sources. One is signal interference introduced by the device itself and by other equipment; this is conventionally removed with pass-through filtering, uniform-sampling filtering, statistical outlier filtering and the like. The other is physical interference caused by environmental factors, such as the fine-particle interference of rain, fog and dust in extreme weather; traditional filtering can only smooth such interference and cannot remove it well. In some preferred embodiments, a preferred method for removing the noise interference of fine particles such as rain mist and dust is provided, comprising the following steps:
S201, taking the laser as the origin, filtering the acquired single-frame point cloud with a radius adaptive filtering method. Those skilled in the art will appreciate that conventional filtering with a fixed radius threshold R treats all point cloud data uniformly; because a radar point cloud is dense near the sensor and sparse far from it, filtering with a fixed radius acts differently on near and far points and easily removes important non-noise points at long range by mistake. Filtering with a radius that varies with distance is therefore considered; specifically, the radius is set smaller for nearby points, and the radius threshold grows as the distance increases. When the monitoring system is built, the coordinate system of the laser radar and the coordinate system of the monitoring target (i.e., the track defense area) are calibrated and made inter-convertible, so the XYZ value of any point in the point cloud is known; the X distance value is therefore used as the independent variable of the adaptive radius: R′ = RλL_Xi, where R is the input radius threshold, L_Xi is the X-direction distance from the i-th point to the laser radar, and λ is the neighbor threshold radius coefficient.
In some preferred embodiments, a specific method of implementing radius filtering is presented:
setting an initial search circle radius R, and setting the minimum number K of neighbor points that the neighborhood of each laser point must contain within the search circle radius R; the initial search circle radius R and the number of neighbor points K can be set by those skilled in the art according to the point cloud density, preferably with K ≥ 25 and R ≥ 5 cm, and more preferably K = 30 and R = 10 cm.
Calculating the X-direction distance L_Xi between each point in the first target point cloud and the origin, and calculating the neighbor threshold radius coefficient λ from L_min and the adjustment coefficient α, wherein L_min is the X-direction distance of the point closest to the origin in the first target point cloud, and α is an adjustment coefficient with α ∈ (0, 1). It should be appreciated that the neighbor threshold radius coefficient λ is generally greater than 1; the adjustment coefficient α is introduced to keep λ, and hence the adaptive radius threshold R′ used when radius-filtering distant points, from becoming too large.
Calculating the adaptive radius threshold R′ = RλL_Xi for each point in the first target point cloud, then searching for neighboring points within the circle of radius R′, retaining the points inside the circle and deleting the points outside it.
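A minimal sketch of the radius adaptive filter follows. Since the text defines λ only through L_min and α, the concrete form λ = 1/(α·L_min) used here is an assumption, chosen so that λ > 1 and a larger α limits λ, as described above; scipy's cKDTree stands in for whatever neighbor search is actually deployed.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_radius_filter(points: np.ndarray, R: float = 0.10,
                           K: int = 30, alpha: float = 0.5) -> np.ndarray:
    """Keep only points having at least K neighbors inside their own
    adaptive radius R' = R * lam * L_Xi, so that naturally sparse
    far-range returns are not filtered away by mistake."""
    L_Xi = np.abs(points[:, 0])              # X distance in the rail frame
    L_min = max(L_Xi.min(), 1e-6)
    lam = 1.0 / (alpha * L_min)              # ASSUMED form of the coefficient
    radii = R * lam * L_Xi                   # radius grows with distance
    tree = cKDTree(points)
    keep = np.fromiter(
        (len(tree.query_ball_point(p, r)) - 1 >= K   # minus the point itself
         for p, r in zip(points, radii)),
        dtype=bool, count=len(points))
    return points[keep]
```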
S202, using an obstacle-free point cloud recorded in clear weather as the standard point cloud, and segmenting the suspected obstacles in the filtered single frame by means of a segmentation algorithm and the standard point cloud. It should be appreciated that if downsampling was used in a preceding step to compress the data points, the filtered point cloud will be reduced further, and sparse points may then be ignored during cluster monitoring in the subsequent recognition and classification steps. In some preferred embodiments, therefore, up-sampling interpolation of the target point cloud is considered, interpolating the currently held point cloud data to enrich the original data points. Specifically, the interpolation may be performed with the Moving Least Squares (MLS) method, which comprises the following:
when the distribution of a large amount of discrete data is disordered, the data is often required to be fitted in a segmented way by using the traditional least square method, and the problem that a fitting curve on adjacent segments is discontinuous and not smooth is avoided. The MLS method is simple and easy to realize, and the complicated steps are not needed when the same problems are treated. In addition, the coefficient (aj) of each node only takes into account its neighboring sampling points, and the closer to the node the greater the contribution of the sampling point, the more distant the point is, in this embodiment, the upsampling of the target point cloud is accomplished by computing a fitting MLS local surface in the neighborhood, then computing interpolated coordinates between the normal and the point cloud from the surface, and finally mapping the interpolated coordinates into the input point cloud.
It should be understood that the segmentation of suspected obstacles in this step is a coarse segmentation. Its aim is to further reduce the number of points that must be processed in the subsequent steps and to obtain a preliminary partition of the rough obstacle regions so as to speed up later processing. The segmentation algorithm here may therefore be any nearest-neighbor search method commonly used in the art (including the BST, Kd-tree and Octree algorithms), the K-Means algorithm, and so on; the invention does not further limit the specific segmentation method, which can be selected and applied by those skilled in the art according to actual needs.
S203, setting a judgment threshold, filtering the particle-noise point clouds with compressed point counts out of the segmented suspected-obstacle point clouds, and removing rain and fog noise interference; the judgment threshold comprises minimum length, width and height values of a suspected obstacle and a point duty ratio. It should be understood that the segmented suspected-obstacle point cloud contains not only the suspected obstacles but also particle noise that was not filtered out, and the physical size of a suspected obstacle is obviously much larger than that of the particles. In some preferred embodiments, therefore, the particle-noise point clouds are filtered out by using the minimum length, width and height values of a suspected obstacle as the judgment threshold, preferably 15 x 15 x 15 cm. In other preferred embodiments, the total number of points has been reduced by the preceding noise-removal and segmentation operations, but for a genuine obstacle the reduction will generally not exceed half of the original count, so a point duty ratio can be set to measure whether the target object is rain, fog or other interference noise. Specifically: if the ratio of the target object's point count in the third target point cloud to its point count in the first target point cloud is smaller than the point duty ratio, it is judged to be rain mist or interference noise; if it is larger, it is judged not to be rain mist or interference noise. Preferably, the point duty ratio is 0.5.
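Both judgment thresholds can be applied as in the sketch below, using the 15 x 15 x 15 cm minimum box and the 0.5 point duty ratio preferred above (the function and argument names are illustrative):

```python
import numpy as np

def is_rain_or_noise(cluster: np.ndarray, n_points_before: int,
                     min_dims=(0.15, 0.15, 0.15),
                     duty_ratio: float = 0.5) -> bool:
    """Judge a segmented suspected-obstacle cluster to be rain/fog or
    particulate noise if its bounding box is smaller than the minimum
    length/width/height, or if it lost more than half of its points
    during the preceding filtering and segmentation stages."""
    dims = cluster.max(axis=0) - cluster.min(axis=0)   # box length/width/height
    if np.any(dims < np.asarray(min_dims)):
        return True                    # too small to be a relevant obstacle
    point_ratio = len(cluster) / max(n_points_before, 1)
    return point_ratio < duty_ratio    # fog/dust clusters thin out sharply
```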
S3, segmenting the obstacles in the single-frame point cloud and in the image data respectively, wherein the segmentation result of the image data is a mask image. The mask image may be obtained with any algorithm commonly used in the art, such as Mask R-CNN or Faster R-CNN; the invention is not limited in this respect. It should be understood that obstacles come in different sizes; for larger obstacles, the existing point cloud segmentation algorithms have low segmentation efficiency and accuracy and readily merge obstacles of different types and sizes into a single obstacle, so the subsequent obstacle classification and recognition steps fail and the accuracy and efficiency of monitoring suffer. In some preferred embodiments, a method of segmenting obstacles with two rounds of region-growing clustering is provided: the first round refines large obstacles that may contain multiple targets, and the second round ensures that small obstacles are segmented well. The method comprises the following steps:
s301, rasterizing the single-frame point cloud, calculating a point cloud characteristic value of a grid, and marking the grid with the characteristic value variance larger than a first preset threshold value as a multi-obstacle grid; the characteristic values comprise the height difference, the gravity center and the variance of point clouds in the grid. The rasterization processing refers to processing an area scanned by the laser radar by using grids, each grid point cloud represents a small area of space, a part of point cloud is contained, the point cloud rasterization processing is divided into two-dimensional rasterization and three-dimensional rasterization, and two dimensions are realized by carrying out one projection on the three-dimensional point cloud without considering the change of a z value. The three-dimensional rasterization generally divides the point cloud into square grids with length, width and height parameters, the traditional three-dimensional rasterization divides the point cloud in the region of interest according to fixed length, width and height parameters, and the radar is characterized in that the distance between two nearest points increases along with the distance between the points and the laser radar sensor; the resolution of the vertical angle is much greater than the resolution of the horizontal angle; each scan line provides only one point and the vertical resolution is relatively low, resulting in large radial depth differences. The characteristics of the laser radar are easily lost by using a fixed length and width, a plurality of obstacles near the laser radar are divided into the same grid by mistake, and a distance is also mistakenly dividedIs divided between different grids that are far apart. Based on the above consideration, the invention provides a better rasterization processing method, which specifically comprises the following steps: taking a laser radar as a center, establishing a spherical coordinate system, and the coordinates of any point P in the spherical coordinate system: wherein ρ is radial distance, θ is azimuth, and +.>Is the polar angle; with Δρ, Δθ and +.>To divide the grid. The grid division provided by the invention has the advantages that the grid division can be correctly grouped no matter how far from the laser radar is; taking into account the difference between horizontal resolution and vertical resolution; the points of successive scans can be correctly grouped even if the radial difference is large.
S302, traversing all grids, performing region-growing clustering over the neighborhood grids of each multi-obstacle grid, and judging whether its obstacles extend into other grids; if so, marking those grids as obstacle grids. In some preferred embodiments, a specific method of region-growing clustering is provided, comprising the steps of:
traversing all grids in sequence, and marking a grid as a seed grid if the number of points in the grid is greater than 5 and the height difference between the points is greater than a height-difference threshold;
taking the seed grid as the center, sequentially traversing the 8-neighborhood grids around it, and searching for and marking other seed grids;
repeating the above two steps until no seed grid remains among the neighborhood grids, completing the clustering of a single obstacle.
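The seeded 8-neighborhood growth can be sketched as below; `cells` is assumed to map 2-D grid indices to the points they contain (for the spherical grid above, the (θ, φ) indices can serve this purpose), and the thresholds are the ones stated in the steps:

```python
from collections import deque
import numpy as np

def grow_obstacle_clusters(cells: dict, min_pts: int = 5, dh: float = 0.10):
    """cells: {(i, j): (M, 3) point array}.  A cell is a seed if it holds
    more than min_pts points whose height spread exceeds dh; clusters
    are grown breadth-first through the 8-neighborhood of each seed."""
    is_seed = {k: len(v) > min_pts and
                  (v[:, 2].max() - v[:, 2].min()) > dh
               for k, v in cells.items()}
    seen, clusters = set(), []
    for start in cells:
        if not is_seed[start] or start in seen:
            continue
        cluster, queue = [], deque([start])
        seen.add(start)
        while queue:
            i, j = queue.popleft()
            cluster.append((i, j))
            for di in (-1, 0, 1):            # scan the 8 surrounding cells
                for dj in (-1, 0, 1):
                    nb = (i + di, j + dj)
                    if nb != (i, j) and is_seed.get(nb) and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        clusters.append(cluster)             # one cluster = one obstacle
    return clusters
```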
S303, extracting the obstacle point clouds in the multi-obstacle grids. It should be appreciated that in a grid containing multiple obstacles, where the obstacles may be bulky, direct extraction can bring in many noise points; in some preferred embodiments, therefore, density-based spatial clustering with noise handling is applied to refine the obstacles before the refined obstacle point cloud is extracted. Specifically, the density-based clustering may employ a suitably parameterized DBSCAN model.
S304, traversing all obstacle grids, performing region-growing clustering within each grid, and extracting the obstacle point clouds in the obstacle grids;
S305, judging whether the characteristic values of an obstacle point cloud satisfy a second preset threshold, and if so, outputting the obstacle point cloud. It should be understood that, to remove any noise still influencing the obstacle point cloud, a preset threshold on the obstacle point cloud characteristics is considered: characteristic values within the range indicate that the cloud does belong to an obstacle and can be output, while values outside the range indicate a possible misjudgment, i.e., falsely detected ground point cloud. In some preferred embodiments, where the characteristic values include the height difference of the point cloud within a grid, the volume of a genuine obstacle is generally not too small, so the second preset threshold may be set as a minimum point cloud height-difference threshold. A point cloud falling below this threshold is so small that, even if it is an obstacle, it will not affect the running safety of the train (for example, small objects such as empty mineral-water bottles or scraps of paper dropped from the train); it is therefore excluded from the obstacle list, or the obstacle point cloud is judged to be falsely detected ground point cloud.
S4, projecting the obstacle point cloud data from the laser coordinate system to the image coordinate system, deleting the point clouds outside the mask image, and enhancing the point cloud data inside the mask image. It should be appreciated that, owing to the preceding calibration step, the point cloud data and the image data of the same target share the same time ID, image size and image angle, so the boundary of the mask image can be used to delimit the range of the point cloud data; this further reduces noise in the point cloud data and combines the laser point cloud data with the image data. Because the laser radar's returns are dense near the sensor and sparse far from it, a large object close to the radar yields dense laser points, whereas a small distant object may yield only one or two, which hampers the processing of the subsequent steps. As shown in fig. 4, the invention therefore generates three-dimensional virtual points from a set of planar sampling points based on the image data, enhancing the otherwise sparse laser point cloud. The method comprises the following steps:
S401, randomly sampling, without repetition, a plurality of virtual points in the mask image;
S402, searching for the nearest point cloud projection point of each virtual point, and assigning that projection point's depth information to the virtual point;
S403, back-projecting the virtual points in the mask image into the laser coordinate system to obtain the enhanced point cloud data. It will be appreciated that the enhanced point cloud contains more points, and the richer point cloud data benefits the execution of the subsequent steps, particularly for smaller objects at a distance.
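Steps S401 to S403 can be sketched as follows; the camera intrinsic matrix K (whose inverse is passed in) and the camera-to-lidar extrinsics are assumed to be known from the joint calibration of step S1, and the sample count is illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def enhance_with_virtual_points(uv_proj: np.ndarray, depths: np.ndarray,
                                mask: np.ndarray, K_inv: np.ndarray,
                                n_virtual: int = 200) -> np.ndarray:
    """uv_proj: (N, 2) pixel positions of the projected obstacle points;
    depths: (N,) their depths; mask: (H, W) boolean instance mask.
    Samples virtual pixels inside the mask without repetition, copies
    each one's depth from its nearest real projection, and back-projects
    them into 3-D camera coordinates."""
    vs, us = np.nonzero(mask)                       # all pixels in the mask
    rng = np.random.default_rng()
    pick = rng.choice(len(us), size=min(n_virtual, len(us)), replace=False)
    virtual_uv = np.stack([us[pick], vs[pick]], axis=1).astype(float)
    _, nearest = cKDTree(uv_proj).query(virtual_uv) # nearest real projection
    virtual_depth = depths[nearest]                 # inherit its depth
    ones = np.ones((len(virtual_uv), 1))
    rays = np.hstack([virtual_uv, ones]) @ K_inv.T  # pixel -> camera ray
    cam_pts = rays * virtual_depth[:, None]         # scale rays by depth
    return cam_pts  # transform by the camera-to-lidar extrinsics to finish
```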
S5, inputting the enhanced obstacle point cloud data into a trained neural network model for classification, and outputting the classification information and the enhanced obstacle point cloud. It should be understood that the obstacle classification network may be any classification network commonly used in the art, such as a CenterPoint model or a KD-tree-based model; since this part is not the focus of the invention, it is not described further here. The obstacle classification network takes the third target point cloud of the segmented obstacle as input and outputs the obstacle's specific classification information.
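Because step S5 leaves the classifier open, the following PyTorch sketch only illustrates the input/output contract of this step; the checkpoint name and label set are hypothetical:

```python
import torch

# Hypothetical trained checkpoint: any point cloud classification
# network (e.g. a CenterPoint-style model) could be deployed here.
model = torch.load("obstacle_classifier.pt")
model.eval()

LABELS = ["person", "animal", "rockfall", "dropped object"]  # example classes

def classify_obstacle(enhanced_points: torch.Tensor):
    """enhanced_points: (N, 3) tensor holding one obstacle's enhanced
    point cloud; returns the class label together with the cloud itself,
    matching the two outputs of step S5."""
    with torch.no_grad():
        logits = model(enhanced_points.unsqueeze(0))  # batch of one
        cls = int(logits.argmax(dim=-1))
    return LABELS[cls], enhanced_points
```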
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the above embodiments and descriptions merely illustrate the principles of the invention, and various changes and modifications may be made without departing from its spirit and scope. The scope of the invention is defined by the appended claims and their equivalents.
Claims (10)
1. The railway danger monitoring method based on laser and video image fusion is characterized by comprising the following steps:
S1, performing joint calibration on the laser radar and the camera;
S2, acquiring, at the same moment, a laser single-frame point cloud containing obstacles and camera image data, and preprocessing each respectively;
S3, segmenting the obstacles in the single-frame point cloud and in the image data respectively; wherein the segmentation result of the image data is a mask image;
S4, projecting the obstacle point cloud data from the laser coordinate system to the image coordinate system, deleting the point clouds outside the mask image, and enhancing the point cloud data inside the mask image;
S5, inputting the enhanced obstacle point cloud data into a trained neural network model for classification, and outputting the classification information and the enhanced obstacle point cloud.
2. The railway danger monitoring method based on laser and video image fusion according to claim 1, wherein step S1 further comprises:
S101, installing the laser radar and the camera close to each other so that they monitor the same defense area from the closest possible angle; calibrating the camera by using the track slab image;
S102, acquiring laser single-frame point cloud data, selecting at least three points representing a track slab as calibration points, and taking the plane determined by the calibration points as a first reference plane;
S103, fitting within the point cloud cluster of the first reference plane to obtain a calibration reference plane parallel to the first reference plane; if no calibration reference plane parallel to the first reference plane can be obtained from the fitting, judging that the calibration has failed and reselecting the calibration points;
S104, extracting the point cloud cluster of the steel rail and fitting it to obtain a calibration straight line formed by the highest points of the steel rail; setting the plane which is parallel to the calibration reference plane and coincides with the calibration straight line as the calibration plane;
S105, calibrating the laser by using the calibration straight line and the calibration plane.
3. The railway danger monitoring method based on laser and video image fusion according to claim 2, wherein step S105 further comprises:
rotating the calibration plane so that the plane formed by the laser's built-in X and Y axes is parallel to the calibration plane, with the built-in X axis parallel to the calibration straight line and the built-in Y axis perpendicular to the calibration straight line;
rotating the calibration plane so that the laser's Z axis is perpendicular to the calibration plane;
moving the laser's origin to the intersection point of the rotated XYZ axes, and establishing a new user coordinate system from that origin.
4. The railway danger monitoring method based on laser and video image fusion according to claim 2, wherein the method for fitting point cloud data comprises:
randomly selecting a number of point data in the point cloud cluster, and constructing a target parameter model satisfied by all the selected point data;
checking the remaining, unselected point data in the point cloud cluster against the target parameter model, and counting the points that satisfy the model;
if the number of points satisfying the model is greater than a preset value, storing the target parameter model; if the number of points satisfying the model is smaller than the preset value, reconstructing the target parameter model;
repeating the above steps to obtain a plurality of target parameter models, and selecting the model satisfied by the most points as the fitting model.
5. The method for monitoring railway danger by fusing laser and video images according to claim 1, wherein the preprocessing of the laser single-frame point cloud in step S2 comprises the steps of:
S201, filtering the acquired single-frame point cloud with the laser as the origin, using a radius-adaptive filtering method;
S202, taking an obstacle-free point cloud captured in clear weather as the standard point cloud, and segmenting suspected obstacles out of the filtered single frame using a segmentation algorithm together with the standard point cloud;
S203, setting a judgment threshold, filtering out particle-noise point clouds containing only a small number of points from the segmented suspected-obstacle point clouds, thereby removing rain and fog noise interference; the judgment threshold comprises minimum length, width and height values of a suspected obstacle and its point occupancy ratio.
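A minimal sketch of the judgment threshold in S203, assuming each suspected obstacle arrives as an (N, 3) array; the minimum dimensions and the way the point occupancy ratio is normalized are assumptions, since the patent leaves the threshold values open:

```python
import numpy as np

def passes_judgment_threshold(cluster, min_dims=(0.1, 0.1, 0.1), min_occupancy=0.05):
    """Reject sparse rain/fog/dust clusters that fail the size or occupancy test."""
    extents = cluster.max(axis=0) - cluster.min(axis=0)  # length, width, height
    if np.any(extents < np.asarray(min_dims)):
        return False
    volume = float(np.prod(np.maximum(extents, 1e-6)))
    density = len(cluster) / volume            # points per unit bounding-box volume
    occupancy = min(density / 1000.0, 1.0)     # assumed normalization constant
    return occupancy >= min_occupancy
```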
6. The method for monitoring railway danger by fusing laser and video images according to claim 5, wherein the radius-adaptive filtering method comprises the steps of:
setting an initial search-circle radius R, and setting the minimum number K of neighboring points that the neighborhood of each laser point within the search-circle radius R must contain;
calculating the X-direction distance L_Xi between each point in the single-frame point cloud and the origin, and calculating the neighborhood threshold radius coefficient λ = α/L_min, wherein L_min is the X-direction distance of the point closest to the origin in the single-frame point cloud, and α is an adjustment coefficient with α ∈ (0, 1);
calculating an adaptive radius threshold R′ = R·λ·L_Xi for each point in the single-frame point cloud; then, taking R′ as the radius, searching for neighboring points within the circle, retaining the points inside the circle and deleting the points outside it.
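A minimal sketch of claim 6 using scipy's KD-tree, taking λ = α/L_min as given above; the default constants are placeholders:

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_adaptive_filter(cloud, R=0.1, K=5, alpha=0.5):
    """Keep points whose adaptive-radius neighborhood holds at least K neighbors."""
    L_x = np.abs(cloud[:, 0])              # X-direction distance of each point
    lam = alpha / max(L_x.min(), 1e-6)     # lambda = alpha / L_min
    radii = R * lam * L_x                  # R' = R * lambda * L_Xi, per point
    tree = cKDTree(cloud)
    keep = np.array([len(tree.query_ball_point(p, r)) >= K + 1  # +1: the point itself
                     for p, r in zip(cloud, radii)])
    return cloud[keep]
```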
7. The method for monitoring railway danger by fusing laser and video images according to claim 1, wherein the method for segmenting obstacles in a single-frame point cloud in step S3 comprises:
S301, rasterizing the single-frame point cloud, calculating point cloud characteristic values for each grid, and marking grids whose characteristic-value variance is larger than a first preset threshold as multi-obstacle grids;
S302, traversing all grids, performing neighborhood-grid region-growing clustering on each multi-obstacle grid, and judging whether its obstacles extend into other grids; if so, marking those grids as obstacle grids;
S303, extracting the obstacle point clouds in the multi-obstacle grids;
S304, traversing all obstacle grids, performing region-growing clustering within each grid, and extracting the obstacle point clouds in the obstacle grids;
S305, judging whether the characteristic values of an obstacle point cloud meet a second preset threshold, and if so, outputting the obstacle point cloud;
the characteristic values comprise the height difference, center of gravity and variance of the point cloud within a grid.
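A minimal sketch of the per-grid characteristic values named in claim 7 (height difference, center of gravity, variance), assuming each point already carries an integer grid id (one way to produce such ids follows claim 8 below):

```python
import numpy as np

def grid_features(points, grid_ids):
    """Map each grid id to (height difference, center of gravity, variance)."""
    features = {}
    for gid in np.unique(grid_ids):
        pts = points[grid_ids == gid]
        height_diff = pts[:, 2].max() - pts[:, 2].min()  # vertical extent in the cell
        centroid = pts.mean(axis=0)                      # center of gravity
        variance = float(pts.var(axis=0).sum())          # total positional variance
        features[gid] = (height_diff, centroid, variance)
    return features
```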
8. The method for monitoring railway danger by fusing laser and video images according to claim 7, wherein the rasterizing process in step S301 comprises: establishing a spherical coordinate system centered on the laser, in which any point P has coordinates P(ρ, θ, φ), wherein ρ is the radial distance, θ is the azimuth angle, and φ is the polar angle; the grids are divided with steps Δρ, Δθ and Δφ.
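A minimal sketch of the spherical binning in claim 8, packing each (ρ, θ, φ) cell into a single integer id; the bin widths are placeholders, and the packing assumes fewer than 1000 bins per angular axis:

```python
import numpy as np

def spherical_grid_ids(cloud, d_rho=0.5, d_theta=np.radians(1.0), d_phi=np.radians(1.0)):
    """Assign each Cartesian point an integer spherical-grid id."""
    x, y, z = cloud[:, 0], cloud[:, 1], cloud[:, 2]
    rho = np.sqrt(x**2 + y**2 + z**2)                     # radial distance
    theta = np.arctan2(y, x) + np.pi                      # azimuth shifted to [0, 2*pi]
    phi = np.arccos(np.clip(z / np.maximum(rho, 1e-9), -1.0, 1.0))  # polar angle
    bins = np.stack([rho // d_rho, theta // d_theta, phi // d_phi]).astype(np.int64)
    return bins[0] * 1_000_000 + bins[1] * 1_000 + bins[2]  # pack three indices into one
```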
9. The method for monitoring railway danger by fusing laser and video images according to claim 7, wherein the region-growing clustering method comprises the following steps:
traversing all grids in sequence, and marking a grid as a seed grid if the number of points it contains is larger than 5 and the height difference among its points is larger than a height-difference threshold;
taking a seed grid as the center, sequentially traversing the 8 grids in its neighborhood, and searching for and marking further seed grids;
repeating the above two steps until no unvisited seed grid remains in the neighborhood grids, thereby completing the clustering of a single obstacle.
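A minimal sketch of claim 9's clustering on a 2-D boolean seed map, using breadth-first growth over the 8-neighborhood; representing the grids as a 2-D array is an assumption made to keep the example small:

```python
import numpy as np
from collections import deque

def grow_clusters(seed):
    """Label 8-connected components of a 2-D seed mask; 0 means no obstacle."""
    labels = np.zeros_like(seed, dtype=int)
    next_label = 0
    for i, j in zip(*np.nonzero(seed)):
        if labels[i, j]:
            continue                          # already absorbed by a cluster
        next_label += 1
        labels[i, j] = next_label
        queue = deque([(i, j)])
        while queue:                          # grow until no seed neighbor remains
            ci, cj = queue.popleft()
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = ci + di, cj + dj
                    if (0 <= ni < seed.shape[0] and 0 <= nj < seed.shape[1]
                            and seed[ni, nj] and not labels[ni, nj]):
                        labels[ni, nj] = next_label
                        queue.append((ni, nj))
    return labels
```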
10. The method for monitoring railway danger by fusing laser and video images according to claim 1, wherein the method for enhancing the point cloud data in the mask image in step S4 comprises:
S401, randomly sampling a plurality of virtual points within the mask image, without repetition;
S402, searching for the nearest point cloud projection point for each virtual point, and assigning that projection point's depth information to the virtual point;
S403, back-projecting the virtual points in the mask image into the laser coordinate system to obtain the enhanced point cloud data.
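A minimal sketch of claim 10's enhancement step, assuming a pinhole camera with intrinsic matrix K and lidar points already projected to pixel coordinates; note that the back-projection below lands in camera coordinates, and mapping into the laser frame would additionally need the camera-to-laser extrinsics from the calibration of claim 2:

```python
import numpy as np
from scipy.spatial import cKDTree

def densify_mask(mask, proj_uv, depths, K, n_virtual=500):
    """Sample virtual pixels in the mask, borrow depth from the nearest
    projected lidar point, and back-project them to 3-D."""
    vs, us = np.nonzero(mask)                             # pixels inside the mask
    pick = np.random.choice(len(us), min(n_virtual, len(us)), replace=False)
    virtual_uv = np.stack([us[pick], vs[pick]], axis=1).astype(float)
    nearest = cKDTree(proj_uv).query(virtual_uv)[1]       # nearest projection per pixel
    z = depths[nearest]                                   # inherited depth values
    uv1 = np.c_[virtual_uv, np.ones(len(virtual_uv))]     # homogeneous pixel coords
    return (np.linalg.inv(K) @ uv1.T).T * z[:, None]      # z * K^-1 [u, v, 1]^T
```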
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310055422.8A CN116402994A (en) | 2023-01-18 | 2023-01-18 | Railway danger monitoring method based on laser radar and video image fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116402994A true CN116402994A (en) | 2023-07-07 |
Family
ID=87016655
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310055422.8A (pending) | Railway danger monitoring method based on laser radar and video image fusion | 2023-01-18 | 2023-01-18
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116402994A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117894015A (en) * | 2024-03-15 | 2024-04-16 | 浙江华是科技股份有限公司 | Point cloud annotation data optimization method and system |
CN117894015B (en) * | 2024-03-15 | 2024-05-24 | 浙江华是科技股份有限公司 | Point cloud annotation data optimization method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110532889B (en) | Track foreign matter detection method based on rotor unmanned aerial vehicle and YOLOv3 | |
CN113192091B (en) | Long-distance target sensing method based on laser radar and camera fusion | |
CN103500338B (en) | Road zebra crossing extraction method based on Vehicle-borne Laser Scanning point cloud | |
CN112347999B (en) | Obstacle recognition model training method, obstacle recognition method, device and system | |
CN111461088B (en) | Rail transit obstacle avoidance system based on image processing and target recognition | |
CN108564525A (en) | A kind of 3D point cloud 2Dization data processing method based on multi-line laser radar | |
CN114565900A (en) | Target detection method based on improved YOLOv5 and binocular stereo vision | |
Liu et al. | Classification of airborne lidar intensity data using statistical analysis and hough transform with application to power line corridors | |
CN111079611A (en) | Automatic extraction method for road surface and marking line thereof | |
CN116030289A (en) | Railway danger monitoring method based on laser radar | |
CN115205796B (en) | Rail line foreign matter intrusion monitoring and risk early warning method and system | |
CN114743181A (en) | Road vehicle target detection method and system, electronic device and storage medium | |
CN115272425B (en) | Railway site area intrusion detection method and system based on three-dimensional point cloud | |
CN115100741B (en) | Point cloud pedestrian distance risk detection method, system, equipment and medium | |
CN116402994A (en) | Railway danger monitoring method based on laser radar and video image fusion | |
CN114355339B (en) | Pavement void disease radar map identification method and system | |
Sharma et al. | Automatic vehicle detection using spatial time frame and object based classification | |
CN114842166A (en) | Negative obstacle detection method, system, medium, and apparatus applied to structured road | |
CN118072007A (en) | Method and device for dividing obstacle based on SAM (SAM) point cloud and image fusion | |
CN117148338A (en) | Mining area environment data processing method and device and mining area environment data processing system | |
CN117037079A (en) | Three-dimensional vehicle detection method based on laser radar | |
Zaletnyik et al. | LIDAR waveform classification using self-organizing map | |
CN116299315A (en) | Method and device for detecting road surface obstacle in real time by using laser radar | |
CN116863325A (en) | Method for multiple target detection and related product | |
Lin et al. | Research on Point Cloud Structure Detection of Manhole Cover Based on Structured Light Camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |