CN117894011A - Point cloud-based method for detecting foreign matters at bottom of rail transit vehicle - Google Patents


Info

Publication number
CN117894011A
CN117894011A (application CN202410141597.5A)
Authority
CN
China
Prior art keywords
foreign matter
image
point
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410141597.5A
Other languages
Chinese (zh)
Inventor
洪诚康
涂文豪
杨轩
李鑫
万辰飞
汪海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongshu Zhike Hangzhou Technology Co ltd
Original Assignee
Zhongshu Zhike Hangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongshu Zhike Hangzhou Technology Co ltd filed Critical Zhongshu Zhike Hangzhou Technology Co ltd
Priority to CN202410141597.5A
Publication of CN117894011A

Classifications

    • Y — General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02T — Climate change mitigation technologies related to transportation
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a point cloud-based method for detecting foreign matter at the bottom of a rail transit vehicle, comprising the following steps. S1: create an initial vehicle bottom foreign matter feature point extraction model and train it with a vehicle bottom foreign matter image feature data set to obtain the vehicle bottom foreign matter feature point extraction model. S2: introduce a geometric transformation convolution layer into the vehicle bottom foreign matter feature point extraction model to generate a comprehensive vehicle bottom foreign matter feature point extraction model. S3: perform foreign matter identification using the comprehensive visual features of a vehicle bottom 3D image. S4: perform foreign matter positioning based on the identification result to generate foreign matter positioning information. S5: capture a 2D image and a 3D image of the foreign matter to be detected according to the positioning information, and input both images into the comprehensive vehicle bottom foreign matter feature point extraction model to obtain the 2D image feature points and 3D image feature points of the foreign matter to be detected. By locating the foreign matter through recognition of its 3D image, the invention improves the accuracy of foreign matter detection.

Description

Point cloud-based method for detecting foreign matters at bottom of rail transit vehicle
Technical Field
The invention relates to the technical field of rail transit inspection image processing, and in particular to a point cloud-based method for detecting foreign matter at the bottom of a rail transit vehicle.
Background
In recent years, rail transit technology has developed rapidly, and subways have become the daily mode of transport for many urban residents. To ensure the safety of subway travel, the vehicle bottom needs to be inspected periodically.
With the development of computer technology and artificial intelligence, methods have appeared in the rail transit field that use cameras to capture 2D and 3D images of the vehicle bottom and judge whether foreign matter is present by detecting its 3D size. In the prior art, the search range for foreign matter at the vehicle bottom is too large, which easily leads to low accuracy in the search results.
Disclosure of Invention
The invention aims to provide a point cloud-based method for detecting foreign matter at the bottom of a rail transit vehicle, in which the position of the foreign matter is located from a captured 3D image of the vehicle bottom, narrowing the search range and yielding a more accurate search result. This improves the accuracy of foreign matter feature point extraction and makes the judgment of foreign matter at the bottom of the rail transit vehicle more accurate.
In order to achieve the above object, the present invention provides the following solutions: the method for detecting the foreign matters at the bottom of the track traffic vehicle based on the point cloud comprises the following steps:
S1: creating an initial vehicle bottom foreign matter feature point extraction model, and training the initial vehicle bottom foreign matter feature point extraction model by utilizing a vehicle bottom foreign matter image feature data set to obtain the vehicle bottom foreign matter feature point extraction model, wherein the vehicle bottom foreign matter image feature data set comprises external factor foreign matter images, internal factor foreign matter images, dangerous foreign matter images and lost article images with different degrees of darkness;
S2: introducing the geometric transformation convolution layer to the vehicle bottom foreign object feature point extraction model, training the geometric transformation convolution layer, and iteratively optimizing parameters of the geometric transformation convolution layer until the parameters reach qualified parameters, so as to generate a comprehensive vehicle bottom foreign object feature point extraction model;
S3: shooting a vehicle bottom 3D image, utilizing the comprehensive visual characteristics of the vehicle bottom 3D image to identify the foreign matters, classifying the identified foreign matters according to the types of the foreign matters, and dividing the foreign matters into external factor foreign matters, internal factor foreign matters, dangerous foreign matters and missing object foreign matters, if the foreign matters are missing object foreign matters, reserving the foreign matters, otherwise shooting a 2D image of the foreign matters to be detected and a 3D image of the foreign matters to be detected;
S4: performing foreign matter positioning according to the foreign matter identification to generate foreign matter positioning information, wherein the foreign matter positioning information comprises track clearance foreign matter positioning information, vehicle bottom middle foreign matter positioning information and obstacle deflector area foreign matter positioning information;
S5: shooting a foreign object 2D image to be detected and a foreign object 3D image to be detected according to the foreign object positioning information, inputting the foreign object 2D image to be detected and the foreign object 3D image to be detected into a comprehensive vehicle bottom foreign object feature point extraction model, and obtaining the foreign object 2D image feature point to be detected and the foreign object 3D image feature point to be detected.
Further, step S3 further includes: and carrying out batch processing and saving of shooting dates, shooting vehicle models, passenger information and staff information corresponding to the 3D images of the vehicle bottom, inquiring the passenger information and the staff information corresponding to the foreign matters of the lost articles and sending the lost article information.
Further, step S5 further includes: the method comprises the steps of presetting an emergency foreign matter treatment level according to a vehicle running state, and extracting feature points of a 2D image feature point of a foreign matter to be detected and a 3D image feature point of the foreign matter to be detected in sequence from high to low according to the emergency foreign matter treatment level, wherein the vehicle running state comprises a dangerous running degree of a vehicle and a running degree of the vehicle.
Further, the characteristic points of the 2D image of the foreign matter to be detected and the characteristic points of the 3D image of the foreign matter to be detected are matched and aligned by using camera calibration, and an aligned foreign matter 2D image and an aligned foreign matter 3D image are obtained.
Further, a foreign matter labeling area is preset according to the foreign matter type and the foreign matter structure, the target detection neural network model is utilized to detect the foreign matter labeling area of the foreign matter 2D image to be detected, if the foreign matter labeling area is detected, labeling is carried out, otherwise, labeling is not carried out, and a foreign matter rectangular labeling frame is obtained and comprises a bolt part position and a bolt part size.
Further, a foreign object mask image is created according to the foreign object rectangular labeling frame, 2D effective pixel points of the foreign object mask image are extracted through logic operation, 3D effective pixel points aligned with the foreign object 3D image are obtained according to the 2D effective pixel points, the 3D effective pixel points are utilized to generate a point cloud of a foreign object part to be detected, and the 2D effective pixel points are located in the foreign object rectangular labeling frame.
Further, calculating the average distance between each adjacent point in the point cloud of the foreign object part to be detected, calculating the standard distance according to the average distance and the preset standard deviation, traversing the points to be defined in the point cloud of the foreign object part to be detected, calculating the adjacent distance between the points to be defined and the adjacent points, if the adjacent distance is larger than the standard distance, defining the points to be defined as noise points, removing the noise points from the point cloud, otherwise, defining the points to be defined as qualified points, and keeping the qualified points in the point cloud.
Further, a standard point cloud is created according to the standard component, ICP registration is conducted on the qualified points and standard points in the standard point cloud, and the distance between the point cloud of the component to be tested and the standard point cloud is optimized in an iterative mode until a preset threshold is reached.
Further, a fixed threshold radius of a standard point cloud of the standard component is preset, a ball neighborhood is obtained according to the fixed threshold radius, whether the qualified point is located in the ball neighborhood is judged, if the qualified point is located in the ball neighborhood, the qualified point is defined as an inner point and removed, otherwise, the qualified point is defined as an outer point and reserved.
Further, presetting the quantity of qualified cluster point clouds, carrying out cluster segmentation on the outer points to generate cluster point clouds to be defined, judging whether the cluster point clouds to be defined are abnormal cluster point clouds according to the preset quantity of the qualified cluster point clouds, if so, calculating a minimum size bounding box of the abnormal cluster point clouds according to the OBB, and obtaining an abnormal part by utilizing the minimum size bounding box, otherwise, not calculating.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects. Foreign matter images of different brightness from historical rail transit operation, including external factor foreign matter images, internal factor foreign matter images, dangerous foreign matter images and lost article images, are collected to construct the vehicle bottom foreign matter feature point extraction model, which avoids missing feature points in darker regions when foreign matter feature points are extracted. A geometric transformation convolution layer is introduced into the model and its parameters are optimized to generate the comprehensive vehicle bottom foreign matter feature point extraction model. When foreign matter feature points are extracted, foreign matter identification is performed on the vehicle bottom 3D image to obtain specific information about the foreign matter, from which its position information is derived. The more accurate position reduces the foreign matter search area, improving both the search speed and the accuracy of the search. Finally, 2D and 3D images of the foreign matter to be detected are captured at the obtained position and input into the comprehensive vehicle bottom foreign matter feature point extraction model to extract the 2D image feature points and 3D image feature points of the foreign matter to be detected, yielding more accurate and comprehensive feature points.
Drawings
FIG. 1 is a general flow chart of a method for detecting foreign objects at the bottom of a track traffic vehicle based on point clouds according to the present application;
FIG. 2 is a 3D image of a foreign object under test of the bottom of the rail transit vehicle of the present application;
FIG. 3 is a 3D image of a standard part of the bottom of a rail transit vehicle of the present application;
FIG. 4 is a ball neighborhood image of a standard part of the present application;
FIG. 5 is an outlier plot of the present application;
FIG. 6 is a graph of cluster partitions of outliers according to the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Fig. 1 is a general flowchart of a method for detecting a foreign object on a vehicle bottom of a track traffic vehicle based on point clouds according to the present application.
S1: an initial vehicle bottom foreign matter feature point extraction model is created, and the initial vehicle bottom foreign matter feature point extraction model is trained using a vehicle bottom foreign matter image feature data set to obtain the vehicle bottom foreign matter feature point extraction model, wherein the data set comprises external factor foreign matter images, internal factor foreign matter images, dangerous foreign matter images and lost article images of different degrees of darkness.
Specifically, a patrol robot is adopted to shoot historical vehicle bottom foreign matter 3D images. All captured historical images are organized: images of the same part are classified together, images of the same part at different brightness are grouped, and the images are then classified by foreign matter type to obtain external factor foreign matter images, internal factor foreign matter images and dangerous foreign matter images at different brightness.
Specifically, the brightness of the images can be set or adjusted according to the actual needs, the images are adjusted according to the preset proportion, and the brightness proportion can be set to 10%, 20%, 30%, 40%, 50% and 60%.
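As a minimal illustration of generating such darkness variants, the sketch below scales a grayscale image (a nested list of 0-255 pixel values) by the preset brightness ratios. The `darken` function and the tiny sample image are illustrative assumptions, not the patent's implementation:

```python
def darken(image, ratio):
    """Return a copy of the image with every pixel scaled toward black.

    ratio is the target brightness proportion, e.g. 0.3 keeps 30% of the
    original intensity. Pixels are clamped to the valid 0-255 range.
    """
    return [[min(255, max(0, int(px * ratio))) for px in row] for row in image]

# Generate the preset 10%-60% darkness variants of one training image.
image = [[200, 120], [80, 255]]
variants = {r: darken(image, r) for r in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6)}
```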
Specifically, the external factor foreign matter image includes 3D images of stones, metal sheets, plastic bags, water, oil, etc., the internal factor foreign matter image includes 3D images of vehicle manufacturing residues, vehicle parts, vehicle bottom arcs, etc., the dangerous foreign matter image includes 3D images of glass residues, blades, needles, etc., and the missing article image includes 3D images of personal articles of passengers or staff, sundries dropped on a vehicle, etc.
Specifically, the shot and arranged external factor foreign matter images, internal factor foreign matter images, dangerous foreign matter images and lost object images with different degrees of darkness are used as a training set of an initial vehicle bottom foreign matter feature point extraction model, and the initial vehicle bottom foreign matter feature point extraction model is input for training, so that the vehicle bottom foreign matter feature point extraction model capable of identifying the external factor foreign matter images, the internal factor foreign matter images, the dangerous foreign matter images and the lost object images with different degrees of darkness is obtained.
S2: introducing the geometric transformation convolution layer to the vehicle bottom foreign object feature point extraction model, training the geometric transformation convolution layer, and iteratively optimizing parameters of the geometric transformation convolution layer until the parameters reach qualified parameters, so as to generate the comprehensive vehicle bottom foreign object feature point extraction model.
Specifically, the external factor foreign matter images, internal factor foreign matter images, dangerous foreign matter images and lost article images of different darkness are input into the vehicle bottom foreign matter feature point extraction model to obtain external factor foreign matter image points, internal factor foreign matter image points, dangerous foreign matter image points and lost article image points. A geometric transformation convolution layer is introduced into the model, and the images are geometrically transformed according to the actual region of use, removing image parts that are not used; the geometric transformations include cropping to square, round and other shapes. The transformed images serve as input images for training the geometric transformation convolution layer. The training process rotates, zooms and translates the input images to generate new geometric input images, performs convolution operations on them, and computes an offset for each pixel of the input image: a displacement deviation is first computed from a preset deviation, the preset deviation is converted into an integer deviation, and the sum of the displacement deviation and the integer deviation gives the new pixel position. The pixel positions are updated, and the offset calculation and convolution are repeated, iteratively optimizing the parameters of the geometric transformation convolution layer until qualified parameters are reached, thereby generating the comprehensive vehicle bottom foreign matter feature point extraction model with more complete geometric transformation features.
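The offset step described above splits each learned displacement into an integer part (which shifts the sampling cell) and a fractional part (which is resolved by interpolating between neighboring pixels). A pure-Python sketch of that bilinear sampling, with an illustrative 2x2 image rather than the patent's network code:

```python
import math

def bilinear_sample(img, y, x):
    """Sample img at a fractional (y, x) position via bilinear interpolation."""
    y0, x0 = int(math.floor(y)), int(math.floor(x))   # integer deviation
    dy, dx = y - y0, x - x0                           # fractional deviation
    y1 = min(y0 + 1, len(img) - 1)
    x1 = min(x0 + 1, len(img[0]) - 1)
    # Weight the four surrounding pixels by their distance to (y, x).
    return ((1 - dy) * (1 - dx) * img[y0][x0]
            + (1 - dy) * dx * img[y0][x1]
            + dy * (1 - dx) * img[y1][x0]
            + dy * dx * img[y1][x1])

img = [[0.0, 2.0],
       [4.0, 6.0]]
center = bilinear_sample(img, 0.5, 0.5)   # midpoint of the four pixels
```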
S3: shooting a vehicle bottom 3D image, utilizing the comprehensive visual characteristics of the vehicle bottom 3D image to identify the foreign matters, classifying the identified foreign matters according to the foreign matters types, and dividing the foreign matters into external factor foreign matters, internal factor foreign matters, dangerous foreign matters and lost object foreign matters, if the foreign matters are lost object foreign matters, reserving the foreign matters, otherwise shooting a to-be-detected foreign matter 2D image and a to-be-detected foreign matter 3D image.
Specifically, the time for regularly checking the foreign matters at the bottom of the vehicle is preset according to weather conditions and aging conditions of the vehicle and recorded, a patrol robot is adopted to shoot a real-time 3D image at the bottom of the vehicle, foreign matters are identified according to the types of the foreign matters, the identified foreign matters are counted and classified, the foreign matters are classified into foreign matters of external factors, foreign matters of internal factors, dangerous foreign matters and foreign matters of lost objects, and 2D images and 3D images of the foreign matters of external factors, the foreign matters of internal factors and the dangerous foreign matters are shot.
Specifically, the shooting date, vehicle model, passenger information and staff information corresponding to the vehicle bottom 3D images are batch-processed and stored. The passenger information and staff information corresponding to lost article foreign matter are queried, and the lost article information is sent. The interval for sending lost article information can be set according to the claim status: if no one claims the article, the information is sent once every two days; otherwise sending stops, and the claim date and the claimant's signature are recorded.
S4: and (3) performing foreign matter positioning according to the foreign matter identification to generate foreign matter positioning information, wherein the foreign matter positioning information comprises track clearance foreign matter positioning information, vehicle bottom middle foreign matter positioning information and obstacle deflector area foreign matter positioning information.
Specifically, according to the vehicle bottom positioning and identification system for rail transit, the positions of the identified external factor foreign matter, internal factor foreign matter and dangerous foreign matter on the vehicle bottom are counted to obtain more specific positioning information, which includes track clearance foreign matter positioning information, vehicle bottom middle foreign matter positioning information and obstacle deflector area foreign matter positioning information. The foreign matter is then photographed and detected according to the obtained positioning information.
Specifically, a statistics order can be preset, and the positions of dangerous foreign matter, internal factor foreign matter and external factor foreign matter are counted in turn, ordered by how strongly each affects traffic operation, together with their positioning information.
As shown in fig. 2, fig. 2 is a 3D image of a foreign object to be detected of the bottom of the rail transit vehicle of the present application.
S5: shooting a foreign object 2D image to be detected and a foreign object 3D image to be detected according to the foreign object positioning information, inputting the foreign object 2D image to be detected and the foreign object 3D image to be detected into a comprehensive vehicle bottom foreign object feature point extraction model, and obtaining the foreign object 2D image feature point to be detected and the foreign object 3D image feature point to be detected.
Specifically, the inspection robot captures a 2D image and a 3D image of the foreign matter to be detected at the foreign matter position. An emergency level for foreign matter treatment is preset according to the vehicle running state, which comprises the dangerous running degree of the vehicle and the running degree of the vehicle; the emergency level is divided into levels one, two and three in decreasing order of urgency. Feature points are extracted from the captured 2D and 3D images in order of emergency level from high to low: the 2D image and 3D image of the foreign matter to be detected are input into the comprehensive vehicle bottom foreign matter feature point extraction model to obtain the 2D image feature points and 3D image feature points of the foreign matter to be detected.
Specifically, first-level foreign matter is treated according to the difficulty of treatment, the treatment means (which include replacement and maintenance of the affected part), the availability of maintenance personnel and the length of maintenance time.
Specifically, the camera intrinsic matrix is used to project the 2D image feature points of the foreign matter to be detected back into 3D space, and the 3D image feature points of the foreign matter to be detected are converted into camera foreign matter 3D feature points in the camera coordinate system. Using the camera extrinsic matrix, the camera foreign matter 3D feature points are multiplied by the rotation matrix and translation vector to convert them into world foreign matter 3D feature points. The intrinsic and extrinsic parameters of the camera are computed from the correspondence between the world foreign matter 3D feature points and the 2D image feature points of the foreign matter to be detected. Each point in the 3D image of the foreign matter to be detected is then converted into 3D foreign matter coordinates in the camera coordinate system, and the camera parameters are used to convert those 3D foreign matter coordinates into coordinates in the 2D image of the foreign matter to be detected, so that the RGB color information of each pixel in the 2D image corresponds one-to-one with the depth information in the 3D image.
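The 2D-3D correspondence above reduces to the standard pinhole camera model: the intrinsics map camera-frame 3D points to pixels, and a pixel plus its depth maps back to a 3D point. A hedged sketch with illustrative intrinsic values (the fx, fy, cx, cy numbers are assumptions, not the patent's calibration):

```python
def project(K, point):
    """Project a camera-frame 3D point to pixel coordinates (u, v)."""
    fx, fy, cx, cy = K
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def back_project(K, u, v, depth):
    """Lift a pixel (u, v) with known depth back to a camera-frame 3D point."""
    fx, fy, cx, cy = K
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

K = (500.0, 500.0, 320.0, 240.0)        # illustrative fx, fy, cx, cy
u, v = project(K, (0.1, -0.2, 2.0))
restored = back_project(K, u, v, 2.0)   # round-trips to (0.1, -0.2, 2.0)
```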
Specifically, a foreign matter labeling area is preset according to the type and the structure of the foreign matter, the target detection neural network model is utilized to detect the foreign matter labeling area of the foreign matter 2D image to be detected, if the foreign matter labeling area is detected, labeling is carried out, otherwise, labeling is not carried out, and a rectangular foreign matter labeling frame is obtained and comprises a bolt part position and a bolt part size.
Specifically, a foreign matter mask image is created from the rectangular labeling frame: pixels inside the frame have value 1 and all other areas have value 0. The 2D effective pixel points of the mask image, i.e. those inside the rectangular labeling frame, are extracted by logical operations, and the corresponding 3D effective pixel points in the aligned foreign matter 3D image are obtained from them. The 3D effective pixel points yield effective 3D depth information, which describes the position and shape of the foreign matter part in the 3D image, and effective 3D color information, which describes its color. The effective 3D depth information is converted into a point cloud of the foreign matter part to be detected; a point cloud is a representation of the surface of a three-dimensional object that expresses the object's shape and position through a set of three-dimensional coordinate points.
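A compact sketch of this masking step: build a binary mask from the rectangular labeling frame, combine it with the depth map by a logical AND, and lift the surviving pixels to a point cloud. The unit intrinsics and the tiny depth map are illustrative assumptions:

```python
def rect_mask(h, w, box):
    """Binary mask that is 1 inside the rectangular labeling frame, 0 outside."""
    x0, y0, x1, y1 = box
    return [[1 if x0 <= x <= x1 and y0 <= y <= y1 else 0 for x in range(w)]
            for y in range(h)]

def masked_point_cloud(depth, mask, K):
    """Keep pixels where the mask AND a valid depth hold, lifted to 3D points."""
    fx, fy, cx, cy = K
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if mask[v][u] and z > 0:      # logical AND of mask and valid depth
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

depth = [[0.0, 1.0, 1.0],
         [0.0, 1.2, 0.0]]
mask = rect_mask(2, 3, (1, 0, 2, 1))      # frame covers columns 1-2
cloud = masked_point_cloud(depth, mask, (1.0, 1.0, 0.0, 0.0))
```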
As shown in fig. 3, fig. 3 is a 3D image of a standard part of the bottom of the rail transit vehicle of the present application.
Specifically, calculating the average distance between each adjacent point in the point cloud of the foreign object part to be detected, calculating the standard distance according to the average distance and the preset standard deviation, traversing the points to be defined in the point cloud of the foreign object part to be detected, calculating the adjacent distance between the points to be defined and the adjacent points, if the adjacent distance is larger than the standard distance, defining the points to be defined as noise points, removing the noise points from the point cloud, otherwise, defining the points to be defined as qualified points, and keeping the qualified points in the point cloud.
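This denoising step is the classic statistical outlier-removal filter. A pure-Python sketch under assumed parameter choices (the number of nearest neighbors k and the standard-deviation multiplier are illustrative; the patent presets them but does not fix their values):

```python
import math

def remove_statistical_outliers(points, k=2, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    the global mean plus std_ratio standard deviations of those distances."""
    mean_dists = []
    for p in points:
        nearest = sorted(math.dist(p, q) for q in points if q is not p)[:k]
        mean_dists.append(sum(nearest) / len(nearest))
    mu = sum(mean_dists) / len(mean_dists)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_dists) / len(mean_dists))
    threshold = mu + std_ratio * sigma          # the "standard distance"
    return [p for p, d in zip(points, mean_dists) if d <= threshold]

cloud = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (10, 10, 10)]
kept = remove_statistical_outliers(cloud)       # the far point is dropped
```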
As shown in fig. 4 and 5, fig. 4 is a ball neighborhood image of the standard part of the present application, and fig. 5 is an outer point diagram of the present application.
Specifically, a standard point cloud is created from the standard component, and ICP registration is performed between the qualified points and the standard points in the standard point cloud. An initial transformation matrix is set; for each point to be measured in the point cloud of the part under test, its corresponding point in the standard point cloud is found by nearest neighbor search, and the distances between the points and their correspondences are computed. The sum of squared distances is minimized by least squares to produce a transformation matrix, and the distance between the point cloud of the part under test and the standard point cloud is iteratively optimized until a preset threshold is reached, generating the target transformation matrix.
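Full ICP alternates nearest-neighbor correspondence search with a least-squares transform update, as described above. As a hedged illustration of that alternating structure, the sketch below solves for translation only, dropping the rotation step for brevity; a real registration would add an SVD-based rigid solve:

```python
import math

def icp_translation(source, target, iterations=20, tol=1e-9):
    """Translation-only ICP: repeatedly match each source point to its nearest
    target point, then shift the source by the mean correspondence offset."""
    src = [list(p) for p in source]
    for _ in range(iterations):
        pairs = [(p, min(target, key=lambda q: math.dist(p, q))) for p in src]
        shift = [sum(q[i] - p[i] for p, q in pairs) / len(pairs)
                 for i in range(3)]
        src = [[p[i] + shift[i] for i in range(3)] for p in src]
        if max(abs(c) for c in shift) < tol:     # converged
            break
    return [tuple(p) for p in src]

source = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
target = [(3.0, 0.0, 0.0), (13.0, 0.0, 0.0)]
aligned = icp_translation(source, target)        # recovers the +3 shift
```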
As shown in fig. 6, fig. 6 is a cluster segmentation diagram of outliers according to the present application.
Specifically, a fixed threshold radius is preset for the standard point cloud of the standard component, and a ball neighborhood is obtained from this radius. Each qualified point is tested for membership in the ball neighborhood: if it lies inside, it is defined as an inner point and removed; otherwise it is defined as an outer point and retained. A qualified cluster point cloud quantity is preset, and the outer points are cluster-segmented to generate cluster point clouds to be defined. The quantity of each cluster point cloud to be defined is compared with the preset qualified quantity to judge whether it is an abnormal cluster point cloud: if its quantity is greater than the preset qualified quantity, it is judged abnormal, the minimum size bounding box of the abnormal cluster point cloud is computed using an OBB, and the abnormal part is obtained from that bounding box; otherwise no calculation is performed.
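A sketch of the two steps above: classify qualified points as inner (within the ball neighborhood of some standard point) or outer, then group the outer points by simple Euclidean clustering. The radius and tolerance values, and the tiny point sets, are illustrative assumptions:

```python
import math

def split_by_radius(points, standard_points, radius):
    """Inner points fall inside some standard point's ball neighbourhood
    (and are removed); the rest are retained as outer points."""
    inner, outer = [], []
    for p in points:
        bucket = inner if any(math.dist(p, s) <= radius
                              for s in standard_points) else outer
        bucket.append(p)
    return inner, outer

def euclidean_clusters(points, tolerance):
    """Greedy single-linkage clustering: points within tolerance join a cluster."""
    clusters, remaining = [], list(points)
    while remaining:
        frontier, cluster = [remaining.pop()], []
        while frontier:
            p = frontier.pop()
            cluster.append(p)
            near = [q for q in remaining if math.dist(p, q) <= tolerance]
            for q in near:
                remaining.remove(q)
            frontier.extend(near)
        clusters.append(cluster)
    return clusters

standard = [(0.0, 0.0, 0.0)]
points = [(0.1, 0.0, 0.0), (5.0, 0.0, 0.0), (5.2, 0.0, 0.0), (9.0, 0.0, 0.0)]
inner, outer = split_by_radius(points, standard, radius=1.0)
clusters = euclidean_clusters(outer, tolerance=0.5)
```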
Specifically, the minimum-size bounding box of the abnormal cluster is calculated by using the OBB as follows. Data standardization is first performed on each point in the abnormal cluster point cloud. The abnormal cluster covariance matrix is calculated from the point cloud data, and the abnormal cluster eigenvectors and eigenvalues of the covariance matrix are computed. The eigenvalues are sorted from large to small: the eigenvector of the first-ranked eigenvalue is taken as the long-axis direction of the abnormal cluster OBB, the eigenvector of the second-ranked eigenvalue as the wide-axis direction, and the remaining eigenvector as the high-axis direction. The maximum and minimum coordinates along the long-axis direction of the abnormal cluster OBB are obtained, and their difference gives the long-axis size of the abnormal cluster; the wide-axis size and high-axis size of the OBB are calculated in the same way. Corresponding unit vectors are calculated from the long-axis, wide-axis and high-axis directions of the abnormal cluster OBB, and a rotation matrix is constructed from these unit vectors. Each point in the point cloud is traversed, its coordinates are added to the center-point coordinates, and after all points have been traversed the accumulated center-point coordinates are divided by the number of points to obtain the mean. The rotation matrix is aligned with the center of the point cloud of the part to be detected according to this center value, yielding a new center point and new long-axis, wide-axis and high-axis sizes, from which the OBB bounding box of the point cloud of the part to be detected is constructed. Whether the part corresponding to the position of this point cloud is abnormal is judged from the size of its OBB bounding box: a qualified size threshold is preset, and if the minimum-size bounding box is larger than the preset qualified size threshold, the part where the abnormality is located is defined as an abnormal part; otherwise it is defined as a normal part.
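The eigen-decomposition OBB construction and the size-threshold test above can be sketched as follows. This is a minimal PCA-based OBB in NumPy under stated assumptions (the per-axis threshold comparison is one plausible reading of "larger than the preset qualified size threshold"); names are illustrative, not the patent's implementation.

```python
import numpy as np

def obb(points):
    # PCA-style OBB: the eigenvectors of the covariance matrix, sorted by
    # descending eigenvalue, give the long-, wide- and high-axis directions;
    # the cloud's extent along each axis is the max-min coordinate difference.
    center = points.mean(axis=0)          # accumulated coords / point count
    centered = points - center
    cov = np.cov(centered.T)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]        # descending: long, wide, high
    axes = vecs[:, order]                 # unit-vector columns -> rotation matrix
    proj = centered @ axes                # coordinates in the OBB frame
    size = proj.max(axis=0) - proj.min(axis=0)
    return center, axes, size             # size = (long, wide, high)

def is_abnormal_part(points, qualified_size):
    # The part is judged abnormal if the bounding box exceeds the preset
    # qualified size threshold (compared axis by axis here).
    _, _, size = obb(points)
    return bool((size > np.asarray(qualified_size)).any())
```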
The technical features of the above embodiments may be combined arbitrarily, and the steps of the method are not limited to the described execution order. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the application; their description is specific and detailed, but is not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the application, all of which fall within the scope of the application. Accordingly, the scope of protection of the application should be determined by the appended claims.

Claims (10)

1. The method for detecting the foreign matters at the bottom of the track traffic vehicle based on the point cloud is characterized by comprising the following steps:
S1: creating an initial vehicle bottom foreign matter feature point extraction model, and training the initial vehicle bottom foreign matter feature point extraction model by utilizing a vehicle bottom foreign matter image feature data set to obtain the vehicle bottom foreign matter feature point extraction model, wherein the vehicle bottom foreign matter image feature data set comprises external factor foreign matter images, internal factor foreign matter images, dangerous foreign matter images and lost article images with different degrees of darkness;
S2: introducing the geometric transformation convolution layer to the vehicle bottom foreign object feature point extraction model, training the geometric transformation convolution layer, and iteratively optimizing parameters of the geometric transformation convolution layer until the parameters reach qualified parameters, so as to generate a comprehensive vehicle bottom foreign object feature point extraction model;
S3: shooting a vehicle bottom 3D image, identifying foreign matter in the vehicle bottom 3D image by using the comprehensive vehicle bottom foreign object feature point extraction model, and classifying the identified foreign matter by type into external factor foreign matter, internal factor foreign matter, dangerous foreign matter and lost-article foreign matter; if the foreign matter is lost-article foreign matter, retaining the foreign matter, otherwise shooting a to-be-detected foreign matter 2D image and a to-be-detected foreign matter 3D image;
S4: performing foreign matter positioning according to the foreign matter identification to generate foreign matter positioning information, wherein the foreign matter positioning information comprises track clearance foreign matter positioning information, vehicle bottom middle foreign matter positioning information and obstacle deflector area foreign matter positioning information;
S5: shooting a foreign object 2D image to be detected and a foreign object 3D image to be detected according to the foreign object positioning information, inputting the foreign object 2D image to be detected and the foreign object 3D image to be detected into a comprehensive vehicle bottom foreign object feature point extraction model, and obtaining the foreign object 2D image feature point to be detected and the foreign object 3D image feature point to be detected.
2. The method for detecting foreign objects on a track traffic vehicle bottom based on point cloud as recited in claim 1, wherein the step S3 further includes: and carrying out batch processing and saving of shooting dates, shooting vehicle models, passenger information and staff information corresponding to the 3D images of the vehicle bottom, inquiring the passenger information and the staff information corresponding to the foreign matters of the lost articles and sending the lost article information.
3. The method for detecting foreign objects on a track traffic vehicle bottom based on point cloud as recited in claim 1, wherein the step S5 further includes: the method comprises the steps of presetting an emergency foreign matter treatment level according to a vehicle running state, and extracting feature points of a 2D image feature point of a foreign matter to be detected and a 3D image feature point of the foreign matter to be detected in sequence from high to low according to the emergency foreign matter treatment level, wherein the vehicle running state comprises a dangerous running degree of a vehicle and a running degree of the vehicle.
4. The method for detecting the foreign object at the bottom of the track traffic vehicle based on the point cloud according to claim 1, characterized in that the feature points of the 2D image of the foreign object to be detected and the feature points of the 3D image of the foreign object to be detected are matched and aligned by using camera calibration, so as to obtain an aligned foreign object 2D image and an aligned foreign object 3D image.
5. The method for detecting the foreign matter at the bottom of the track traffic vehicle based on the point cloud according to claim 1, wherein the foreign matter labeling area is preset according to the type and the structure of the foreign matter, the target detection neural network model is utilized to detect the foreign matter labeling area of the 2D image of the foreign matter to be detected, if the foreign matter labeling area is detected, labeling is carried out, otherwise, labeling is not carried out, and a rectangular foreign matter labeling frame is obtained, wherein the rectangular foreign matter labeling frame comprises the positions of bolt parts and the sizes of the bolt parts.
6. The method for detecting the foreign object at the bottom of the track traffic vehicle based on the point cloud according to claim 5, wherein a foreign matter mask image is created according to the rectangular foreign matter labeling frame, 2D effective pixel points of the foreign matter mask image are extracted by using a logic operation, 3D effective pixel points in the aligned foreign object 3D image are obtained according to the 2D effective pixel points, the point cloud of the foreign matter part to be detected is generated by using the 3D effective pixel points, and the 2D effective pixel points are located in the rectangular foreign matter labeling frame.
7. The method for detecting the foreign object at the bottom of the track traffic vehicle based on the point cloud according to claim 6, wherein the average distance between each adjacent point in the point cloud of the foreign object part to be detected is calculated, the standard distance is calculated according to the average distance and the preset standard deviation, the point to be defined in the point cloud of the foreign object part to be detected is traversed, the adjacent distance between the point to be defined and the adjacent point is calculated, if the adjacent distance is larger than the standard distance, the point to be defined is defined as a noise point, the noise point is removed from the point cloud, otherwise, the point to be defined is defined as a qualified point, and the qualified point is reserved in the point cloud.
8. The method for detecting the foreign object at the bottom of the track traffic vehicle based on the point cloud according to claim 7, characterized in that a standard point cloud is created according to the standard component, the qualified points are subjected to ICP registration with the standard points in the standard point cloud, and the distance between the point cloud of the component to be detected and the standard point cloud is iteratively optimized until a preset threshold is reached.
9. The method for detecting foreign matter at the bottom of a track traffic vehicle based on point clouds according to claim 8, wherein a fixed threshold radius of a standard point cloud of a standard component is preset, a sphere neighborhood is obtained according to the fixed threshold radius, whether a qualified point is located in the sphere neighborhood is judged, if the qualified point is located in the sphere neighborhood, the qualified point is defined as an inner point and removed, otherwise, the qualified point is defined as an outer point and reserved.
10. The method for detecting the foreign object at the bottom of the track traffic vehicle based on the point cloud according to claim 9, characterized in that the number of qualified cluster point clouds is preset, the outer points are subjected to cluster segmentation to generate cluster point clouds to be defined, whether the cluster point clouds to be defined are abnormal cluster point clouds is judged according to the preset number of qualified cluster point clouds, and if so, a minimum-size bounding box of the abnormal cluster point clouds is calculated according to the OBB and the abnormal part is obtained by using the minimum-size bounding box; otherwise, no calculation is performed.
CN202410141597.5A 2024-01-31 2024-01-31 Point cloud-based method for detecting foreign matters at bottom of rail transit vehicle Pending CN117894011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410141597.5A CN117894011A (en) 2024-01-31 2024-01-31 Point cloud-based method for detecting foreign matters at bottom of rail transit vehicle

Publications (1)

Publication Number Publication Date
CN117894011A true CN117894011A (en) 2024-04-16

Family

ID=90648953


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination