WO2022199472A1 - Obstacle detection method, vehicle, device and computer storage medium - Google Patents

Obstacle detection method, vehicle, device and computer storage medium

Info

Publication number
WO2022199472A1
WO2022199472A1 · PCT/CN2022/081631 · CN2022081631W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
track
initial
point
target
Prior art date
Application number
PCT/CN2022/081631
Other languages
English (en)
French (fr)
Inventor
胡荣东
万波
谢伟
Original Assignee
长沙智能驾驶研究院有限公司
Priority date
Filing date
Publication date
Application filed by 长沙智能驾驶研究院有限公司
Publication of WO2022199472A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Definitions

  • the present application belongs to the technical field of information processing, and in particular, relates to an obstacle detection method, a vehicle, a device and a computer storage medium.
  • rail transit usually has the characteristics of large carrying capacity and fast running speed, and is an important part of transportation. In order to ensure the safe running of trains in rail transit, it is often necessary to detect possible obstacles in the track.
  • in the related art, the ground where the track is located is usually first fitted based on point cloud data, and the obstacle is then determined according to the height of the point cloud within the track range relative to the fitted ground.
  • however, the error of fitting the ground is often large, resulting in low obstacle detection accuracy.
  • Embodiments of the present application provide an obstacle detection method, a vehicle, a device, and a computer storage medium, so as to solve the problem in the related art of low detection accuracy caused by obstacle detection based on fitting the ground where the track is located.
  • an embodiment of the present application provides an obstacle detection method, which is applied to a vehicle, and the method includes:
  • the first point cloud is determined from the K initial point cloud data, and the first point cloud is the point cloud belonging to the track in the target track;
  • the target obstacle is associated with the second point cloud in the K initial point cloud data, and a preset positional relationship is satisfied between the second point cloud and the first point cloud.
  • an embodiment of the present application provides a vehicle, including:
  • the acquisition module is used to acquire K initial point cloud data and L initial images collected from the target track, where K and L are both positive integers;
  • the first determination module is used to determine the first point cloud from the K initial point cloud data based on each initial image, and the first point cloud is the point cloud belonging to the track in the target track;
  • the detection module is used to detect the target obstacle in the target track based on the K initial point cloud data, with the first point cloud as the reference point cloud, wherein the target obstacle is associated with the second point cloud in the K initial point cloud data, and a preset positional relationship is satisfied between the second point cloud and the first point cloud.
  • an embodiment of the present application provides an electronic device, where the device includes: a processor and a memory storing computer program instructions;
  • the above-mentioned obstacle detection method is implemented when the processor executes the computer program instructions.
  • an embodiment of the present application provides a computer storage medium, where computer program instructions are stored thereon, and the above-mentioned obstacle detection method is implemented when the computer program instructions are executed by a processor.
  • the obstacle detection method provided by the embodiments of the present application acquires K initial point cloud data and L initial images collected from the target track, determines, based on each initial image, the first point cloud belonging to the track of the target track from the K initial point cloud data, and can further detect the target obstacle whose associated second point cloud satisfies the preset positional relationship with the first point cloud.
  • based on the initial images, the first point cloud of the track belonging to the target track can be obtained more accurately.
  • the track can usually serve as a relatively stable reference; detecting the target obstacle in the target track with the first point cloud as the reference can therefore effectively improve the obstacle detection accuracy.
  • FIG. 1 is a schematic flowchart of an obstacle detection method provided by an embodiment of the present application.
  • FIG. 2 is an example diagram of pixel interval division for initial track pixels in an embodiment of the present application.
  • FIG. 3 is an example diagram of an object occluding the track at a curve of the target track in an embodiment of the present application.
  • FIG. 4 is an example diagram of a first projected point set obtained by projecting a third point cloud onto a target plane in an embodiment of the present application
  • FIG. 5 is a schematic flowchart of an application example of the obstacle detection method provided by the embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a vehicle provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the embodiments of the present application provide an obstacle detection method, apparatus, device, and computer storage medium.
  • the obstacle detection method provided by the embodiment of the present application is first introduced below.
  • FIG. 1 shows a schematic flowchart of an obstacle detection method provided by an embodiment of the present application. As shown in Figure 1, the method includes:
  • Step 101 obtaining K initial point cloud data and L initial images collected from the target track, where K and L are both positive integers;
  • Step 102 based on each initial image, determine a first point cloud from K initial point cloud data, and the first point cloud is a point cloud belonging to a track in the target track;
  • Step 103 taking the first point cloud as the reference point cloud, and detecting the target obstacle in the target track based on the K initial point cloud data, wherein the target obstacle is associated with the second point cloud in the K initial point cloud data, A preset positional relationship is satisfied between the second point cloud and the first point cloud.
  • the obstacle detection method provided in this embodiment can be applied to rail vehicles in the conventional sense, such as trains, subways, light rail vehicles or trams; of course, the method can also be applied to other types of rail vehicles, for example, minecarts or railcars in a factory area.
  • in order to simplify the description, the above vehicles to which the obstacle detection method can be applied may be referred to as rail transit vehicles.
  • these sensors can be installed on the vehicle body, and can at least collect environmental information in the traveling direction of the vehicle. That is to say, the above-mentioned sensor can collect the relevant information of the track where the vehicle is located; the aforementioned target track may refer to the track where the vehicle is currently located.
  • the target track may also be a track that is adjacent to or intersects with the track where the vehicle is currently located, so as to consider the possible lane change of the vehicle.
  • the obstacle detection method provided by the embodiment of the present application will be described below mainly by taking the target track as the track where the vehicle is currently located as an example.
  • the rails are usually steel rails, and their shape characteristics, such as width and surface flatness, are relatively fixed.
  • the quantity of each type of sensor installed on the vehicle can be set according to actual needs. For example, one camera may be installed on the vehicle, or multiple cameras may be installed for redundancy, or for the purpose of clearly capturing images of the track at different distances. Similarly, one or more lidars may be installed.
  • each camera installed on the vehicle can perform a scanning operation on the target track in front of the vehicle.
  • each lidar installed on the vehicle can scan the target track in front of the vehicle to obtain at least one initial point cloud data.
  • the above process of photographing the target track by the camera and the process of scanning the lidar can be regarded as the acquisition process of the target track.
  • in addition to the target track, the initial image may also contain images of other objects, such as utility poles or surrounding vegetation.
  • similarly, in addition to the point cloud of the target track, the initial point cloud data may also contain point clouds of other objects.
  • the vehicle may combine the initial image and the above-mentioned initial point cloud to distinguish the point cloud of the track belonging to the target track from the point cloud of other objects.
  • the vehicle may determine the first point cloud of the track belonging to the target track based on the processing method of fusion of the image and the point cloud.
  • the following mainly takes the process of determining the first point cloud from K initial point cloud data based on an initial image as an example to describe the fusion processing of the image and the point cloud.
  • the track of the target track in the initial image can be identified, and the pixel points of the track in the initial image can be obtained based on pixel segmentation; the coordinate position of each pixel in the image coordinate system can also be obtained.
  • the initial point cloud data collected by a lidar may include multiple point clouds and the coordinates of each point cloud in the radar coordinate system, and the relative position of the lidar and a camera on the vehicle may be fixed (that is, it can be related by the body coordinate system), and the camera coordinate system and the image coordinate system of the camera are generally known. Therefore, the point cloud in the initial point cloud data can be mapped to the image coordinate system through the corresponding relationship between the radar coordinate system-body coordinate system-camera coordinate system-image coordinate system.
  • the above K initial point cloud data can be mapped to the image coordinate system, and the point cloud that falls within the position range of the pixel points of the track can be regarded as the first point cloud belonging to the track to a certain extent.
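The mapping chain just described (radar coordinate system to camera coordinate system to image coordinate system) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the extrinsic matrix and the pinhole intrinsics fx, fy, cx, cy are hypothetical values.

```python
def mat_vec(m, v):
    # multiply a 4x4 row-major matrix by a homogeneous 4-vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project_point(p_radar, extrinsic, fx, fy, cx, cy):
    """Map a 3D point from the radar coordinate system into the image
    coordinate system (radar -> camera via `extrinsic`, then a pinhole
    projection). Returns (u, v) pixel coordinates, or None if the point
    lies behind the camera."""
    xc, yc, zc, _ = mat_vec(extrinsic, [p_radar[0], p_radar[1], p_radar[2], 1.0])
    if zc <= 0:
        return None
    return (fx * xc / zc + cx, fy * yc / zc + cy)
```

A point whose (u, v) falls within the pixel range of the segmented rails can then be treated as a candidate point of the first point cloud.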
  • the point cloud falling within the position range of the pixel points of the track may be further filtered, etc., to further obtain the first point cloud belonging to the track.
  • the position of the track in the image coordinate system can be reflected in the form of a fitting equation.
  • accordingly, the first point cloud belonging to the track can be determined according to the distance relationship between the fitting equation and the K initial point cloud data.
  • the position of the track in the initial image may also be obtained by other methods, such as Radon transformation, etc., which will not be listed here.
  • the position, trend and shape of the track are relatively fixed. Therefore, when the first point cloud belonging to the track is determined, the first point cloud can be used as the reference point cloud to more accurately detect the target obstacle in the target track.
  • the target track mentioned here can refer to the area between two rails in common rail transit scenarios, or to an area within a certain width of the rails, which is not specifically limited here.
  • the target obstacles in the target track can be considered as obstacles located in the driving area of rail transit vehicles, which may affect the driving of vehicles.
  • the target obstacle can be detected by the lidar and reflected in the above K initial point cloud data. From another point of view, the target obstacle will have an associated point cloud in the K initial point cloud data, that is, the above-mentioned second point cloud, and these second point clouds usually satisfy a preset positional relationship with the first point cloud.
  • in combination with the actual scene, when an obstacle is located between two rails and is significantly higher than the rails, it can be considered as an obstacle that may affect the driving of the vehicle.
  • the obstacles are located between the two rails and are higher than the rails, which can usually be reflected in the preset positional relationship between the first point cloud and the second point cloud.
  • the above is just an example of the preset position condition that needs to be set to detect the target obstacle.
  • the preset position condition can be determined as required.
  • the target obstacles may be located within a preset distance range on both sides of the monorail, and so on.
  • the vehicle may determine the second point cloud associated with the target obstacle from the K initial point cloud data according to the determined first point cloud and the preset positional relationship.
  • the obstacle detection method provided by the embodiments of the present application acquires K initial point cloud data and L initial images collected from the target track, determines, based on each initial image, the first point cloud belonging to the track of the target track from the K initial point cloud data, and can further detect the target obstacle whose associated second point cloud satisfies the preset positional relationship with the first point cloud.
  • based on the initial images, the first point cloud of the track belonging to the target track can be obtained more accurately.
  • the track can usually serve as a relatively stable reference; detecting the target obstacle in the target track with the first point cloud as the reference can therefore effectively improve the obstacle detection accuracy.
  • the above K pieces of initial point cloud data may be multiple pieces of initial point cloud data collected by multiple lidars; that is, K here may specifically be an integer greater than 1.
  • the point cloud of the target track obtained by a single lidar may be relatively sparse.
  • acquiring multiple pieces of initial point cloud data can effectively improve the point cloud density of the target track, increase the number of points in the first point cloud belonging to the track, and help improve the detection effect for target obstacles.
  • one lidar can be used as the main radar, and the rest of the lidars can be used as blind-filling radars, and so on.
  • the K initial point cloud data may also be a single piece of initial point cloud data collected by a single lidar; or, they may be one or more pieces of point cloud data obtained by screening multiple candidate point cloud data collected from multiple lidars according to preset quality evaluation indicators (such as point cloud density).
  • the above-mentioned L initial images may also be multiple initial images collected by multiple cameras; that is, L here may specifically be an integer greater than 1.
  • the actual extension length of the target track may be relatively long. If the focal length of the camera is fixed, the image quality of the target track at different distances may differ greatly in the initial image captured by a single camera.
  • At least two cameras may have different focal lengths.
  • for example, based on the initial image a and the K initial point cloud data, a target obstacle at a distance of 1 to 100 m can be detected relatively accurately; and based on the initial image b and the K initial point cloud data, a target obstacle in another distance range can be detected relatively accurately.
  • multiple cameras can also be arranged as a main camera and blind-spot-filling cameras respectively, so as to cover the blind area of a single camera.
  • the L initial images can also be a single initial image acquired by a single camera; or, they can be the initial image with the highest quality determined from multiple candidate images acquired by multiple cameras with the same focal length, which will not be illustrated one by one here.
  • the first point cloud is determined from the K initial point cloud data, including:
  • the K initial point cloud data are mapped to the image coordinate system, and the K initial point cloud data are screened according to the track fitting equation to obtain the first point cloud.
  • any of the L initial images can be combined with the K initial point cloud data to detect the target obstacle; therefore, the following mainly describes the target obstacle detection process based on a certain initial image, and this certain initial image may be defined as the first initial image.
  • the track in the first initial image can be identified by means such as a deep learning algorithm to obtain the pixel points of the track in the initial image, or the fitting equation of the track in the image coordinate system.
  • the track fitting equation may be a straight line equation, or a quadratic, cubic or higher-order curve equation, which is not specifically limited here.
  • the internal and external parameters of each sensor can be pre-calibrated. Based on the calibration of internal and external parameters, various data (such as point cloud data, images, etc.) can be converted between different coordinate systems, and the specific conversion process, such as the mapping process in each coordinate system, can also be realized. Therefore, in some embodiments below, in order to simplify the description, the specific implementation process of coordinate system conversion may be omitted.
  • the initial point cloud data is a three-dimensional point cloud, which is mapped to the image coordinate system to obtain a corresponding two-dimensional point set; the points in this set that fall on the track fitting equation, or whose distance from the track fitting equation is less than a certain distance threshold, can often be considered as the points obtained after the first point cloud belonging to the track is mapped to the image coordinate system. Since there is a mapping relationship between the 3D point cloud and the 2D point set, for the points belonging to the track determined in the image coordinate system, the corresponding points belonging to the track in the 3D point cloud can be found according to the mapping relationship.
  • K initial point cloud data can be screened to obtain the point cloud belonging to the track, that is, the first point cloud above.
  • the track in the first initial image is characterized by the track fitting equation, which makes it possible to screen the K initial point cloud data through the distance between their mapped points in the image coordinate system and the track fitting equation, thereby improving the screening accuracy of the first point cloud belonging to the track.
  • the above-mentioned K initial point cloud data are mapped to the image coordinate system, and the K initial point cloud data are screened according to the track fitting equation to obtain the first point cloud, including:
  • the first point cloud is determined from the K initial point cloud data.
  • by means of the second distance threshold, the point clouds whose projections fall near the track fitting equation in the initial point cloud data can be determined as the first point cloud belonging to the track, which helps to improve the rationality of the obtained first point cloud.
  • the initial point cloud data can be mapped to the image coordinate system according to the conversion relationship between the coordinate system associated with the initial point cloud data and the image coordinate system to obtain the mapped point set.
  • each mapping point in the mapped point set has corresponding coordinates in the image coordinate system. According to these coordinates and the track fitting equation, the distance from each mapping point to the track fitting equation can be determined. When the distance is less than the second distance threshold, it can be considered that the mapping point is obtained by mapping the first point cloud to the image coordinate system.
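The distance test just described can be sketched like this. It is a simplified illustration assuming the track fitting equation is a straight line a*u + b*v + c = 0; the threshold value is hypothetical.

```python
import math

def filter_by_line_distance(mapped_points, a, b, c, threshold):
    """Return the indices of mapped points whose distance to the line
    a*u + b*v + c = 0 is below `threshold` (the second distance threshold).
    Returning indices lets the caller recover the corresponding 3D points
    through the 3D-to-2D mapping relationship."""
    denom = math.hypot(a, b)
    return [i for i, (u, v) in enumerate(mapped_points)
            if abs(a * u + b * v + c) / denom < threshold]
```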
  • fitting a track fitting equation of the track in the first initial image in the image coordinate system including:
  • the initial track pixels are fitted to obtain the track fitting equation.
  • a deep learning model may be used to identify the first initial image.
  • the deep learning model may be obtained by training based on training samples in advance.
  • the training samples can be sample images marked with rails; these sample images can be captured by the cameras installed on rail transit vehicles, and correspondingly, the marked rails can be the rails of the track where the rail transit vehicle is currently located.
  • the deep learning model trained on such sample images can identify, in the initial image, the rails of the track where the vehicle is currently located (that is, the target track), while excluding the rails of other parallel tracks;
  • in this way, detection can focus on the target obstacle in the target track, so as to improve the detection efficiency of the target obstacle.
  • the pixels associated with the track can be obtained from the first initial image.
  • the deep learning model can segment the pixels corresponding to the rails to obtain a binary image; the binary image can include the foreground data corresponding to the rails. By clustering these foreground data or searching for connected areas, several areas can be obtained, and each area here can correspond to one rail respectively.
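One simple way to carry out the connected-area search mentioned above is a breadth-first flood fill over the binary image; this is an illustrative sketch, not the patent's specific clustering method.

```python
from collections import deque

def connected_regions(binary):
    """4-connected region search on a binary image (list of 0/1 rows);
    one way to split rail foreground pixels into per-rail areas."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                q, region = deque([(r, c)]), []
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                regions.append(region)
    return regions
```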
  • the pixel points belonging to the track may be a part of the pixel points in the first initial image, or a part of the pixel points in the above-mentioned binary image, which is not limited here. In general, each rail is associated with corresponding pixels, which have corresponding coordinates in the image coordinate system.
  • the pixels belonging to the track can be fitted to obtain the track fitting equation.
  • the initial track pixels are fitted to obtain a track fitting equation, including:
  • the candidate track pixels located in the preset image height interval are selected from the initial track pixels;
  • the target track pixels belonging to each track are respectively fitted to obtain N track fitting equations corresponding to the N tracks, wherein the target track pixels are the candidate track pixels corresponding to the second mapped pixels.
  • the track fitting equation can be acquired more accurately based on the application of the bird's-eye view.
  • the following mainly takes the target track including the first track and the second track as an example for description.
  • both the first rail and the second rail can be rails of the target track on the road where the vehicle is currently located; in a general dual-rail operating environment, the first rail and the second rail form a pair of rails; and in the special environments of track merging, track bifurcation or three rails mentioned above, the first rail and the second rail can be the two outermost rails.
  • the pixels located in the preset image height range can be selected from the initial track pixels first.
  • the candidate track pixels may be pixels within a certain height range at the bottom of the initial image, so as to ensure that the straight line fitting in the next step can be completed with relatively high quality.
  • FIG. 2 shows an example diagram of pixel interval division for initial track pixels.
  • the candidate track pixels belonging to each rail can be respectively divided into M pixel intervals along the image height direction (corresponding to the above-mentioned preset direction).
  • the corresponding M pixel center points can be respectively determined for the above-mentioned first rail and the second rail; the M pixel center points corresponding to the first rail are fitted with straight lines, and the corresponding first fitted straight line can be obtained, It is denoted as l 1 ; similarly, by performing straight line fitting on the center points of M pixels corresponding to the second track, a corresponding first fitted straight line can be obtained, denoted as l 2 .
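The per-interval center points and the straight-line fit can be sketched as below. Pixel coordinates are taken as (row, column), and the least-squares fit x = k*y + b is one standard choice; the patent does not fix the fitting method.

```python
def fit_center_line(pixels, m):
    """Divide one rail's pixels into m intervals along the image height,
    take the center point of each interval, and least-squares fit a line
    x = k*y + b through the centers (a sketch of the per-interval
    center-point fitting described above)."""
    ys = [y for y, _ in pixels]
    lo, hi = min(ys), max(ys) + 1
    step = (hi - lo) / m
    centers = []
    for i in range(m):
        band = [(y, x) for y, x in pixels if lo + i * step <= y < lo + (i + 1) * step]
        if band:
            cy = sum(y for y, _ in band) / len(band)
            cx = sum(x for _, x in band) / len(band)
            centers.append((cy, cx))
    # closed-form least squares over the center points
    n = len(centers)
    sy = sum(y for y, _ in centers); sx = sum(x for _, x in centers)
    syy = sum(y * y for y, _ in centers); syx = sum(y * x for y, x in centers)
    k = (n * syx - sy * sx) / (n * syy - sy * sy)
    b = (sx - k * sy) / n
    return k, b, centers
```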
  • in the initial image, the target track is usually presented in a perspective view; that is, although the first rail and the second rail are parallel in the actual scene, l 1 and l 2 may be non-parallel, and the two first fitted straight lines gradually approach each other along the image height direction.
  • therefore, l 1 and l 2 can be mapped to a bird's-eye view; after the two first fitted straight lines are mapped to the bird's-eye view, two second fitted straight lines can be obtained, and the two second fitted straight lines can be parallel to each other.
  • two points can be selected from l 1 , denoted as a and d. These two points can be directly determined based on the equation of l 1 , or they can be two of the pixel center points used for fitting, which is not specifically limited here. Similarly, two points can be selected from l 2 , denoted as b and c.
  • the two points a and b can be translated until the line ad is parallel to the line bc.
  • a transformation matrix can be obtained, that is, the above-mentioned perspective transformation matrix.
  • l 1 and l 2 can be further mapped to the bird's-eye view, respectively, and the obtained two second fitting straight lines are respectively denoted as l 3 and l 4 .
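The perspective transformation matrix can be obtained from the four point pairs (a, b, c, d and their counterparts after the translation that makes ad parallel to bc) by solving the standard eight-unknown homography system. The sketch below is a generic direct-linear-transform illustration, not the patent's exact procedure.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for the 8x8 system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def perspective_matrix(src, dst):
    """3x3 transform mapping 4 source points (e.g. a, b, c, d) to 4
    destination points in the bird's-eye view (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, p):
    # apply the homography to a 2D point
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```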
  • the candidate track pixels can also be mapped to the bird's-eye view, and these mapped pixels are defined as candidate mapping pixels.
  • when the distances from a candidate mapping pixel to both l 3 and l 4 are greater than the first distance threshold, it can be determined that the pixel corresponding to this candidate mapping pixel in the first initial image (or the above-mentioned binary image) is noise and can be filtered out.
  • in other words, the candidate mapping pixels whose distances to each of the second fitted straight lines are greater than the first distance threshold, that is, the above-mentioned first mapped pixels, are filtered out, and the remaining second mapped pixels are kept.
  • the second mapped pixels are obtained by mapping some of the candidate track pixels to the bird's-eye view; therefore, each second mapped pixel has a corresponding pixel among the candidate track pixels, that is, the above-mentioned target track pixel.
  • by fitting the target track pixels belonging to each rail, the obtained track fitting equation matches the actual state of the rail (such as its position and trend) with higher accuracy.
  • the number of track fitting equations can match the number of rails; the correspondence between each rail and the pixel points used to fit its track fitting equation can be determined in the process of rail identification and pixel segmentation, or according to the distance between each pixel point and the first fitted straight line or the second fitted straight line.
  • corresponding track fitting equations can be respectively fitted to the first track and the second track.
  • the above mapping of the K initial point cloud data to the image coordinate system includes:
  • the mixed point cloud data is mapped into the image coordinate system according to the second coordinate system transformation relationship.
  • the value of K may be an integer greater than 1.
  • each lidar collects corresponding initial point cloud data in its own radar coordinate system. Mixing these initial point cloud data can effectively increase the number of point clouds that can be used to determine the track and target obstacles, making the features of these objects more significant and improving the detection effect.
  • a preset reference coordinate system may be determined; the reference coordinate system may be the radar coordinate system of one of the lidars, or the vehicle body coordinate system, which is not specifically limited here.
  • the various coordinate systems can be pre-calibrated, and the conversion relationships between different coordinate systems can be pre-determined. Therefore, the first coordinate system conversion relationship between the radar coordinate system of each lidar and the preset reference coordinate system, and the second coordinate system conversion relationship between the preset reference coordinate system and the image coordinate system of any initial image, can be acquired directly.
  • the initial point cloud data collected by each lidar can be converted into a preset reference coordinate system according to the first coordinate system conversion relationship to obtain mixed point cloud data.
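Mixing the initial point cloud data into the preset reference coordinate system amounts to applying each lidar's first coordinate system conversion relationship (represented below as a hypothetical 4x4 rigid transform) and concatenating the results; this is an illustrative sketch only.

```python
def transform_points(points, T):
    """Apply a 4x4 row-major rigid transform (radar -> reference frame)
    to a list of 3D points."""
    out = []
    for x, y, z in points:
        v = [x, y, z, 1.0]
        out.append(tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3)))
    return out

def mix_point_clouds(clouds_with_transforms):
    """Merge each lidar's points into the preset reference coordinate
    system to obtain the mixed point cloud data."""
    mixed = []
    for points, T in clouds_with_transforms:
        mixed.extend(transform_points(points, T))
    return mixed
```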
  • the mixed point cloud data generally includes a three-dimensional point cloud, and the three-dimensional information in the point cloud data is retained, and the mixed point cloud data can be used to detect target obstacles later.
  • after the mixed point cloud data is obtained, it can be further mapped into the image coordinate system according to the above-mentioned second coordinate system conversion relationship.
  • the first point cloud belonging to the track can be screened according to the distance relationship between the mapped point set of the initial point cloud data in the image coordinate system and the track fitting equation.
  • as shown in FIG. 3, the rectangular object (marked as T) on the right side of the target track may be a normal object such as a utility pole on one side of the curved track; in the image coordinate system, part of the point cloud corresponding to the rectangular object may also be mapped onto the line corresponding to the track fitting equation of the target track.
  • as a result, the first point cloud obtained by simply screening with the track fitting equation may include point clouds of objects that do not actually belong to the track.
  • the first point cloud is determined from the K initial point cloud data, including:
  • the first point cloud is determined from the K initial point cloud data.
  • the above-mentioned third point cloud can, to a certain extent, be considered as the point cloud belonging to the track that is preliminarily screened from the initial point cloud data based on the track fitting equation.
  • the third point cloud can be further projected onto the target plane of the vehicle body coordinate system.
  • the initial point cloud data may be in the corresponding radar coordinate system, and in some application scenarios, the initial point cloud data may also be pre-converted to a preset reference coordinate system such as the vehicle body coordinate system.
  • FIG. 4 shows an example diagram of the first projected point set obtained by projecting the third point cloud into the target plane.
  • the target plane can be denoted as the XOZ plane, where the X axis is consistent with the vehicle's traveling direction, and the Z axis is consistent with the vehicle's height direction.
  • the denser point cloud part below (denoted as the first point cloud part R1) can correspond to the actual point cloud of the track; in the first point cloud part R1, there is a gap along the X axis, which corresponds to the section of rail occluded by the object.
  • the sparse point cloud part above (referred to as the second point cloud part R2 ) may be the point cloud associated with the rectangular object T in FIG. 3 .
  • the second projected point set can generally be considered as the projection onto the target plane of the point cloud actually associated with the track; according to the second projected point set, the first point cloud actually belonging to the track can be determined more accurately from the initial point cloud data.
  • as for the method of filtering out outliers, the least squares method, the RANSAC algorithm, or statistical filtering can be used, which is not specifically limited here.
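Since the text leaves the outlier filter open, one possible choice is RANSAC. The sketch below fits a straight line z = a·x + b robustly and returns the inlier subset; the iteration count, the inlier distance, and the fixed seed are illustrative assumptions.

```python
import random

# Minimal RANSAC line fit for the outlier-filtering step.
# n_iters, inlier_dist, and seed are illustrative assumptions.

def ransac_line(points, n_iters=200, inlier_dist=0.5, seed=0):
    """Fit z = a*x + b robustly; return the inlier subset of (x, z) points."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, z1), (x2, z2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair: cannot define z = a*x + b
        a = (z2 - z1) / (x2 - x1)
        b = z1 - a * x1
        inliers = [p for p in points
                   if abs(p[1] - (a * p[0] + b)) < inlier_dist]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Statistical filtering or an ordinary least-squares fit with a residual cut would serve the same purpose; the patent does not prescribe which one.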
  • this embodiment can effectively solve the problem of false detection of obstacles at the track curve, and improve the accuracy of the determined first point cloud.
  • with the first point cloud as the reference point cloud, detecting the target obstacle in the target track based on the K initial point cloud data includes:
  • in the vehicle body coordinate system, the second point cloud point associated with each first point cloud point is determined; the first point cloud points are the point cloud points in the K initial point cloud data other than the first point cloud, the second point cloud point is the point cloud point in the first point cloud that is closest to the first point cloud point on the X axis, and the X axis is parallel to the driving direction of the vehicle;
  • the target obstacle is determined from the at least one candidate obstacle according to the height difference between each point cloud point in the fourth point cloud and its associated second point cloud point.
  • on the basis of determining the first point cloud, the target obstacle may be detected with the first point cloud as the reference.
  • the initial point cloud data can contain multiple point clouds and, more specifically, can be described down to each individual point; that is, the initial point cloud data can include multiple point cloud points. In the initial point cloud data, the set of point cloud points other than the first point cloud may be recorded as P_e, and any point cloud point in it may be recorded as p_i (corresponding to the first point cloud point).
  • the number of rails may be two, each rail corresponds to a first point cloud, and the sets of all point cloud points in the first point cloud corresponding to the two rails are respectively denoted as P l and P r .
  • the vehicle body coordinate system established in the previous embodiment can be cited here, that is, the X axis is consistent with the vehicle running direction, the Z axis is consistent with the vehicle height direction, and the Y axis is consistent with the vehicle width direction.
  • Each point cloud point in the initial point cloud data may have corresponding coordinates in the vehicle body coordinate system.
  • the point cloud point closest in X coordinate to p_i can be searched from P_l and P_r respectively, and denoted as p_l and p_r (both corresponding to the second point cloud point). The Y coordinate of p_i is then compared with those of p_l and p_r: if, on the Y axis, p_i is located between p_l and p_r, or the distance between p_i and p_l is less than a threshold, or the distance between p_i and p_r is less than a threshold, p_i can be determined as a candidate point cloud point.
  • the condition for judging whether p_i is a candidate point cloud point can be set according to actual needs, and is embodied by the above-mentioned preset distance condition.
  • the purpose of screening candidate point cloud points based on the preset distance condition is to select the point cloud points located between the two rails (or, according to actual needs, point cloud points within a certain range outside the rails can further be retained).
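The preset distance condition just described can be sketched as a small predicate. The lateral margin outside the rails is an assumption (the text only says "a certain range"); the rail Y coordinates in the usage are illustrative.

```python
# Candidate screening sketch: p_i is kept when its Y coordinate lies
# between the nearest left/right rail points, or when it is within an
# (assumed) lateral margin of either rail.

def is_candidate(p_y, pl_y, pr_y, margin=0.0):
    """p_y: Y of p_i; pl_y, pr_y: Y of the associated rail points."""
    lo, hi = min(pl_y, pr_y), max(pl_y, pr_y)
    if lo <= p_y <= hi:
        return True
    return abs(p_y - pl_y) < margin or abs(p_y - pr_y) < margin
```

With a standard-gauge-like separation of ±0.7 m, a point at Y = 0.9 m would be rejected by default but accepted with a 0.3 m margin.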
  • a clustering process may be performed to obtain a fourth point cloud associated with at least one candidate obstacle.
  • the specific clustering algorithm can be selected according to actual needs, which is not specifically limited here.
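Since the clustering algorithm is left open, one simple option consistent with point cloud practice is single-linkage Euclidean clustering; the linking distance below is an assumption.

```python
# Simple Euclidean clustering sketch (the patent leaves the choice of
# clustering algorithm open); the 0.5 m linking distance is an assumption.

def euclidean_cluster(points, link_dist=0.5):
    """Group 2D/3D point tuples into clusters by single-linkage flood fill.

    Returns a list of clusters, each a sorted list of point indices.
    """
    unvisited = set(range(len(points)))
    clusters = []

    def close(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) <= link_dist ** 2

    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            cur = frontier.pop()
            near = [j for j in unvisited if close(points[cur], points[j])]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(sorted(cluster))
    return clusters
```

Each resulting cluster would play the role of one candidate obstacle's fourth point cloud.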
  • the relationship between the candidate obstacle and the fourth point cloud can be understood as each candidate obstacle has a fourth point cloud belonging to it.
  • a reference height can be determined from the Z-axis coordinates of the corresponding p_l and p_r.
  • for example, the average of the Z coordinates of p_l and p_r can be taken as the reference height z_c.
  • the Z-axis coordinate z_i of each point cloud point can then be compared with the corresponding reference height z_c.
  • if z_i is greater than z_c, the point cloud point may be a point cloud point corresponding to a target obstacle.
  • the number of point cloud points satisfying z_i > z_c can be further counted; when this number exceeds a threshold, the candidate obstacle is determined as the target obstacle.
  • the target obstacle may be determined from at least one candidate obstacle according to the height difference between each point cloud point in the fourth point cloud and its associated second point cloud point.
  • the above is only an example of the process of detecting target obstacles in the dual-track application scenario.
  • the second point cloud point associated with p_i can likewise be determined along the X axis, and the target obstacle detected further according to the coordinate relationships on the Y axis and the Z axis, which will not be repeated here.
  • the first point cloud associated with the track can be used as a reference: candidate point cloud points are determined from the initial point cloud data and clustered into candidate obstacles, after which each point belonging to a candidate obstacle is further examined.
  • in other words, the target obstacle is determined with the track as the reference, which can effectively improve the detection accuracy of obstacles.
  • the number of target point cloud points is greater than a number threshold, and the maximum height difference between each target point cloud point and its associated second point cloud point is greater than a second difference threshold, where the target point cloud points are the point cloud points whose height difference from the associated second point cloud point is greater than a first difference threshold.
  • This example defines the conditions that the fourth point cloud of the determined target obstacle in the initial point cloud data needs to meet.
  • the condition that the fourth point cloud of the target obstacle needs to meet can effectively avoid the false detection of the target obstacle caused by noise.
  • the height of the target obstacle should be limited, so as to avoid determining low obstacles that will not affect the normal driving of the vehicle as target obstacles. It can be seen that this example can effectively improve the rationality of determining the target obstacle.
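The two conditions above (enough elevated points, and a sufficient maximum height) can be sketched as a single check per candidate cluster. All three threshold values are illustrative assumptions, not values from the patent.

```python
# Validity check for one candidate obstacle's "fourth point cloud".
# first_diff, second_diff, and count_threshold are assumptions.

def is_target_obstacle(height_diffs, first_diff=0.05, second_diff=0.20,
                       count_threshold=5):
    """height_diffs: z_i - z_c for each point of one candidate cluster.

    Returns True when the cluster has more than count_threshold points
    above first_diff AND its highest such point exceeds second_diff.
    """
    target_diffs = [d for d in height_diffs if d > first_diff]
    return (len(target_diffs) > count_threshold
            and max(target_diffs, default=0.0) > second_diff)
```

The count condition suppresses isolated noise returns, while the max-height condition discards clusters that are elevated but too low to matter.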
  • the method further includes:
  • on the X axis of the vehicle body coordinate system, the closest distance between the vehicle and the point cloud points in the fourth point cloud associated with the target obstacle is determined as the distance between the vehicle and the target obstacle;
  • when this target distance is less than a third distance threshold, an alarm signal is output, so that the detected target obstacle can be alarmed in time and driving safety is improved.
  • for example, if the origin of the body coordinate system is at the frontmost side of the vehicle and the positive half of the X axis points toward the front of the vehicle, the minimum X coordinate of the point cloud points in the fourth point cloud can be used as the distance between the vehicle and the target obstacle.
  • the distance between the vehicle and each target obstacle can be determined according to the above method.
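The distance and alarm logic just described reduces to a minimum over the cluster's X coordinates. The 30 m alarm threshold is an illustrative assumption; the patent only names "a third distance threshold".

```python
# Minimal sketch: with the body-frame origin at the front of the vehicle
# and +X pointing forward, the distance to an obstacle is the smallest X
# over its cluster; an alarm fires below a threshold (assumed 30 m).

def obstacle_distance(cluster_xs):
    """cluster_xs: X coordinates of one target obstacle's points."""
    return min(cluster_xs)

def should_alarm(cluster_xs, third_dist_threshold=30.0):
    return obstacle_distance(cluster_xs) < third_dist_threshold
```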
  • the obstacle detection method includes:
  • Step 501: calibration of the intrinsic and extrinsic parameters of each sensor
  • L cameras (L ⁇ 1) and K lidars (K ⁇ 1) can be used.
  • cameras with different focal lengths and lidars with different field of view angles can be used to effectively improve the range of obstacle detection.
  • a lidar can be selected as the reference, its coordinate system denoted O_b; the conversion relationship between each camera and the reference lidar coordinate system can be obtained through a lidar-camera calibration algorithm, the conversion relationship between the other lidars and the reference lidar coordinate system through a lidar-lidar calibration algorithm, and the conversion relationship between the reference lidar coordinate system O_b and the vehicle body coordinate system O_c by measurement.
  • the position and definition method of the vehicle body coordinate system O c can be determined according to actual needs.
  • for the conversion relationship between the reference lidar coordinate system O_b and the vehicle body coordinate system O_c, a stricter and more accurate determination method can also be used, such as obtaining the pitch, roll, and yaw angles between O_b and O_c through a device such as a laser level.
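The calibrated transforms chain together: each lidar's points go through its lidar-to-O_b transform and then the O_b-to-O_c transform. The sketch below uses 4x4 homogeneous matrices written out in pure Python; the example translations (1 m along X, 0.5 m along Z) are invented for illustration.

```python
# Composing calibrated rigid transforms: body <- reference lidar <- lidar.
# The example matrices are illustrative, not calibration results.

def matmul4(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point tuple."""
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

# Example: lidar->O_b is a 1 m shift along X, O_b->O_c a 0.5 m shift along Z.
T_lb = [[1, 0, 0, 1.0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
T_bc = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0.5], [0, 0, 0, 1]]
T_lc = matmul4(T_bc, T_lb)  # composed transform: body <- lidar
```

In practice the rotation blocks would come from the calibration algorithms named above rather than being identity.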
  • Step 502: two-dimensional rail detection
  • one or more cameras may be used to capture the scene of the train's traveling direction, and for each image captured by the camera, a computer vision algorithm may be used to realize track and rail detection in a two-dimensional space.
  • specifically, a deep learning algorithm can be used to first perform pixel segmentation of the rails on which the train runs (corresponding to the rails of the above-mentioned target track) to obtain a binary image B; the foreground pixels of the binary image B (that is, the pixels belonging to the rails) are then filtered, and N curve equations describing the rails are obtained by fitting.
  • the implementation process of this step may mainly include the following steps:
  • the N straight lines obtained by the fitting in the above step 2) are parallel to each other. These parallel lines are extended on the bird's-eye-view rail segmentation result B', and the foreground pixels obtained by the segmentation are filtered according to their distance to the lines: if the closest distance between a point and all the lines is greater than a predetermined threshold, the point is regarded as noise and removed from the foreground data.
  • the noise can be effectively filtered through the above processing in the bird's-eye view.
  • a corresponding two-dimensional rail curve equation (the corresponding track fitting equation) is then obtained by fitting for each rail. The leftmost and rightmost rails are also marked for use in the following steps.
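The straight-line fit referenced as "step 2)" above condenses each rail's segmented pixels into per-interval centers before fitting. A sketch of that interval-center computation, with invented pixel coordinates and interval count:

```python
# Pixel-interval step used before line fitting: the rail pixels of one
# rail are split into m vertical intervals and the mean pixel position of
# each interval is taken as that interval's center.

def interval_centers(rail_pixels, v_min, v_max, m):
    """rail_pixels: list of (u, v) pixels; returns up to m (u, v) centers."""
    step = (v_max - v_min) / m
    centers = []
    for k in range(m):
        lo, hi = v_min + k * step, v_min + (k + 1) * step
        bucket = [(u, v) for u, v in rail_pixels if lo <= v < hi]
        if bucket:  # skip empty intervals (occlusions, image borders)
            cu = sum(u for u, _ in bucket) / len(bucket)
            cv = sum(v for _, v in bucket) / len(bucket)
            centers.append((cu, cv))
    return centers
```

Fitting a line through these M centers per rail yields the N first fitted straight lines used for the bird's-eye-view noise filtering.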
  • the point cloud of the railway track obtained on a fast-moving train is relatively sparse, and it is difficult to do direct and effective analysis.
  • the purpose of this step is to find the 3D lidar point cloud on the rail through the 2D rail equation.
  • the point cloud obtained by each lidar is converted into the O_b coordinate system according to the conversion relationships determined in step 501, and accumulated into the point cloud P_3d (corresponding to the mixed point cloud data); P_3d is then mapped into the camera image coordinate system to obtain P_2d (corresponding to the set of mapping points).
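The mapping from P_3d to P_2d can be sketched with a pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) below are invented for illustration, and the points are assumed to already be in the camera frame after the calibrated transforms of step 501.

```python
# Sketch of projecting the merged cloud P_3d into the image to get P_2d,
# using an assumed pinhole model; fx, fy, cx, cy are illustrative.

def project(points_cam, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """points_cam: (x, y, z) in the camera frame; returns (u, v) pixels."""
    p2d = []
    for x, y, z in points_cam:
        if z <= 0:  # behind the camera: not visible
            continue
        p2d.append((fx * x / z + cx, fy * y / z + cy))
    return p2d
```

A real implementation would also apply the camera's distortion model obtained during intrinsic calibration.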
  • the point cloud on the rail will be projected onto the two-dimensional rails detected in step 502. The distance between each point in P_2d and the leftmost and rightmost two-dimensional rail curve equations is calculated in turn; if the distance is less than a set threshold, the corresponding three-dimensional point cloud point is considered as possibly belonging to the rail corresponding to that curve, and these points are recorded as P'_3d (corresponding to the third point cloud).
  • however, part of the laser point cloud of the rectangular target T shown in Figure 3 is also projected onto the two-dimensional rails, that is, its corresponding three-dimensional point cloud is also included in P'_3d.
  • therefore, the Z coordinate of P'_3d is analyzed along the X direction of the body coordinate system.
  • the Z coordinate values belonging to the rail should change continuously, while the Z values of the point cloud of the part occluded by the rectangular target in Figure 3 will exhibit a jump.
  • the least squares method can then be used to filter out the outliers, and the filtered point cloud is recorded as the first point cloud.
  • Figure 4 plots the point clouds on the rails with the X coordinate as the abscissa and the height Z as the ordinate.
  • the points in R2 in the figure are the point cloud of the obstacle next to the track, and the points in R1 are the point cloud of the rail.
  • the straight line equation can be used as a model, and the least squares method can be used to fit the data.
  • the points in R2 can be filtered out by calculating the distance between each point and the fitted straight line.
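The straight-line least-squares fit over (X, Z) and the residual test that removes the R2 points can be sketched as follows; the residual threshold is an assumption.

```python
# Line model z = a*x + b fitted over (X, Z) of P'_3d; points whose
# residual exceeds max_residual (assumed 0.3 m) are removed as outliers.

def fit_line(pts):
    """Ordinary least squares for z = a*x + b over (x, z) tuples."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sz = sum(z for _, z in pts)
    sxx = sum(x * x for x, _ in pts)
    sxz = sum(x * z for x, z in pts)
    a = (n * sxz - sx * sz) / (n * sxx - sx * sx)
    b = (sz - a * sx) / n
    return a, b

def filter_rail_points(pts, max_residual=0.3):
    """Keep points close to the fitted line (the R1 rail points)."""
    a, b = fit_line(pts)
    return [(x, z) for x, z in pts if abs(z - (a * x + b)) <= max_residual]
```

On a curve a low-order polynomial could replace the straight line; a single OLS pass is also sensitive to heavy contamination, which is why the text also mentions RANSAC as an alternative.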
  • Step 504: obstacle detection and filtering
  • P_3d is then filtered to obtain the point cloud within the track (corresponding to the set of candidate point cloud points).
  • the filtering method is: for each point p_i in P_3d, according to its X coordinate, find the points p_l and p_r in the left and right rail surface point clouds whose X coordinates are nearest to it. The Y coordinate of p_i is then compared with those of p_l and p_r; if p_i lies between p_l and p_r, p_i is considered to belong to the in-track point set. At the same time, the average Z coordinate value of p_l and p_r is recorded as the reference track height of p_i.
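The per-point lookup of p_l and p_r can be made efficient by pre-sorting each rail's points by X and using binary search. This is a sketch under that assumption; the rail coordinates in the test are invented.

```python
import bisect

# Per-point lookup for step 504: find the left/right rail points nearest
# in X (rails pre-sorted by X), then average their Z values as the
# reference track height z_c of p_i.

def nearest_by_x(rail_sorted, x):
    """rail_sorted: list of (x, y, z) sorted by x; nearest point in X."""
    xs = [p[0] for p in rail_sorted]
    i = bisect.bisect_left(xs, x)
    cands = rail_sorted[max(0, i - 1):i + 1]
    return min(cands, key=lambda p: abs(p[0] - x))

def reference_height(p_i, left_rail, right_rail):
    """Reference track height z_c for point p_i = (x, y, z)."""
    pl = nearest_by_x(left_rail, p_i[0])
    pr = nearest_by_x(right_rail, p_i[0])
    return (pl[2] + pr[2]) / 2.0
```

Extracting the xs list once per rail (instead of per query) would be the obvious optimization in a real implementation.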
  • in summary, the obstacle detection method provided by the embodiments of the present application can obtain real-time obstacle information on the track by means of multi-sensor fusion, with high reliability; the two-dimensional track curve fitting method based on deep-learning rail segmentation can effectively filter out noise points, and the rail fitting is stable; and the method based on outlier filtering can effectively solve the problem of false obstacle detection on curves.
  • the sensors used can be relatively simple, and the installation and maintenance are convenient.
  • an embodiment of the present application also provides a vehicle, including:
  • An acquisition module 601 configured to acquire K initial point cloud data and L initial images collected from the target track, where K and L are both positive integers;
  • the first determination module 602 is configured to determine, based on each initial image, a first point cloud from K initial point cloud data, where the first point cloud is a point cloud belonging to a track in the target track;
  • the detection module 603 is configured to take the first point cloud as the reference point cloud and detect the target obstacle in the target track based on the K initial point cloud data, where the target obstacle is associated with a second point cloud in the K initial point cloud data, and a preset positional relationship is satisfied between the second point cloud and the first point cloud.
  • the above-mentioned first determining module 602 may include:
  • the fitting submodule is used to fit the track fitting equation of the track in the image coordinate system of the first initial image, where the first initial image is any one of the L initial images;
  • the screening sub-module is used to map the K initial point cloud data to the image coordinate system, and filter the K initial point cloud data according to the track fitting equation to obtain the first point cloud.
  • the fitting submodule can include:
  • a segmentation acquisition unit configured to perform pixel segmentation on the first initial image based on the deep learning model obtained by pre-training, to obtain initial track pixels belonging to the track in the first initial image
  • the fitting unit is used for fitting the initial track pixel points in the image coordinate system to obtain the track fitting equation.
  • the fitting unit may include:
  • a screening subunit used for screening candidate track pixels in the preset image height interval from the initial track pixels
  • Dividing subunits which are used to divide the candidate track pixels belonging to each track into M pixel intervals, where M is an integer greater than 1;
  • the first fitting subunit is used to obtain the pixel center point of each pixel interval, fit M pixel center points corresponding to each track respectively, and obtain N first fitting straight lines corresponding to the N tracks;
  • the first determination subunit is used to determine a perspective transformation matrix according to the N first fitted straight lines, and to map the N first fitted straight lines and the candidate track pixels into the bird's-eye view according to the perspective transformation matrix, obtaining N second fitted straight lines and the candidate mapping pixels respectively;
  • the first filtering subunit is used to filter out the first mapping pixels from the candidate mapping pixels to obtain the second mapping pixels, where a first mapping pixel is a candidate mapping pixel whose distance from every second fitted straight line is greater than the first distance threshold;
  • the second fitting subunit is used to fit the target track pixels belonging to each track respectively, to obtain N track fitting equations corresponding to the N tracks; the target track pixels are the candidate track pixels corresponding to the second mapping pixels.
  • when K is an integer greater than 1, the K initial point cloud data are collected by K lidars;
  • the above-mentioned screening sub-module may include:
  • the acquisition unit is used to acquire the first coordinate system conversion relationship between the radar coordinate system of each lidar and the preset reference coordinate system, and the second coordinate system between the preset reference coordinate system and the image coordinate system of any initial image. Coordinate system conversion relationship;
  • a first mapping unit configured to map the initial point cloud data collected by each lidar to a preset reference coordinate system according to the corresponding first coordinate system conversion relationship, to obtain mixed point cloud data
  • the second mapping unit is used for mapping the mixed point cloud data into the image coordinate system according to the second coordinate system conversion relationship.
  • the above-mentioned screening submodule may include:
  • the third mapping unit is used to map the K initial point cloud data to the image coordinate system to obtain a mapping point set
  • a first determining unit configured to determine, from the set of mapping points, a target mapping point whose distance from the rail fitting equation is less than a second distance threshold
  • the second determining unit is configured to determine the first point cloud from the K initial point cloud data according to the target mapping point.
  • the second determining unit may include:
  • the second determination subunit is used to determine the third point cloud corresponding to the target mapping point from the K initial point cloud data
  • an acquisition sub-unit for projecting the third point cloud into the target plane of the vehicle body coordinate system to obtain a first projected point set, wherein the target plane is a plane determined according to the vehicle's driving direction and the vehicle's height direction;
  • the second filtering subunit is used to filter out the outliers in the first projection point set to obtain the second projection point set;
  • the third determination subunit is used to determine the first point cloud from the K initial point cloud data according to the second projection point set.
  • the detection module 603 may include:
  • the first determination submodule is used to determine, in the vehicle body coordinate system, the second point cloud point associated with each first point cloud point; the first point cloud points are the point cloud points in the K initial point cloud data other than the first point cloud, the second point cloud point is the point cloud point in the first point cloud that is closest to the first point cloud point on the X axis, and the X axis is parallel to the vehicle driving direction;
  • the second determination submodule is used to determine the first point cloud points whose distance on the Y axis from the associated second point cloud point satisfies the preset distance condition as candidate point cloud points, the Y axis being parallel to the vehicle width direction;
  • the clustering submodule is used to cluster the candidate point cloud points to obtain a fourth point cloud associated with at least one candidate obstacle;
  • the third determination submodule is configured to determine a target obstacle from at least one candidate obstacle according to the height difference between each point cloud point in the fourth point cloud and its associated second point cloud point.
  • the number of target point cloud points is greater than the number threshold, and the maximum height difference between each target point cloud point and its associated second point cloud point is greater than the second difference threshold, where the target point cloud points are the point cloud points whose height difference from the associated second point cloud point is greater than the first difference threshold.
  • the above-mentioned obstacle detection device may further include:
  • the second determination module is used to determine, on the X axis of the vehicle body coordinate system, the closest distance between the vehicle and the point cloud points in the fourth point cloud associated with the target obstacle as the target distance between the vehicle and the target obstacle;
  • the output module is used for outputting an alarm signal when the target distance is less than the third distance threshold.
  • L initial images are acquired by L cameras
  • At least two cameras have different focal lengths.
  • the vehicle is a vehicle corresponding to the above-mentioned obstacle detection method, and all implementations in the above-mentioned method embodiments are applicable to the embodiments of the vehicle, and the same technical effect can also be achieved.
  • FIG. 7 shows a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • the electronic device may include a processor 701 and a memory 702 storing computer program instructions.
  • the processor 701 may include a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
  • Memory 702 may include mass storage for data or instructions.
  • the memory 702 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of the above.
  • Memory 702 may include removable or non-removable (or fixed) media, where appropriate.
  • Storage 702 may be internal or external to the integrated gateway disaster recovery device, where appropriate.
  • memory 702 is non-volatile solid state memory.
  • Memory may include read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical or other physical/tangible memory storage devices.
  • in one example, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software including computer-executable instructions, and when the software is executed (e.g., by one or more processors), it is operable to perform the operations described with reference to the methods according to the present disclosure.
  • the processor 701 reads and executes the computer program instructions stored in the memory 702 to implement any one of the obstacle detection methods in the above embodiments.
  • the electronic device may also include a communication interface 703 and a bus 704 .
  • the processor 701 , the memory 702 , and the communication interface 703 are connected through the bus 704 and complete the communication with each other.
  • the communication interface 703 is mainly used to implement communication between modules, apparatuses, units and/or devices in the embodiments of the present application.
  • the bus 704 includes hardware, software, or both, coupling the components of the electronic device to each other.
  • the bus may include Accelerated Graphics Port (AGP) or other graphics bus, Enhanced Industry Standard Architecture (EISA) bus, Front Side Bus (FSB), HyperTransport (HT) Interconnect, Industry Standard Architecture (ISA) Bus, Infiniband Interconnect, Low Pin Count (LPC) Bus, Memory Bus, Microchannel Architecture (MCA) Bus, Peripheral Component Interconnect (PCI) Bus, PCI-Express (PCI-X) Bus, Serial Advanced Technology Attachment (SATA) bus, Video Electronics Standards Association Local (VLB) bus or other suitable bus or a combination of two or more of the above.
  • Bus 704 may include one or more buses, where appropriate. Although embodiments of this application describe and illustrate a particular bus, this application contemplates any suitable bus or interconnect.
  • the electronic device may be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer or an in-vehicle electronic device, etc.
  • the non-mobile electronic device may be a server or the like.
  • the embodiment of the present application may provide a computer storage medium for implementation.
  • Computer program instructions are stored on the computer storage medium; when the computer program instructions are executed by the processor, any one of the obstacle detection methods in the foregoing embodiments is implemented.
  • Examples of computer storage media include physical/tangible storage media such as electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, and the like.
  • Embodiments of the present application further provide a computer program product, which can be executed by a processor to implement the various processes of the above-mentioned obstacle detection method embodiments with the same technical effect; to avoid repetition, details are not repeated here.
  • An embodiment of the present application further provides a chip, including a processor and a communication interface coupled to the processor, where the processor is used to run a program or instructions to implement the various processes of the above-mentioned obstacle detection method embodiments with the same technical effect; to avoid repetition, details are not repeated here.
  • the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip or a system-on-a-chip.
  • the functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof.
  • when implemented in hardware, it may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like.
  • elements of the present application are programs or code segments used to perform the required tasks.
  • the program or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave.
  • a "machine-readable medium” may include any medium that can store or transmit information.
  • machine-readable media examples include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio frequency (RF) links, and the like.
  • the code segments may be downloaded via a computer network such as the Internet, an intranet, or the like.
  • the processors may be, but are not limited to, general-purpose processors, special-purpose processors, application-specific processors, or field-programmable logic circuits. It will also be understood that each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can also be implemented by special-purpose hardware that performs the specified functions or actions, or by a combination of special-purpose hardware and computer instructions.

Abstract

The present application discloses an obstacle detection method, a vehicle, a device and a computer storage medium. The obstacle detection method is applied to a vehicle and includes: acquiring K initial point cloud data and L initial images collected from a target track; determining, based on each initial image respectively, a first point cloud from the K initial point cloud data, the first point cloud being the point cloud belonging to the rails in the target track; and detecting, with the first point cloud as a reference point cloud, a target obstacle in the target track according to the K initial point cloud data.

Description

Obstacle detection method, vehicle, device and computer storage medium

Cross-reference to related applications

This application claims priority to Chinese patent application No. 202110306554.4, entitled "Obstacle detection method, vehicle, device and computer storage medium" and filed on March 23, 2021, the entire contents of which are incorporated herein by reference.

Technical field

The present application belongs to the technical field of information processing, and in particular relates to an obstacle detection method, a vehicle, a device and a computer storage medium.

Background

As is well known, rail transit usually features large carrying capacity and relatively high running speed, and is an important part of transportation. To ensure the safe running of trains in rail transit, obstacles that may exist on the track often need to be detected.

When using radar sensors for obstacle detection, the related art usually first fits the ground on which the track lies based on point cloud data, and then judges obstacles according to the height of the point cloud within the track range relative to the fitted ground. However, because the environment inside the track is relatively complex, for example sleepers and gravel may be present, the error of the fitted ground is often large, which in turn leads to low obstacle detection accuracy.
Summary

The embodiments of the present application provide an obstacle detection method, a vehicle, a device and a computer storage medium, to solve the problem in the related art that obstacle detection based on fitting the ground on which the track lies results in low detection accuracy.

In a first aspect, an embodiment of the present application provides an obstacle detection method applied to a vehicle, the method including:

acquiring K initial point cloud data and L initial images collected from a target track, where K and L are both positive integers;

determining, based on each initial image respectively, a first point cloud from the K initial point cloud data, the first point cloud being the point cloud belonging to the rails in the target track;

detecting, with the first point cloud as a reference point cloud, a target obstacle in the target track according to the K initial point cloud data, where the target obstacle is associated with a second point cloud in the K initial point cloud data, and a preset positional relationship is satisfied between the second point cloud and the first point cloud.

In a second aspect, an embodiment of the present application provides a vehicle, including:

an acquisition module, configured to acquire K initial point cloud data and L initial images collected from a target track, where K and L are both positive integers;

a first determination module, configured to determine, based on each initial image respectively, a first point cloud from the K initial point cloud data, the first point cloud being the point cloud belonging to the rails in the target track;

a detection module, configured to detect, with the first point cloud as a reference point cloud, a target obstacle in the target track based on the K initial point cloud data, where the target obstacle is associated with a second point cloud in the K initial point cloud data, and a preset positional relationship is satisfied between the second point cloud and the first point cloud.

In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory storing computer program instructions;

the processor implements the above obstacle detection method when executing the computer program instructions.

In a fourth aspect, an embodiment of the present application provides a computer storage medium having computer program instructions stored thereon, and the computer program instructions, when executed by a processor, implement the above obstacle detection method.

The obstacle detection method provided by the embodiments of the present application acquires K initial point cloud data and L initial images collected from a target track, and determines, based on each initial image respectively, the first point cloud belonging to the rails of the target track from the K initial point cloud data, so that a target obstacle whose associated second point cloud satisfies a preset positional relationship with the first point cloud can be detected. In the embodiments of the present application, by combining the initial images with the initial point cloud data, the first point cloud belonging to the rails of the target track can be obtained relatively accurately; at the same time, the rails can usually serve as a relatively stable reference, and detecting the target obstacle in the target track based on the first point cloud belonging to the rails can effectively improve obstacle detection accuracy.
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对本申请实施例中所需要使用的附图作简单的介绍,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的障碍物检测方法的流程示意图;
图2是本申请实施例中针对初始路轨像素点进行像素区间划分的示例图;
图3是本申请实施例中目标轨道在弯道处存在物体遮挡路轨的示例图；
图4是本申请实施例中将第三点云投影至目标平面中得到的第一投影点集的示例图;
图5是本申请实施例提供的障碍物检测方法在一个应用例中的流程示意图;
图6是本申请实施例提供的车辆的结构示意图;
图7是本申请实施例提供的电子设备的结构示意图。
具体实施方式
下面将详细描述本申请的各个方面的特征和示例性实施例,为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及具体实施例,对本申请进行进一步详细描述。应理解,此处所描述的具体实施例仅意在解释本申请,而不是限定本申请。对于本领域技术人员来说,本申请可以在不需要这些具体细节中的一些细节的情况下实施。下面对实施例的描述仅仅是为了通过示出本申请的示例来提供对本申请更好的理解。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括……” 限定的要素,并不排除在包括要素的过程、方法、物品或者设备中还存在另外的相同要素。
为了解决相关技术中存在的问题,本申请实施例提供了一种障碍物检测方法、装置、设备及计算机存储介质。下面首先对本申请实施例所提供的障碍物检测方法进行介绍。
图1示出了本申请一个实施例提供的障碍物检测方法的流程示意图。如图1所示,该方法包括:
步骤101,获取对目标轨道采集得到的K个初始点云数据与L张初始图像,K与L均为正整数;
步骤102,分别基于每一初始图像,从K个初始点云数据中确定出第一点云,第一点云为归属于目标轨道中的路轨的点云;
步骤103,以第一点云为基准点云,基于K个初始点云数据,检测目标轨道中的目标障碍物,其中,目标障碍物在K个初始点云数据中关联有第二点云,第二点云与第一点云之间满足预设位置关系。
本实施例提供的障碍物检测方法可以应用在常规意义上的轨道交通车辆中，例如，火车、地铁、轻轨或者有轨电车等；当然，该方法也可以应用到其他类型的有轨车辆中，例如矿车，或者厂区的有轨运输车等。为了简化说明，以上可应用障碍物检测方法的车辆，均可以称为轨道交通车辆。
容易理解的是,上述车辆的运行环境中,通常会存在轨道;通过传感器的设置,可以采集到包括轨道相关的信息。比如,通过设置相机,可以采集到包括有轨道图像的初始图像;再比如,通过设置激光雷达,可以采集到包括轨道点云的初始点云数据。
一般情况下,这些传感器可以是安装在车辆本体上的,至少可以采集车辆行进方向上的环境信息。也就是说,上述的传感器,可以采集到车辆所处轨道的相关信息;上述的目标轨道可以是指车辆当前所处轨道。
当然，在实际应用中，目标轨道也可以是与车辆当前所处轨道临近或者交叉的轨道等，以便考虑车辆可能存在的变道的情况。为了简化描述，以下主要以目标轨道为车辆当前所处轨道为例，对本申请实施例提供的障碍物检测方法进行说明。
一般来说,在目标轨道会存在路轨、枕木以及碎石等物体,其中,路轨通常为钢轨(或者说铁轨)等,其形态,例如宽度、表面平整度等会比较固定。
安装在车辆上的各类传感器的数量组成,可以是根据实际需要进行设定的。例如,车辆上可以安装有一个相机,而出于冗余布置,或者是出于清楚采集不同距离段的轨道图像的考虑,也可以在车辆上安装多个相机。类似地,对于激光雷达来说,具体的安装数量也可以是一个或者多个。
至于步骤101中采集K个初始点云数据与L张初始图像的过程,可以结合一个应用场景的举例进行说明:车辆在行驶过程中,安装在车辆上的各相机可以对车辆前方的目标轨道进行拍摄,得到至少一张初始图像;安装在车辆上的各激光雷达可以对车辆前方的目标轨道进行扫描,得到至少一个初始点云数据。以上相机对目标轨道进行拍摄的过程,以及激光雷达进行扫描的过程,均可以认为是对目标轨道的采集过程。
一般情况下,在初始图像中,除了包括目标轨道的图像外,还可能具有其他物体的图像,例如电线杆或者周边的植被等;同理,在初始点云数据中,除了具有目标轨道的点云外,还可能具有其他物体的点云。
本实施例中,车辆可以结合初始图像与上述的初始点云,来将归属于目标轨道的路轨的点云从其他物体的点云中区分出来。
具体来说，在步骤102中，车辆可以基于图像与点云融合的处理方式，来确定出归属于目标轨道的路轨的第一点云。以下主要以基于某一张初始图像来从K个初始点云数据确定出第一点云的过程为例，对图像与点云的融合处理进行说明。
举例来说，基于深度学习模型，可以对初始图像中的目标轨道的路轨进行识别，基于像素分割可以得到路轨在初始图像中的像素点，而各个像素点在图像坐标系中的坐标位置也可以进行获取。
对于车辆上的各个传感器，相对位置关系以及安装的角度等信息可以是已知的，因此，各个传感器关联的坐标系之间的转换关系可以是预先确定的。比如说，对于某一激光雷达采集的初始点云数据，可以包括多个点云以及各个点云在雷达坐标系中的坐标，该激光雷达与某一相机在车辆上的相对位置可以是固定的（即可以通过车身坐标系关联），而相机的相机坐标系与图像坐标系一般也是已知的。因此，初始点云数据中的点云，可以通过雷达坐标系—车身坐标系—相机坐标系—图像坐标系的对应关系，映射至图像坐标系中。
因此,上述K个初始点云数据可以映射至图像坐标系中,而落入到路轨的像素点的位置范围的点云,在一定程度上可以认为是归属于上述路轨的第一点云。
而在一些可选的实施方式中,也可以进一步对落入到路轨的像素点的位置范围的点云进行过滤等,来进一步得到归属于路轨的第一点云。
当然，在实际应用中，路轨在图像坐标系中的位置，可以是通过拟合方程的形式进行体现，在K个初始点云数据映射至图像坐标系中后，可以根据与拟合方程之间的距离关系，来确定出归属于路轨的第一点云。
此外,在另一个举例中,路轨在初始图像中的位置,也可以是通过其他方式进行获取的,例如拉东变换等,此处不做一一举例。
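为便于理解上述"雷达坐标系—车身坐标系—相机坐标系—图像坐标系"的映射链，下面给出一个基于齐次坐标的示意性实现（其中的内参矩阵K与各外参矩阵均为假设值，并非本文的标定结果，仅用于演示坐标变换与针孔投影的计算顺序）：

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_lidar_to_body, T_body_to_cam, K):
    """将雷达坐标系下的三维点投影到图像坐标系(像素坐标)。

    points_lidar: (N, 3) 雷达坐标系下的点云
    T_lidar_to_body / T_body_to_cam: (4, 4) 齐次变换矩阵(外参, 假设已标定)
    K: (3, 3) 相机内参矩阵(假设值)
    返回: (N, 2) 像素坐标
    """
    n = points_lidar.shape[0]
    # 转齐次坐标, 依次完成 雷达坐标系 -> 车身坐标系 -> 相机坐标系 的刚体变换
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])          # (N, 4)
    pts_cam = (T_body_to_cam @ T_lidar_to_body @ pts_h.T)[:3]   # (3, N)
    # 针孔模型: 先除以深度Z做归一化, 再左乘内参矩阵得到像素坐标
    uv = (K @ (pts_cam / pts_cam[2])).T                         # (N, 3)
    return uv[:, :2]

# 用法示例: 以单位外参演示坐标链(实际外参应来自标定)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
T_eye = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]])
print(project_lidar_to_image(pts, T_eye, T_eye, K))
```

落入路轨像素点位置范围（或与路轨拟合方程距离小于阈值）的投影点，其对应的三维点即可初步视为第一点云的候选。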
一般情况下,路轨的位置、走势或者形状等均比较固定,因此在归属于路轨的第一点云得到确定的情况下,可以以第一点云为基准点云,比较准确地检测目标轨道中的目标障碍物。
值得强调的是,这里的提到的目标轨道中,可以是指轨道交通常见场景下的两条路轨之间的区域,也可以是路轨的某一宽度范围内的区域等等,此处不作具体限定。总的来说,目标轨道中的目标障碍物,可以认为是位于轨道交通车辆行驶区域中,可能对车辆行驶带来影响的障碍物。
容易理解的是,目标障碍物可以被激光雷达探测到,并反映到上述的K个初始点云数据,从另一个角度来说,目标障碍物在K个初始点云数据中会存在关联的点云,即上述的第二点云,而这些第二点云通常会与第一点云之间满足预设位置关系。
比如，结合实际的场景，当某一障碍物位于两条路轨之间，且明显高于路轨时，则可以认为是可能对车辆行驶带来影响的障碍物。而障碍物位于两条路轨之间，且高于路轨这些判定条件，通常可以反映到第一点云与第二点云之间的预设位置关系中。
当然，以上仅仅是检测目标障碍物所需设定的预设位置关系的一种举例说明，实际应用中，预设位置关系可以根据需要进行确定。比如，在单轨的轨道交通应用场景中，目标障碍物可以是位于单路轨两侧的预设距离范围内的等等。
结合以上说明,上述步骤103中,车辆可以根据确定的第一点云,以及预设位置关系,从K个初始点云数据中确定出目标障碍物关联的第二点云。
本申请实施例提供的障碍物检测方法,获取对目标轨道采集得到的K个初始点云数据与L张初始图像,分别基于每一初始图像,从K个初始点云数据中确定出归属于目标轨道的路轨的第一点云,进而可以检测出关联的第二点云与第一点云之间满足预设位置关系的目标障碍物。本申请实施例中,结合初始图像与初始点云数据,可以比较准确地获取到归属于目标轨道的路轨的第一点云,同时,路轨通常可以作为比较稳定的参照物,基于归属于路轨的第一点云检测目标轨道中的目标障碍物,可以有效提高障碍物检测精度。
在一个示例中,上述的K个初始点云数据,可以是通过多个激光雷达采集得到的多个初始点云数据;也就是说,这里的K可以具体是大于1的整数。
结合一个具体应用场景,在一些行驶速度较快的轨道交通车辆中(例如火车、地铁等),单个激光雷达获得的目标轨道的点云可能比较稀疏。在这种情况下,获取多个初始点云数据,可以有效提高目标轨道的点云密集程度,增加归属于路轨的第一点云中点的数量,进而也有助于提升对目标障碍物的检测效果。
此外,通过设置多个激光雷达,也有助于对单个激光雷达的盲区进行补充探测。比如,可以将一个激光雷达作为主雷达,而将其余的激光雷达作为补盲雷达等等。
当然，容易理解的是，本示例仅仅是对K个初始点云数据的获取方式的一种举例说明。在实际应用中，这K个初始点云数据，也可以是单个激光雷达采集的单个初始点云数据；或者，还可以是从多个激光雷达采集的多个候选点云数据中，依据预设的质量评价指标（例如点云密度等）筛选得到的一个或多个点云数据等等。
在一个示例中,上述的L张初始图像,也可以是通过多个相机采集得到的多张初始图像;也就是说,这里的L可以具体是大于1的整数。
举例来说,在初始图像中,目标轨道的实际延伸长度可能较长,在相机的焦距固定的情况下,单个相机采集的初始图像中,不同距离处的目标轨道的图像的质量可能存在较大差异。
本示例中,这多个相机中,可以是至少两个相机之间的焦距存在不同。比如,车辆上可能存在相机A与相机B;相机A焦距较短,可以比较清晰地获取到1~100m距离处的目标轨道的初始图像a;相机B焦距较长,可以比较清晰地获取到100~500m距离处的目标轨道的初始图像b。
如此,基于初始图像a与K个初始点云数据,可以比较精确地检测到1~100m距离处的目标障碍物;而基于初始图像b与K个初始点云数据,则可以比较精确地检测到100~500m距离处的目标障碍物;进而提高对目标障碍物的检测效果。
此外,与多个激光雷达的设置类似地,设置多个相机,也可以分别用于主相机与补盲相机,以便对单个相机的盲区进行拍摄。
当然,在实际应用中,L张初始图像也可以是单个相机采集的单张初始图像;或者,也可以是从多个焦距相同的相机获取的多张候选图像中确定的质量最高的一张初始图像等等,此处不做一一举例说明。
可选地,上述步骤102,分别基于每一初始图像,从K个初始点云数据中确定出第一点云,包括:
拟合第一初始图像中的路轨在图像坐标系中的路轨拟合方程,第一初始图像为L张初始图像中的任一张初始图像;
将K个初始点云数据映射至图像坐标系中,依据路轨拟合方程对K个初始点云数据进行筛选,得到第一点云。
本实施例中，对于L张初始图像，可以分别结合K个初始点云数据进行目标障碍物的检测；因此，以下可以主要以某一张初始图像为基础，来描述目标障碍物的检测过程；而该某一张初始图像，可以定义为第一初始图像。
如上文所示的，可以通过深度学习算法等方式，对第一初始图像中的路轨进行识别，得到路轨在初始图像中的像素点，或者是路轨在图像坐标系中的拟合方程。
本实施例中,通过对路轨的识别,可以得到的是路轨在图像坐标系中的路轨拟合方程。该路轨拟合方程可以是直线方程,也可以是二次、三次或者更高次的曲线方程,此处不作具体限定。
如上文所示的,对于车辆上的各个传感器,相对位置关系以及安装的角度等信息可以是已知的,因此,各个传感器关联的坐标系之间的转换关系可以是预先确定的。
换而言之,各个传感器的内参与外参等都可以是预先标定好的。基于内外参的标定,各个数据(例如点云数据、图像等)可以在不同的坐标系之间进行转换,具体的转换过程,例如在各个坐标系中的映射过程也是可以实现的。因此,下文的一些实施例中,为了简化说明,可以省略坐标系转换的具体实现过程。
通常来说,初始点云数据为三维点云,映射至图像坐标系中后,得到对应的二维点集;而这些二维点集中,落在路轨拟合方程上的点,或者距离路轨拟合方程小于某一距离阈值的点,往往可以考虑为属于路轨的第一点云映射到图像坐标系中后得到的点。由于三维点云与二维点集之间存在映射关系,针对在图像坐标系确定的属于路轨的点,可以根据该映射关系查找到三维点云中属于路轨的点。
换而言之,基于以上的映射处理,结合路轨拟合方程,可以对K个初始点云数据进行筛选,从而得到属于路轨的点云,也就是上述的第一点云。
本实施例中,通过路轨拟合方程来对第一初始图像中的路轨进行表征,有助于通过K个初始点云数据在图像坐标系中的映射点到路轨拟合方程之间的距离,实现对K个初始点云数据的筛选,提高对归属于路轨的第一点云的筛选准确度。
在一个示例中，上述将K个初始点云数据映射至图像坐标系中，依据路轨拟合方程对K个初始点云数据进行筛选，得到第一点云，包括：
将K个初始点云数据映射至图像坐标系中,得到映射点集;
从映射点集中确定与路轨拟合方程之间距离小于第二距离阈值的目标映射点;
依据目标映射点,从K个初始点云数据中确定出第一点云。
本实施例中，考虑路轨一般具有一定的宽度，可以通过确定第二距离阈值，将初始点云数据中投影位于路轨拟合方程附近的点云，确定为归属于路轨的第一点云，有助于提高得到的第一点云的合理性。
具体来说,可以按照初始点云数据关联的坐标系与图像坐标系之间的转换关系,将初始点云数据映射至图像坐标系中,得到映射点集。
映射点集中的各个映射点在图像坐标系中具有相应的坐标,根据这些坐标与路轨拟合方程,可以确定各映射点到路轨拟合方程的距离,当某一映射点到路轨拟合方程的距离小于第二距离阈值时,可以认为该映射点为上述第一点云映射至图像坐标系中得到的。
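上述"映射点到路轨拟合方程的距离小于第二距离阈值"的筛选，可以用如下示意性代码表达（假设路轨拟合方程为形如u=f(v)的多项式，并以u方向残差近似点到曲线的距离，这是一种简化假设，严格的点到曲线距离计算会更复杂）：

```python
import numpy as np

def filter_points_by_rail_curve(mapped_pts, coeffs, dist_thresh):
    """按映射点到路轨拟合曲线的(近似)距离筛选点的下标。

    mapped_pts: (N, 2) 点云映射到图像坐标系后的 (u, v) 坐标
    coeffs: 路轨拟合方程 u = f(v) 的多项式系数(np.polyfit约定, 高次在前)
    dist_thresh: 第二距离阈值(像素)
    返回: 距离小于阈值的点的下标数组
    """
    u, v = mapped_pts[:, 0], mapped_pts[:, 1]
    # 以 u 方向残差 |u - f(v)| 近似点到曲线的距离(简化假设)
    residual = np.abs(u - np.polyval(coeffs, v))
    return np.nonzero(residual < dist_thresh)[0]

# 用法示例: 路轨拟合方程取直线 u = 0.5v + 10, 阈值 3 像素
coeffs = np.array([0.5, 10.0])
pts = np.array([[15.0, 10.0],   # f(10)=15, 残差0 -> 保留
                [30.0, 10.0]])  # 残差15 -> 滤除
print(filter_points_by_rail_curve(pts, coeffs, 3.0))
```

被保留下标对应的映射点，再按映射关系回溯到K个初始点云数据中，即可得到归属于路轨的第一点云候选。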
可选地,拟合第一初始图像中的路轨在图像坐标系中的路轨拟合方程,包括:
基于预先训练得到的深度学习模型对第一初始图像进行像素分割,得到归属于第一初始图像中的路轨的初始路轨像素点;
在图像坐标系中,拟合初始路轨像素点,得到路轨拟合方程。
本实施例中,可以是使用深度学习模型对第一初始图像进行识别的。该深度学习模型可以是预先基于训练样本进行训练得到的。
举例来说,训练样本可以是标注有路轨的样本图像,这些样本图像可以是通过设置轨道交通车辆上的相机拍摄得到的;相应地,标注的路轨,可以是这些轨道交通车辆当前所处的轨道中的路轨。基于这样的样本图像训练得到的深度学习模型,可以对初始图像中的车辆当前所处的轨道(即目标轨道)的路轨进行识别,而排除其他并行的轨道的路轨;进而在后续的障碍物的检测过程中,可以聚焦对目标轨道中的目标障碍物进行检测,提高目标障碍物检测效率。
本实施例中，基于深度学习模型，可以从第一初始图像中得到路轨关联的像素点。比如，结合一个具体应用场景，深度学习模型可以对路轨对应的像素进行分割得到二值图像；在二值图像中，可以包括与路轨对应的前景数据；对这些前景数据进行聚类或者连通区域查找，可以得到若干个区域；而这里每一个区域，可以分别对应了一条路轨。
对于归属于路轨的像素点来说,可以是第一初始图像中部分的像素点,也可以是上述二值图像中的部分的像素点,此处不做限定。但总的来说,路轨关联有相应的像素点,这些像素点在图像坐标系中具有相应的坐标。
基于此,在图像坐标系中,可以对归属于路轨的像素点进行拟合,得到路轨拟合方程。
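上述拟合过程的一个最小化示意如下（假设以np.polyfit做多项式拟合，并以图像高度方向v为自变量，因为路轨在图像中大致沿纵向延伸，以u为自变量时可能出现一对多；具体拟合阶数与方法以实际方案为准）：

```python
import numpy as np

def fit_rail_equation(rail_pixels, degree=2):
    """在图像坐标系中拟合归属于某一路轨的像素点, 得到路轨拟合方程。

    rail_pixels: (N, 2) 像素坐标 (u, v)
    以 v(图像高度方向)为自变量拟合 u = f(v)
    返回: 多项式系数(高次在前)
    """
    u, v = rail_pixels[:, 0], rail_pixels[:, 1]
    return np.polyfit(v, u, degree)

# 用法: 对落在直线 u = 2v + 5 上的像素点做一次拟合
pix = np.array([[5.0, 0.0], [7.0, 1.0], [9.0, 2.0], [11.0, 3.0]])
print(fit_rail_equation(pix, degree=1))  # ≈ [2. 5.]
```

路轨拟合方程可以是直线，也可以是二次、三次或更高次曲线，取degree为相应阶数即可。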
可选地,在目标轨道中的路轨的数量为N,N为大于1的整数的情况下,在图像坐标系中,拟合初始路轨像素点,得到路轨拟合方程,包括:
在图像坐标系中,从初始路轨像素点中筛选位于预设图像高度区间中的候选路轨像素点;
沿预设方向将归属于每一路轨的候选路轨像素点分别划分至M个像素区间,M为大于1的整数;
获取每一像素区间的像素中心点,分别拟合每一路轨对应的M个像素中心点,得到与N条路轨对应的N条第一拟合直线;
根据N条第一拟合直线,确定透视变换矩阵,依据透视变换矩阵将N条第一拟合直线以及候选路轨像素点映射至鸟瞰图中,分别得到N条第二拟合直线以及候选映射像素点;
滤除候选映射像素点中的第一映射像素点,得到第二映射像素点,第一映射像素点为候选映射像素点中,与任一第二拟合直线之间的距离均大于第一距离阈值的像素点;
分别拟合归属于每一路轨的目标路轨像素点,得到与N条路轨对应的N条路轨拟合方程;其中,目标路轨像素点为与第二映射像素点对应的候选路轨像素点。
容易理解的是，在轨道交通车辆中，可以分为单轨车辆与双轨车辆；而在实际的轨道运行环境中，也可能存在轨道并道，或者轨道分叉的情况。也就是说，在轨道运行环境中，可能存在车辆当前行驶的路线上，包括多条目标轨道的情况。
本实施例中,对于双轨车辆,可以基于鸟瞰图的应用,更加准确地对路轨拟合方程进行获取。为了简化描述,以下主要以目标轨道包括第一路轨与第二路轨为例进行说明。
第一路轨与第二路轨均可以为车辆当前行驶路上的目标轨道的路轨;一般的双轨运行环境下,第一路轨与第二路轨构成了双轨;而在上述的轨道并道、轨道分叉或者特殊的三轨等环境下,第一路轨与第二路轨可以是最外侧的两条路轨。
一般来说,随着路轨的长度的增加,路轨出现弯曲,或者图像质量变差的可能性越大,因此,本实施例中,可以先在初始路轨像素点中筛选位于预设图像高度区间中的候选路轨像素点;例如,候选路轨像素点可以是初始图像底部一定高度的像素点,保证在下一步骤中能够比较高质量地完成直线拟合。
参见图2,图2示出了针对初始路轨像素点进行像素区间划分的示例图,在该图中,可以沿图像高度方向(对应了上述的预设方向),将归属于各路轨的候选路轨像素点分别划分至M个像素区间中。一般来说,在每个像素区间中可以存在多个像素点,以及各个像素点在图像坐标系中的坐标;因此,在一个像素区间中,基于位于其中的像素点的坐标值,可以确定出该像素区间的像素中心点。
如此，可以针对上述的第一路轨与第二路轨分别确定出对应的M个像素中心点；对第一路轨对应的M个像素中心点进行直线拟合，可以得到对应的第一拟合直线，记为l_1；类似地，对第二路轨对应的M个像素中心点进行直线拟合，可以得到对应的第一拟合直线，记为l_2。
在图像坐标系中，目标轨道通常是在透视图中进行呈现的，也就是说，虽然第一路轨与第二路轨在实际场景中是平行的，但l_1与l_2之间则可能是非平行的；参见图2，两条第一拟合直线在图像高度方向上逐渐靠拢。
本实施例中，可以将l_1与l_2映射至鸟瞰图中，两条第一拟合直线分别映射至鸟瞰图中后，可以得到两条第二拟合直线，而这两条第二拟合直线之间可以相互平行。
结合图2，以下对将l_1与l_2映射至鸟瞰图的方式进行举例说明。可以从l_1上选取两个点，记为a和d，这两点可以是直接基于l_1的方程确定的两点，也可以是用于拟合的两个像素中心点，此处不做具体限定。类似地，可以从l_2上选取两个点，记为b和c。
一般情况下，可以按照预设的平移规则，对a和b两点进行平移，直至ad连线与bc连线平行。基于a和b的平移结果，可以得到一变换矩阵，也就是上述的透视变换矩阵。根据透视变换矩阵，则可以进一步将l_1与l_2分别映射至鸟瞰图中，得到的两条第二拟合直线分别记为l_3与l_4。
容易理解的是，受到相机焦距的影响，或者是受到行驶环境条件的影响（例如雨天、雾天），得到的路轨像素点中可能会出现噪声。一般来说，距离相机越远的轨道，对应像素点的分割结果越不稳定，产生噪声的可能也越大。
为了过滤上述的噪声，本实施例中，可以同样将候选路轨像素点映射至鸟瞰图中，可以将这些映射后的像素点定义为候选映射像素点。对于某一个候选映射像素点，如果与l_3之间的距离，以及与l_4之间的距离，均大于第一距离阈值时，可以确定该候选映射像素点在第一初始图像（或者上述的二值图像）中对应的像素点为噪声，可以进行滤除。
因此,对于映射至鸟瞰图中得到的候选映射像素点中,与各第二拟合直线之间的距离均大于第一距离阈值的像素点,也就是上述的第一映射像素点,可以从候选映射像素点中进行滤除;而保留剩余的第二映射像素点。
第二映射像素点可以是候选路轨像素点中的部分像素点映射至鸟瞰图中得到的,因此,第二映射像素点在候选路轨像素点中,具有对应的像素点,即上述的目标路轨像素点。
此时,可以对归属于各路轨的目标路轨像素点进行拟合,得到的路轨拟合方程与路轨的实际状态(例如位置、走势等)更加匹配,进而也有助于提升后续目标障碍物检测的准确性。
容易理解的是，路轨拟合方程的数量，与路轨的数量可以是匹配的，各路轨与用于拟合路轨拟合方程的像素点之间的对应关系，可以是在路轨识别与像素分割的过程中进行确定的，也可以是根据各像素点与第一拟合直线或者第二拟合直线之间的距离进行确定的。总的来说，根据上述的目标路轨像素点，可以对第一路轨与第二路轨分别拟合出对应的路轨拟合方程。
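上述由a、b、c、d四组对应点确定透视变换矩阵、再将直线与像素点映射至鸟瞰图的过程，可以用直接线性变换（DLT）示意如下（四组对应点的坐标均为假设值；实际应用中也可以使用OpenCV等库的现成接口）：

```python
import numpy as np

def homography_from_points(src, dst):
    """由4组对应点求透视变换矩阵 H (3x3), 使 dst ~ H @ src(齐次意义下)。"""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h1 x + h2 y + h3) / (h7 x + h8 y + 1), v 同理, 展开为线性方程
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_points(H, pts):
    """用 H 将 (N, 2) 像素点映射到鸟瞰图坐标。"""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# 用法: 把透视图中的梯形 a,b,c,d(假设坐标)拉成鸟瞰图中的矩形;
# 之后即可按候选映射像素点到两条平行直线的横向距离做噪声过滤
src = np.array([[100.0, 0.0], [300.0, 0.0], [260.0, 200.0], [140.0, 200.0]])
dst = np.array([[100.0, 0.0], [300.0, 0.0], [300.0, 200.0], [100.0, 200.0]])
H = homography_from_points(src, dst)
print(np.round(warp_points(H, np.array([[140.0, 200.0], [205.0, 100.0]])), 2))
```

映射后两条第二拟合直线近似平行，候选映射像素点与任一直线的距离均大于第一距离阈值时即可视为噪声滤除。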
可选地,在上述K个初始点云数据为多个初始点云数据,且来自K个激光雷达的情况下(即K为大于1的整数的情况下),上述将K个初始点云数据映射至图像坐标系中,包括:
获取每一激光雷达的雷达坐标系与预设基准坐标系之间的第一坐标系转换关系、以及预设基准坐标系与任一初始图像的图像坐标系之间的第二坐标系转换关系;
分别将每一激光雷达采集的初始点云数据按对应的第一坐标系转换关系映射至预设基准坐标系中,得到混合点云数据;
将混合点云数据按第二坐标系转换关系映射至图像坐标系中。
换而言之,本实施例中,K值可以是大于1的整数。此时每一个激光雷达在自身的雷达坐标系下,均采集有对应的初始点云数据。将这些初始点云数据进行混合,可以有效提升可用于确定路轨与目标障碍物的点云的数量,使得这些物体的特征更加显著,提升检测效果。
而为了实现对K个初始点云数据的混合,本实施例中,可以确定一预设基准坐标系,该基准坐标系可以是其中的某一个激光雷达的雷达坐标系,也可以是车身坐标系,此处不做具体限定。
如上文所示的,各类坐标系是可以预先标定的,不同坐标系之间的转换关系也可以是预先确定好的,因此,可以直接对每一激光雷达的雷达坐标系与预设基准坐标系之间的第一坐标系转换关系、以及预设基准坐标系与任一初始图像的图像坐标系之间的第二坐标系转换关系进行获取。
本实施例中,可以根据第一坐标系转换关系,将各个激光雷达采集的初始点云数据,均转换至预设基准坐标系中,得到混合点云数据。该混合点云数据一般包括三维点云,保留了点云数据中的三维信息,后续可以使用该混合点云数据进行目标障碍物的检测。
在得到混合点云数据后,则可以进一步将其按照上述的第二坐标系转换关系映射至图像坐标系中。
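上述将K个初始点云数据按各自的第一坐标系转换关系混合到预设基准坐标系的步骤，可以示意如下（其中的变换矩阵为假设的已标定结果）：

```python
import numpy as np

def merge_point_clouds(clouds, transforms):
    """将K个激光雷达的初始点云按第一坐标系转换关系映射到
    预设基准坐标系中, 拼接得到混合点云。

    clouds: 长度为K的列表, 每个元素为 (Ni, 3) 点云
    transforms: 长度为K的列表, 每个元素为 (4, 4) 齐次变换(假设已标定)
    返回: (sum(Ni), 3) 混合点云
    """
    merged = []
    for cloud, T in zip(clouds, transforms):
        pts_h = np.hstack([cloud, np.ones((len(cloud), 1))])  # 齐次坐标
        merged.append((T @ pts_h.T).T[:, :3])
    return np.vstack(merged)

# 用法: 雷达1即基准(单位阵), 雷达2相对基准沿X平移2m
T1 = np.eye(4)
T2 = np.eye(4); T2[0, 3] = 2.0
mixed = merge_point_clouds([np.array([[1.0, 0.0, 0.0]]),
                            np.array([[0.0, 1.0, 0.0]])], [T1, T2])
print(mixed)
```

混合后点云保留三维信息，可再按第二坐标系转换关系整体映射至图像坐标系。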
上文实施例中提到，可以依据初始点云数据在图像坐标系中的映射点集与路轨拟合方程之间的距离关系，来筛选归属于路轨的第一点云。然而，在图3所示的场景中，位于目标轨道右侧的矩形物体（记为T），可能是弯曲轨道一侧正常的电线杆等物体，然而在图像坐标系中，该矩形物体对应的部分点云也会映射到路轨拟合方程对应的线条（对应目标轨道的路轨）上。
换而言之,在一些应用场景中,初始点云数据映射至图像坐标系中后,单纯根据路轨拟合方程筛选得到的第一点云中,可能会包括了实际不属于路轨的物体的点云。
基于此,在一个可选实施例中,上述依据目标映射点,从K个初始点云数据中确定出第一点云,包括:
从K个初始点云数据中确定与目标映射点对应的第三点云;
将第三点云投影至车身坐标系的目标平面中,得到第一投影点集,其中,目标平面为依据车辆行驶方向与车辆高度方向确定的平面;
滤除第一投影点集中的离群点,得到第二投影点集;
根据第二投影点集,从K个初始点云数据中确定出第一点云。
上述第三点云在一定程度上可以认为是初始点云数据中,单纯基于路轨拟合方程筛选得到的归属于路轨的点云。
本实施例中,可以进一步将第三点云投影至车身坐标系的目标平面中。如上文所示的,初始点云数据可以是在对应的雷达坐标系中的,而在一些应用场景下,这些初始点云数据也可以是预先转换到例如车身坐标系的预设基准坐标系中。
但是总的来说,基于预先标定的坐标系,第三点云无论处于哪一坐标系中,均可以转换至车身坐标系中,并可进一步投影到车身坐标系的目标平面中。
如图4所示，图4示出了将第三点云投影至目标平面中得到的第一投影点集的示例图。在该示例图中，目标平面可以记为XOZ平面，其中X轴与车辆行驶方向一致，Z轴与车辆高度方向一致。
图4中，下方较为密集的点云部分（记为第一点云部分R1），可以对应路轨实际的点云；在第一点云部分R1中，沿X轴方向存在一段空隙，对应了路轨被图3中的矩形物体T遮挡的部分。而在上方较为稀疏的点云部分（记为第二点云部分R2），则可能是图3中矩形物体T所关联的点云。
从图4中可见,对于第二点云部分R2中的各个点云点,相对于第一点云部分R1,可以认为是离群点,因此可以对这部分离群点进行滤除,得到剩余的点云,即上述的第二投影点集。而第二投影点集一般可以认为是路轨实际关联的点云投影至目标平面上得到的,根据第二投影点集,可以较为准确地在初始点云数据中确定实际归属于路轨的第一点云。
至于离群点的滤除方式,可以采用最小二乘法、RANSAC算法或者统计滤波等方式进行,此处不做具体限定。
可见,本实施例可以有效解决轨道弯道处障碍物误检的问题,提高确定得到的第一点云的准确性。
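上述在XOZ平面中按X分段、以直线为模型做最小二乘拟合并滤除离群点的思路，可以示意如下（分段数与残差阈值均为假设参数，实际方案中也可以改用RANSAC或统计滤波等方式）：

```python
import numpy as np

def filter_outliers_xoz(points_xz, num_segments=4, residual_thresh=0.3):
    """在XOZ平面上按X分段做直线最小二乘拟合, 滤除离群点。

    points_xz: (N, 2), 第一列为X(行驶方向), 第二列为Z(高度)
    属于路轨的点的Z值沿X连续变化, 每一小段可近似为直线;
    遮挡物(如弯道旁的电线杆)对应点的Z值发生跳变, 残差大而被滤除。
    返回: 保留点的下标数组
    """
    x, z = points_xz[:, 0], points_xz[:, 1]
    edges = np.linspace(x.min(), x.max() + 1e-9, num_segments + 1)
    keep = []
    for i in range(num_segments):
        idx = np.nonzero((x >= edges[i]) & (x < edges[i + 1]))[0]
        if len(idx) < 2:
            keep.extend(idx)
            continue
        k, b = np.polyfit(x[idx], z[idx], 1)          # 该段的直线模型
        resid = np.abs(z[idx] - (k * x[idx] + b))     # 各点到直线的残差
        keep.extend(idx[resid < residual_thresh])
    return np.array(sorted(keep))

# 用法: 贴地的路轨点(Z≈0)中混入两个高处(Z=1.5)的离群点
x = np.arange(40.0)
z = np.zeros(40); z[5] = 1.5; z[25] = 1.5
kept = filter_outliers_xoz(np.column_stack([x, z]))
print(len(kept))
```

保留下来的投影点（对应第二投影点集）回溯到初始点云数据中，即可得到实际归属于路轨的第一点云。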
可选地,上述步骤103,以第一点云为基准点云,基于K个初始点云数据,检测目标轨道中的目标障碍物,包括:
在车身坐标系中,从第一点云中确定与每一第一点云点关联的第二点云点;其中,第一点云点为K个初始点云数据中,除第一点云以外的点云点,第二点云点为第一点云中在X轴上距离第一点云点最近的点云点,X轴平行于车辆行驶方向;
将在Y轴上与关联的第二点云点之间的距离满足预设距离条件的第一点云点确定为候选点云点,Y轴平行于车辆宽度方向;
对候选点云点进行聚类,得到至少一个候选障碍物关联的第四点云;
根据第四点云中的各点云点与其关联的第二点云点之间的高度差,从至少一个候选障碍物中确定目标障碍物。
本实施例中,在确定了第一点云的基础上,可以以第一点云为基准对目标障碍物进行检测。
结合一个应用场景，初始点云数据中可以具有多个点云，更为细化地，可以具体到点云中的每个点；也就是说，初始点云数据可以包括多个点云点；其中，初始点云数据中，除第一点云以外的点云点的聚合可以记为P_e，其中的任一点云点可以记为p_i（对应第一点云点）。
路轨的数量可以为两条，每一路轨均对应有第一点云，两条路轨对应的第一点云中所有点云点的集合分别记为P_l与P_r。
此处可以引用上一实施例中所建立的车身坐标系,即X轴与车辆行驶方向一致,Z轴与车辆高度方向一致,同时Y轴与车辆宽度方向一致。初始点云数据中的每一点云点,在车身坐标系中可以具有相应的坐标。
对于p_i，可以从P_l与P_r中分别查找与p_i的X坐标最近的点云点，分别记为p_l与p_r（均可以对应第二点云点）。然后比较p_i与p_l、p_r的Y坐标，如果在Y轴上，p_i位于p_l与p_r之间，或者是p_i与p_l之间距离小于一阈值，或者p_i与p_r之间距离小于一阈值，则可以将p_i确定为候选点云点。这里，判断p_i是否为候选点云点的条件，可以根据实际需要进行设定，具体可以通过上述的预设距离条件进行体现。
基于预设距离条件筛选候选点云点的目的，可以是将位于两条路轨之间的点云点均筛选出来（或者根据实际需要，也可以进一步将路轨外侧一定范围内的点云点筛选出来）。
对于这些候选点云点,可以进行聚类处理,得到至少一个候选障碍物关联的第四点云。具体的聚类算法可以根据实际需要进行选用,此处不做具体限定。这里,候选障碍物与第四点云的关联关系,可以理解为各个候选障碍物均具有归属于其的第四点云。
在以上步骤中，对于确定为候选点云点的p_i，同时可以通过对应的p_l与p_r在Z轴上的坐标，确定一基准高度，比如，可以求取p_l与p_r的Z轴坐标的平均值（或者根据需要也可以采用加权平均值，或者两者中的较大值或较小值等等），得到一基准高度z_c，而该p_i本身也具有Z轴坐标z_i。z_c与z_i可以具有对应关系。
结合一个举例，一般高于路轨的障碍物会对车辆的运动带来影响，因此，在得到候选障碍物后，可以将其中的各个点云点的Z轴坐标z_i与对应的基准高度z_c进行比较，当z_i>z_c时，可以认为是有效的点云点（即该点云点可能是目标障碍物对应的点云点）。当然，也可以进一步获取候选障碍物对应的全部点云点（对应第四点云）中，满足z_i>z_c的点云点的数量，在该数量大于一阈值的情况下，将候选障碍物确定为目标障碍物。
也就是说,可以根据第四点云中的各点云点与其关联的第二点云点之间的高度差,从至少一个候选障碍物中确定目标障碍物。
当然，以上仅仅是对双轨应用场景下检测目标障碍物的过程进行了举例说明，对于单轨应用环境来说，同样可以在X轴上确定与p_i关联的第二点云点，并进一步根据Y轴和Z轴的坐标关系，来检测目标障碍物，此处不再赘述。
本实施例中，可以以与路轨关联的第一点云为基准，从初始点云数据中确定候选点云点，对其进行聚类得到候选障碍物后，再进一步利用归属于候选障碍物的各点云点与其关联的第二点云点之间的高度关系，来从候选障碍物中确定出目标障碍物。以路轨为基准来确定目标障碍物，能够有效提高障碍物的检测准确度。
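上述以第一点云为基准筛选候选点云点并按高度差判定的过程，可以简化示意如下（为突出主干逻辑，省略了聚类与点数统计步骤；阈值与示例数据均为假设值）：

```python
import numpy as np

def detect_obstacle_points(points, left_rail, right_rail, h_thresh=0.2):
    """以路轨第一点云为基准, 筛选可能属于障碍物的候选点(省略聚类)。

    points / left_rail / right_rail: (N, 3), 车身坐标系, 三列依次为
    X(行驶方向), Y(宽度方向), Z(高度方向); 路轨点云需按X升序排列。
    对每个点 p_i: 在左右路轨中查找X坐标最近的点 p_l/p_r,
    若其Y坐标位于两路轨之间, 且Z高出基准高度(p_l与p_r的Z均值)
    超过 h_thresh, 则视为候选障碍物点。
    """
    def nearest_by_x(rail, xs):
        j = np.clip(np.searchsorted(rail[:, 0], xs), 1, len(rail) - 1)
        left_closer = xs - rail[j - 1, 0] < rail[j, 0] - xs
        return rail[np.where(left_closer, j - 1, j)]

    pl = nearest_by_x(left_rail, points[:, 0])
    pr = nearest_by_x(right_rail, points[:, 0])
    y_lo = np.minimum(pl[:, 1], pr[:, 1])
    y_hi = np.maximum(pl[:, 1], pr[:, 1])
    inside = (points[:, 1] > y_lo) & (points[:, 1] < y_hi)  # 位于两轨之间
    z_base = (pl[:, 2] + pr[:, 2]) / 2.0                    # 参考轨道面高度
    high = points[:, 2] - z_base > h_thresh                 # 明显高于路轨
    return points[inside & high]

# 用法: 左右路轨 Y=±0.7m、Z=0; 轨间一个0.5m高的点、一个贴地点、一个轨外点
xs = np.arange(0.0, 10.0)
left = np.column_stack([xs, np.full(10, -0.7), np.zeros(10)])
right = np.column_stack([xs, np.full(10, 0.7), np.zeros(10)])
pts = np.array([[5.0, 0.0, 0.5], [6.0, 0.0, 0.01], [5.0, 2.0, 1.0]])
obs = detect_obstacle_points(pts, left, right)
print(obs)
```

实际方案中还需对返回的候选点做聚类得到候选障碍物，再按目标点云点数量与高度差最大值判定目标障碍物；车辆与目标障碍物的距离可取其点云X坐标的最小值。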
在一个示例中，在目标障碍物关联的第四点云中，目标点云点的数量大于数量阈值，且各目标点云点与其关联的第二点云点之间的高度差的最大值大于第二差值阈值，其中，目标点云点为与其关联的第二点云点之间的高度差大于第一差值阈值的点云点。
本示例限定了初始点云数据中,确定的目标障碍物的第四点云所需满足的条件,一方面,从满足第一差值阈值的点云(对应目标点云)的数量角度限定了目标障碍物的第四点云需满足的条件,可以有效避免因噪声等带来的目标障碍物的误检;另一方面,从高度差的最大值的角度,对目标障碍物应具有的高度进行限定,避免将不会影响车辆正常行驶的低矮障碍物确定为目标障碍物。可见,本示例可以有效提高确定出了目标障碍物的合理性。
可选地,上述根据第四点云中的各点云点与其关联的第二点云点之间的高度差,从至少一个候选障碍物中确定目标障碍物之后,方法还包括:
在车身坐标系的X轴上，将车辆与目标障碍物关联的第四点云中各点云点之间的最近距离，确定为车辆与目标障碍物之间的目标距离；
在目标距离小于第三距离阈值时,输出报警信号。
本实施例中，在X轴上，将车辆与目标障碍物关联的第四点云中各点云点之间的最近距离，确定为目标距离，在目标距离小于第三距离阈值时，输出报警信号，可以及时对检测出的目标障碍物进行报警，提高行车的安全性。
在一个示例中,车身坐标系的原点在车辆的最前侧,且X轴的正半轴位于车辆的前方,则可以将第四点云中的各点云点的最小X坐标,作为车辆与目标障碍物之间的距离。
此外,一些场景下,检测出的目标障碍物的数量可以存在多个,可以根据上述方式,确定车辆到每一目标障碍物之间的距离。
以下结合一个具体应用例,对本申请实施例提供的障碍物检测方法进行说明。结合图5,在该应用例中,障碍物检测方法包括:
步骤501,各传感器内外参标定;
本应用例中可以采用L个相机(L≥1)及K个激光雷达(K≥1)。
在一些可行的实施方式中,采用不同焦距的相机及不同视场角(Field of View,FOV)的激光雷达,可以有效提高障碍物检测的范围。
对于相机，可以标定其内参和畸变参数。另外，可以选定一个激光雷达作为基准O_b，通过激光雷达与相机的标定算法获得每个相机到基准激光雷达坐标系的转换关系；通过激光雷达与激光雷达的标定算法获得其他激光雷达与基准激光雷达坐标系的转换关系；此外，通过测量获得基准激光雷达坐标系O_b与车身坐标系O_c的转换关系。
其中，车身坐标系O_c的位置与定义方式可以根据实际需要进行确定。
由于在检测障碍物后，比较关心的是障碍物在列车前进方向上的位置；因此，为了简化标定过程，对于基准激光雷达坐标系O_b与车身坐标系O_c的转换关系，可以简单地只测量O_b与O_c在列车前进方向的距离。当然，为了更加准确地实现对障碍物的检测，该转换关系也可以有更严格、更精确的确定方式，比如通过激光水平仪等装置获得O_b与O_c的俯仰、翻滚、偏航角。
步骤502,二维铁轨检测;
本步骤可以使用一个或多个相机拍摄列车行驶方向的场景,分别针对相机拍摄得到的每一图像,利用计算机视觉算法实现二维空间的轨道路轨检测。
总的来说,本步骤中,可以首先通过深度学习算法实现列车行驶铁轨(对应上述的目标轨道中的路轨)的像素分割得到二值图像B,然后根据二值图像B的前景数据(即属于铁轨的像素点)做过滤,拟合得到描述铁轨的N条曲线方程。
具体来说,本步骤的实现过程可以主要包括以下步骤:
1)取像素分割结果中靠近图像底部一定高度的前景数据(比如从图像底部往上取图像1/3高度内的前景数据),对其做聚类或者连通区域查找得到N个区域(对应N条铁轨);
2)对每个区域内的像素,沿图像高的方向将其均匀分成M个区间,每个区间计算得到其中心点,根据这些中心点拟合一条直线;而上述N个区域,可以对应N条直线;
3)基于拟合得到的图像最左边和最右边的两条直线,取如图2所示的a、b、c、d点,计算其映射到鸟瞰视角的透视变换矩阵H;根据H对整个二值图像B做变换得到鸟瞰图的铁轨分割结果B′;
4)在鸟瞰视角下,上面步骤2)拟合得到的N条直线互相平行。在鸟瞰图的铁轨分割结果B′上延伸这些平行的直线,根据分割得到的前景像素与直线的距离做过滤:如果某个点与所有直线的最近距离都大于预定阈值,则认为其是噪声,从前景数据中删除掉。
一般来说,距离相机越远的铁轨分割结果越不稳定,通过以上在鸟瞰图中的处理过程,可以将噪声进行有效过滤。
5)根据过滤得到的属于各铁轨的前景像素点,分别拟合得到对应的二维铁轨曲线方程(对应路轨拟合方程)。同时标记最左边和最右边的铁轨供下面步骤使用。
步骤503,三维铁轨点云筛选;
限于激光雷达的感知特性,在快速运动的列车上获得的铁轨点云比较稀疏,很难做直接的有效分析。
本步骤的目的是通过二维铁轨方程找到铁轨上的3D激光雷达点云。首先将每个激光雷达获得的点云，根据步骤501中确定的转换关系将其转换到O_b坐标系下，累积获得点云P_3d（对应混合点云数据）；然后将点云P_3d映射到相机图像坐标系中得到P_2d（对应映射点集）。
此时铁轨上的点云将被投影在步骤502的二维图像上，依次计算P_2d中每个点与最左和最右边二维铁轨曲线方程的距离，如果与其中某条曲线距离小于预设阈值，则认为其对应的三维点云可能属于该曲线对应的铁轨，记这些点云为P′_3d（对应第三点云）。
如图3所示的矩形目标T的激光点云也会有一部分投影到二维铁轨上，即其对应的三维点云也被包含在P′_3d中。沿车身坐标系的X方向统计P′_3d的Z坐标，属于铁轨的Z坐标值应该是连续变化的，而图3中矩形目标的遮挡部分点云的Z值会产生一个跳变，根据最小二乘等方法即可实现离群点的过滤，得到过滤后的铁轨点云（对应第一点云）。
图4以点云X值为横坐标，高度Z值为纵坐标，统计铁轨上的点云。图中R2中的点为轨道旁障碍物点云，R1中的点为铁轨点云。过滤时可以首先根据点云的X坐标做分段，属于每一段的铁轨点云在图4中的走势可以近似为直线。于是可以以直线方程为模型，根据该段数据利用最小二乘法做拟合，最后通过计算每个点与拟合得到的直线的距离来过滤掉R2中的点。
步骤504,障碍物检测及过滤;
本步骤中，可以对步骤503中得到的点云P_3d，根据轨道面点云（即上述过滤得到的铁轨点云，对应第一点云）做过滤，得到轨道内的点云（对应候选点云点的聚合）。过滤方法为：对P_3d内每个点p_i，根据其X坐标查找左右铁轨面点云X坐标最近的点p_l与p_r；然后比较p_i与p_l、p_r的Y坐标，如果p_i处于p_l与p_r之间则认为p_i属于轨道内的点云，同时记录p_l与p_r的平均Z坐标值作为p_i的参考轨道面高度。
对轨道内的点云通过聚类算法获得可能的障碍物Q_i（对应候选障碍物），依次计算其中每个点（对应第四点云中的每个点）的Z坐标值与其对应参考轨道面高度的差，统计高度差大于预设阈值的点数以及高度差最大值。如果点数及高度差最大值分别大于预设阈值，则认为该Q_i为真实障碍物（对应目标障碍物）。分析Q_i中点云的X轴数据分布，取其最小的X坐标值（即距离车头最近）作为目标障碍物的距离D_i。
基于以上应用例可见,本申请实施例提供的障碍物检测方法,可以使用多传感器融合的方式获得实时的轨道内障碍物信息,可靠性高;基于深度学习铁轨分割的二维铁轨曲线拟合方法,可以有效滤除噪声点,铁轨拟合效果稳定;基于离群点过滤的方式可以有效解决弯道障碍物误检的问题。此外,从硬件配置的角度来说,使用的传感器可以比较简单,安装维护方便。
如图6所示,本申请实施例还提供了一种车辆,包括:
获取模块601,用于获取对目标轨道采集得到的K个初始点云数据与L张初始图像,K与L均为正整数;
第一确定模块602,用于分别基于每一初始图像,从K个初始点云数据中确定出第一点云,第一点云为归属于目标轨道中的路轨的点云;
检测模块603,用于以第一点云为基准点云,基于K个初始点云数据,检测目标轨道中的目标障碍物,其中,目标障碍物在K个初始点云数据中关联有第二点云,第二点云与第一点云之间满足预设位置关系。
可选地,上述第一确定模块602,可以包括:
拟合子模块，用于拟合第一初始图像中的路轨在图像坐标系中的路轨拟合方程，第一初始图像为L张初始图像中的任一张初始图像；
筛选子模块,用于将K个初始点云数据映射至图像坐标系中,依据路轨拟合方程对K个初始点云数据进行筛选,得到第一点云。
可选地,拟合子模块,可以包括:
分割获取单元,用于基于预先训练得到的深度学习模型对第一初始图像进行像素分割,得到归属于第一初始图像中的路轨的初始路轨像素点;
拟合单元,用于在图像坐标系中,拟合初始路轨像素点,得到路轨拟合方程。
可选地,在目标轨道中的路轨的数量为N,N为大于1的整数的情况下,拟合单元,可以包括:
筛选子单元,用于从初始路轨像素点中筛选位于预设图像高度区间中的候选路轨像素点;
划分子单元，用于沿预设方向将归属于每一路轨的候选路轨像素点分别划分至M个像素区间，M为大于1的整数；
第一拟合子单元,用于获取每一像素区间的像素中心点,分别拟合每一路轨对应的M个像素中心点,得到与N条路轨对应的N条第一拟合直线;
第一确定子单元,用于根据N条第一拟合直线,确定透视变换矩阵,依据透视变换矩阵将N条第一拟合直线以及候选路轨像素点映射至鸟瞰图中,分别得到N条第二拟合直线以及候选映射像素点;
第一滤除子单元,用于滤除候选映射像素点中的第一映射像素点,得到第二映射像素点,第一映射像素点为候选映射像素点中,与任一第二拟合直线之间的距离均大于第一距离阈值的像素点;
第二拟合子单元,用于分别拟合归属于每一路轨的目标路轨像素点,得到与N条路轨对应的N条路轨拟合方程;其中,目标路轨像素点为与第二映射像素点对应的候选路轨像素点。
可选地,在K为大于1的整数的情况下,K个初始点云数据为通过K个激光雷达采集得到;
相应地,上述筛选子模块,可以包括:
获取单元,用于获取每一激光雷达的雷达坐标系与预设基准坐标系之间的第一坐标系转换关系、以及预设基准坐标系与任一初始图像的图像坐标系之间的第二坐标系转换关系;
第一映射单元,用于分别将每一激光雷达采集的初始点云数据按对应的第一坐标系转换关系映射至预设基准坐标系中,得到混合点云数据;
第二映射单元,用于将混合点云数据按第二坐标系转换关系映射至图像坐标系中。
可选地,上述筛选子模块,可以包括:
第三映射单元,用于将K个初始点云数据映射至图像坐标系中,得到映射点集;
第一确定单元,用于从映射点集中确定与路轨拟合方程之间距离小于第二距离阈值的目标映射点;
第二确定单元,用于依据目标映射点,从K个初始点云数据中确定出第一点云。
可选地,第二确定单元,可以包括:
第二确定子单元,用于从K个初始点云数据中确定与目标映射点对应的第三点云;
获取子单元,用于将第三点云投影至车身坐标系的目标平面中,得到第一投影点集,其中,目标平面为依据车辆行驶方向与车辆高度方向确定的平面;
第二滤除子单元,用于滤除第一投影点集中的离群点,得到第二投影点集;
第三确定子单元，用于根据第二投影点集，从K个初始点云数据中确定出第一点云。
可选地,检测模块603,可以包括:
第一确定子模块,用于在车身坐标系中,从第一点云中确定与每一第一点云点关联的第二点云点;其中,第一点云点为K个初始点云数据中,除第一点云以外的点云点,第二点云点为第一点云中在X轴上距离第一点云点最近的点云点,X轴平行于车辆行驶方向;
第二确定子模块,用于将在Y轴上与关联的第二点云点之间的距离满足预设距离条件的第一点云点确定为候选点云点,Y轴平行于车辆宽度方向;
聚类子模块,用于对候选点云点进行聚类,得到至少一个候选障碍物关联的第四点云;
第三确定子模块,用于根据第四点云中的各点云点与其关联的第二点云点之间的高度差,从至少一个候选障碍物中确定目标障碍物。
可选地，在目标障碍物关联的第四点云中，目标点云点的数量大于数量阈值，且各目标点云点与其关联的第二点云点之间的高度差的最大值大于第二差值阈值，其中，目标点云点为与其关联的第二点云点之间的高度差大于第一差值阈值的点云点。
可选地，上述车辆还可以包括：
第二确定模块，用于在车身坐标系的X轴上，将车辆与目标障碍物关联的第四点云中各点云点之间的最近距离，确定为车辆与目标障碍物之间的目标距离；
输出模块,用于在目标距离小于第三距离阈值时,输出报警信号。
可选地,在L为大于1的整数的情况下,L张初始图像为通过L个相机采集得到;
L个相机中,至少两个相机之间的焦距存在不同。
需要说明的是,该车辆是与上述障碍物检测方法对应的车辆,上述方法实施例中所有实现方式均适用于该车辆的实施例中,也能达到相同的技术效果。
图7示出了本申请实施例提供的电子设备的硬件结构示意图。
电子设备可以包括处理器701以及存储有计算机程序指令的存储器702。
具体地,上述处理器701可以包括中央处理器(CPU),或者特定集成电路(Application Specific Integrated Circuit,ASIC),或者可以被配置成实施本申请实施例的一个或多个集成电路。
存储器702可以包括用于数据或指令的大容量存储器。举例来说而非限制,存储器702可包括硬盘驱动器(Hard Disk Drive,HDD)、软盘驱动器、闪存、光盘、磁光盘、磁带或通用串行总线(Universal Serial Bus,USB)驱动器或者两个或更多个以上这些的组合。在合适的情况下,存储器702可包括可移除或不可移除(或固定)的介质。在合适的情况下,存储器702可在综合网关容灾设备的内部或外部。在特定实施例中,存储器702是非易失性固态存储器。
存储器可包括只读存储器(ROM),随机存取存储器(RAM),磁盘存储介质设备,光存储介质设备,闪存设备,电气、光学或其他物理/有形的存储器存储设备。因此,通常,存储器包括一个或多个编码有包括计算机可执行指令的软件的有形(非暂态)计算机可读存储介质(例如,存储器设备),并且当该软件被执行(例如,由一个或多个处理器)时,其可操作来执行参考根据本公开的方法所描述的操作。
处理器701通过读取并执行存储器702中存储的计算机程序指令，以实现上述实施例中的任意一种障碍物检测方法。
在一个示例中,电子设备还可包括通信接口703和总线704。其中,如图7所示,处理器701、存储器702、通信接口703通过总线704连接并完成相互间的通信。
通信接口703,主要用于实现本申请实施例中各模块、装置、单元和/或设备之间的通信。
总线704包括硬件、软件或两者，将电子设备的部件彼此耦接在一起。举例来说而非限制，总线可包括加速图形端口（AGP）或其他图形总线、增强工业标准架构（EISA）总线、前端总线（FSB）、超传输（HT）互连、工业标准架构（ISA）总线、无限带宽互连、低引脚数（LPC）总线、存储器总线、微信道架构（MCA）总线、外围组件互连（PCI）总线、PCI-Express（PCI-X）总线、串行高级技术附件（SATA）总线、视频电子标准协会局部（VLB）总线或其他合适的总线或者两个或更多个以上这些的组合。在合适的情况下，总线704可包括一个或多个总线。尽管本申请实施例描述和示出了特定的总线，但本申请考虑任何合适的总线或互连。
根据本申请的实施例,电子设备可以是移动电子设备,也可以为非移动电子设备。示例性的,移动电子设备可以为手机、平板电脑、笔记本电脑、掌上电脑或者车载电子设备等,非移动电子设备可以为服务器等。
另外,结合上述实施例中的障碍物检测方法,本申请实施例可提供一种计算机存储介质来实现。该计算机存储介质上存储有计算机程序指令;该计算机程序指令被处理器执行时实现上述实施例中的任意一种障碍物检测方法。计算机存储介质的示例包括物理/有形的存储介质,如电子电路、半导体存储器设备、ROM、闪存、可擦除ROM(EROM)、软盘、CD-ROM、光盘、硬盘等。
本申请实施例还提供一种计算机程序产品,所述计算机程序产品可被处理器执行以实现上述障碍物检测方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例另提供了一种芯片，芯片包括处理器和通信接口，通信接口和处理器耦合，处理器用于运行程序或指令，实现上述障碍物检测方法实施例的各个过程，且能达到相同的技术效果，为避免重复，这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片、系统芯片、芯片系统或片上系统芯片等。
需要明确的是,本申请并不局限于上文所描述并在图中示出的特定配置和处理。为了简明起见,这里省略了对已知方法的详细描述。在上述实施例中,描述和示出了若干具体的步骤作为示例。但是,本申请的方法过程并不限于所描述和示出的具体步骤,本领域的技术人员可以在领会本申请的精神后,作出各种改变、修改和添加,或者改变步骤之间的顺序。
以上所述的结构框图中所示的功能块可以实现为硬件、软件、固件或者它们的组合。当以硬件方式实现时,其可以例如是电子电路、专用集成电路(ASIC)、适当的固件、插件、功能卡等等。当以软件方式实现时,本申请的元素是被用于执行所需任务的程序或者代码段。程序或者代码段可以存储在机器可读介质中,或者通过载波中携带的数据信号在传输介质或者通信链路上传送。“机器可读介质”可以包括能够存储或传输信息的任何介质。机器可读介质的例子包括电子电路、半导体存储器设备、ROM、闪存、可擦除ROM(EROM)、软盘、CD-ROM、光盘、硬盘、光纤介质、射频(RF)链路,等等。代码段可以经由诸如因特网、内联网等的计算机网络被下载。
还需要说明的是,本申请中提及的示例性实施例,基于一系列的步骤或者装置描述一些方法或系统。但是,本申请不局限于上述步骤的顺序,也就是说,可以按照实施例中提及的顺序执行步骤,也可以不同于实施例中的顺序,或者若干步骤同时执行。
上面参考根据本公开的实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本公开的各方面。应当理解，流程图和/或框图中的每个方框以及流程图和/或框图中各方框的组合可以由计算机程序指令实现。这些计算机程序指令可被提供给通用计算机、专用计算机、或其它可编程数据处理装置的处理器，以产生一种机器，使得经由计算机或其它可编程数据处理装置的处理器执行的这些指令使能对流程图和/或框图的一个或多个方框中指定的功能/动作的实现。这种处理器可以是但不限于是通用处理器、专用处理器、特殊应用处理器或者现场可编程逻辑电路。还可理解，框图和/或流程图中的每个方框以及框图和/或流程图中的方框的组合，也可以由执行指定的功能或动作的专用硬件来实现，或可由专用硬件和计算机指令的组合来实现。
以上所述,仅为本申请的具体实施方式,所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的系统、模块和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。应理解,本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。

Claims (16)

  1. 一种障碍物检测方法,应用于车辆,所述方法包括:
    获取对目标轨道采集得到的K个初始点云数据与L张初始图像,K与L均为正整数;
    分别基于每一所述初始图像,从所述K个初始点云数据中确定出第一点云,所述第一点云为归属于所述目标轨道中的路轨的点云;
    以所述第一点云为基准点云,基于所述K个初始点云数据,检测所述目标轨道中的目标障碍物,其中,所述目标障碍物在所述K个初始点云数据中关联有第二点云,所述第二点云与所述第一点云之间满足预设位置关系。
  2. 根据权利要求1所述的方法,其中,所述分别基于每一所述初始图像,从所述K个初始点云数据中确定出第一点云,包括:
    拟合第一初始图像中的路轨在图像坐标系中的路轨拟合方程,所述第一初始图像为所述L张初始图像中的任一张初始图像;
    将所述K个初始点云数据映射至所述图像坐标系中,依据所述路轨拟合方程对所述K个初始点云数据进行筛选,得到所述第一点云。
  3. 根据权利要求2所述的方法,其中,所述拟合第一初始图像中的路轨在图像坐标系中的路轨拟合方程,包括:
    基于预先训练得到的深度学习模型对所述第一初始图像进行像素分割,得到归属于所述第一初始图像中的路轨的初始路轨像素点;
    在所述图像坐标系中,拟合所述初始路轨像素点,得到所述路轨拟合方程。
  4. 根据权利要求3所述的方法,其中,在所述目标轨道中的路轨的数量为N,N为大于1的整数的情况下,所述在所述图像坐标系中,拟合所述初始路轨像素点,得到所述路轨拟合方程,包括:
    在所述图像坐标系中,从所述初始路轨像素点中筛选位于预设图像高度区间中的候选路轨像素点;
    沿预设方向将归属于每一所述路轨的候选路轨像素点分别划分至M个像素区间,M为大于1的整数;
    获取每一像素区间的像素中心点,分别拟合每一所述路轨对应的M个像素中心点,得到与N条所述路轨对应的N条第一拟合直线;
    根据N条所述第一拟合直线,确定透视变换矩阵,依据所述透视变换矩阵将N条所述第一拟合直线以及所述候选路轨像素点映射至鸟瞰图中,分别得到N条第二拟合直线以及候选映射像素点;
    滤除所述候选映射像素点中的第一映射像素点,得到第二映射像素点,所述第一映射像素点为所述候选映射像素点中,与任一所述第二拟合直线之间的距离均大于第一距离阈值的像素点;
    分别拟合归属于每一所述路轨的目标路轨像素点,得到与N条所述路轨对应的N条路轨拟合方程;其中,所述目标路轨像素点为与所述第二映射像素点对应的候选路轨像素点。
  5. 根据权利要求2所述的方法,其中,在K为大于1的整数的情况下,所述K个初始点云数据为通过K个激光雷达采集得到;
    所述将所述K个初始点云数据映射至所述图像坐标系中,包括:
    获取每一所述激光雷达的雷达坐标系与预设基准坐标系之间的第一坐标系转换关系、以及所述预设基准坐标系与任一所述初始图像的图像坐标系之间的第二坐标系转换关系;
    分别将每一所述激光雷达采集的初始点云数据按对应的第一坐标系转换关系映射至所述预设基准坐标系中,得到混合点云数据;
    将所述混合点云数据按所述第二坐标系转换关系映射至所述图像坐标系中。
  6. 根据权利要求2所述的方法,其中,所述将所述K个初始点云数据映射至所述图像坐标系中,依据所述路轨拟合方程对所述K个初始点云数据进行筛选,得到所述第一点云,包括:
    将所述K个初始点云数据映射至所述图像坐标系中,得到映射点集;
    从所述映射点集中确定与所述路轨拟合方程之间距离小于第二距离阈值的目标映射点；
  7. 根据权利要求6所述的方法,其中,所述依据所述目标映射点,从所述K个初始点云数据中确定出所述第一点云,包括:
    从所述K个初始点云数据中确定与所述目标映射点对应的第三点云;
    将所述第三点云投影至车身坐标系的目标平面中,得到第一投影点集,其中,所述目标平面为依据车辆行驶方向与车辆高度方向确定的平面;
    滤除所述第一投影点集中的离群点,得到第二投影点集;
    根据所述第二投影点集,从所述K个初始点云数据中确定出所述第一点云。
  8. 根据权利要求1所述的方法,其中,以所述第一点云为基准点云,基于所述K个初始点云数据,检测所述目标轨道中的目标障碍物,包括:
    在车身坐标系中,从所述第一点云中确定与每一第一点云点关联的第二点云点;其中,所述第一点云点为所述K个初始点云数据中,除所述第一点云以外的点云点,所述第二点云点为所述第一点云中在X轴上距离所述第一点云点最近的点云点,所述X轴平行于车辆行驶方向;
    将在Y轴上与关联的第二点云点之间的距离满足预设距离条件的第一点云点确定为候选点云点,所述Y轴平行于车辆宽度方向;
    对所述候选点云点进行聚类,得到至少一个候选障碍物关联的第四点云;
    根据所述第四点云中的各点云点与其关联的第二点云点之间的高度差,从所述至少一个候选障碍物中确定目标障碍物。
  9. 根据权利要求8所述的方法，其中，在所述目标障碍物关联的第四点云中，目标点云点的数量大于数量阈值，且各目标点云点与其关联的第二点云点之间的高度差的最大值大于第二差值阈值，其中，所述目标点云点为与其关联的第二点云点之间的高度差大于第一差值阈值的点云点。
  10. 根据权利要求8所述的方法,其中,所述根据所述第四点云中的各点云点与其关联的第二点云点之间的高度差,从所述至少一个候选障碍物中确定目标障碍物之后,所述方法还包括:
    在车身坐标系的X轴上,将车辆与所述目标障碍物关联的第四点云中各点云点之间的最近距离,确定为车辆与所述目标障碍物之间的目标距离;
    在所述目标距离小于第三距离阈值时,输出报警信号。
  11. 根据权利要求1所述的方法,其中,在L为大于1的整数的情况下,所述L张初始图像为通过L个相机采集得到;
    所述L个相机中,至少两个相机之间的焦距存在不同。
  12. 一种车辆,包括:
    获取模块,用于获取对目标轨道采集得到的K个初始点云数据与L张初始图像,K与L均为正整数;
    第一确定模块,用于分别基于每一所述初始图像,从所述K个初始点云数据中确定出第一点云,所述第一点云为归属于所述目标轨道中的路轨的点云;
    检测模块,用于以所述第一点云为基准点云,基于所述K个初始点云数据,检测所述目标轨道中的目标障碍物,其中,所述目标障碍物在所述K个初始点云数据中关联有第二点云,所述第二点云与所述第一点云之间满足预设位置关系。
  13. 一种电子设备,所述设备包括:处理器以及存储有计算机程序指令的存储器;
    所述处理器执行所述计算机程序指令时实现如权利要求1-11任意一项所述的障碍物检测方法。
  14. 一种计算机存储介质,所述计算机存储介质上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现如权利要求1-11任意一项所述的障碍物检测方法。
  15. 一种计算机程序产品，所述计算机程序产品可被处理器执行以实现如权利要求1-11任意一项所述的障碍物检测方法。
  16. 一种芯片，所述芯片包括处理器和通信接口，所述通信接口和所述处理器耦合，所述处理器用于运行程序或指令，实现如权利要求1-11任意一项所述的障碍物检测方法。
PCT/CN2022/081631 2021-03-23 2022-03-18 障碍物检测方法、车辆、设备及计算机存储介质 WO2022199472A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110306554.4 2021-03-23
CN202110306554.4A CN113536883B (zh) 2021-03-23 2021-03-23 障碍物检测方法、车辆、设备及计算机存储介质

Publications (1)

Publication Number Publication Date
WO2022199472A1 true WO2022199472A1 (zh) 2022-09-29

Family

ID=78094376

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/081631 WO2022199472A1 (zh) 2021-03-23 2022-03-18 障碍物检测方法、车辆、设备及计算机存储介质

Country Status (2)

Country Link
CN (1) CN113536883B (zh)
WO (1) WO2022199472A1 (zh)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115508844A (zh) * 2022-11-23 2022-12-23 江苏新宁供应链管理有限公司 基于激光雷达的物流输送机跑偏智能检测方法
CN115797401A (zh) * 2022-11-17 2023-03-14 昆易电子科技(上海)有限公司 对齐参数的验证方法、装置、存储介质及电子设备
CN115880252A (zh) * 2022-12-13 2023-03-31 北京斯年智驾科技有限公司 一种集装箱吊具检测方法、装置、计算机设备及存储介质
CN115880536A (zh) * 2023-02-15 2023-03-31 北京百度网讯科技有限公司 数据处理方法、训练方法、目标对象检测方法及装置
CN115937826A (zh) * 2023-02-03 2023-04-07 小米汽车科技有限公司 目标检测方法及装置
CN115965682A (zh) * 2022-12-16 2023-04-14 镁佳(北京)科技有限公司 一种车辆可通行区域确定方法、装置及计算机设备
CN116148878A (zh) * 2023-04-18 2023-05-23 浙江华是科技股份有限公司 船舶干舷高度识别方法及系统
CN116385528A (zh) * 2023-03-28 2023-07-04 小米汽车科技有限公司 标注信息的生成方法、装置、电子设备、车辆及存储介质
CN116533998A (zh) * 2023-07-04 2023-08-04 深圳海星智驾科技有限公司 车辆的自动驾驶方法、装置、设备、存储介质及车辆
CN116612059A (zh) * 2023-07-17 2023-08-18 腾讯科技(深圳)有限公司 图像处理方法及装置、电子设备、存储介质
CN116630390A (zh) * 2023-07-21 2023-08-22 山东大学 基于深度图模板的障碍物检测方法、系统、设备及介质
CN116703922A (zh) * 2023-08-08 2023-09-05 青岛华宝伟数控科技有限公司 一种锯木缺陷位置智能定位方法及系统
CN116793245A (zh) * 2023-08-24 2023-09-22 济南瑞源智能城市开发有限公司 一种基于轨道机器人的隧道检测方法、设备及介质
CN116824518A (zh) * 2023-08-31 2023-09-29 四川嘉乐地质勘察有限公司 基于图像识别的桩基静载检测方法、装置、处理器
CN117048596A (zh) * 2023-08-04 2023-11-14 广州汽车集团股份有限公司 避让障碍物的方法、装置、车辆及存储介质
CN116385528B (zh) * 2023-03-28 2024-04-30 小米汽车科技有限公司 标注信息的生成方法、装置、电子设备、车辆及存储介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536883B (zh) * 2021-03-23 2023-05-02 长沙智能驾驶研究院有限公司 障碍物检测方法、车辆、设备及计算机存储介质
CN116047499B (zh) * 2022-01-14 2024-03-26 北京中创恒益科技有限公司 一种目标施工车辆的输电线路高精度实时防护系统和方法
CN114723830B (zh) * 2022-03-21 2023-04-18 深圳市正浩创新科技股份有限公司 障碍物的识别方法、设备及存储介质
CN115824237B (zh) * 2022-11-29 2023-09-26 重庆赛迪奇智人工智能科技有限公司 轨道路面识别方法及装置
CN116772887B (zh) * 2023-08-25 2023-11-14 北京斯年智驾科技有限公司 一种车辆航向初始化方法、系统、装置和可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635700A (zh) * 2018-12-05 2019-04-16 深圳市易成自动驾驶技术有限公司 障碍物识别方法、设备、系统及存储介质
CN110481601A (zh) * 2019-09-04 2019-11-22 深圳市镭神智能系统有限公司 一种轨道检测系统
CN112036274A (zh) * 2020-08-19 2020-12-04 江苏智能网联汽车创新中心有限公司 一种可行驶区域检测方法、装置、电子设备及存储介质
CN112154445A (zh) * 2019-09-19 2020-12-29 深圳市大疆创新科技有限公司 高精度地图中车道线的确定方法和装置
CN113536883A (zh) * 2021-03-23 2021-10-22 长沙智能驾驶研究院有限公司 障碍物检测方法、车辆、设备及计算机存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104374335B (zh) * 2014-11-20 2017-09-05 中车青岛四方机车车辆股份有限公司 轨道车辆限界检测系统
CN107533630A (zh) * 2015-01-20 2018-01-02 索菲斯研究股份有限公司 用于远程感测和车辆控制的实时机器视觉和点云分析
CN109360239B (zh) * 2018-10-24 2021-01-15 长沙智能驾驶研究院有限公司 障碍物检测方法、装置、计算机设备和存储介质
CN110239592A (zh) * 2019-07-03 2019-09-17 中铁轨道交通装备有限公司 一种轨道车辆主动式障碍物及脱轨检测系统
CN110967024A (zh) * 2019-12-23 2020-04-07 苏州智加科技有限公司 可行驶区域的检测方法、装置、设备及存储介质
CN111007531A (zh) * 2019-12-24 2020-04-14 电子科技大学 一种基于激光点云数据的道路边沿检测方法
CN111881752B (zh) * 2020-06-27 2023-04-28 武汉中海庭数据技术有限公司 一种护栏检测分类方法、装置、电子设备及存储介质


Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797401A (zh) * 2022-11-17 2023-03-14 昆易电子科技(上海)有限公司 对齐参数的验证方法、装置、存储介质及电子设备
CN115508844A (zh) * 2022-11-23 2022-12-23 江苏新宁供应链管理有限公司 基于激光雷达的物流输送机跑偏智能检测方法
CN115880252A (zh) * 2022-12-13 2023-03-31 北京斯年智驾科技有限公司 一种集装箱吊具检测方法、装置、计算机设备及存储介质
CN115880252B (zh) * 2022-12-13 2023-10-17 北京斯年智驾科技有限公司 一种集装箱吊具检测方法、装置、计算机设备及存储介质
CN115965682B (zh) * 2022-12-16 2023-09-01 镁佳(北京)科技有限公司 一种车辆可通行区域确定方法、装置及计算机设备
CN115965682A (zh) * 2022-12-16 2023-04-14 镁佳(北京)科技有限公司 一种车辆可通行区域确定方法、装置及计算机设备
CN115937826B (zh) * 2023-02-03 2023-05-09 小米汽车科技有限公司 目标检测方法及装置
CN115937826A (zh) * 2023-02-03 2023-04-07 小米汽车科技有限公司 目标检测方法及装置
CN115880536B (zh) * 2023-02-15 2023-09-01 北京百度网讯科技有限公司 数据处理方法、训练方法、目标对象检测方法及装置
CN115880536A (zh) * 2023-02-15 2023-03-31 北京百度网讯科技有限公司 数据处理方法、训练方法、目标对象检测方法及装置
CN116385528A (zh) * 2023-03-28 2023-07-04 小米汽车科技有限公司 标注信息的生成方法、装置、电子设备、车辆及存储介质
CN116385528B (zh) * 2023-03-28 2024-04-30 小米汽车科技有限公司 标注信息的生成方法、装置、电子设备、车辆及存储介质
CN116148878A (zh) * 2023-04-18 2023-05-23 浙江华是科技股份有限公司 船舶干舷高度识别方法及系统
CN116533998A (zh) * 2023-07-04 2023-08-04 深圳海星智驾科技有限公司 车辆的自动驾驶方法、装置、设备、存储介质及车辆
CN116533998B (zh) * 2023-07-04 2023-09-29 深圳海星智驾科技有限公司 车辆的自动驾驶方法、装置、设备、存储介质及车辆
CN116612059B (zh) * 2023-07-17 2023-10-13 腾讯科技(深圳)有限公司 图像处理方法及装置、电子设备、存储介质
CN116612059A (zh) * 2023-07-17 2023-08-18 腾讯科技(深圳)有限公司 图像处理方法及装置、电子设备、存储介质
CN116630390A (zh) * 2023-07-21 2023-08-22 山东大学 基于深度图模板的障碍物检测方法、系统、设备及介质
CN116630390B (zh) * 2023-07-21 2023-10-17 山东大学 基于深度图模板的障碍物检测方法、系统、设备及介质
CN117048596A (zh) * 2023-08-04 2023-11-14 广州汽车集团股份有限公司 避让障碍物的方法、装置、车辆及存储介质
CN116703922B (zh) * 2023-08-08 2023-10-13 青岛华宝伟数控科技有限公司 一种锯木缺陷位置智能定位方法及系统
CN116703922A (zh) * 2023-08-08 2023-09-05 青岛华宝伟数控科技有限公司 一种锯木缺陷位置智能定位方法及系统
CN116793245A (zh) * 2023-08-24 2023-09-22 济南瑞源智能城市开发有限公司 一种基于轨道机器人的隧道检测方法、设备及介质
CN116793245B (zh) * 2023-08-24 2023-12-01 济南瑞源智能城市开发有限公司 一种基于轨道机器人的隧道检测方法、设备及介质
CN116824518A (zh) * 2023-08-31 2023-09-29 四川嘉乐地质勘察有限公司 基于图像识别的桩基静载检测方法、装置、处理器
CN116824518B (zh) * 2023-08-31 2023-11-10 四川嘉乐地质勘察有限公司 基于图像识别的桩基静载检测方法、装置、处理器

Also Published As

Publication number Publication date
CN113536883B (zh) 2023-05-02
CN113536883A (zh) 2021-10-22

Similar Documents

Publication Publication Date Title
WO2022199472A1 (zh) 障碍物检测方法、车辆、设备及计算机存储介质
Fernandez Llorca et al. Vision‐based vehicle speed estimation: A survey
Zhangyu et al. A camera and LiDAR data fusion method for railway object detection
CN113468941B (zh) 障碍物检测方法、装置、设备及计算机存储介质
JP7062407B2 (ja) 支障物検知装置
US20140348390A1 (en) Method and apparatus for detecting traffic monitoring video
CN108230254A (zh) 一种自适应场景切换的高速交通全车道线自动检测方法
Wu et al. An algorithm for automatic vehicle speed detection using video camera
JP5834933B2 (ja) 車両位置算出装置
JP4940177B2 (ja) 交通流計測装置
WO2021253245A1 (zh) 识别车辆变道趋势的方法和装置
Sochor et al. Brnocompspeed: Review of traffic camera calibration and comprehensive dataset for monocular speed measurement
CN111915883A (zh) 一种基于车载摄像的道路交通状况检测方法
CN102914290A (zh) 地铁限界检测系统及其检测方法
EP3806062A1 (en) Detection device and detection system
CN114495064A (zh) 一种基于单目深度估计的车辆周围障碍物预警方法
Tak et al. Development of AI-based vehicle detection and tracking system for C-ITS application
CN114814823A (zh) 基于毫米波雷达和相机融合的轨道车辆检测系统和方法
CN111160132B (zh) 障碍物所在车道的确定方法、装置、电子设备和存储介质
Wang et al. Object tracking based on the fusion of roadside LiDAR and camera data
CN114814826A (zh) 一种基于目标网格的雷达轨行区环境感知方法
CN109720380B (zh) 用于隐藏列车排除的主动识别系统及隐藏列车排除方法
CN115755094A (zh) 障碍物检测方法、装置、设备及存储介质
CN117523914A (zh) 碰撞预警方法、装置、设备、可读存储介质及程序产品
CN112380927B (zh) 一种轨道识别方法及其装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22774133

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22774133

Country of ref document: EP

Kind code of ref document: A1