CN114283391A - Automatic parking sensing method fusing panoramic image and laser radar - Google Patents
- Publication number: CN114283391A
- Application number: CN202111362215.4A
- Authority: CN (China)
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Abstract
Abstract
The invention relates to an automatic parking sensing method fusing a surround-view image and a laser radar, comprising the following steps. Step 1: acquire original images through four fisheye cameras mounted on the vehicle body and perform distortion correction on them. Step 2: apply a projective transformation to each corrected image, using four marker points with known ground coordinates, to obtain four bird's-eye views projected onto the ground. Step 3: stitch the four bird's-eye views into a surround-view image based on the consistency of the spatial positions of the marker points. Step 4: feed the surround-view image into a parking-space detection model to obtain the pixel coordinates of each parking space. Step 5: obtain the global coordinates of the parking space from its pixel coordinates for automatic parking path planning. Compared with the prior art, the method simplifies the parking-space tracking algorithm, improves the intelligence and robustness of path planning, and helps guarantee safety during the automatic parking process.
Description
Technical Field
The invention relates to the technical field of automatic parking, in particular to an automatic parking sensing method fusing a panoramic image and a laser radar.
Background
Most existing automatic parking schemes draw their sensing information from a surround-view image alone, from the fusion of a surround-view image with millimetre-wave radar, or from the fusion of a surround-view image with ultrasonic sensors. The surround-view image is used to detect parking spaces, either with traditional vision methods or with deep learning, while the millimetre-wave radar and ultrasonic sensors judge whether a vehicle occupies the target space and measure the distance to obstacles. For localization, existing schemes mainly rely on the vehicle's own kinematic information, which is then used to track the parking-space coordinates.
A parking scheme that relies only on the surround-view image cannot sense obstacles; ultrasonic sensors have a short sensing range; and millimetre-wave radar has a limited angular field of view, leaving sensing blind zones. In addition, a localization method that relies solely on the motion state of the vehicle accumulates error over time. For the automatic parking function, therefore, existing methods perceive the environment incompletely: on one hand this limits how intelligent automatic parking can be, and on the other hand it makes safety during the parking process difficult to guarantee, leaving significant hidden dangers.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an automatic parking sensing method fusing a panoramic image and a laser radar.
The purpose of the invention can be realized by the following technical scheme:
An automatic parking sensing method fusing a surround-view image and a laser radar comprises the following steps:
Step 1: acquire original images through four fisheye cameras mounted on the vehicle body and perform distortion correction on them;
Step 2: apply a projective transformation to each corrected image, using four marker points with known ground coordinates, to obtain four bird's-eye views projected onto the ground;
Step 3: stitch the four bird's-eye views into a surround-view image based on the consistency of the spatial positions of the marker points;
Step 4: feed the surround-view image into a parking-space detection model to obtain the pixel coordinates of each parking space;
Step 5: obtain the global coordinates of the parking space from its pixel coordinates for automatic parking path planning.
In step 1, the four fisheye cameras are installed at the front, rear, left side and right side of the vehicle body. The left and right cameras are mounted at the middle of the left and right edges of the roof, which improves the clarity of the projection-transformed surround-view image.
In the step 1, the process of performing distortion correction on the original image specifically includes:
Distortion correction (i.e., undistortion) is applied to the original images. The four fisheye cameras are intrinsically calibrated with the OpenCV fisheye camera model, yielding an intrinsic matrix A and a distortion coefficient set K. The distortion-correcting transformation is computed from A and K and expressed as a mapping, and each incoming frame is remapped with this mapping to obtain the corrected image for each of the four cameras:

    A = | fx  0   cx |
        | 0   fy  cy |
        | 0   0   1  |

    K = (k1, k2, k3, k4)

where A is the intrinsic matrix, fx and fy are the focal-length parameters in the x and y directions, cx and cy are the offsets between the optical axis and the pixel origin in the x and y directions, K is the distortion coefficient set, and k1, k2, k3 and k4 are distortion coefficients.
In the step 2, the process of obtaining the four aerial views projected on the ground specifically comprises the following steps:
Two marker points are selected at each overlap between the image fields of adjacent fisheye cameras, eight marker points in total: A1, A2, B1, B2, C1, C2, D1 and D2. The ground coordinates of the marker points are obtained by field measurement, and their pixel coordinates are selected manually on the original images. The image of each fisheye camera is then projectively transformed using its four corresponding marker points.
In the step 3, the process of obtaining the panoramic image specifically comprises the following steps:
The marker points in the overlap of two adjacent bird's-eye views determine a straight line, which is used as the stitching line between those views. The four top-down images are stitched with masks: the final surround-view image is divided into four parts along the stitching lines, a mask is made for each image, and the mask edges are Gaussian-blurred so that the seams between views transition naturally. The colours of the four images are balanced to remove the influence of ambient light, and stitching yields a 600 × 600 pixel surround-view image.
In the step 4, the process of obtaining the pixel coordinates of the parking space specifically includes the following steps:
Step 401: perform parking-space detection with a YOLOv3-based detection model to obtain a feature map of the surround-view image, and divide the feature map into a plurality of grids;
Step 402: regress the parking-space corner points within each grid's range from its feature map;
Step 403: infer valid parking-space positions from the relative positions of the corner points, and obtain the pixel coordinates of all four corner points of each valid space by calculation and screening. Each parking space corresponds to four corner points; the model detects two of them, and after calculation and screening the remaining two are inferred from prior information.
In the step 5, the process of obtaining the global coordinate of the parking space specifically includes the following steps:
step 501: calibrating by adopting a quick calibration method based on a look-around image and a laser radar, acquiring parking space information slots _ msg _ lidar of a parking space in a laser radar coordinate system, and sensing obstacle information of a surrounding environment in real time;
Step 502: locate the vehicle on the point cloud map using the laser radar, obtain the global coordinates of the ego-vehicle, and form the vehicle pose message odometry of the ego-vehicle localization;
Step 503: synchronize the timestamps of the vehicle pose message odometry and the parking-space message slots_msg_lidar, obtain from odometry the rotation-translation transformation matrix of the laser radar coordinate system relative to the global map, and convert the radar coordinates of the parking-space corner points into global coordinates on the global map, yielding the global coordinates of the parking space;
Step 504: for global coordinates of parking spaces obtained in different frames, compute the distance between each pair of global coordinates and check whether it is below a set threshold; if so, the two detections are judged to be the same parking space, and the mean of that space's global coordinates across frames is taken as the final detection result;
Step 505: build an obstacle occupancy grid map in real time from the radar point cloud and obstacle information collected by the laser radar, select a target parking space for automatic parking from the detection results, obtain the global pose of the path-planning end point, and perform real-time path planning on the grid map.
In step 501, the process of performing calibration by using a fast calibration method based on a panoramic image and a laser radar specifically includes the following steps:
Step 5011: during automatic parking, the ground is assumed to be a horizontal plane and the height coordinate of ground radar points is ignored, simplifying the calibration to a calibration between two plane coordinate systems;
Step 5012: describe the mapping from the pixel coordinate system to the laser radar coordinate system with a 3 × 3 homography matrix H:

    [x2, y2, 1]^T ∝ H · [x1, y1, 1]^T,   H = | H11  H12  H13 |
                                             | H21  H22  H23 |
                                             | H31  H32  H33 |

where (x1, y1) are the pixel coordinates of a parking-space corner point in the pixel coordinate system, (x2, y2) are its radar coordinates in the laser radar coordinate system, H is the homography matrix with elements H11 through H33, and ∝ denotes equality up to a scale factor;
Step 5013: since homogeneous coordinates are defined only up to scale, normalize the homography matrix by setting its element H33 to 1;
Step 5014: select calibration point pairs from the radar point cloud and the surround-view image. The calibration points are marked by sticking reflective patches on the ground; each point is found in RVIZ by displaying the reflection intensity of the radar point cloud, giving its radar coordinates in the laser radar coordinate system, while its pixel coordinates in the surround-view image are obtained with image-processing software;
step 5015: solving the mapping relation between the all-round view image plane and the laser radar ground plane according to the collected calibration point pairs to obtain a homography matrix H, and minimizing a reprojection error;
Step 5016: after calibration, place a traffic cone where a laser radar beam sweeps the ground and project the radar point cloud onto the surround-view image for verification; the projected laser radar points should correspond to the image;
step 5017: and converting the pixel coordinates of the angular point of the parking space into radar coordinates under a laser radar coordinate system by using the obtained homography matrix H to obtain the radar coordinates of the parking space, and issuing the radar coordinates as parking space information slots _ msg _ lidar.
In step 5015, the reprojection error to be minimized is computed as:

    ε = Σi ‖ π(H · [x1,i, y1,i, 1]^T) − [x2,i, y2,i]^T ‖²

where ε represents the reprojection error, the sum runs over the calibration point pairs, and π(·) divides a homogeneous vector by its third component.
In step 5017, the pixel coordinates of a parking-space corner point are converted into radar coordinates in the laser radar coordinate system by:

    [u, v, w]^T = H · [x1, y1, 1]^T,   x2 = u / w,   y2 = v / w

where x2 is the abscissa and y2 the ordinate of the parking-space corner point in the laser radar coordinate system.
Compared with the prior art, the invention has the following advantages:
1. The automatic parking sensing method fusing the surround-view image and the laser radar builds a bridge between surround-view and point-cloud sensing information, fills a gap in the application of laser radar sensors to automatic parking, and realizes a more intelligent automatic parking function;
2. Mounting the side cameras on the roof enlarges the field of view of the surround-view perception;
3. The invention simplifies the calibration process, which is the basis for fusing the surround-view image with the laser radar's sensing information;
4. Marking the parking spaces on the global map for planning simplifies the parking-space tracking algorithm; fusing the laser radar information makes environment perception more comprehensive, improves the intelligence and robustness of path planning, and also helps guarantee safety during automatic parking.
Drawings
Fig. 1 is a structural frame diagram of the present invention.
Fig. 2 is a schematic view of the mounting position of the fisheye camera of the invention.
FIG. 3 is a schematic diagram of the stitching of the ring-view images according to the present invention.
Fig. 4 is a view of the laser radar calibration effect of the panoramic image.
FIG. 5 is a diagram showing the detection results.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Examples
The invention discloses an automatic parking sensing method fusing a panoramic image and a laser radar, which comprises the following steps:
Step 1: acquire original images through four fisheye cameras mounted on the vehicle and perform distortion correction on them;
Step 2: apply a projective transformation to each corrected image, using four marker points with known ground coordinates, to obtain four bird's-eye views projected onto the ground;
Step 3: stitch the four bird's-eye views into a surround-view image based on the consistency of the spatial positions of the marker points;
Step 4: feed the surround-view image into a parking-space detection model to obtain the pixel coordinates of each parking space;
Step 5: obtain the global coordinates of the parking space from its pixel coordinates for automatic parking path planning.
As shown in Fig. 1, the invention acquires original images with four fisheye cameras having 190° fields of view, installed at the front, rear, left side and right side of the vehicle body. The left and right fisheye cameras are mounted at the middle of the left and right edges of the roof, so that the surround-view field covers an area of 12 × 12 m² while also improving the clarity of the projection-transformed surround-view image. The camera mounting positions are shown in Fig. 2.
In step 1, distortion correction (undistortion) is applied to the original images captured by the fisheye cameras. The four cameras are intrinsically calibrated with the OpenCV fisheye camera model to obtain an intrinsic matrix A and a distortion coefficient set K, from which the distortion-correcting transformation of the raw surround-view imagery is computed. The result is expressed as a mapping, and the same remapping is then applied to every input frame to obtain the four corrected images, which speeds up image processing:

    A = | fx  0   cx |
        | 0   fy  cy |
        | 0   0   1  |

    K = (k1, k2, k3, k4)

where A is the intrinsic matrix, fx and fy are the focal-length parameters in the x and y directions, cx and cy are the offsets between the optical axis and the pixel origin in the x and y directions, K is the distortion coefficient set, and k1, k2, k3 and k4 are distortion coefficients.
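The calibration model named above can be sketched in pure NumPy. The round trip below uses the equidistant distortion model that OpenCV's fisheye module is built on, working on normalised image coordinates; the fixed-point inversion is an illustrative stand-in for what cv2.fisheye.undistortPoints computes internally, not the patent's own implementation, and the function names are assumptions.

```python
import numpy as np

def distort_point(xu, yu, K):
    """Forward OpenCV-style fisheye (equidistant) model: map an undistorted
    normalised point (xu, yu) to its distorted normalised coordinates."""
    k1, k2, k3, k4 = K
    r = np.hypot(xu, yu)
    theta = np.arctan(r)
    theta_d = theta * (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)
    s = theta_d / r if r > 1e-12 else 1.0
    return xu * s, yu * s

def undistort_point(xd, yd, K, iters=30):
    """Invert the model by fixed-point iteration on theta; converges quickly
    for the small distortion coefficients typical of a calibrated camera."""
    k1, k2, k3, k4 = K
    theta_d = np.hypot(xd, yd)
    theta = theta_d
    for _ in range(iters):
        t2 = theta * theta
        theta = theta_d / (1 + k1*t2 + k2*t2**2 + k3*t2**3 + k4*t2**4)
    s = np.tan(theta) / theta_d if theta_d > 1e-12 else 1.0
    return xd * s, yd * s
```

Pixel coordinates convert to and from this normalised frame via the intrinsic matrix A, e.g. xd = (u - cx) / fx.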
As shown in Fig. 3, in step 2 four marker points with known ground coordinates are selected for the top-view projective transformation. Since the image of each fisheye camera needs four markers for its projection, two marker points are chosen at each overlap between the image fields of adjacent cameras, eight in total: A1, A2, B1, B2, C1, C2, D1 and D2. The ground coordinates of the marker points are obtained by field measurement, and their pixel coordinates are selected manually on the images.
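The projective transformation from four marker correspondences amounts to solving an 8 × 8 linear system with the last homography element fixed to 1 — the same system cv2.getPerspectiveTransform solves. A minimal NumPy sketch (function names are illustrative, not from the patent):

```python
import numpy as np

def perspective_from_4pts(src, dst):
    """Solve the 3x3 projective transform with H33 = 1 that maps the four
    pixel marker points src onto their ground coordinates dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two linear equations in the 8 unknowns
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply a homography to one point (homogeneous divide by w)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

Warping the whole image with this matrix is what cv2.warpPerspective then does per pixel.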
In step 3, the marker points in the overlap of two adjacent bird's-eye views determine a straight line, which serves as the stitching line between those views. The four top-down images are stitched with masks: the final surround-view image is divided into four parts along the stitching lines, a mask is made for each image, and the mask edges are Gaussian-blurred so that the seams transition naturally. The colours of the four images are balanced to remove the influence of ambient light, and stitching yields a 600 × 600 pixel surround-view image.
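The mask-and-feather idea can be sketched as below. This is a hypothetical simplification: the seams are taken as the two canvas diagonals (the patent derives them from the marker points), and a repeated box blur stands in for the Gaussian blur; after blurring, the four weights are renormalised so they sum to 1 at every pixel.

```python
import numpy as np

def feather_masks(h, w, seam_blur=3):
    """Build four blending masks (front, right, back, left) for an h x w
    canvas, split along the diagonals, with feathered (blurred) seams."""
    yy, xx = np.mgrid[0:h, 0:w]
    d1 = yy * w - xx * h          # sign splits across the main diagonal
    d2 = yy * w + xx * h - h * w  # sign splits across the anti-diagonal
    masks = [
        ((d1 < 0) & (d2 < 0)).astype(float),    # front (top) wedge
        ((d1 < 0) & (d2 >= 0)).astype(float),   # right wedge
        ((d1 >= 0) & (d2 >= 0)).astype(float),  # back (bottom) wedge
        ((d1 >= 0) & (d2 < 0)).astype(float),   # left wedge
    ]
    def box_blur(m, k):
        # simple stand-in for Gaussian blur of the mask edges
        pad = np.pad(m, k, mode='edge')
        out = np.zeros_like(m)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                out += pad[k + dy:k + dy + m.shape[0], k + dx:k + dx + m.shape[1]]
        return out / (2 * k + 1) ** 2
    masks = [box_blur(m, seam_blur) for m in masks]
    total = sum(masks)  # equals 1 everywhere, since blur is linear
    return [m / total for m in masks]
```

Blending is then a weighted sum, e.g. pano = sum(m[..., None] * img for m, img in zip(masks, birds_eyes)).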
In step 4, a parking-space detection model based on YOLOv3 produces a feature map of the surround-view image, which is divided into grids. The corner points of parking spaces within each grid's range are regressed from its features, valid parking-space positions are inferred from the relative positions of the corner points, and the pixel coordinates of the four complete corner points of each space are then calculated, giving the pixel coordinates of the parking space.
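One way to realise the corner completion described above — inferring the two far corners from the two detected entrance corners — is to assume a rectangular slot with a known prior depth. The helper name, the depth prior, and the choice of which perpendicular side the slot extends towards are all assumptions for illustration:

```python
import numpy as np

def complete_slot(p1, p2, depth):
    """Given the two detected entrance corners p1, p2 of a rectangular slot
    and a prior slot depth, infer the two far corners (p1', p2')."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e = p2 - p1                      # entrance edge
    n = np.array([-e[1], e[0]])      # perpendicular to the entrance
    n = n / np.linalg.norm(n) * depth
    return p1 + n, p2 + n
```

In practice the sign of the normal would be chosen to point away from the driving lane.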
The output of parking-space detection is the pixel coordinates of the parking space. For automatic parking path planning, the global coordinates of the space in the global (map) coordinate system are needed, and obstacle information about the surroundings must be sensed in real time. The vehicle is localized on a point cloud map with the laser radar (laser SLAM), giving the global coordinates of the ego-vehicle and forming the vehicle pose message odometry. An obstacle occupancy grid map is built in real time from the radar point cloud collected by the laser radar, and real-time path planning is performed on this grid map by combining the ego-vehicle localization with the global coordinates of the parking-space corner points.
Calculating the global coordinates of the parking space under the global map:
the invention firstly calibrates a pixel coordinate system and a laser radar coordinate system, namely a panoramic image 2D coordinate system and a laser radar 3D coordinate system, obtains radar coordinates of parking space angular points under the laser radar 3D coordinate system, calibrates the panoramic image 2D coordinate system and the laser radar 3D coordinate system, and makes a calibration flow become complicated if a single fisheye camera is respectively calibrated with the laser radar coordinate system and then converted into the panoramic image 2D coordinate system because the panoramic image is not directly acquired by the camera and has no parameters such as camera internal parameters, so that the calibration flow is complicated and fussy, therefore, the invention adopts a rapid calibration method based on the panoramic image and the laser radar to calibrate, because the parking space detection only needs to focus on image information of the ground in the panoramic image, only focuses on the radar which scans the Point cloud on the ground in the calibration process, in the process of automatic parking, the ground is assumed to be a horizontal plane, the height coordinate of ground radar point cloud is ignored, and the calibration is simplified into the calibration of two plane coordinates, so that a 3 x 3 homography matrix H is adopted to describe the mapping relation from pixel coordinates to radar coordinates, and (x) is set1,y1) Corresponding pixel coordinates (x) of the parking space corner point in a pixel coordinate system2,y2) Corresponding radar coordinates of the parking space angular points in a laser radar coordinate system include
Wherein (x)1,y1) Corresponding pixel coordinates (x) of the parking space corner point in a pixel coordinate system2,y2) Corresponding radar coordinates of the parking space corner points in a laser radar coordinate system, wherein H is a homography matrix and H is11、H12、H13、H21、H22、H23、H31、H32And H33Elements which are homography matrices, and oc represents a proportional relationship;
since the coordinates are converted to homogeneous coordinates, H33Normalization is performed as 1:
Eight unknowns remain, so solving the homography matrix H requires eight equations, i.e. at least four calibration point pairs. The pairs are selected from the radar point cloud and the surround-view image: calibration points are marked by sticking reflective patches on the ground, each point is found in RVIZ by displaying the reflection intensity of the radar point cloud to read its radar coordinates in the laser radar coordinate system, and its pixel coordinates in the surround-view image are obtained with image-processing software. The collected pairs are used to solve the perspective relation between the surround-view image plane and the laser radar ground plane, giving the homography matrix H while minimizing the reprojection error:

    ε = Σi ‖ π(H · [x1,i, y1,i, 1]^T) − [x2,i, y2,i]^T ‖²

where ε represents the reprojection error, the sum runs over the calibration point pairs, and π(·) divides a homogeneous vector by its third component.
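With more than four point pairs, the calibration becomes an over-determined linear system that can be solved in the least-squares sense (what cv2.findHomography does by default). A NumPy sketch with H33 fixed to 1, plus the reprojection error ε (function names are illustrative):

```python
import numpy as np

def fit_homography(pix, radar):
    """Least-squares homography (H33 = 1) from N >= 4 pixel/radar pairs."""
    A, b = [], []
    for (x1, y1), (x2, y2) in zip(pix, radar):
        A.append([x1, y1, 1, 0, 0, 0, -x2 * x1, -x2 * y1]); b.append(x2)
        A.append([0, 0, 0, x1, y1, 1, -y2 * x1, -y2 * y1]); b.append(y2)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def reprojection_error(H, pix, radar):
    """Sum of squared distances between projected pixels and radar targets."""
    err = 0.0
    for p, q in zip(pix, radar):
        v = H @ np.array([p[0], p[1], 1.0])
        err += np.sum((v[:2] / v[2] - np.asarray(q, float)) ** 2)
    return err
```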
As shown in Fig. 4, after calibration a traffic cone is placed where a laser radar beam sweeps the ground, and the radar point cloud is projected onto the surround-view image for verification; the projected laser radar points align closely with the image.
The homography matrix H obtained during calibration converts the pixel coordinates of the parking-space corner points into radar coordinates in the laser radar coordinate system, which are published as the parking-space message slots_msg_lidar:

    [u, v, w]^T = H · [x1, y1, 1]^T,   x2 = u / w,   y2 = v / w

where x2 is the abscissa and y2 the ordinate of the parking-space corner point in the laser radar coordinate system.
The vehicle pose message odometry from ego-vehicle localization and the parking-space message slots_msg_lidar are synchronized by timestamp. The rotation-translation transformation matrix of the laser radar coordinate system relative to the global map is obtained from odometry, and the radar coordinates of the parking-space corner points are converted into global coordinates on the global map, yielding the global coordinates of the parking space.
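Under the same flat-ground assumption used for calibration, the radar-to-map conversion reduces to a 2D rigid transform built from the odometry pose. A minimal sketch, assuming the pose is given as (x, y, yaw) and that the lidar frame coincides with the vehicle frame (both assumptions for illustration):

```python
import numpy as np

def radar_to_global(pose_xy_yaw, corners_radar):
    """Transform slot-corner coordinates from the lidar frame into the
    global map frame using the vehicle pose (x, y, yaw) from odometry."""
    x, y, yaw = pose_xy_yaw
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])  # rotation part of the pose
    t = np.array([x, y])                         # translation part
    return [tuple(R @ np.asarray(c, float) + t) for c in corners_radar]
```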
As shown in Fig. 5, for parking-space global coordinates obtained in different frames, the distance between each pair is computed and compared with a set threshold; if it is below the threshold, the two detections are judged to be the same parking space, and the mean of that space's global coordinates across frames is taken as the final detection result. A target parking space for automatic parking is selected from the detection results, the global pose of the path-planning end point is obtained, and path planning is then performed.
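The threshold-and-average association above can be sketched as a simple online clustering of per-frame slot coordinates; the function name and the running-mean bookkeeping are illustrative choices, not the patent's implementation:

```python
import numpy as np

def merge_detections(detections, threshold=0.5):
    """Associate per-frame slot coordinates: a detection closer than
    `threshold` (map units) to an existing cluster mean joins that cluster;
    otherwise it starts a new slot. Returns the per-slot mean coordinates."""
    slots = []  # each entry: [coordinate_sum, count]
    for d in detections:
        d = np.asarray(d, float)
        for s in slots:
            if np.linalg.norm(s[0] / s[1] - d) < threshold:
                s[0] += d
                s[1] += 1
                break
        else:  # no cluster close enough: new parking space
            slots.append([d.copy(), 1])
    return [s[0] / s[1] for s in slots]
```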
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and those skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An automatic parking sensing method fusing a surround view image and a laser radar is characterized by comprising the following steps:
step 1: acquiring an original image through four fisheye cameras mounted on a vehicle body, and carrying out distortion correction on the original image;
step 2: respectively carrying out projection transformation on the original image after distortion correction by selecting four mark points with known ground coordinates to obtain four aerial views projected to the ground;
step 3: splicing the four bird's-eye views based on the consistency of the spatial positions of the marker points to obtain a surround-view image;
step 4: inputting the surround-view image into a parking-space detection model for parking-space detection to obtain pixel coordinates of a parking space;
step 5: acquiring the global coordinates of the parking space according to its pixel coordinates for automatic parking path planning.
2. The method for sensing automatic parking of a fusion panoramic image and lidar according to claim 1, wherein in step 1, four fisheye cameras are respectively installed in front, back, left side and right side of a vehicle body, and the fisheye camera on the left side of the vehicle body and the fisheye camera on the right side of the vehicle body are respectively installed in the middle of the left side and the right side of a vehicle roof, so as to improve the definition of the projection-transformed panoramic image.
3. The method for sensing automatic parking by fusing the panoramic image and the laser radar according to claim 1, wherein in the step 1, the process of performing distortion correction on the original image specifically comprises:
distortion correction, namely distortion removal, is carried out on an original image, internal reference calibration is carried out on four fisheye cameras by adopting an OpenCV fisheye camera model to obtain an internal reference matrix A and a distortion coefficient group K, the distortion correction transformation of the image is calculated by the internal reference matrix A and the distortion coefficient group K, the result of the distortion correction transformation is expressed in a mapping form, the original image input by each frame is remapped according to the result of the distortion correction transformation in the mapping form, and images corresponding to the four fisheye cameras are obtained:
    A = | fx  0   cx |
        | 0   fy  cy |
        | 0   0   1  |

    K = (k1, k2, k3, k4)

wherein A is the intrinsic matrix, fx and fy are the focal-length parameters in the x and y directions, cx and cy are the offsets between the optical axis and the pixel origin in the x and y directions, K is the distortion coefficient set, and k1, k2, k3 and k4 are distortion coefficients.
4. The method for sensing automatic parking by fusing a panoramic image and a lidar according to claim 1, wherein the step 2 of obtaining four overhead views projected to the ground comprises:
respectively selecting two marker points at each overlap between the image fields of adjacent fisheye cameras, eight marker points in total: A1, A2, B1, B2, C1, C2, D1 and D2; the ground coordinates of the marker points are obtained by field measurement, their pixel coordinates are selected manually on the original images, and the image of each fisheye camera is projectively transformed using its four corresponding marker points.
5. The method for sensing automatic parking by fusing the panoramic image and the lidar according to claim 1, wherein the step 3 is to obtain the panoramic image specifically as follows:
determining a straight line by mark points at the overlapped part of the visual fields of two adjacent aerial views, splicing four overhead images by adopting the straight line as a splicing line of the two adjacent aerial views in a mask mode, dividing a final all-round view image into four parts by taking the splicing line as a boundary, respectively manufacturing masks of the four images, and carrying out Gaussian blur treatment on the edges of the masks, so that the transition of seams between the all-round view images is natural, balancing the colors of the four images, eliminating the influence of ambient light, and obtaining an all-round view image with 600 × 600 pixels after splicing.
6. The method for sensing automatic parking through fusion of the panoramic image and the lidar according to claim 1, wherein the step 4 of obtaining the pixel coordinates of the parking space comprises the following steps:
step 401: carrying out parking-space detection with a YOLOv3-based parking-space detection model to obtain a feature map of the surround-view image, and dividing the feature map into a plurality of grids;
step 402: regressing the parking-space corner points within the corresponding range from the feature map of each grid;
step 403: inferring valid parking-space positions from the relative positions of the corner points, and obtaining the pixel coordinates of the four complete corner points of each valid space by calculation and screening; each parking space corresponds to four corner points, two of which are detected, while the remaining two are inferred from prior information after calculation and screening.
7. The method for sensing automatic parking by fusing the panoramic image and the lidar according to claim 1, wherein obtaining the global coordinates of the parking space in step 5 comprises the following steps:
step 501: calibrating with a fast calibration method based on the surround-view image and the lidar, acquiring the parking space information slots_msg_lidar of a parking space in the lidar coordinate system, and sensing obstacle information of the surrounding environment in real time;
step 502: localizing the vehicle on a point cloud map with the lidar to obtain the global coordinates of the ego-vehicle, thereby forming the vehicle pose (odometry) message of the ego-vehicle localization;
step 503: synchronizing the timestamps of the vehicle pose (odometry) message and the parking space message slots_msg_lidar, obtaining from the pose message the rotation-translation transformation matrix of the lidar coordinate system relative to the global map, and converting the radar coordinates of the parking space corner points in the lidar coordinate system into global coordinates on the global map, thereby obtaining the global coordinates of the parking space;
step 504: computing the distance between the global coordinates of parking spaces detected in different frames, and judging whether the distance is smaller than a set threshold; if so, the two detections are judged to belong to the same parking space, and the mean of that space's global coordinates across frames is taken as the final detection result;
step 505: constructing an obstacle grid map in real time from the radar point cloud and obstacle information acquired by the lidar, selecting a target parking space for automatic parking from the detection results, obtaining the global pose of the path planning end point, and performing real-time path planning on the grid map.
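Steps 503 and 504 can be sketched as follows. The (x, y, yaw) pose representation, the 0.5 m threshold, the first-corner distance test, and both helper names are illustrative assumptions, not values from the patent.

```python
import math

def lidar_to_global(pose, corners):
    """Step 503 sketch: apply the rotation-translation of an assumed
    (x, y, yaw) ego pose on the global map to (x, y) corner points
    expressed in the lidar frame."""
    px, py, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(px + c * x - s * y, py + s * x + c * y) for x, y in corners]

def merge_slot_detections(slots, threshold=0.5):
    """Step 504 sketch: detections whose first corners lie within
    `threshold` (assumed metres) of an existing cluster are treated as
    the same physical space; each cluster's corners are averaged to
    give the final detection result."""
    clusters = []
    for det in slots:  # det: list of 4 (x, y) global corners
        for cluster in clusters:
            ref = cluster[0]
            if math.hypot(det[0][0] - ref[0][0], det[0][1] - ref[0][1]) < threshold:
                cluster.append(det)
                break
        else:
            clusters.append([det])
    return [[(sum(d[i][0] for d in cl) / len(cl),
              sum(d[i][1] for d in cl) / len(cl)) for i in range(4)]
            for cl in clusters]
```

A production system would cluster on all four corners and handle slot identity over longer horizons, but the distance-threshold-then-average logic is as claimed.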
8. The method for sensing automatic parking by fusing the panoramic image and the lidar according to claim 7, wherein the calibration with the fast calibration method based on the surround-view image and the lidar in step 501 comprises the following steps:
step 5011: during automatic parking, assuming the ground is a horizontal plane and ignoring the height coordinate of the ground radar point cloud, so that the calibration is simplified to a coordinate calibration between two planes;
step 5012: describing the mapping from the pixel coordinate system to the lidar coordinate system with a 3 × 3 homography matrix H, the mapping being expressed as

[x₂, y₂, 1]ᵀ ∝ H [x₁, y₁, 1]ᵀ

wherein (x₁, y₁) are the pixel coordinates of a parking space corner point in the pixel coordinate system, (x₂, y₂) are the radar coordinates of the corner point in the lidar coordinate system, H is the homography matrix with elements H₁₁, H₁₂, H₁₃, H₂₁, H₂₂, H₂₃, H₃₁, H₃₂ and H₃₃, and ∝ denotes proportionality up to scale;
step 5013: since homogeneous coordinates are defined only up to scale, normalizing the element H₃₃ of the homography matrix to 1, leaving eight unknown elements to be solved;
step 5014: selecting calibration point pairs from the radar point cloud and the surround-view image respectively, marking the calibration points by pasting reflective stickers on the ground, finding the marked points in RVIZ by displaying the reflection intensity of the radar point cloud to obtain their radar coordinates in the lidar coordinate system, and obtaining their pixel coordinates in the surround-view image through image processing software;
step 5015: solving the mapping between the surround-view image plane and the lidar ground plane from the collected calibration point pairs to obtain the homography matrix H while minimizing the reprojection error;
step 5016: after calibration, placing a traffic cone where a lidar scan line sweeps the ground, and projecting the radar point cloud onto the surround-view image for verification, the projected lidar points coinciding with the cone in the image;
step 5017: converting the pixel coordinates of the parking space corner points into radar coordinates in the lidar coordinate system with the obtained homography matrix H to obtain the radar coordinates of the parking space, and publishing them as the parking space information slots_msg_lidar.
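The calibration of steps 5012 to 5015 and the conversion of step 5017 can be sketched with a direct linear transform (DLT). The least-squares solve stands in for the claim's reprojection-error minimization, and the helper names and sample values are assumptions.

```python
import numpy as np

def solve_homography(pixel_pts, lidar_pts):
    """DLT solve for the 3x3 homography H from >= 4 calibration point
    pairs, with H33 normalized to 1 (step 5013). Each pair maps a pixel
    coordinate (x1, y1) to a lidar ground-plane coordinate (x2, y2):
        x2 = (H11*x1 + H12*y1 + H13) / (H31*x1 + H32*y1 + 1)
        y2 = (H21*x1 + H22*y1 + H23) / (H31*x1 + H32*y1 + 1)
    A least-squares solve over all pairs stands in for the
    reprojection-error minimization of step 5015."""
    A, b = [], []
    for (x1, y1), (x2, y2) in zip(pixel_pts, lidar_pts):
        A.append([x1, y1, 1, 0, 0, 0, -x1 * x2, -y1 * x2])
        b.append(x2)
        A.append([0, 0, 0, x1, y1, 1, -x1 * y2, -y1 * y2])
        b.append(y2)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)  # re-attach H33 = 1

def pixel_to_lidar(H, x1, y1):
    """Step 5017 sketch: apply H to a slot-corner pixel coordinate and
    dehomogenize to get its lidar ground-plane coordinate."""
    u, v, w = H @ np.array([x1, y1, 1.0])
    return u / w, v / w
```

With four or more well-spread reflective-sticker point pairs, `solve_homography` recovers H, after which every detected corner is converted with `pixel_to_lidar` before being published as slots_msg_lidar.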
10. The method for sensing automatic parking by fusing the panoramic image and the lidar according to claim 8, wherein in step 5017 the pixel coordinates of a parking space corner point are converted into radar coordinates in the lidar coordinate system by:

x₂ = (H₁₁x₁ + H₁₂y₁ + H₁₃) / (H₃₁x₁ + H₃₂y₁ + H₃₃)

y₂ = (H₂₁x₁ + H₂₂y₁ + H₂₃) / (H₃₁x₁ + H₃₂y₁ + H₃₃)

wherein x₂ is the abscissa and y₂ is the ordinate of the parking space corner point in the lidar coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111362215.4A CN114283391A (en) | 2021-11-17 | 2021-11-17 | Automatic parking sensing method fusing panoramic image and laser radar |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114283391A true CN114283391A (en) | 2022-04-05 |
Family
ID=80869321
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111362215.4A Pending CN114283391A (en) | 2021-11-17 | 2021-11-17 | Automatic parking sensing method fusing panoramic image and laser radar |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114283391A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115022604A (en) * | 2022-04-21 | 2022-09-06 | 新华智云科技有限公司 | Projection book and use method and system thereof |
CN115329932A (en) * | 2022-08-05 | 2022-11-11 | 中国民用航空飞行学院 | Airplane landing attitude monitoring method based on digital twins |
CN115965682A (en) * | 2022-12-16 | 2023-04-14 | 镁佳(北京)科技有限公司 | Method and device for determining passable area of vehicle and computer equipment |
CN115965682B (en) * | 2022-12-16 | 2023-09-01 | 镁佳(北京)科技有限公司 | Vehicle passable area determining method and device and computer equipment |
CN115661395A (en) * | 2022-12-27 | 2023-01-31 | 安徽蔚来智驾科技有限公司 | Parking space map building method, vehicle and storage medium |
CN115661395B (en) * | 2022-12-27 | 2023-04-11 | 安徽蔚来智驾科技有限公司 | Parking space map building method, vehicle and storage medium |
CN116385528A (en) * | 2023-03-28 | 2023-07-04 | 小米汽车科技有限公司 | Method and device for generating annotation information, electronic equipment, vehicle and storage medium |
CN116385528B (en) * | 2023-03-28 | 2024-04-30 | 小米汽车科技有限公司 | Method and device for generating annotation information, electronic equipment, vehicle and storage medium |
CN116229426A (en) * | 2023-05-09 | 2023-06-06 | 华东交通大学 | Unmanned parking space detection method based on panoramic all-around image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||