CN117746396A - Parking environment sensing method based on fisheye camera - Google Patents

Parking environment sensing method based on fisheye camera

Info

Publication number
CN117746396A
CN117746396A (application CN202410018414.0A)
Authority
CN
China
Prior art keywords
vehicle
target
points
point set
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410018414.0A
Other languages
Chinese (zh)
Inventor
Zheng Rui (郑瑞)
Yu Hongxiao (于宏啸)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Liu Ma Chi Chi Technology Co ltd
Original Assignee
Beijing Liu Ma Chi Chi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Liu Ma Chi Chi Technology Co ltd
Priority to CN202410018414.0A
Publication of CN117746396A
Legal status: Pending


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of automatic parking and relates in particular to a parking environment sensing method. A parking environment sensing method based on fisheye cameras comprises the following steps: S1, acquire 4 fisheye images around the vehicle at the same moment; S2, perform obstacle detection and drivable-area detection on the 4 fisheye images, extract the drivable-area boundary, and project the boundary points of the drivable area into a surround-view top view through a homography matrix; S3, determine the target feature point set of each obstacle; S4, project the target feature point sets into the surround-view top view; S5, filter noise points out of the target feature points; S6, fuse the feature point sets of the same target seen by different cameras; S7, estimate each target's position and orientation from the feature point sets of the different targets; S8, convert the determined target coordinates and the drivable-area boundary coordinates from the surround-view top-view coordinate system to the vehicle coordinate system and output them to downstream tasks. The method achieves low-cost perception of the vehicle's surroundings in a parking environment.

Description

Parking environment sensing method based on fisheye camera
Technical Field
The invention belongs to the technical field of automatic parking, and particularly relates to a parking environment sensing method.
Background
As automatic parking technology matures, reducing hardware cost and improving cost-effectiveness has become a key concern.
Conventional environment-sensing schemes involving fisheye cameras usually fuse camera and radar data, and run a deep-learning model on the fisheye image only after it has been de-distorted. De-distortion, however, sacrifices the fisheye camera's chief advantage, its wide field of view, and enlarges the blind area around the vehicle.
Disclosure of Invention
The purpose of the invention is to provide a low-cost method for sensing the vehicle's surroundings that relies only on the fisheye cameras around the vehicle body.
The technical scheme of the invention is as follows. A parking environment sensing method based on fisheye cameras comprises the following steps:
S1, acquire 4 fisheye images around the vehicle at the same moment.
S2, perform obstacle detection and drivable-area detection on the 4 fisheye images, extract the drivable-area boundary, and project the boundary points of the drivable area into a surround-view top view through a homography matrix; for obstacle detection, proceed to S3.
S3, determine the target feature point set of each obstacle.
S4, project the target feature point sets into the surround-view top view.
S5, filter noise points out of the target feature points.
S6, fuse the feature point sets of the same target seen by different cameras.
S7, estimate each target's position and orientation from the feature point sets of the different targets.
S8, convert the target coordinates determined in S7 and the drivable-area boundary coordinates determined in S2 from the surround-view top-view coordinate system to the vehicle coordinate system, and output them to downstream tasks.
In the above scheme, in S2 the boundary point set of the drivable area is obtained as follows:
S2.1, perform edge detection on the drivable area with the Canny edge detection algorithm.
S2.2, convert the image into a binary image according to the pixel values.
S2.3, set the pixel value at edge positions to 1 and at all other positions to 0, yielding the boundary point set of the drivable area.
In the above scheme, in S3 the obstacle target feature point set is obtained from the 2D target detection boxes and the drivable-area boundary as follows:
S3.1, traverse the boundary points of the drivable area and judge whether each point lies inside the detection box of some target in the fisheye image.
S3.2, take the drivable-area boundary points inside each target's detection box as that target's feature point set.
In the above scheme, step S4 comprises:
S4.1, de-distort the target feature point set using the distortion parameters.
S4.2, project the target feature point set into the surround-view top view through the calibrated homography matrix.
In the above scheme, step S5 comprises:
S5.1, perform DBSCAN clustering on each target feature point set.
S5.2, remove the noise category, and retain only the categories whose point count exceeds 30% of the target's total number of feature points.
S5.3, merge the retained categories as the target's denoised feature point set.
In the above scheme, step S6 comprises:
S6.1, obtain the circumscribed rectangle of each feature point set.
S6.2, compute the minimum-area IOU between every pair of target circumscribed rectangles in the surround-view top-view coordinate system.
S6.3, with a threshold of 0.5, merge any pair of targets whose minimum-area IOU exceeds 0.5, and output the merged feature point sets.
In the above scheme, step S7 comprises:
S7.1, for target obstacles with a small footprint, compute the distance from every feature point of the obstacle to the vehicle center point in the surround-view top-view coordinate system, and use the feature point with the minimum distance to represent the obstacle's position.
S7.2, for a vehicle obstacle, determine the four corner points as follows.
S7.2.1, obtain the feature points with the maximum and minimum x and y values among all the obstacle's feature points in the surround-view top-view coordinate system: x_min_pt, x_max_pt, y_min_pt and y_max_pt.
S7.2.2, connect x_min_pt, x_max_pt, y_min_pt and y_max_pt in pairs to form the vectors l1, l2, l3, l4, l5 and l6.
S7.2.3, compute the cosine similarity between every pair of l1, l2, l3, l4, l5 and l6 to obtain the minimum cosine.
S7.2.4, if the minimum cosine is less than 0.5, proceed to S7.2.5; if the minimum cosine is greater than or equal to 0.5, proceed to S7.2.6.
S7.2.5, take the two vectors lm, ln corresponding to the minimum cosine and use their intersection as the vehicle's reference point P; if the longer of lm, ln is shorter than 1.2 times the set vehicle width, take the direction perpendicular to the longer of lm, ln as the vehicle heading theta, and if the longer of lm, ln exceeds 1.2 times the set vehicle width, take the direction of the longer of lm, ln as the vehicle heading theta.
S7.2.6, from all feature points of the vehicle, find the pair of points P1, P2 with the largest separation in the point set; take whichever of P1, P2 is closer to the ego vehicle as the reference point P; if the distance between P1 and P2 is less than 1.2 times the set vehicle width, take the direction perpendicular to the line P1P2 as the vehicle heading theta, and if the distance between P1 and P2 exceeds 1.2 times the set vehicle width, take the direction of the line P1P2 as the vehicle heading theta.
S7.2.7, compute the other 3 corner points of the vehicle from the vehicle reference point P, the vehicle heading theta and the set vehicle length and width.
The beneficial effects are that the method achieves low-cost perception of the vehicle's surroundings in a parking environment. It only performs 2D target detection and drivable-area detection on the images captured by the fisheye cameras around the vehicle, takes the drivable-area boundary points inside each target detection box as that target's feature point set, and finally estimates the position and pose of each obstacle in the vehicle coordinate system from the shape of its feature point set and its category.
Drawings
FIG. 1 is a schematic flow chart of the method;
FIG. 2 shows the drivable-area detection result in S2 of the present invention;
FIG. 3 is a schematic diagram of the obstacle target feature point detection result in S3 of the present invention;
FIG. 4 shows the projection into the surround-view top view in S4 of the present invention;
FIG. 5 is a schematic diagram of the target denoising result in S5 of the present invention;
FIG. 6 is a flow chart of S7 of the present invention;
FIG. 7 is a schematic diagram of the obstacle feature point annotations in S7.2.1 of the present invention;
FIG. 8 is a schematic diagram of the obstacle feature points connected in pairs in S7.2.2 of the present invention;
FIG. 9 is a partial visualization in the surround-view top view produced by the present method.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a parking environment sensing method based on fisheye cameras comprises the following steps:
S1, acquire 4 fisheye images around the vehicle at the same moment.
S2, perform obstacle detection and drivable-area detection on the 4 fisheye images, extract the drivable-area boundary, and project the boundary points of the drivable area into a surround-view top view through a homography matrix; for obstacle detection, proceed to S3.
Referring to fig. 2, the boundary point set of the drivable area is obtained as follows:
S2.1, perform edge detection on the drivable area with the Canny edge detection algorithm.
S2.2, convert the image into a binary image according to the pixel values.
S2.3, set the pixel value at edge positions to 1 and at all other positions to 0, yielding the boundary point set of the drivable area.
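For illustration only, a minimal Python sketch of S2.1-S2.3 using OpenCV might look as follows; the function name, the input `drivable_mask` (assumed to be the 0/255 segmentation mask of the drivable area) and the Canny thresholds are assumptions, not details fixed by the patent.

```python
import cv2
import numpy as np

def drivable_boundary_points(drivable_mask: np.ndarray) -> np.ndarray:
    """Return an (N, 2) array of (u, v) boundary pixels of the drivable area."""
    edges = cv2.Canny(drivable_mask, 50, 150)   # S2.1: Canny edge detection (thresholds assumed)
    binary = (edges > 0).astype(np.uint8)       # S2.2/S2.3: edge pixels -> 1, all others -> 0
    vs, us = np.nonzero(binary)                 # coordinates of the boundary pixels
    return np.stack([us, vs], axis=1)
```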
S3, determine the target feature point set of each obstacle.
Referring to fig. 3, the obstacle target feature point set is obtained from the 2D target detection boxes and the drivable-area boundary as follows:
S3.1, traverse the boundary points of the drivable area and judge whether each point lies inside the detection box of some target in the fisheye image;
S3.2, take the drivable-area boundary points inside each target's detection box as that target's feature point set.
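A possible Python sketch of S3.1-S3.2, assuming each 2D detection box is given as (x1, y1, x2, y2) in fisheye-image pixel coordinates; the names are illustrative only.

```python
import numpy as np

def target_feature_sets(boundary_pts: np.ndarray, boxes) -> list:
    """Assign drivable-area boundary points to the detection box that contains them."""
    feature_sets = []
    for x1, y1, x2, y2 in boxes:                            # S3.1: test every boundary point
        inside = ((boundary_pts[:, 0] >= x1) & (boundary_pts[:, 0] <= x2) &
                  (boundary_pts[:, 1] >= y1) & (boundary_pts[:, 1] <= y2))
        feature_sets.append(boundary_pts[inside])           # S3.2: points inside the box
    return feature_sets
```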
Referring to fig. 4, in S4 the target feature point sets are projected into the surround-view top view.
S4.1, de-distort the target feature point set using the distortion parameters;
S4.2, project the target feature point set into the surround-view top view through the calibrated homography matrix.
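A sketch of S4 under the assumption that the cameras follow the OpenCV fisheye model with intrinsics K and distortion coefficients D, and that H is the calibrated homography from the undistorted image to the surround-view top view; the patent does not specify the camera model, so this is only one plausible realization.

```python
import cv2
import numpy as np

def project_to_topview(pts: np.ndarray, K: np.ndarray, D: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map (N, 2) fisheye pixel points into the surround-view top view."""
    pts = pts.reshape(-1, 1, 2).astype(np.float64)
    undist = cv2.fisheye.undistortPoints(pts, K, D, P=K)   # S4.1: undo the fisheye distortion
    top = cv2.perspectiveTransform(undist, H)              # S4.2: calibrated homography to top view
    return top.reshape(-1, 2)
```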
S5, filter noise points out of the target feature points.
Since the projection into the surround-view top view uses inverse perspective mapping (IPM), which assumes a flat ground plane and grounded objects, anything with height is noticeably stretched in the top view, and obstacle targets typically do have height. It is observed that the feature points on an obstacle's distorted (elevated) part are markedly sparser than those on its undistorted part on the ground. Based on this property, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is chosen to remove noise points from the feature point set; DBSCAN does not require the number of categories to be specified in advance and can cluster nonlinearly shaped distributions. The specific implementation steps are as follows:
s5.1, DBSCAN clustering is conducted on each target feature point set.
S5.2, removing the category of the noise, and screening the category with the point number which is 30% greater than the total characteristic point number of the target.
And S5.3, merging the screened categories to be used as a characteristic point set after target denoising.
The result after target denoising is shown in fig. 5.
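The denoising step might be realized as follows with scikit-learn; eps and min_samples are illustrative tuning values (the patent only fixes the 30% retention rule).

```python
import numpy as np
from sklearn.cluster import DBSCAN

def denoise_feature_set(pts: np.ndarray, eps: float = 0.2, min_samples: int = 5) -> np.ndarray:
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)  # S5.1: cluster the set
    kept = []
    for lbl in set(labels):
        if lbl == -1:                         # S5.2: discard the noise category
            continue
        cluster = pts[labels == lbl]
        if len(cluster) > 0.3 * len(pts):     # S5.2: keep clusters holding > 30% of the points
            kept.append(cluster)
    # S5.3: merge the retained clusters into the denoised feature point set
    return np.vstack(kept) if kept else np.empty((0, 2))
```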
S6, fuse the feature point sets of the same target seen by different cameras.
Because the fisheye cameras have a wide field of view, the views of the vehicle's four fisheye cameras overlap, so the same object appearing in different camera images must be fused; the fusion is performed in the surround-view top-view coordinate system.
The feature point sets of the same target under different cameras are fused by taking their circumscribed rectangles and thresholding the IOU. The specific steps are as follows:
s6.1, obtaining the circumscribed rectangle of the feature points.
S6.2, calculating the minimum area IOU between every two target circumscribed rectangles under the looking-around top view coordinate system.
S6.3, setting a threshold value to be 0.5, merging targets with the minimum area IOU between every two targets being larger than 0.5, and outputting the merged feature point set.
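A sketch of S6, reading "minimum-area IOU" as the intersection area divided by the smaller of the two rectangle areas; that reading, and the axis-aligned rectangles, are assumptions on my part.

```python
import numpy as np

def bounding_rect(pts: np.ndarray):
    (x1, y1), (x2, y2) = pts.min(axis=0), pts.max(axis=0)    # S6.1: circumscribed rectangle
    return x1, y1, x2, y2

def min_area_iou(a, b) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / max(min(area_a, area_b), 1e-9)            # S6.2: normalize by the smaller area

def fuse_if_same(set_a: np.ndarray, set_b: np.ndarray, thresh: float = 0.5):
    """S6.3: merge two feature point sets when their minimum-area IOU exceeds 0.5."""
    if min_area_iou(bounding_rect(set_a), bounding_rect(set_b)) > thresh:
        return [np.vstack([set_a, set_b])]
    return [set_a, set_b]
```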
Referring to fig. 6, in S7 the target position and orientation are estimated from the feature point sets of the different targets.
S7.1, for obstacles with a small footprint, such as cones, pillars and pedestrians, compute the distance from every feature point of the obstacle to the vehicle center point in the surround-view top-view coordinate system, and use the feature point with the minimum distance to represent the obstacle's position.
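S7.1 reduces to a nearest-point query; a minimal sketch, assuming `ego_center` is the vehicle center in top-view coordinates:

```python
import numpy as np

def small_obstacle_position(pts: np.ndarray, ego_center: np.ndarray) -> np.ndarray:
    d = np.linalg.norm(pts - ego_center, axis=1)   # distance of each feature point to the ego center
    return pts[np.argmin(d)]                       # the closest point represents the obstacle
```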
S7.2, for a vehicle obstacle, the four corner points are determined as follows.
Referring to fig. 7, S7.2.1 obtains the feature points with the maximum and minimum x and y values among all the obstacle's feature points in the surround-view top-view coordinate system: x_min_pt, x_max_pt, y_min_pt and y_max_pt.
Referring to fig. 8, S7.2.2 connects x_min_pt, x_max_pt, y_min_pt and y_max_pt in pairs to form the vectors l1, l2, l3, l4, l5 and l6.
S7.2.3 computes the cosine similarity between every pair of l1, l2, l3, l4, l5 and l6 to obtain the minimum:
cosine_min = min( cosine(l_i, l_j) ), i, j ∈ [1, 6], i ≠ j
S7.2.4 if the minimum cosine is less than 0.5, go to S7.2.5; if the minimum cosine is greater than or equal to 0.5, go to S7.2.6.
S7.2.5 obtains the two vectors lm, ln corresponding to the minimum cosine and takes their intersection as the vehicle's reference point P; if the longer of lm, ln is shorter than 1.2 times the set vehicle width, the direction perpendicular to the longer of lm, ln is taken as the vehicle heading theta, and if the longer of lm, ln exceeds 1.2 times the set vehicle width, the direction of the longer of lm, ln is taken as the vehicle heading theta.
S7.2.6 finds, among all feature points of the vehicle, the pair of points P1, P2 with the largest separation in the point set; whichever of P1, P2 is closer to the ego vehicle is taken as the reference point P; if the distance between P1 and P2 is less than 1.2 times the set vehicle width, the direction perpendicular to the line P1P2 is taken as the vehicle heading theta, and if it exceeds 1.2 times the set vehicle width, the direction of the line P1P2 is taken as the vehicle heading theta.
S7.2.7 computes the other 3 corner points of the vehicle from the vehicle reference point P, the vehicle heading theta and the set vehicle length L and width W.
Taking one case of a parking space on the left of the ego vehicle as an example, the other 3 corner points P1, P2 and P3 are computed as:
P1 = (P_x - L*|cos(theta)|, P_y + L*|sin(theta)|)
P2 = (P_x + W*|cos(theta)|, P_y + W*|sin(theta)|)
P3 = (P1_x + W*|cos(theta)|, P1_y + W*|sin(theta)|)
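The whole of S7.2 can be sketched as below. Several details the patent leaves open are filled with assumptions: the intersection of lm and ln is taken as their shared extremal endpoint, "closer to the vehicle" is measured from the top-view ego center, and the corner formulas follow the left-parking-space sign convention given above; for other relative positions the signs would change.

```python
import itertools
import numpy as np

def estimate_vehicle_corners(pts, ego_center, veh_len, veh_wid):
    # S7.2.1: extremal feature points along x and y
    ext = [pts[np.argmin(pts[:, 0])], pts[np.argmax(pts[:, 0])],
           pts[np.argmin(pts[:, 1])], pts[np.argmax(pts[:, 1])]]

    # S7.2.2: the six vectors connecting the four extremal points in pairs
    vecs = [(i, j, ext[j] - ext[i]) for i, j in itertools.combinations(range(4), 2)]

    # S7.2.3: minimum pairwise cosine similarity among the six vectors
    def cos_sim(u, v):
        return abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    min_cos, lm, ln = min(((cos_sim(a[2], b[2]), a, b)
                           for a, b in itertools.combinations(vecs, 2)), key=lambda t: t[0])

    def heading(vec):
        # perpendicular to vec if it is shorter than 1.2 * vehicle width, else along vec
        ang = np.arctan2(vec[1], vec[0])
        return ang + np.pi / 2 if np.linalg.norm(vec) < 1.2 * veh_wid else ang

    if min_cos < 0.5:                                   # S7.2.4 -> S7.2.5: two near-perpendicular edges
        shared = set(lm[:2]) & set(ln[:2])              # assumed intersection: their common endpoint
        P = ext[shared.pop()] if shared else (ext[lm[0]] + ext[ln[0]]) / 2.0
        longer = max(lm[2], ln[2], key=np.linalg.norm)
        theta = heading(longer)
    else:                                               # S7.2.4 -> S7.2.6: only one edge visible
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        i, j = np.unravel_index(np.argmax(d), d.shape)  # farthest point pair P1, P2
        P1, P2 = pts[i], pts[j]
        P = P1 if np.linalg.norm(P1 - ego_center) <= np.linalg.norm(P2 - ego_center) else P2
        theta = heading(P2 - P1)

    # S7.2.7: remaining corners from P, theta and the set length/width (left-space sign convention)
    c, s = abs(np.cos(theta)), abs(np.sin(theta))
    p1 = np.array([P[0] - veh_len * c, P[1] + veh_len * s])
    p2 = np.array([P[0] + veh_wid * c, P[1] + veh_wid * s])
    p3 = np.array([p1[0] + veh_wid * c, p1[1] + veh_wid * s])
    return P, p1, p2, p3, theta
```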
S8, convert the target coordinates determined in S7 and the drivable-area boundary coordinates determined in S2 from the surround-view top-view coordinate system to the vehicle coordinate system, and output them to downstream tasks.
While the invention has been described in detail in the foregoing general description and specific examples, it will be apparent to those skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements may be made without departing from the spirit of the invention and are intended to be within the scope of the invention as claimed.

Claims (7)

1. A parking environment sensing method based on fisheye cameras, characterized by comprising the following steps:
s1, acquiring 4 fisheye images around the vehicle at the same moment;
s2, performing obstacle detection and drivable-area detection on the 4 fisheye images, extracting the drivable-area boundary, projecting the boundary points of the drivable area into a surround-view top view through a homography matrix, and proceeding to S3 for obstacle detection;
s3, determining the target feature point set of each obstacle;
s4, projecting the target feature point sets into the surround-view top view;
s5, filtering noise points out of the target feature points;
s6, fusing the feature point sets of the same target seen by different cameras;
s7, estimating each target's position and orientation from the feature point sets of the different targets;
s8, converting the target coordinates determined in S7 and the drivable-area boundary coordinates determined in S2 from the surround-view top-view coordinate system to the vehicle coordinate system, and outputting them to downstream tasks.
2. The parking environment sensing method based on fisheye cameras as set forth in claim 1, wherein in S2 the boundary point set of the drivable area is obtained as follows:
s2.1, performing edge detection on the drivable area with the Canny edge detection algorithm;
s2.2, converting the image into a binary image according to the pixel values;
s2.3, setting the pixel value at edge positions to 1 and at all other positions to 0, to obtain the boundary point set of the drivable area.
3. The parking environment sensing method based on fisheye cameras according to claim 1 or 2, wherein in S3 the obstacle target feature point set is obtained from the 2D target detection boxes and the drivable-area boundary by:
s3.1, traversing the boundary points of the drivable area and judging whether each point lies inside the detection box of some target in the fisheye image;
s3.2, taking the drivable-area boundary points inside each target's detection box as that target's feature point set.
4. The parking environment sensing method based on fisheye cameras according to claim 1 or 2, wherein step S4 comprises:
s4.1, de-distorting the target feature point set using the distortion parameters;
s4.2, projecting the target feature point set into the surround-view top view through the calibrated homography matrix.
5. The parking environment sensing method based on fisheye cameras according to claim 1 or 2, wherein step S5 comprises:
s5.1, performing DBSCAN clustering on each target feature point set;
s5.2, removing the noise category, and retaining only the categories whose point count exceeds 30% of the target's total number of feature points;
s5.3, merging the retained categories as the target's denoised feature point set.
6. The parking environment sensing method based on fisheye cameras according to claim 1 or 2, wherein step S6 comprises:
s6.1, obtaining the circumscribed rectangle of each feature point set;
s6.2, computing the minimum-area IOU between every pair of target circumscribed rectangles in the surround-view top-view coordinate system;
s6.3, with a threshold of 0.5, merging any pair of targets whose minimum-area IOU exceeds 0.5, and outputting the merged feature point sets.
7. The parking environment sensing method based on fisheye cameras according to claim 1 or 2, wherein step S7 comprises:
s7.1, for target obstacles with a small footprint, computing the distance from every feature point of the obstacle to the vehicle center point in the surround-view top-view coordinate system, and using the feature point with the minimum distance to represent the obstacle's position;
s7.2, for a vehicle obstacle, determining the four corner points as follows:
s7.2.1, obtaining the feature points with the maximum and minimum x and y values among all the obstacle's feature points in the surround-view top-view coordinate system: x_min_pt, x_max_pt, y_min_pt and y_max_pt;
s7.2.2, connecting x_min_pt, x_max_pt, y_min_pt and y_max_pt in pairs to form the vectors l1, l2, l3, l4, l5 and l6;
s7.2.3, computing the cosine similarity between every pair of l1, l2, l3, l4, l5 and l6 to obtain the minimum cosine;
s7.2.4, if the minimum cosine is less than 0.5, proceeding to S7.2.5, and if the minimum cosine is greater than or equal to 0.5, proceeding to S7.2.6;
s7.2.5, taking the two vectors lm, ln corresponding to the minimum cosine and using their intersection as the vehicle's reference point P; if the longer of lm, ln is shorter than 1.2 times the set vehicle width, taking the direction perpendicular to the longer of lm, ln as the vehicle heading theta, and if the longer of lm, ln exceeds 1.2 times the set vehicle width, taking the direction of the longer of lm, ln as the vehicle heading theta;
s7.2.6, finding, among all feature points of the vehicle, the pair of points P1, P2 with the largest separation in the point set; taking whichever of P1, P2 is closer to the ego vehicle as the reference point P; if the distance between P1 and P2 is less than 1.2 times the set vehicle width, taking the direction perpendicular to the line P1P2 as the vehicle heading theta, and if the distance between P1 and P2 exceeds 1.2 times the set vehicle width, taking the direction of the line P1P2 as the vehicle heading theta;
s7.2.7, computing the other 3 corner points of the vehicle from the vehicle reference point P, the vehicle heading theta and the set vehicle length and width.
CN202410018414.0A 2024-01-04 2024-01-04 Parking environment sensing method based on fisheye camera Pending CN117746396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410018414.0A CN117746396A (en) 2024-01-04 2024-01-04 Parking environment sensing method based on fisheye camera


Publications (1)

Publication Number Publication Date
CN117746396A 2024-03-22

Family

ID=90261008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410018414.0A Pending CN117746396A (en) 2024-01-04 2024-01-04 Parking environment sensing method based on fisheye camera

Country Status (1)

Country Link
CN (1) CN117746396A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination