CN114511841B - Multi-sensor fusion idle parking space detection method

Info

Publication number
CN114511841B
Authority
CN
China
Prior art keywords
parking space
coordinates
points
free parking
idle
Prior art date
Legal status
Active
Application number
CN202210401936.XA
Other languages
Chinese (zh)
Other versions
CN114511841A
Inventor
朱勇 (Zhu Yong)
赵明来 (Zhao Minglai)
李鸿岳 (Li Hongyue)
赵彩智 (Zhao Caizhi)
吴毅 (Wu Yi)
Current Assignee
Yutong Bus Co., Ltd.
Original Assignee
Shenzhen Yutong Zhilian Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Yutong Zhilian Technology Co., Ltd.
Priority: CN202210401936.XA
Publication of CN114511841A
Application granted
Publication of CN114511841B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06T3/047

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of advanced driver assistance, and in particular to a multi-sensor fusion free parking space detection method comprising the following steps: A. first, detect distance information with an ultrasonic radar and extract the initial position of a free parking space; B. then obtain fisheye camera images and perform SLAM mapping and parking-space line recognition; C. extract the final free parking space position through a multi-sensor fusion algorithm; D. display the identified free parking spaces on the central control screen. By fusing multiple sensors, the invention improves the accuracy and robustness of estimating the coordinates of the two tail points of the parking-space line, and adapts to a variety of scenes such as indoor garages, outdoor parking lots, and unmarked spaces or spaces with unclear line markings, thereby improving user experience and satisfaction. It has broad commercial value and can be widely applied in semi-automatic and autonomous parking systems.

Description

Multi-sensor fusion idle parking space detection method
Technical Field
The invention relates to the technical field of advanced driver assistance, and in particular to a multi-sensor fusion free parking space detection method.
Background
With economic development, living standards have gradually improved and more and more families own cars. The growing number of vehicles creates parking difficulties: cars are easily scratched by others during parking, and parked cars are often not aligned to a uniform standard.
Intelligent parking is therefore a future development direction. The most common scheme on the market uses an ultrasonic radar on the side of the vehicle body to identify free parking spaces by distance measurement. Using ultrasonic radar information alone, however, is limited by many factors: the vehicle's driving attitude may not be parallel to the target space, vehicles adjacent to the target space may be parked irregularly, and the result is sensitive to the vehicle's driving speed. Identifying free spaces purely from visual parking-space lines, on the other hand, typically yields inaccurate coordinate estimates for the two tail points of the space line, even after distortion correction of distant image regions.
Disclosure of Invention
The invention aims to remedy the defects described in the background art and provides a multi-sensor fusion free parking space identification method that improves the parking space recognition rate and accuracy, is applicable to various types of parking spaces, and can be widely applied in semi-automatic and autonomous parking systems.
To achieve this purpose, the invention provides the following technical scheme. A multi-sensor fusion idle parking space detection method comprises the following steps: A. first, detect distance information with an ultrasonic radar and extract the initial position of a free parking space; B. obtain fisheye camera images and perform SLAM mapping and parking-space line recognition; C. extract the final free parking space position through a multi-sensor fusion algorithm; D. display the identified free parking spaces on the central control screen.
the step A comprises the following steps:
A1, drive the vehicle forward parallel to the parking spaces and detect the obstacle distance with a side ultrasonic radar;
A2, divide the obstacle distances of step A1 into 75-85 equally spaced intervals d_i and store them in an array;
A3, count the consecutive intervals in the array of step A2 whose obstacle distance is greater than or equal to 5 m, and obtain the maximum run length maxLen;
A4, judge from the maxLen of step A3 whether there is a free parking space: if maxLen corresponds to more than 2 m, the run of intervals corresponding to maxLen is the initial position P_init of the free parking space.
The step B comprises the following steps:
B1, if acquisition of the initial free-space position in step A fails, do not proceed to the next step; continue executing step A;
B2, if the initial free-space position is successfully acquired in step A, execute the following step B3;
B3, obtain a fisheye camera image and perform camera calibration and distortion correction to obtain a distortion-corrected image;
B4, using the camera calibration parameters of step B3, first apply the top-view transform to the distortion-corrected image of step B3 to obtain a bird's-eye view, then identify the parking-space line within the initial free-space position P_init of step A4 in the bird's-eye view to obtain the free parking space;
B5, build a local map with SLAM from the consecutive distortion-corrected image frames of step B3 to obtain a local map around the vehicle body;
the step C comprises the following steps:
C1, project the coordinate points of the free parking space extracted in step B4 into the local map of step B5 to obtain the projection point coordinates;
C2, calculate the free parking space of the local map of step B5 using the projection point coordinates obtained in step C1;
C3, learn the fusion parameters between the free parking space coordinates of step B5 and the free parking space coordinates of step C2 using an MLP (multi-layer perceptron) network;
C4, fuse the free parking space coordinates of step B5 and the free parking space coordinates of step C2 using the fusion parameters of step C3;
C5, acquire the fused free parking space coordinates.
Preferably, in step B3, the resolution of the fisheye image is 1280 × 720; during distortion correction the camera is calibrated to obtain the intrinsic matrix K, the radial distortion coefficients k1, k2, k3 and the extrinsic parameter R, and the fisheye image is then corrected with the distortion-correction formula:

x_u = x_d (1 + k1·r² + k2·r⁴ + k3·r⁶)
y_u = y_d (1 + k1·r² + k2·r⁴ + k3·r⁶)

where (x_d, y_d) is the original position of the distorted point on the camera sensor, (x_u, y_u) is the new position after distortion correction, and r is the radius from the center point of the camera sensor.
Preferably, in step B4, after the bird's-eye view is obtained, parking-space line recognition can be performed as follows:
01. collect parking-space corner-point samples and train with yolov3;
02. detect the corner points on the top view with the trained yolov3 model, and connect a pair of corner points (bpt1, bpt2) with a straight line as the entrance of the parking space;
03. taking the right-hand parking space as an example, rotate the vector from bpt1 to bpt2 counterclockwise about bpt1 and estimate corner point bpt3 from the parking-space length; estimate corner point bpt4 in the same way;
04. connect bpt1, bpt2, bpt3, bpt4 to obtain the parking-space line.
Preferably, in the step B5, the specific steps of the local mapping are as follows:
B5.1, while the parking-space line is being obtained, extract Harris corner points and perform visual tracking;
B5.2, obtain initial values in a loosely coupled manner: match the feature points of step B5.1 and triangulate, solve the poses of all frames in the sliding window and the inverse depths of the landmark points, align with the IMU pre-integration, and recover the alignment scale s, gravity g, IMU velocity v and gyroscope bias bg;
B5.3, construct constraint equations for the IMU constraints and visual constraints, and perform back-end nonlinear optimization with the tightly coupled technique to obtain an optimal local map;
B5.4, search for the optimal free parking space in the local map.
Preferably, in step C1, the two entrance-point coordinates bpt1, bpt2 and the two tail-point coordinates bpt3, bpt4 of step B4 are projected into the local map of step B5, obtaining four points bspt1, bspt2, bspt3, bspt4;
in step C2, using the entrance-point projections bspt1 and bspt2 of step C1, the two tail-point coordinates spt3 and spt4 are estimated in the local map of step B5 from the parking-space length, with the calculation formula:
spt3 = bspt1 + length;
spt4 = bspt2 + length;
where the "+ length" term denotes an offset of one parking-space length along the space's depth direction, thereby deriving the free parking space (bspt1, bspt2, spt3, spt4) in the local map of step B5;
in step C3, when the MLP (multi-layer perceptron) network is used to learn the fusion parameters between the free parking space coordinates of step B5 and those of step C2, a large amount of real free parking space coordinate data must be labelled in advance, denoted GT; training then uses the sum-of-squared-error loss function E:

E = (1/2) Σ (GT − y)²

where w is the perceptron weight, i.e. the fusion parameter, a is the free parking space coordinates of step B5, b is the free parking space coordinates of step C2, and y is the fused coordinate produced from a and b by w;
in step C4, the fusion parameter w of step C3 is used to fuse the two tail coordinate points of the free space of step B5 with those of step C2, with the formula:

y = w·a + (1 − w)·b

where a is the projection bspt3, bspt4 into the local map of step B5 of the two tail-point coordinates bpt3, bpt4 of step B4, b is the two tail-point coordinates spt3, spt4 estimated in step C2, and y is the two tail-point coordinates fpt3, fpt4 of the fused free parking space;
in step C5, according to the fusion result of step C4, the fused free parking space coordinates are bspt1, bspt2, fpt3 and fpt4.
Preferably, in step A2, the length of each interval d_i is 0.05-0.15 m.
Preferably, in step D, the finally identified free parking space rectangles are displayed on the central control screen, with at most 6 parking spaces shown on each side of the vehicle body.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the visual perception technology and the ultrasonic radar perception technology are closely fused, the acquired position of the idle parking space is more accurate after rich visual information is introduced, the accuracy and robustness of the coordinate estimation of two points at the tail of the parking space line are improved by the fusion method of multiple sensors, and the method can be simultaneously suitable for various scenes such as an indoor garage, an outdoor parking lot, a wireless parking space or an unclear parking space on the vehicle position line, so that the experience and satisfaction of users are improved, and the method has a wider commercial value.
Drawings
FIG. 1 is a schematic view of various types of parking spaces;
FIG. 2 is a flow chart of the steps of the present invention;
FIG. 3 is a schematic diagram of an idle parking space detected by ultrasonic waves;
FIG. 4 is a schematic diagram illustrating fisheye image distortion correction;
FIG. 5 is a schematic view of parking-space line identification;
FIG. 6 is a schematic diagram of the SLAM free parking space;
FIG. 7 is a schematic diagram of the fusion of the line-recognized free parking space and the SLAM free parking space.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A multi-sensor integrated idle parking space detection method comprises the following steps:
A. First, detect distance information with the ultrasonic radar and extract the initial position of a free parking space, comprising the following steps:
A1, drive the vehicle forward parallel to the parking spaces and detect the obstacle distance with the side ultrasonic radar;
A2, divide the obstacle distances of step A1 into 80 equally spaced intervals d_i, where the subscript i is the index of the interval, and store them in an array; each interval d_i is 0.1 m long;
A3, count the consecutive intervals in the array of step A2 whose obstacle distance is greater than or equal to 5 m, and obtain the maximum run length maxLen;
A4, judge from the maxLen of step A3 whether there is a free parking space: if maxLen corresponds to more than 2 m, the run of intervals corresponding to maxLen is the initial position P_init of the free parking space, referring to fig. 3;
B. Obtain fisheye camera images and complete SLAM mapping and parking-space line recognition, comprising the following steps:
B1, if acquisition of the initial free-space position in step A fails, do not proceed to the next step; continue executing step A;
B2, if the initial free-space position is successfully acquired in step A, execute the following step B3;
B3, acquire a fisheye camera image and perform camera calibration and distortion correction to obtain a distortion-corrected image, referring to fig. 4;
The resolution of the fisheye image is 1280 × 720; during distortion correction the camera is calibrated to obtain its intrinsic matrix K, the radial distortion coefficients k1, k2, k3 and the extrinsic parameter R, and the fisheye image is then corrected with the distortion-correction formula:

x_u = x_d (1 + k1·r² + k2·r⁴ + k3·r⁶)
y_u = y_d (1 + k1·r² + k2·r⁴ + k3·r⁶)

where (x_d, y_d) is the original position of the distorted point on the camera sensor, (x_u, y_u) is the new position after distortion correction, and r is the radius from the center point of the camera sensor (r² = x_d² + y_d²).
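As a minimal numpy sketch of the correction model above: whether the polynomial maps distorted sensor coordinates to corrected ones, as assumed here, or must be inverted iteratively is left open by the text, and production code would typically use OpenCV's fisheye calibration instead:

```python
import numpy as np

def radial_correct(pts, k1, k2, k3):
    # pts: Nx2 array of (x_d, y_d) sensor-plane coordinates relative
    # to the image centre; r is the radius from the sensor centre.
    x, y = pts[:, 0], pts[:, 1]
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3   # 1 + k1 r^2 + k2 r^4 + k3 r^6
    return np.stack([x * scale, y * scale], axis=1)    # (x_u, y_u)
```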
B4, using the camera calibration parameters of step B3, apply the top-view transform to the distortion-corrected image of step B3 to obtain a bird's-eye view, then identify the parking-space line on the bird's-eye view;
before the parking-space line is identified, the distortion-corrected image is converted into the bird's-eye view; the conversion process is:
B4.1, first back-project points on the distortion-corrected image into camera coordinates, with the back-projection formula:

P_c = K⁻¹ · p

where p is a homogeneous coordinate point in the pixel coordinate system, P_c is the corresponding point in the camera coordinate system, and K is the intrinsic matrix of step B3;
B4.2, then rotate the camera-coordinate points of step B4.1 using the extrinsic rotation R obtained from the calibration of step B3, with the rotation transform:

P_c′ = R · P_c

where P_c is a point in the camera coordinate system, P_c′ is the rotated point, and R is the extrinsic parameter of step B3;
B4.3, project the points of step B4.2 into the image coordinate system, with the projection formula:

p′ = K · P_c′

where P_c′ is the rotated point in the camera coordinate system, p′ is a homogeneous coordinate point in the pixel coordinate system, and K is the intrinsic matrix of step B3;
B4.4, combining steps B4.1, B4.2 and B4.3, the overall top-view transform is:

p_bev = K · R · K⁻¹ · p_img

where p_img is a homogeneous coordinate point on the distortion-corrected image, p_bev is the corresponding homogeneous coordinate point on the top-view image, and K and R are the intrinsic and extrinsic parameters of step B3.
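A short sketch of the combined transform of step B4.4, assuming a 3×3 intrinsic matrix K and extrinsic rotation R from the calibration of step B3; the output size is an arbitrary choice:

```python
import numpy as np
import cv2

def to_birds_eye(undistorted_img, K, R, out_size=(640, 640)):
    # Homography from the combined formula: p_bev = K @ R @ K^-1 @ p_img
    H = K @ R @ np.linalg.inv(K)
    return cv2.warpPerspective(undistorted_img, H, out_size)
```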
After the bird's-eye view is obtained, parking-space line recognition can be carried out, referring to fig. 5, with the specific steps:
01. collect parking-space corner-point samples and train with yolov3;
02. detect the corner points on the top view with the trained yolov3 model, and connect a pair of corner points (bpt1, bpt2) with a straight line as the entrance of the parking space;
03. taking the right-hand parking space as an example, rotate the vector from bpt1 to bpt2 counterclockwise about bpt1 and estimate corner point bpt3 from the parking-space length; estimate corner point bpt4 in the same way;
04. connect bpt1, bpt2, bpt3, bpt4 to obtain the parking-space line;
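Once yolov3 has produced the entrance corner pair (bpt1, bpt2), the remaining two corners can be completed geometrically. In this sketch the counterclockwise rotation is taken to be 90 degrees (the depth direction of the slot) and the corner pairing is illustrative; the text only states that the vector is rotated counterclockwise about bpt1:

```python
import numpy as np

def complete_slot(bpt1, bpt2, slot_length):
    bpt1 = np.asarray(bpt1, dtype=float)
    bpt2 = np.asarray(bpt2, dtype=float)
    v = bpt2 - bpt1                        # entrance vector bpt1 -> bpt2
    depth = np.array([-v[1], v[0]])        # v rotated 90 deg counterclockwise
    depth *= slot_length / np.linalg.norm(depth)
    bpt4 = bpt1 + depth                    # far corner adjacent to bpt1 (assumed pairing)
    bpt3 = bpt2 + depth                    # far corner adjacent to bpt2
    return bpt3, bpt4

bpt3, bpt4 = complete_slot((100, 200), (160, 200), slot_length=120)
```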
B5, build a local map with SLAM from the consecutive distortion-corrected image frames of step B3 and extract the free parking space from it, referring to fig. 6;
local mapping uses SLAM based on VIO (visual-inertial odometry): monocular SLAM suffers from scale ambiguity and cannot estimate scale on its own, and VIO compensates for this shortcoming with IMU sensor information so that scale can be estimated accurately; the specific process of local mapping is:
B5.1, extract Harris corner points and perform visual tracking;
B5.2, obtain initial values in a loosely coupled manner: match the feature points of step B5.1 and triangulate, solve the poses of all frames in the sliding window and the inverse depths of the landmark points, align with the IMU pre-integration, and recover the alignment scale s, gravity g, IMU velocity v, gyroscope bias bg, and so on;
B5.3, construct constraint equations for the IMU constraints and visual constraints, and perform back-end nonlinear optimization with the tightly coupled technique to obtain an optimal local map;
B5.4, search for the optimal free parking space in the local map.
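Step B5.1 can be sketched with OpenCV as Harris-scored corner extraction followed by pyramidal Lucas-Kanade tracking between consecutive undistorted frames; the loosely coupled initialization and tightly coupled optimization of steps B5.2-B5.3 go well beyond a few lines and are omitted here:

```python
import cv2

def track_corners(prev_gray, next_gray, max_corners=150):
    # Corner extraction with the Harris response (useHarrisDetector=True)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=10,
                                  useHarrisDetector=True, k=0.04)
    if pts is None:
        return None, None
    # Pyramidal Lucas-Kanade optical flow follows the corners into the next frame
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1
    return pts[ok], nxt[ok]                # matched corner pairs for triangulation
```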
C. Extract the final free parking space position through the multi-sensor fusion algorithm, comprising the following steps:
C1, project the two entrance-point coordinates bpt1, bpt2 and the two tail-point coordinates bpt3, bpt4 of step B4 into the local map of step B5, obtaining four points bspt1, bspt2, bspt3, bspt4;
C2, in the local map of step B5, use the entrance-point projections bspt1 and bspt2 of step C1 to estimate the two tail-point coordinates spt3 and spt4 from the parking-space length, with the calculation formula:
spt3 = bspt1 + length;
spt4 = bspt2 + length;
where the "+ length" term denotes an offset of one parking-space length along the space's depth direction, thereby deriving the free parking space (bspt1, bspt2, spt3, spt4) in the local map of step B5;
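A minimal sketch of step C2 under our reading of "+ length": the tail points are the entrance projections shifted by one parking-space length along the slot's depth direction (perpendicular to the entrance line); the patent writes the offset as a scalar, so this interpretation is an assumption:

```python
import numpy as np

def estimate_tail_points(bspt1, bspt2, slot_length):
    bspt1 = np.asarray(bspt1, dtype=float)
    bspt2 = np.asarray(bspt2, dtype=float)
    v = bspt2 - bspt1                       # entrance line in the local map
    depth = np.array([-v[1], v[0]])         # perpendicular (depth) direction
    depth *= slot_length / np.linalg.norm(depth)
    return bspt1 + depth, bspt2 + depth     # spt3, spt4
```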
C3, when learning the fusion parameters between the free parking space coordinates of step B5 and those of step C2 with the MLP (multi-layer perceptron) network, label a large amount of real free parking space coordinate data in advance, denoted GT, and then train with the sum-of-squared-error loss function E:

E = (1/2) Σ (GT − y)²

where w is the perceptron weight, i.e. the fusion parameter, a is the free parking space coordinates of step B5, b is the free parking space coordinates of step C2, and y is the fused coordinate produced from a and b by w;
C4, fuse the two tail coordinate points of the free space of step B5 with those of step C2 using the fusion parameter w of step C3, with the formula:

y = w·a + (1 − w)·b

where a is the projection bspt3, bspt4 into the local map of step B5 of the two tail-point coordinates bpt3, bpt4 of step B4, b is the two tail-point coordinates spt3, spt4 estimated in step C2, and y is the two tail-point coordinates fpt3, fpt4 of the fused free parking space;
C5, according to the fusion result of step C4, the fused free parking space coordinates are bspt1, bspt2, fpt3 and fpt4, as shown in FIG. 7;
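For illustration, a toy numpy version of steps C3-C4 that stands in for the MLP with a single scalar weight w trained by gradient descent on the sum-of-squared-error loss E; the linear form y = w·a + (1 − w)·b is our reading of the fusion formula, and all names are illustrative:

```python
import numpy as np

def fuse_tail_points(a, b, w):
    # a: projected tail points (bspt3, bspt4); b: map-estimated (spt3, spt4)
    return w * np.asarray(a, float) + (1.0 - w) * np.asarray(b, float)

def fit_weight(a_set, b_set, gt_set, lr=0.1, iters=200):
    # Gradient descent on E = 0.5 * sum((GT - y)^2) with y = w*a + (1-w)*b
    w = 0.5
    for _ in range(iters):
        y = w * a_set + (1.0 - w) * b_set
        grad = -np.sum((gt_set - y) * (a_set - b_set)) / a_set.size
        w -= lr * grad
    return w
```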
D. Display the finally identified free parking space rectangles on the central control screen, showing at most 6 parking spaces on each side of the vehicle body, for the user to select from.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. A multi-sensor fusion idle parking space detection method, comprising the following steps: A. first, detecting distance information with an ultrasonic radar and extracting the initial position of a free parking space; B. obtaining fisheye camera images and performing SLAM mapping and parking-space line recognition; C. extracting the final free parking space position through a multi-sensor fusion algorithm; D. displaying the identified free parking spaces on a central control screen; characterized in that:
the step A comprises the following steps:
A1, driving the vehicle forward parallel to the parking spaces and detecting the obstacle distance with a side ultrasonic radar;
A2, dividing the obstacle distances of step A1 into 75-85 equally spaced intervals d_i and storing them in an array;
A3, counting the consecutive intervals in the array of step A2 whose obstacle distance is greater than or equal to 5 m, and obtaining the maximum run length maxLen;
A4, judging from the maxLen of step A3 whether there is a free parking space, wherein if maxLen corresponds to more than 2 m, the run of intervals corresponding to maxLen is the initial position P_init of the free parking space;
The step B comprises the following steps:
B1, if acquisition of the initial free-space position in step A fails, not proceeding to the next step and continuing to execute step A;
B2, if the initial free-space position is successfully acquired in step A, executing the following step B3;
B3, acquiring a fisheye camera image and performing camera calibration and distortion correction to obtain a distortion-corrected image;
B4, using the camera calibration parameters of step B3, first applying the top-view transform to the distortion-corrected image of step B3 to obtain a bird's-eye view, then identifying the parking-space line within the initial free-space position P_init of step A4 in the bird's-eye view to obtain the free parking space;
B5, building a local map with SLAM from the consecutive distortion-corrected image frames of step B3 to obtain a local map around the vehicle body;
the step C comprises the following steps:
C1, projecting the coordinate points of the free parking space extracted in step B4 into the local map of step B5 to obtain the projection point coordinates;
C2, calculating the free parking space of the local map of step B5 using the projection point coordinates obtained in step C1;
C3, learning the fusion parameters between the free parking space coordinates of step B5 and the free parking space coordinates of step C2 using an MLP (multi-layer perceptron) network;
C4, fusing the free parking space coordinates of step B5 and the free parking space coordinates of step C2 using the fusion parameters of step C3;
C5, acquiring the fused free parking space coordinates.
2. The multi-sensor fusion idle parking space detection method according to claim 1, characterized in that: in step B3, the resolution of the fisheye image is 1280 × 720; during distortion correction the camera is calibrated to obtain the intrinsic matrix K, the radial distortion coefficients k1, k2, k3 and the extrinsic parameter R, and the fisheye image is then corrected with the distortion-correction formula:

x_u = x_d (1 + k1·r² + k2·r⁴ + k3·r⁶)
y_u = y_d (1 + k1·r² + k2·r⁴ + k3·r⁶)

where (x_d, y_d) is the original position of the distorted point on the camera sensor, (x_u, y_u) is the new position after distortion correction, and r is the radius from the center point of the camera sensor.
3. The multi-sensor fusion idle parking space detection method according to claim 1, characterized in that: in step B4, after the bird's-eye view is obtained, parking-space line recognition can be performed, with the specific steps:
01. collecting parking-space corner-point samples and training with yolov3;
02. detecting the corner points on the top view with the trained yolov3 model, and connecting a pair of corner points (bpt1, bpt2) with a straight line as the entrance of the parking space;
03. taking the right-hand parking space as an example, rotating the vector from bpt1 to bpt2 counterclockwise about bpt1 and estimating corner point bpt3 from the parking-space length, and estimating corner point bpt4 in the same way;
04. connecting bpt1, bpt2, bpt3, bpt4 to obtain the parking-space line.
4. The multi-sensor fusion idle parking space detection method according to claim 3, characterized in that: in step B5, the specific steps of local mapping are:
B5.1, while the parking-space line is being obtained, extracting Harris corner points and performing visual tracking;
B5.2, obtaining initial values in a loosely coupled manner: matching the feature points of step B5.1 and triangulating, solving the poses of all frames in the sliding window and the inverse depths of the landmark points, aligning with the IMU pre-integration, and recovering the alignment scale s, gravity g, IMU velocity v and gyroscope bias bg;
B5.3, constructing constraint equations for the IMU constraints and visual constraints, and performing back-end nonlinear optimization with the tightly coupled technique to obtain an optimal local map;
B5.4, searching for the optimal free parking space in the local map.
5. The multi-sensor fusion idle parking space detection method according to claim 4, characterized in that: in step C1, the two entrance-point coordinates bpt1, bpt2 and the two tail-point coordinates bpt3, bpt4 of step B4 are projected into the local map of step B5, obtaining four points bspt1, bspt2, bspt3, bspt4;
in step C2, in the local map of step B5, the entrance-point projections bspt1 and bspt2 of step C1 are used to estimate the two tail-point coordinates spt3 and spt4 from the parking-space length, with the calculation formula:
spt3 = bspt1 + length;
spt4 = bspt2 + length;
thereby deriving the free parking space (bspt1, bspt2, spt3, spt4) in the local map of step B5;
in step C3, when the MLP (multi-layer perceptron) network is used to learn the fusion parameters between the free parking space coordinates of step B5 and those of step C2, a large amount of real free parking space coordinate data is labelled in advance, denoted GT, and training then uses the sum-of-squared-error loss function E:

E = (1/2) Σ (GT − y)²

where w is the perceptron weight, i.e. the fusion parameter, a is the free parking space coordinates of step B5, b is the free parking space coordinates of step C2, and y is the fused coordinate produced from a and b by w;
in step C4, the fusion parameter w of step C3 is used to fuse the two tail coordinate points of the free space of step B5 with those of step C2, with the formula:

y = w·a + (1 − w)·b

where a is the projection bspt3, bspt4 into the local map of step B5 of the two tail-point coordinates bpt3, bpt4 of step B4, b is the two tail-point coordinates spt3, spt4 estimated in step C2, and y is the two tail-point coordinates fpt3, fpt4 of the fused free parking space;
in step C5, according to the fusion result of step C4, the fused free parking space coordinates are bspt1, bspt2, fpt3 and fpt4.
6. The multi-sensor fusion idle parking space detection method according to claim 1, characterized in that: in step A2, the length of each interval d_i is 0.05-0.15 m.
7. The multi-sensor fusion idle parking space detection method according to claim 1, characterized in that: in step D, the finally identified free parking space rectangles are displayed on the central control screen, with at most 6 parking spaces shown on each side of the vehicle body.
CN202210401936.XA, filed 2022-04-18 (priority date 2022-04-18): Multi-sensor fusion idle parking space detection method — Active

Publications

CN114511841A, published 2022-05-17
CN114511841B, granted 2022-07-05


Citations (* cited by examiner, † cited by third party)

DE102010044219A1 *, 2010-11-22 / 2012-05-24, Robert Bosch GmbH: Method for detecting the environment of a vehicle
TWM492262U *, 2014-07-18 / 2014-12-21, Seeways Technology Inc: Reversing imaging system featuring automatic tri-state viewing angle display and reversing photographic device thereof
CN110775052B *, 2019-08-29 / 2021-01-29, 浙江零跑科技有限公司 (Zhejiang Leapmotor Technology Co., Ltd.): Automatic parking method based on fusion of vision and ultrasonic perception
CN111942372B *, 2020-07-27 / 2022-02-22, 广州汽车集团股份有限公司 (Guangzhou Automobile Group Co., Ltd.): Automatic parking method and system
CN111845723A *, 2020-08-05 / 2020-10-30, 北京四维智联科技有限公司 (Beijing Siwei Zhilian Technology Co., Ltd.): Full-automatic parking method and system



Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20231220
Address after: No. 6, Yutong Road, Guancheng Hui District, Zhengzhou, Henan 450061
Patentee after: Yutong Bus Co., Ltd.
Address before: 518000 Room 201, Building A, No. 1, Qianwan Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong (Shenzhen Qianhai Business Secretary Co., Ltd.)
Patentee before: Shenzhen Yutong Zhilian Technology Co., Ltd.