CN116386002A - Fish-eye travelable region evaluating method and device - Google Patents

Fish-eye travelable region evaluating method and device

Info

Publication number
CN116386002A
Authority
CN
China
Prior art keywords
point cloud
preset
evaluation
region
point information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310226465.8A
Other languages
Chinese (zh)
Inventor
刘洋
赵天坤
彭伟
唐佳
汤雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd filed Critical Hozon New Energy Automobile Co Ltd
Priority to CN202310226465.8A priority Critical patent/CN116386002A/en
Publication of CN116386002A publication Critical patent/CN116386002A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/43Determining position using carrier phase measurements, e.g. kinematic positioning; using long or short baseline interferometry
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a fish-eye drivable region evaluation method and device, relates to the field of region evaluation, and mainly aims to improve the accuracy of fish-eye drivable region evaluation. The main technical scheme of the invention is as follows: acquiring time-space synchronized laser radar point cloud data, RTK positioning data and fisheye pictures; constructing a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data; marking the travelable region on the laser point cloud multi-frame local map using a preset marking rule to obtain real travelable region point information; acquiring 3D space travelable region point information corresponding to the fisheye picture using a preset acquisition rule; and calculating an evaluation value using a preset evaluation algorithm based on the 3D space drivable region point information and the real drivable region point information, and obtaining an evaluation result based on the evaluation value. The method is used for evaluating the fish-eye travelable region.

Description

Fish-eye travelable region evaluating method and device
Technical Field
The invention relates to the technical field of evaluation of a vehicle drivable area, in particular to a fish-eye drivable area evaluation method and device.
Background
In a parking operation scene, the main boundary between a drivable region and a non-drivable region is formed by the ground boundaries of different types of obstacles. Obstacles here include not only the categories defined in visual detection, such as vehicles, but also other types of obstacles that appear in some scenarios, such as road edges, steps, road fences, guard rails and fire hydrants. To ensure safe parking, it is necessary to evaluate the fish-eye drivable region recognition performance of the vehicle.
At present, the method for evaluating the fish-eye drivable region recognition performance of a vehicle is to manually mark the drivable region on a single fisheye picture, directly output a drivable region detection result on the fisheye picture with a deep-learning drivable region model, and then evaluate the drivable region IOU based on the pixels of the fisheye picture.
However, this evaluation method is based on the 2D pixel points of the drivable region in a single-frame picture: it does not reflect the drivable region of the real 3D space, and, because the information in a single frame is incomplete, the manually annotated ground-truth information is also incomplete, so the evaluation precision is poor and the final evaluation accuracy is affected.
Disclosure of Invention
In view of the above problems, the present invention provides a method and an apparatus for evaluating a fish-eye travelable region, which mainly aims to improve accuracy of evaluating the fish-eye travelable region.
In order to solve the technical problems, the invention provides the following scheme:
in a first aspect, the present invention provides a fisheye travelable region evaluation method, the method comprising:
acquiring time-space synchronous laser radar point cloud data, RTK positioning data and fish-eye pictures;
constructing a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data;
marking the travelable region on the laser point cloud multi-frame local map by using a preset marking rule to obtain real travelable region point information;
acquiring 3D space travelable region point information corresponding to the fisheye picture by using a preset acquisition rule;
and calculating an evaluation value by using a preset evaluation algorithm based on the 3D space drivable region point information and the real drivable region point information, and acquiring an evaluation result based on the evaluation value.
In a second aspect, the present invention provides a fisheye travelable region evaluation device, the device comprising:
the first acquisition unit is used for acquiring time-space synchronous laser radar point cloud data, RTK positioning data and fisheye pictures;
the construction unit is used for constructing a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data;
the marking unit is used for marking the drivable region on the laser point cloud multi-frame local map by using a preset marking rule to obtain real drivable region point information;
the second acquisition unit is used for acquiring 3D space travelable region point information corresponding to the fisheye picture by utilizing a preset acquisition rule;
the calculating unit is used for calculating an evaluation value by using a preset evaluation algorithm based on the 3D space drivable region point information and the real drivable region point information, and acquiring an evaluation result based on the evaluation value.
In order to achieve the above object, according to a third aspect of the present invention, there is provided a storage medium including a stored program, wherein, when the program runs, the device in which the storage medium is located is controlled to execute the fisheye travelable region evaluation method of the first aspect.
In order to achieve the above object, according to a fourth aspect of the present invention, there is provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements all or part of the steps of the fish-eye travelable region evaluating apparatus described in the second aspect.
By means of the above technical scheme, the fish-eye drivable region evaluation method and device provided by the invention address the following problem: the current evaluation method for the fish-eye drivable region recognition performance of a vehicle is based on the 2D pixel points of the drivable region in a single-frame picture, does not reflect the drivable region of the real 3D space, and, because the information in a single frame is incomplete, the manually annotated ground truth is also incomplete, so the evaluation precision is relatively poor and the final evaluation accuracy is affected. Therefore, the invention acquires time-space synchronized laser radar point cloud data, RTK positioning data and fisheye pictures; constructs a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data; marks the travelable region on the laser point cloud multi-frame local map using a preset marking rule to obtain real travelable region point information; acquires 3D space travelable region point information corresponding to the fisheye picture using a preset acquisition rule; and calculates an evaluation value using a preset evaluation algorithm based on the 3D space drivable region point information and the real drivable region point information, and obtains an evaluation result based on the evaluation value. Marking the drivable region on the point cloud map generated from the laser radar is more complete and more accurate than marking single-frame point cloud information; and the pixel points of the fisheye picture are converted, through the camera intrinsic and extrinsic parameters corresponding to the fisheye picture, into drivable region points in the 3D space of the vehicle body coordinate system, so the evaluation result reflects the drivable region of the real 3D space, which further improves the evaluation accuracy.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, specific embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 shows a flowchart of a fish-eye travelable area evaluation method provided by an embodiment of the present invention;
FIG. 2 shows a flowchart of another fish-eye travelable region evaluation method provided by an embodiment of the present invention;
fig. 3 shows a block diagram of a fisheye travelable region evaluation device according to an embodiment of the present invention;
fig. 4 shows a block diagram of another fish-eye travelable region evaluating device according to an embodiment of the present invention;
fig. 5 shows a hardware device layout diagram of a fisheye travelable region evaluation device according to an embodiment of the present invention;
fig. 6 shows a vehicle coordinate system diagram of a fish-eye travelable area evaluation device according to an embodiment of the present invention;
fig. 7 shows a travelable region diagram of a fisheye image corresponding to a fisheye travelable region evaluation device provided by the embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the existing parking operation scene, the method for evaluating the recognition performance of the drivable and non-drivable regions of a vehicle is to manually mark the drivable region on a single fisheye picture, directly output a drivable region detection result on the fisheye picture with a deep-learning drivable region model, and evaluate the drivable region IOU based on the pixels of the fisheye picture. However, this evaluation method is based on the 2D pixel points of the drivable region in a single-frame picture: it does not reflect the drivable region of the real 3D space, and, because the information in a single frame is incomplete, the manually annotated ground-truth information is also incomplete, so the evaluation precision is poor and the final evaluation accuracy is affected. To address this problem, the inventors propose to build a laser point cloud multi-frame local map from the point cloud output by the laser radar and the raw RTK data, mark the drivable region on the locally created point cloud map, directly output drivable region points on the fisheye picture through a deep-learning drivable region model, convert the pixel points into drivable region points in the 3D space of the vehicle body coordinate system based on the camera intrinsic and extrinsic parameters corresponding to the fisheye picture, and then perform an IOU comparison with the ground-truth drivable region points marked on the basis of the laser radar for evaluation.
Therefore, the embodiment of the invention provides a fish-eye travelable area evaluation method, by which the accuracy of fish-eye travelable area evaluation is improved, and the specific implementation steps are as shown in fig. 1, including:
101. and acquiring time-space synchronous laser radar point cloud data, RTK positioning data and fish-eye pictures.
Time-space synchronization means that the laser radar point cloud data, the RTK positioning data and the fisheye pictures are synchronized both in time and in space. Time synchronization can be achieved by unifying the clocks of the devices that acquire the laser radar point cloud data, the RTK positioning data and the fisheye pictures; space synchronization can be achieved by calibrating those devices to a unified coordinate system.
The laser radar point cloud data are acquired by a laser radar; the RTK positioning data are acquired by an RTK positioning device; the fisheye pictures are acquired by fisheye cameras. The fisheye cameras can be arranged around the vehicle to collect environmental information around the vehicle, so that the surroundings of the vehicle are covered more comprehensively and omissions are avoided. RTK (Real-Time Kinematic) carrier-phase differential positioning is a differential method that processes the carrier-phase observations of two measuring stations in real time: the carrier phases collected by a reference station are sent to the user receiver, where the difference is formed to compute coordinates.
During the running of the vehicle, the laser radar, the RTK system device and the fish-eye camera acquire data in real time based on the synchronized time and space for the subsequent identification of the drivable area.
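As an illustration of the time-synchronization idea (not part of the patent), the sketch below pairs each lidar frame with the nearest-in-time RTK and fisheye samples on a shared clock; the function name and the tolerance value are assumptions:

```python
import bisect

def align_frames(lidar_ts, rtk_ts, fisheye_ts, tolerance_s=0.05):
    """Pair each lidar timestamp with the closest RTK and fisheye timestamps.

    All timestamps are assumed to be seconds on the same (unified) clock.
    Frames whose closest matches fall outside tolerance_s are dropped.
    """
    def nearest(sorted_ts, t):
        i = bisect.bisect_left(sorted_ts, t)
        candidates = sorted_ts[max(i - 1, 0):i + 1]
        return min(candidates, key=lambda c: abs(c - t)) if candidates else None

    rtk_ts, fisheye_ts = sorted(rtk_ts), sorted(fisheye_ts)
    synced = []
    for t in lidar_ts:
        r, f = nearest(rtk_ts, t), nearest(fisheye_ts, t)
        if r is not None and f is not None and max(abs(r - t), abs(f - t)) <= tolerance_s:
            synced.append((t, r, f))
    return synced

# Example: 10 Hz lidar, 100 Hz RTK, ~30 Hz fisheye camera.
print(align_frames([0.00, 0.10, 0.20],
                   [i / 100 for i in range(30)],
                   [i / 30 for i in range(10)]))
```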
102. And constructing a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data.
The time-space synchronized laser radar point cloud data and RTK positioning data are obtained from step 101, and the laser point cloud multi-frame local map is constructed from them according to a preset mapping rule. The information presented by the resulting map is closer to the real scene, which can improve the accuracy of subsequent processing.
The preset mapping rule may be as follows: the relative pose between adjacent point cloud frames is obtained from the laser radar point cloud data using an inter-frame matching algorithm; the relative pose between the point clouds and the RTK absolute pose positioning data are then fused with a Kalman algorithm to obtain an updated point cloud pose; and, based on this information, multiple frames of point clouds are superposed and mapped to obtain the laser point cloud multi-frame local map.
103. And marking the drivable region on the laser point cloud multi-frame local map by using a preset marking rule to obtain the real drivable region point information.
The laser point cloud multi-frame local map is obtained from step 102; the travelable region can then be marked on it manually, or inferred and marked by a point cloud travelable region segmentation algorithm, which is not specifically limited in this embodiment.
By the labeling method, the travelable region can be labeled on the laser point cloud multi-frame local map to obtain the real travelable region point information.
104. And acquiring 3D space travelable region point information corresponding to the fisheye picture by using a preset acquisition rule.
A fisheye picture that is time-space synchronized with the laser radar point cloud data and the RTK positioning data is obtained from step 101. In this step, the 3D space drivable region point information corresponding to the fisheye picture is obtained using a preset acquisition rule. The preset acquisition rule can be to output drivable region pixel coordinates on the fisheye picture through a fisheye deep learning model (a drivable region deep learning model), and then to convert the freespace pixel points (i.e. the drivable region pixel points) into freespace points (i.e. the 3D space drivable region point information corresponding to the fisheye picture) in the 3D space of the vehicle body coordinate system, based on the fisheye camera model and the relative pose between the fisheye camera and the vehicle body coordinate system.
105. And calculating to obtain an evaluation value by using a preset evaluation algorithm based on the 3D space drivable region point information and the real drivable region point information, and obtaining an evaluation result based on the evaluation value.
The real drivable area point information is obtained from step 103 and the 3D space drivable area point information is obtained from step 104; the two are used as input items of a preset evaluation algorithm to calculate an evaluation value, wherein the preset evaluation algorithm may compute the evaluation value using an IOU formula and/or an ACC formula.
The calculated evaluation values can be a first value corresponding to the IOU and a second value corresponding to the ACC. The first value and the second value are compared with preset judging conditions: if they meet the preset judging conditions, the evaluation is qualified; if they do not, the evaluation is unqualified. The preset judging conditions can be value intervals and can be set freely according to specific requirements.
Based on the implementation of the embodiment of fig. 1, the invention provides a fish-eye travelable region evaluation method comprising: acquiring time-space synchronized laser radar point cloud data, RTK positioning data and fisheye pictures; constructing a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data; marking the travelable region on the laser point cloud multi-frame local map using a preset marking rule to obtain real travelable region point information; acquiring 3D space travelable region point information corresponding to the fisheye picture using a preset acquisition rule; and calculating an evaluation value using a preset evaluation algorithm based on the 3D space drivable region point information and the real drivable region point information, and obtaining an evaluation result based on the evaluation value. The method has the advantages that marking the drivable region on the point cloud generated by the laser radar is more complete and more accurate than marking single-frame point cloud information; and the pixel points of the fisheye picture are converted, through the camera intrinsic and extrinsic parameters corresponding to the fisheye picture, into drivable region points in the 3D space of the vehicle body coordinate system, so the evaluation result reflects the drivable region of the real 3D space, which further improves the evaluation accuracy.
Further, as a refinement and expansion of the embodiment shown in fig. 1, the embodiment of the present invention further provides another fish-eye travelable area evaluation method, as shown in fig. 2, which specifically includes the following steps:
201. and uniformly calibrating the laser radar and the RTK which are arranged at the top end of the vehicle and the fish-eye cameras which are arranged around the vehicle and are in preset number to the center of a rear axle of the vehicle.
As shown in fig. 5, the hardware scheme related to the present invention is as follows: the vehicle-mounted hardware comprises a laser radar and an RTK mounted on the roof, and four fisheye cameras mounted on the front, rear, left and right of the vehicle; the laser radar, the RTK and the four fisheye cameras are uniformly calibrated to the center of the rear axle of the vehicle, so that all data can later be spatially synchronized.
202. The directions of the X axis, the Y axis and the Z axis of the vehicle body coordinate system of the vehicle are set by taking the center of the rear axle of the vehicle as the coordinate origin.
The method is used for spatially synchronizing laser radar point cloud data, RTK positioning data and fish-eye pictures.
As shown in fig. 6, the directions of the X axis, the Y axis and the Z axis of the vehicle body coordinate system are set by taking the center of the vehicle rear axle as the coordinate origin; the X, Y and Z axes of the vehicle body coordinate system point forward, to the left and upward respectively, and the coordinate origin is the center of the vehicle rear axle, which makes space synchronization convenient. The method of unifying coordinates, i.e. the coordinate system definition, is illustrated in fig. 6 and table 1.
Coordinate system definition: the origin is the projection point O of the center of the rear axle of the vehicle onto the ground, the forward direction of the vehicle is the positive X direction, the left side of the vehicle is the positive Y direction, and the vertically upward direction is the positive Z direction. The vehicle body coordinate system follows the ISO international standard definition.
TABLE 1 definition of body coordinate System
Item | Description | Unit
Origin | Projection point of the vehicle rear-axle center onto the ground | -
X positive direction | Forward | m
Y positive direction | Left | m
Z positive direction | Up | m
Roll angle positive direction | Right is positive | rad
Pitch angle positive direction | Downward is positive | rad
Yaw angle positive direction | Counterclockwise is positive | rad
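To make the unified calibration to the rear-axle origin concrete, the sketch below applies an extrinsic rotation and translation to map sensor-frame points into the body frame of Table 1; the extrinsic values shown are placeholder assumptions, not calibration results from the patent:

```python
import numpy as np

def sensor_to_body(points_sensor, R_sensor_to_body, t_sensor_to_body):
    """Map Nx3 sensor-frame points into the rear-axle body frame (X forward, Y left, Z up)."""
    return points_sensor @ R_sensor_to_body.T + t_sensor_to_body

# Placeholder extrinsics: a roof lidar mounted 1.0 m ahead of and 1.5 m above the
# rear-axle origin, with its axes already aligned to the body axes.
R_lidar_to_body = np.eye(3)
t_lidar_to_body = np.array([1.0, 0.0, 1.5])

lidar_points = np.array([[10.0, -2.0, 0.2]])          # one point in the lidar frame
print(sensor_to_body(lidar_points, R_lidar_to_body, t_lidar_to_body))  # -> [[11. -2.  1.7]]
```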
203. And setting laser radar, RTK and time synchronization of a preset number of fish-eye cameras.
And (3) performing time synchronization setting on the laser radar processed in the step 202, the RTK and the fish-eye cameras with preset quantity, wherein the time synchronization setting is used for time synchronization of laser radar point cloud data, RTK positioning data and fish-eye pictures.
The purpose of steps 201 to 203 is to facilitate the time-space synchronization of the individual sensors, to facilitate the acquisition of unified time-space sensor data, and to simultaneously process the above sensor data based on the same vehicle body coordinate system for subsequent use in acquiring time-space synchronized laser radar point cloud data, RTK positioning data and fish eye pictures.
204. And acquiring time-space synchronous laser radar point cloud data, RTK positioning data and fish-eye pictures.
This step is described in conjunction with step 101 in the above method, and the same contents are not repeated here.
205. And constructing a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data.
This step is described in conjunction with step 102 in the above method, and the same contents are not repeated here.
Fusing the laser radar point cloud data and the RTK positioning data by using a preset fusion rule to obtain a new point cloud pose; and based on the pose of the new point cloud, overlapping and mapping the multi-frame point cloud to obtain the multi-frame local mapping of the laser point cloud.
The step of fusing the laser radar point cloud data and the RTK positioning data by using a preset fusion rule to obtain a new point cloud pose comprises the following steps: calculating to obtain the relative pose between the point cloud frames by utilizing an interframe matching algorithm based on the laser radar point cloud data; fusing the relative pose between the point cloud frames and the RTK positioning data by using a Kalman algorithm to obtain a new point cloud pose; the RTK positioning data are RTK absolute pose positioning data.
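A simplified sketch of this fusion-and-superposition step, under the assumption that the relative poses from inter-frame matching and the RTK positions are already available as inputs, and with a scalar Kalman-style blend standing in for the full Kalman filter described above:

```python
import numpy as np

def kalman_blend(pred, pred_var, meas, meas_var):
    """One Kalman-style update: blend the dead-reckoned value with the RTK measurement."""
    gain = pred_var / (pred_var + meas_var)
    return pred + gain * (meas - pred), (1.0 - gain) * pred_var

def build_local_map(frames, relative_poses, rtk_positions, pred_var=0.25, rtk_var=0.04):
    """Chain relative poses, correct each translation with RTK, and stack the frames.

    frames:         list of Nx3 point arrays, each in its own lidar frame
    relative_poses: list of (R, t) from frame i-1 to frame i (from inter-frame matching)
    rtk_positions:  list of absolute positions (3,) from RTK for each frame
    """
    local_map = []
    R_acc, t_acc, var = np.eye(3), np.zeros(3), pred_var
    for pts, (R_rel, t_rel), p_rtk in zip(frames, relative_poses, rtk_positions):
        # Dead-reckoned pose from inter-frame matching.
        R_acc, t_acc = R_acc @ R_rel, R_acc @ t_rel + t_acc
        # Fuse the translation with the RTK absolute position (rotation fusion omitted here).
        t_acc, var = kalman_blend(t_acc, var, p_rtk, rtk_var)
        # Transform the frame into the map frame and superpose it.
        local_map.append(pts @ R_acc.T + t_acc)
    return np.vstack(local_map)

# Two toy frames observing the same point while the vehicle moves 1 m forward between them.
frames = [np.array([[5.0, 0.0, 0.0]]), np.array([[4.0, 0.0, 0.0]])]
rels = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([1.0, 0.0, 0.0]))]
rtk = [np.zeros(3), np.array([1.0, 0.0, 0.0])]
print(build_local_map(frames, rels, rtk))   # both observations land at x = 5 in the map frame
```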
206. And marking the drivable region on the laser point cloud multi-frame local map by using a preset marking rule to obtain the real drivable region point information.
This step is described in conjunction with step 103 in the above method, and the same contents are not repeated here.
A travelable region is computed from the laser point cloud multi-frame local map using a preset point cloud travelable region segmentation algorithm and marked, so as to obtain the real travelable region point information.
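The patent does not fix a particular segmentation algorithm; as one assumed example, a simple ground-height threshold over grid cells of the local map could produce drivable-region labels:

```python
import numpy as np

def segment_drivable_cells(cloud_xyz, height_tol=0.15, grid=0.2, max_range=15.0):
    """Label grid cells of a body-frame point cloud as drivable by a ground-height test.

    cloud_xyz: Nx3 points in the body frame (Z up, origin on the ground under the rear axle).
    Returns the set of (ix, iy) cell indices whose points all lie within height_tol of the ground.
    """
    near = cloud_xyz[np.linalg.norm(cloud_xyz[:, :2], axis=1) <= max_range]
    cells = {}
    for x, y, z in near:
        key = (int(np.floor(x / grid)), int(np.floor(y / grid)))
        cells.setdefault(key, []).append(z)
    return {k for k, zs in cells.items() if max(zs) <= height_tol}

# Flat ground ahead of the car plus one 0.4 m obstacle at (3, 1): its cell is excluded.
ground = np.array([[x * 0.1, y * 0.1, 0.02] for x in range(0, 100) for y in range(-20, 20)])
obstacle = np.array([[3.0, 1.0, 0.4]])
drivable = segment_drivable_cells(np.vstack([ground, obstacle]))
print(len(drivable), (15, 5) in drivable)   # the obstacle cell (15, 5) is reported as not drivable
```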
207. And acquiring 3D space travelable region point information corresponding to the fisheye picture by using a preset acquisition rule.
This step is described in conjunction with step 104 in the above method, and the same contents are not repeated here.
The preset acquisition rule means that drivable region points are directly output on the fisheye picture through a preset drivable region deep learning model, and the pixel points are converted into drivable region points in the 3D space of the vehicle body coordinate system using the camera intrinsic and extrinsic parameters corresponding to the fisheye picture.
The specific method comprises the following steps: based on the fish-eye picture, obtaining 2D space travelable region point information corresponding to the fish-eye picture by using a preset travelable region deep learning model; converting 2D space travelable region point information corresponding to the fisheye picture into 3D space travelable region point information based on a vehicle body coordinate system by utilizing a preset fisheye camera model and the relative pose of the fisheye camera and the vehicle body coordinate system; specifically, the method is shown in the following formula:
$$
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R & T \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
$$
wherein:
u and v are the pixel coordinates in the image coordinate system, in pixels;
dx and dy denote how many millimetres one column and one row of pixels represent, respectively;
u_0 and v_0 denote the pixel coordinates of the image centre in the pixel coordinate system;
f denotes the camera focal length;
f_x denotes the image distance in the x direction, i.e. f_x = f/dx, where f is the focal length and dx is the actual physical length corresponding to one pixel in the x direction (how many mm one pixel corresponds to);
f_y denotes the image distance in the y direction, i.e. f_y = f/dy, where dy is the actual physical length corresponding to one pixel in the y direction;
R and T denote the camera extrinsic parameters, i.e. the rotation and translation of the camera coordinate system with respect to the vehicle body coordinate system;
X_w, Y_w and Z_w are the coordinates, in the vehicle body coordinate system, of the real point corresponding to the pixel point;
Z_c is the depth of the point along the camera optical axis (the scale factor in the projection).
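For drivable-region points on the ground (Z_w = 0 in the body frame), the projection above can be inverted to recover (X_w, Y_w) from a pixel. The sketch below uses an undistorted pinhole intrinsic matrix as a stand-in for the full fisheye camera model (which would add a distortion step), and the intrinsic and extrinsic values are illustrative assumptions:

```python
import numpy as np

def pixel_to_body_ground(u, v, K, R_cam_to_body, t_cam_to_body):
    """Back-project an undistorted pixel onto the Z_w = 0 ground plane of the body frame.

    K:  3x3 intrinsics [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]]
    R_cam_to_body, t_cam_to_body: pose of the camera in the vehicle body frame.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera coordinates
    ray_body = R_cam_to_body @ ray_cam                    # same ray expressed in the body frame
    s = -t_cam_to_body[2] / ray_body[2]                   # scale at which the ray reaches Z_w = 0
    return t_cam_to_body + s * ray_body                   # (X_w, Y_w, 0): a drivable-region point

# Illustrative values only: a front camera 2 m ahead of the rear axle and 0.8 m above ground,
# optical axis along +X of the body frame (camera x -> right, y -> down, z -> forward).
K = np.array([[400.0, 0.0, 640.0], [0.0, 400.0, 400.0], [0.0, 0.0, 1.0]])
R = np.array([[0.0, 0.0, 1.0], [-1.0, 0.0, 0.0], [0.0, -1.0, 0.0]])
t = np.array([2.0, 0.0, 0.8])
print(pixel_to_body_ground(800.0, 500.0, K, R, t))   # ~[5.2, -1.28, 0.0]
```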
208. And calculating to obtain an evaluation value by using a preset evaluation algorithm based on the 3D space drivable region point information and the real drivable region point information, and obtaining an evaluation result based on the evaluation value.
This step is described in conjunction with step 105 of the above method, and the same contents are not repeated here.
The preset evaluation algorithm comprises an IOU algorithm and an ACC algorithm;
calculating an evaluation value corresponding to the IOU by utilizing an IOU algorithm based on the 3D space drivable region point information and the real drivable region point information; calculating an evaluation value corresponding to the ACC by utilizing an ACC algorithm based on the 3D space drivable region point information and the real drivable region point information; judging whether the evaluation value corresponding to the IOU and the evaluation value corresponding to the ACC meet preset conditions or not; if yes, determining that the evaluation is qualified; if not, determining that the evaluation is unqualified.
In the invention, the drivable region is expressed in a panoramic segmentation manner; the IOU and ACC of the drivable region, computed after conversion into the vehicle body coordinate system, are used as the evaluation indices. The formulas for the IOU and ACC are as follows:
IOU=TP/(FP+TP+FN)
ACC=TP/(FP+TP)
wherein TP (True Positive) is the number of samples the classifier predicts as positive that are actually positive, i.e. the number of correctly identified positive samples; FP (False Positive) is the number of samples predicted as positive that are actually negative, i.e. the number of false detections; TN (True Negative) is the number of samples predicted as negative that are actually negative, i.e. the number of correctly identified negative samples; FN (False Negative) is the number of samples predicted as negative that are actually positive, i.e. the number of missed positive samples. ACC denotes accuracy. IOU denotes the intersection-over-union ratio, a standard that measures the degree of overlap between the detected region and the ground-truth region and is used to judge whether a detection is a positive sample; whether a sample is positive or negative is determined by comparing the IOU with a threshold, as shown in fig. 7. In general, when the IOU between the prediction and the ground truth is >= 0.5, the prediction is considered a positive sample.
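Expressed over rasterized drivable-region cells in the body frame, the two indices reduce to set operations; the grid representation and the ACC threshold below are assumptions (the 0.5 IOU threshold follows the text above):

```python
def evaluate_drivable(pred_cells, truth_cells, iou_thresh=0.5, acc_thresh=0.8):
    """Compute IOU and ACC over sets of drivable grid-cell indices and judge pass/fail.

    pred_cells:  cells predicted drivable from the fisheye 3D drivable-region points
    truth_cells: cells labeled drivable on the lidar multi-frame local map
    """
    tp = len(pred_cells & truth_cells)     # predicted drivable and truly drivable
    fp = len(pred_cells - truth_cells)     # predicted drivable but actually not
    fn = len(truth_cells - pred_cells)     # truly drivable but missed
    iou = tp / (fp + tp + fn) if (fp + tp + fn) else 0.0
    acc = tp / (fp + tp) if (fp + tp) else 0.0
    return iou, acc, (iou >= iou_thresh and acc >= acc_thresh)

# Toy example with 2D grid-cell indices in the body frame.
pred = {(x, y) for x in range(0, 10) for y in range(-3, 3)}
truth = {(x, y) for x in range(1, 10) for y in range(-3, 4)}
print(evaluate_drivable(pred, truth))   # -> (~0.78, 0.90, True)
```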
Based on the implementation of fig. 2, the invention provides a fish-eye drivable region evaluation method in which the drivable region is annotated on a local point cloud map obtained after multi-frame fusion, so the annotation is more complete and accurate than annotation of single-frame point cloud information; and the pixel points of the fisheye picture are converted, through the camera intrinsic and extrinsic parameters corresponding to the fisheye picture, into drivable region points in the 3D space of the vehicle body coordinate system, so the evaluation result reflects the drivable region of the real 3D space, which further improves the evaluation accuracy.
Further, as an implementation of the method shown in fig. 1, the embodiment of the invention further provides a fisheye travelable region evaluating device, which is used for implementing the method shown in fig. 1. The embodiment of the device corresponds to the embodiment of the method, and for convenience of reading, details of the embodiment of the method are not repeated one by one, but it should be clear that the device in the embodiment can correspondingly realize all the details of the embodiment of the method. As shown in fig. 3, the apparatus includes:
a first obtaining unit 31, configured to obtain space-time synchronized laser radar point cloud data, RTK positioning data, and fisheye images;
a construction unit 32, configured to construct a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data obtained from the first obtaining unit 31;
the labeling unit 33 is configured to label a travelable region on the laser point cloud multi-frame local map obtained from the construction unit 32 by using a preset labeling rule, so as to obtain real travelable region point information;
a second obtaining unit 34, configured to obtain 3D space travelable region point information corresponding to the fisheye picture obtained from the first obtaining unit 31 using a preset obtaining rule;
a calculating unit 35, configured to calculate an evaluation value by using a preset evaluation algorithm based on the 3D space travelable region point information obtained from the second obtaining unit 34 and the real travelable region point information obtained from the labeling unit 33, and obtain an evaluation result based on the evaluation value.
Further, as an implementation of the method shown in fig. 2, the embodiment of the invention further provides another fish-eye travelable area evaluation device, which is used for implementing the method shown in fig. 2. The embodiment of the device corresponds to the embodiment of the method, and for convenience of reading, details of the embodiment of the method are not repeated one by one, but it should be clear that the device in the embodiment can correspondingly realize all the details of the embodiment of the method. As shown in fig. 4, the apparatus includes:
a first obtaining unit 31, configured to obtain space-time synchronized laser radar point cloud data, RTK positioning data, and fisheye pictures from the laser radar, the RTK, and the fisheye camera processed by the first setting unit 37 and the second setting unit 38;
a construction unit 32, configured to construct a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data obtained from the first obtaining unit 31;
the labeling unit 33 is configured to label a travelable region on the laser point cloud multi-frame local map obtained from the construction unit 32 by using a preset labeling rule, so as to obtain real travelable region point information;
a second obtaining unit 34, configured to obtain 3D space travelable region point information corresponding to the fisheye picture obtained from the first obtaining unit 31 using a preset obtaining rule;
a calculating unit 35, configured to calculate an evaluation value using a preset evaluation algorithm based on the 3D space travelable region point information obtained from the second obtaining unit 34 and the real travelable region point information obtained from the labeling unit 33, and obtain an evaluation result based on the evaluation value;
the calibration unit 36 is used for uniformly calibrating the laser radar, the RTK and the fish-eye cameras arranged at the periphery of the vehicle to the center of the rear axle of the vehicle;
a first setting unit 37 for setting directions of an X axis, a Y axis and a Z axis of a body coordinate system of the vehicle with the center of the rear axis of the vehicle obtained from the calibration unit 36 as a coordinate origin, for spatially synchronizing the laser radar point cloud data, the RTK positioning data and the fisheye picture;
the second setting unit 38 is configured to set time synchronization of the lidar, the RTK, and the predetermined number of fisheye cameras processed by the first setting unit 37, and is configured to time synchronize the lidar point cloud data, the RTK positioning data, and the fisheye picture.
Further, the second obtaining unit 34 includes:
an obtaining module 341, configured to obtain 2D spatial drivable region point information corresponding to the fisheye image using a preset drivable region deep learning model based on the fisheye image;
a conversion module 342, configured to convert the 2D space travelable region point information corresponding to the fisheye picture obtained from the obtaining module 341 into the 3D space travelable region point information based on the vehicle body coordinate system by using a preset fisheye camera model and a relative pose of the fisheye camera and the vehicle body coordinate system.
Further, the construction unit 32 includes:
the fusion module 321 is configured to fuse the laser radar point cloud data and the RTK positioning data by using a preset fusion rule to obtain a new point cloud pose;
and the construction module 322 is configured to perform superposition mapping on multiple frames of point clouds based on the new point cloud pose obtained from the fusion module 321, so as to obtain the laser point cloud multiple frames of local mapping.
Further, the fusion module 321 includes:
a calculating submodule 3211, configured to calculate a relative pose between point cloud frames by using an inter-frame matching algorithm based on the laser radar point cloud data;
the fusion submodule 3212 is used for fusing the relative pose between the point cloud frames obtained from the calculation submodule 3211 and the RTK positioning data by using a Kalman algorithm to obtain a new point cloud pose; the RTK positioning data are RTK absolute pose positioning data.
Further, the calibration unit 36 is further configured to:
and calculating a travelable region by using a preset point cloud travelable region segmentation algorithm based on the laser point cloud multi-frame local map construction to mark, so as to obtain the real travelable region point information.
Further, the preset evaluation algorithm comprises an IOU algorithm and an ACC algorithm; the calculation unit 35 includes:
the first calculating module 351 is configured to calculate, using an IOU algorithm, an evaluation value corresponding to the IOU based on the 3D space drivable region point information and the real drivable region point information;
the second calculating module 352 is configured to calculate, using an ACC algorithm, an evaluation value corresponding to the ACC based on the 3D space drivable region point information and the real drivable region point information;
a judging module 353, configured to judge whether the evaluation value corresponding to the IOU obtained from the first computing module 351 and the evaluation value corresponding to the ACC obtained from the second computing module 352 meet a preset condition;
a first determining module 354, configured to determine that the evaluation is qualified if the evaluation value corresponding to the IOU and the evaluation value corresponding to the ACC obtained from the judging module 353 meet a preset condition;
a second determining module 355, configured to determine that the evaluation is not qualified if no is obtained from the judging module 353.
Further, an embodiment of the present invention further provides a processor, where the processor is configured to run a program, and when the program runs, the fisheye travelable region evaluation method described in fig. 1-2 is executed.
Further, an embodiment of the present invention further provides a storage medium, where the storage medium is configured to store a computer program, where the computer program controls a device where the storage medium is located to execute the fisheye travelable region evaluation method described in fig. 1-2.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the methods and apparatus described above may be referenced to one another. In addition, the "first", "second", and the like in the above embodiments are only used to distinguish the embodiments and do not represent the relative merits of the embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for a construction of such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
Furthermore, the memory may include volatile memory in a computer readable medium, random access memory (RAM) and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A fish-eye travelable region evaluation method, the method comprising:
acquiring time-space synchronous laser radar point cloud data, RTK positioning data and fish-eye pictures;
constructing a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data;
marking the travelable region on the laser point cloud multi-frame local map by using a preset marking rule to obtain real travelable region point information;
acquiring 3D space travelable region point information corresponding to the fisheye picture by using a preset acquisition rule;
and calculating an evaluation value by using a preset evaluation algorithm based on the 3D space drivable region point information and the real drivable region point information, and acquiring an evaluation result based on the evaluation value.
2. The method according to claim 1, wherein the obtaining 3D space drivable region point information corresponding to the fisheye picture using a preset obtaining rule includes:
based on the fish-eye picture, obtaining 2D space travelable region point information corresponding to the fish-eye picture by using a preset travelable region deep learning model;
and converting the 2D space drivable region point information corresponding to the fisheye picture into the 3D space drivable region point information based on the vehicle body coordinate system by using a preset fisheye camera model and the relative pose of the fisheye camera and the vehicle body coordinate system.
3. The method of claim 2, wherein the constructing a laser point cloud multi-frame partial map based on the laser radar point cloud data and the RTK positioning data comprises:
fusing the laser radar point cloud data and the RTK positioning data by using a preset fusion rule to obtain a new point cloud pose;
and based on the pose of the new point cloud, overlapping and mapping the multi-frame point cloud to obtain the multi-frame local mapping of the laser point cloud.
4. The method of claim 3, wherein fusing the lidar point cloud data and the RTK positioning data using a preset fusion rule to obtain a new point cloud pose comprises:
calculating to obtain the relative pose between the point cloud frames by utilizing an interframe matching algorithm based on the laser radar point cloud data;
fusing the relative pose between the point cloud frames and the RTK positioning data by using a Kalman algorithm to obtain a new point cloud pose; the RTK positioning data are RTK absolute pose positioning data.
5. The method of any of claims 1-4, wherein prior to the acquiring the spatiotemporal synchronized lidar point cloud data, RTK positioning data, and fisheye picture, the method further comprises:
uniformly calibrating a laser radar, an RTK (real-time kinematic) arranged at the top end of a vehicle and a preset number of fish-eye cameras arranged around the vehicle to the center of a rear axle of the vehicle;
setting directions of an X axis, a Y axis and a Z axis of a vehicle body coordinate system of the vehicle by taking the center of a rear axle of the vehicle as a coordinate origin, wherein the directions are used for spatially synchronizing the laser radar point cloud data, the RTK positioning data and the fisheye picture;
setting time synchronization of the laser radar, the RTK and the fish-eye cameras with preset quantity, and performing time synchronization on the laser radar point cloud data, the RTK positioning data and the fish-eye pictures.
6. The method of claim 5, wherein the marking the travelable region on the laser point cloud multi-frame partial map by using a preset marking rule to obtain real travelable region point information comprises:
and calculating a travelable region by using a preset point cloud travelable region segmentation algorithm based on the laser point cloud multi-frame local map construction to mark, so as to obtain the real travelable region point information.
7. The method of claim 6, wherein the preset evaluation algorithm comprises an IOU algorithm and an ACC algorithm;
the calculating, based on the 3D space drivable region point information and the real drivable region point information, an evaluation value by using a preset evaluation algorithm, and obtaining an evaluation result based on the evaluation value, includes:
calculating an evaluation value corresponding to the IOU by utilizing an IOU algorithm based on the 3D space drivable region point information and the real drivable region point information;
calculating an evaluation value corresponding to the ACC by utilizing an ACC algorithm based on the 3D space drivable region point information and the real drivable region point information;
judging whether the evaluation value corresponding to the IOU and the evaluation value corresponding to the ACC meet preset conditions or not;
if yes, determining that the evaluation is qualified;
if not, determining that the evaluation is unqualified.
8. The fish-eye travelable region evaluating device is characterized by comprising:
the first acquisition unit is used for acquiring time-space synchronous laser radar point cloud data, RTK positioning data and fisheye pictures;
the construction unit is used for constructing a laser point cloud multi-frame local map based on the laser radar point cloud data and the RTK positioning data;
the marking unit is used for marking the drivable region on the laser point cloud multi-frame local map by using a preset marking rule to obtain real drivable region point information;
the second acquisition unit is used for acquiring 3D space travelable region point information corresponding to the fisheye picture by utilizing a preset acquisition rule;
the calculating unit is used for calculating an evaluation value by using a preset evaluation algorithm based on the 3D space drivable region point information and the real drivable region point information, and acquiring an evaluation result based on the evaluation value.
9. A storage medium including a stored program, characterized in that, when the program is run, the device in which the storage medium is located is controlled to execute the fish-eye travelable region evaluation method according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the fish-eye travelable region evaluation method as claimed in any one of claims 1-7 when executing the program.
CN202310226465.8A 2023-03-03 2023-03-03 Fish-eye travelable region evaluating method and device Pending CN116386002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310226465.8A CN116386002A (en) 2023-03-03 2023-03-03 Fish-eye travelable region evaluating method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310226465.8A CN116386002A (en) 2023-03-03 2023-03-03 Fish-eye travelable region evaluating method and device

Publications (1)

Publication Number Publication Date
CN116386002A true CN116386002A (en) 2023-07-04

Family

ID=86976017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310226465.8A Pending CN116386002A (en) 2023-03-03 2023-03-03 Fish-eye travelable region evaluating method and device

Country Status (1)

Country Link
CN (1) CN116386002A (en)

Similar Documents

Publication Publication Date Title
CN111462135B (en) Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation
CN110795819B (en) Method and device for generating automatic driving simulation scene and storage medium
CN112180373B (en) Multi-sensor fusion intelligent parking system and method
CN111192331B (en) External parameter calibration method and device for laser radar and camera
CN111830953B (en) Vehicle self-positioning method, device and system
CN112749594B (en) Information completion method, lane line identification method, intelligent driving method and related products
CN114359181B (en) Intelligent traffic target fusion detection method and system based on image and point cloud
US20230144678A1 (en) Topographic environment detection method and system based on binocular stereo camera, and intelligent terminal
CN113160327A (en) Method and system for realizing point cloud completion
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN113205604A (en) Feasible region detection method based on camera and laser radar
Petrovai et al. A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices
CN114037762A (en) Real-time high-precision positioning method based on image and high-precision map registration
CN115410167A (en) Target detection and semantic segmentation method, device, equipment and storage medium
Kruber et al. Vehicle position estimation with aerial imagery from unmanned aerial vehicles
CN114898314A (en) Target detection method, device and equipment for driving scene and storage medium
KR102264152B1 (en) Method and system for ground truth auto labeling advanced sensor data and image by camera
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera
CN103903269B (en) The description method and system of ball machine monitor video
CN116386002A (en) Fish-eye travelable region evaluating method and device
US11763492B1 (en) Apparatus and methods to calibrate a stereo camera pair
CN114494466A (en) External parameter calibration method, device and equipment and storage medium
CN115457084A (en) Multi-camera target detection tracking method and device
CN114998436A (en) Object labeling method and device, electronic equipment and storage medium
CN114155258A (en) Detection method for highway construction enclosed area

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination