CN116309858A - Camera external parameter processing method and device, electronic equipment and automatic driving vehicle

Info

Publication number: CN116309858A
Application number: CN202211738409.4A
Authority: CN (China)
Prior art keywords: detection result, target, obstacle, camera, center position
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 余东应, 周珣, 谢青青
Current assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Abstract

The disclosure provides a camera external parameter processing method and apparatus, an electronic device, and an autonomous vehicle, and relates to the field of computer technology, in particular to autonomous driving, computer vision, intelligent transportation, and the like. The specific implementation scheme is as follows: the camera external parameter processing method includes the following steps: acquiring a first obstacle detection result and a second obstacle detection result, where the first obstacle detection result is a detection result of an obstacle based on point cloud data of a radar, and the second obstacle detection result is a detection result of the obstacle based on image data of a camera; determining a first center position corresponding to a target obstacle according to a first target detection result, and determining a second center position corresponding to the target obstacle according to a second target detection result; and determining whether the external parameters of the camera are abnormal according to the first center position and the second center position.

Description

Camera external parameter processing method and device, electronic equipment and automatic driving vehicle
Technical Field
The present disclosure relates to the field of computer technology, and in particular to the fields of autonomous driving, computer vision, intelligent transportation, and the like.
Background
An autonomous vehicle, also called an unmanned vehicle, mainly relies on various sensors to perceive its surroundings, that is, to identify the specific conditions of the environment around the vehicle, such as detecting, classifying, and tracking obstacles, and to control the vehicle to travel on a road according to the perceived information without any active human operation. Commonly used sensors mainly include lidar, millimeter-wave radar, ultrasonic radar, vision cameras, and the like.
Disclosure of Invention
The present disclosure provides a processing method and apparatus for camera external parameters, an electronic device, a storage medium, a computer program product, and an autonomous vehicle.
According to an aspect of the present disclosure, there is provided a camera external parameter processing method, including: acquiring a first obstacle detection result and a second obstacle detection result, where the first obstacle detection result is a detection result of an obstacle based on point cloud data of a radar, and the second obstacle detection result is a detection result of the obstacle based on image data of a camera; determining a first center position corresponding to a target obstacle according to a first target detection result, and determining a second center position corresponding to the target obstacle according to a second target detection result, where the first target detection result is the detection result corresponding to the target obstacle in the first obstacle detection result, and the second target detection result is the detection result corresponding to the target obstacle in the second obstacle detection result; and determining whether the external parameters of the camera are abnormal according to the first center position and the second center position.
According to another aspect of the present disclosure, there is provided a processing apparatus of camera external parameters, including: the detection result acquisition module is used for acquiring a first obstacle detection result and a second obstacle detection result; the first obstacle detection result is a detection result of the obstacle based on the point cloud data of the radar, and the second obstacle detection result is a detection result of the obstacle based on the image data of the camera; the central position determining module is used for determining a first central position corresponding to the target obstacle according to a first target detection result and determining a second central position corresponding to the target obstacle according to a second target detection result; the first target detection result is a detection result of the first obstacle detection result corresponding to the target obstacle, and the second target detection result is a detection result of the second obstacle detection result corresponding to the target obstacle; and the external parameter abnormality determining module is used for determining whether the external parameters of the camera are abnormal according to the first central position and the second central position.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one radar for collecting point cloud data; at least one camera for acquiring image data; at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the processing method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the above-described processing method.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the above-mentioned processing method.
According to another aspect of the present disclosure, there is provided an autonomous vehicle including the above-described electronic device.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flowchart of a camera external parameter processing method according to an embodiment of the present disclosure;
FIG. 2 is a partial schematic diagram of a camera external parameter processing method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of spherical linear interpolation;
FIG. 4 is a flow chart of step S103 according to an embodiment of the present disclosure;
FIG. 5 is a flow chart of step S103c according to an embodiment of the present disclosure;
FIG. 6 is a partial schematic diagram of a camera external parameter processing method according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of a processing device of a camera external parameter according to an embodiment of the present disclosure;
FIG. 8 is a block diagram of an electronic device for implementing a camera external parameter processing method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present disclosure, there is provided a camera external parameter processing method. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, such as with a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
Fig. 1 is a flowchart of a processing method of camera external parameters according to an embodiment of the present disclosure, as shown in fig. 1, including the following steps S101 to S103:
Step S101, acquiring a first obstacle detection result and a second obstacle detection result; the first obstacle detection result is a detection result of the obstacle based on the point cloud data of the radar, and the second obstacle detection result is a detection result of the obstacle based on the image data of the camera.
The radar can be specifically a laser radar, a millimeter wave radar, an ultrasonic radar and the like. In a specific implementation, the radar collects point cloud data, the obstacle can be detected based on the point cloud data, and the first obstacle detection result can include a category of the obstacle, three-dimensional coordinates of a contour point of the obstacle, and the like, wherein the contour point of the obstacle can also be called a polygonal corner point, and the contour point of the obstacle can form a three-dimensional detection frame.
The camera may also be referred to as a monocular camera, a vision camera, a video camera, or the like. In a specific implementation, image data is acquired through a camera, an obstacle can be detected based on the image data, and the second obstacle detection result can include a category of the obstacle, a position of a detection frame of the obstacle, and the like, wherein the detection frame of the obstacle is a two-dimensional detection frame.
In a specific use scene, the radar and the camera are both installed on an automatic driving vehicle, and the automatic driving vehicle can be controlled to run on a road based on point cloud data collected by the radar and image data collected by the camera.
Step S102, determining a first center position corresponding to the target obstacle according to the first target detection result, and determining a second center position corresponding to the target obstacle according to the second target detection result. The first target detection result is a detection result of the first obstacle detection result corresponding to the target obstacle, and the second target detection result is a detection result of the second obstacle detection result corresponding to the target obstacle.
In a specific implementation, the first target detection result and the second target detection result correspond to the same obstacle, i.e., the target obstacle. In a specific example, the first center position corresponding to the target obstacle is the center position of the three-dimensional detection frame, and the second center position corresponding to the target obstacle is the center position of the two-dimensional detection frame. In another specific example, the first center position corresponding to the target obstacle is a center position of the target obstacle determined according to the first target detection result, and the second center position corresponding to the target obstacle is a center position of the target obstacle determined according to the second target detection result.
Step S103, determining whether the external parameters of the camera are abnormal according to the first central position and the second central position.
In this embodiment, whether the external parameters of the camera are abnormal is determined by combining the detection result of the obstacle based on the point cloud data of the radar with the detection result of the obstacle based on the image data of the camera, so that abnormalities of the camera's external parameters can be determined in real time without relying on external calibration equipment or manual involvement, which improves the safety of the autonomous vehicle.
In an optional embodiment, there is at least one detection result corresponding to the target obstacle in the first obstacle detection result, and at least one detection result corresponding to the target obstacle in the second obstacle detection result. In this embodiment, the detection results corresponding to the same obstacle need to be determined from a plurality of detection results; specifically, the first target detection result and the second target detection result are determined according to the timestamps of the detection results corresponding to the target obstacle in the first obstacle detection result and the timestamps of the detection results corresponding to the target obstacle in the second obstacle detection result.
The timestamps of the detection results corresponding to the target obstacle in the first obstacle detection result are first timestamps, and the timestamps of the detection results corresponding to the target obstacle in the second obstacle detection result are second timestamps; there is at least one of each. A first target timestamp is determined from the at least one first timestamp and a second target timestamp is determined from the at least one second timestamp, such that the first target timestamp and the second target timestamp are the two nearest timestamps. Further, if the difference between the first target timestamp and the second target timestamp satisfies a condition, the detection result corresponding to the first target timestamp is determined as the first target detection result, and the detection result corresponding to the second target timestamp is determined as the second target detection result. The first target detection result and the second target detection result are thus matched.
In a specific example, the detection results corresponding to the target obstacle in the first obstacle detection result include b1 and b2 with timestamps tb1 and tb2, respectively, and the detection results corresponding to the target obstacle in the second obstacle detection result include c1 and c2 with timestamps tc1 and tc2, respectively. From the minimum of |tb1-tc1|, |tb1-tc2|, |tb2-tc1|, and |tb2-tc2|, it may be determined that tb1 and tc1 are the two nearest timestamps, that is, tb1 is the first target timestamp and tc1 is the second target timestamp. Since |tb1-tc1| < 1e-3 is also satisfied, the detection result b1 corresponding to tb1 is determined as the first target detection result, and the detection result c1 corresponding to tc1 is determined as the second target detection result.
In this embodiment, when at least one detection result corresponding to the same obstacle is detected, a set of matched detection results is determined from at least one detection result according to the timestamp of each detection result, and whether the camera external parameter is abnormal or not is determined according to the matched detection results, so that the accuracy of the determination can be ensured.
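For illustration only, the timestamp matching described in this embodiment can be sketched in Python as follows; the function name, data layout, and tolerance handling are assumptions of the sketch, not part of the disclosure.

    import itertools

    def match_detections(radar_dets, camera_dets, tol=1e-3):
        # Pick the radar/camera pair whose timestamps are nearest, then
        # accept it only if the gap satisfies the condition, e.g. < 1e-3 s.
        best = min(
            itertools.product(radar_dets, camera_dets),
            key=lambda pair: abs(pair[0]["stamp"] - pair[1]["stamp"]),
            default=None,
        )
        if best is None:
            return None
        b, c = best
        return (b, c) if abs(b["stamp"] - c["stamp"]) < tol else None

    # Example mirroring the text: b1/c1 are the nearest pair and match.
    radar = [{"stamp": 10.0005, "id": "b1"}, {"stamp": 10.1000, "id": "b2"}]
    camera = [{"stamp": 10.0001, "id": "c1"}, {"stamp": 10.2000, "id": "c2"}]
    print(match_detections(radar, camera))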
In an alternative embodiment, velocity compensation is performed on the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result. In a specific implementation, the speed can be measured while the point cloud data and the image data are collected, and the three-dimensional coordinates of the contour points are compensated according to the measured speed. The speed is the traveling speed of the autonomous vehicle on which the radar and the camera are mounted.
In a specific example, the three-dimensional coordinates of the contour point of the target obstacle in the first target detection result are subjected to velocity compensation according to the following formula:
pc = pn + vn * Δt;

where pc is the three-dimensional coordinate of the contour point after speed compensation, pn is the three-dimensional coordinate of the contour point before speed compensation, vn is the speed measured when the point cloud data corresponding to the first target detection result was collected, and Δt is the difference between the timestamp of the first target detection result and the timestamp of the nearest speed measurement.
In this embodiment, by performing speed compensation on the three-dimensional coordinates of the contour point of the target obstacle in the first target detection result, the situation that the camera external parameter is misjudged to be abnormal due to inconsistent speed can be avoided, so that the accuracy of judging whether the camera external parameter is abnormal is improved.
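A minimal Python sketch of this speed compensation, assuming the contour points are held in a numpy array; the helper name and argument layout are illustrative assumptions.

    import numpy as np

    def velocity_compensate(points, velocity, t_detection, t_velocity):
        # pc = pn + vn * Δt, where Δt is the gap between the detection
        # timestamp and the timestamp of the nearest speed measurement.
        dt = t_detection - t_velocity
        return np.asarray(points, float) + np.asarray(velocity, float) * dt

    pn = np.array([[5.0, 1.0, 0.2], [5.5, 1.1, 0.2]])
    pc = velocity_compensate(pn, velocity=[10.0, 0.0, 0.0],
                             t_detection=10.05, t_velocity=10.00)
    print(pc)  # each contour point shifted 0.5 m along x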
In an alternative embodiment, before the step S102, the processing method further includes the following steps S201 to S203, as shown in fig. 2:
step S201, positioning information is obtained; wherein the positioning information includes a pose. In a specific implementation, the positioning information may be acquired in the process of acquiring the point cloud data and the image data, for example, a positioning device such as a GPS (Global Positioning System ), an IMU (Inertial Navigation System, inertial navigation system), a radar and the like may be used to acquire the positioning information.
Step S202, interpolating according to the positioning information to obtain the pose corresponding to the first target detection result and the pose corresponding to the second target detection result, respectively.
In a specific implementation, interpolation may be performed using methods such as linear interpolation and nonlinear interpolation. In a specific example, as shown in fig. 3, interpolation is performed by spherical linear interpolation. Specifically, let the quaternion pose at time t0 be q0 and the quaternion pose at time t1 be q1; the quaternion pose q(t) at time t can then be calculated using the following formula: q(t) = (1-s)*q0 + s*q1, where s = (t-t0)/(t1-t0). For the first target detection result b1, the poses corresponding to the two timestamps nearest to the timestamp tb1 are found in the positioning information, and the pose corresponding to b1 can be calculated using this formula. For the second target detection result c1, the poses corresponding to the two timestamps nearest to the timestamp tc1 are found in the positioning information, and the pose corresponding to c1 can be calculated using this formula.
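For illustration, the pose interpolation above can be sketched in Python as follows. The sketch implements the linear-blend formula given in the text and renormalizes the result to a unit quaternion (a common approximation of spherical linear interpolation when the two poses are close); the function name and quaternion layout are assumptions.

    import numpy as np

    def interpolate_pose_quat(q0, q1, t0, t1, t):
        # q(t) = (1 - s) * q0 + s * q1, s = (t - t0) / (t1 - t0),
        # followed by renormalization to keep a unit quaternion.
        q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
        if np.dot(q0, q1) < 0.0:  # keep the shorter arc on the sphere
            q1 = -q1
        s = (t - t0) / (t1 - t0)
        q = (1.0 - s) * q0 + s * q1
        return q / np.linalg.norm(q)

    # Pose for detection b1 at timestamp tb1 = 10.04, between the two
    # nearest localization timestamps 10.0 and 10.1.
    print(interpolate_pose_quat([1, 0, 0, 0], [0.9239, 0, 0, 0.3827],
                                t0=10.0, t1=10.1, t=10.04))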
Step S203, performing pose compensation on the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result according to the pose corresponding to the first target detection result and the pose corresponding to the second target detection result.
In a specific example, pose compensation is performed on the three-dimensional coordinates of a contour point of the target obstacle in the first target detection result according to the following formulas:

pose_diff = pose_c^(-1) * pose_l;

P2 = pose_diff.q * P1 + pose_diff.p;

where pose_diff is the difference between the pose pose_l corresponding to the first target detection result and the pose pose_c corresponding to the second target detection result, pose_diff comprises a rotation component pose_diff.q and a translation component pose_diff.p, P1 is the three-dimensional coordinate of the contour point before pose compensation, and P2 is the three-dimensional coordinate of the contour point after pose compensation.
In this embodiment, the pose compensation is performed on the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result by combining the positioning information, so that the situation that the camera external parameters are misjudged to be abnormal due to inconsistent poses can be avoided, and the accuracy of judging whether the camera external parameters are abnormal is improved.
It should be noted that the speed compensation may be performed on the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result first and the pose compensation afterwards, or the pose compensation may be performed first and the speed compensation afterwards.
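The pose compensation above can be sketched in Python as follows, assuming each pose is stored as a rotation plus a translation vector; the Pose container and the scipy-based rotation handling are assumptions of the sketch.

    import numpy as np
    from scipy.spatial.transform import Rotation

    class Pose:
        # Rigid transform with rotation component q and translation p.
        def __init__(self, q, p):
            self.q, self.p = q, np.asarray(p, float)

        def inv(self):
            q_inv = self.q.inv()
            return Pose(q_inv, -q_inv.apply(self.p))

        def __mul__(self, other):
            return Pose(self.q * other.q, self.q.apply(other.p) + self.p)

    def pose_compensate(points, pose_l, pose_c):
        # pose_diff = pose_c^-1 * pose_l; P2 = pose_diff.q * P1 + pose_diff.p
        pose_diff = pose_c.inv() * pose_l
        return pose_diff.q.apply(points) + pose_diff.p

    pose_l = Pose(Rotation.identity(), [0.0, 0.0, 0.0])
    pose_c = Pose(Rotation.from_euler("z", 1.0, degrees=True), [0.1, 0.0, 0.0])
    P1 = np.array([[5.0, 1.0, 0.2]])
    print(pose_compensate(P1, pose_l, pose_c))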
In an optional embodiment, the step S102 specifically includes: converting the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result into a camera coordinate system to obtain two-dimensional coordinates of the contour points; and determining a first center position corresponding to the target obstacle according to the two-dimensional coordinates of the contour point.
In a specific implementation, the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result can be converted into the camera coordinate system by combining the external parameter relationship between the radar and the camera, i.e., the three-dimensional contour points are projected onto a two-dimensional plane. The contour points projected onto the two-dimensional plane can enclose a two-dimensional detection frame. The maximum abscissa x_lmax, minimum abscissa x_lmin, maximum ordinate y_lmax, and minimum ordinate y_lmin among the two-dimensional coordinates of all the contour points are calculated, and the center position of the two-dimensional detection frame is obtained as L_o = ((x_lmax + x_lmin)/2, (y_lmax + y_lmin)/2). This embodiment uses L_o as the first center position corresponding to the target obstacle; the calculation is simple and can improve the efficiency of judging whether the external parameters of the camera are abnormal.
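As an illustration of this projection step, the following Python sketch converts radar contour points into the camera frame using the radar-to-camera external parameters, projects them through a pinhole intrinsic matrix, and takes the center L_o of the enclosing two-dimensional box; the matrices shown are placeholders.

    import numpy as np

    def first_center_position(points_lidar, T_cam_lidar, K):
        # Transform (N, 3) contour points into the camera frame.
        pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
        pts_cam = (T_cam_lidar @ pts_h.T)[:3]
        # Pinhole projection: apply intrinsics K, then divide by depth.
        uv = K @ pts_cam
        uv = uv[:2] / uv[2]
        x_min, y_min = uv.min(axis=1)
        x_max, y_max = uv.max(axis=1)
        # L_o = ((x_lmax + x_lmin) / 2, (y_lmax + y_lmin) / 2)
        return np.array([(x_max + x_min) / 2.0, (y_max + y_min) / 2.0])

    K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
    T = np.eye(4)  # placeholder radar-to-camera extrinsics
    pts = np.array([[1.0, 0.5, 10.0], [1.5, 0.8, 10.0], [1.2, 0.4, 11.0]])
    print(first_center_position(pts, T, K))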
In an alternative embodiment, the step S102 specifically includes: determining a second center position corresponding to the target obstacle according to the position of the detection frame of the target obstacle in the second target detection result; specifically, the center-of-gravity position of the detection frame of the target obstacle in the second target detection result is directly determined as the second center position C_o corresponding to the target obstacle.
In an optional embodiment, the step S103 specifically includes: if the difference between the abscissa of the first center position and the abscissa of the second center position is smaller than a first average value, and the difference between the ordinate of the first center position and the ordinate of the second center position is smaller than a second average value, determining whether the external parameters of the camera are abnormal according to the first center position and the second center position; the first average value is an average value of differences between abscissas of the first center positions and abscissas of the second center positions corresponding to all the target obstacles, and the second average value is an average value of differences between ordinates of the first center positions and ordinates of the second center positions corresponding to all the target obstacles.
In this embodiment, a corresponding first center position and second center position can be calculated for each target obstacle, and the first average value and the second average value can be obtained by statistically analyzing the first center positions and second center positions corresponding to all target obstacles. By comparing the difference between the abscissa of the first center position and the abscissa of the second center position of a target obstacle with the first average value, and the difference between their ordinates with the second average value, only the first and second center positions with higher accuracy are retained, thereby improving the accuracy of determining whether the external parameters of the camera are abnormal.
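A sketch of this screening step, assuming the comparison uses absolute coordinate differences; the function and variable names are illustrative.

    import numpy as np

    def filter_center_pairs(first_centers, second_centers):
        # Keep pairs whose |dx| is below the first average value and whose
        # |dy| is below the second average value, computed over all pairs.
        first = np.asarray(first_centers, float)
        second = np.asarray(second_centers, float)
        diffs = np.abs(first - second)         # (N, 2): |dx|, |dy|
        mean_dx, mean_dy = diffs.mean(axis=0)  # first and second averages
        keep = (diffs[:, 0] < mean_dx) & (diffs[:, 1] < mean_dy)
        return first[keep], second[keep]

    L = [[640, 360], [650, 370], [700, 300]]
    C = [[642, 361], [648, 372], [760, 250]]  # last pair is an outlier
    print(filter_center_pairs(L, C))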
In an optional embodiment, the step S103 specifically includes: if the area of the detection frame of the target obstacle in the first target detection result matches the area of the detection frame of the target obstacle in the second target detection result, determining whether the external parameters of the camera are abnormal according to the first center position and the second center position. The detection frame of the target obstacle in the first target detection result is enclosed by the two-dimensional coordinates of the contour points.
In a specific implementation, whether the two areas match can be determined from the ratio of the area S_lidar of the detection frame of the target obstacle in the first target detection result to the area S_cam of the detection frame of the target obstacle in the second target detection result. In a specific example, the two are considered to match if S_lidar/S_cam ≤ 1.5. In another specific example, the two are considered to match if S_cam/S_lidar ≤ 1.5.
In this embodiment, the reliability of judging whether the external parameters of the camera are abnormal can be improved by adding the area constraint conditions of the two detection frames.
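The area constraint can be sketched as follows; the symmetric larger-over-smaller ratio combines the two one-sided examples given above, and the names are illustrative.

    def areas_match(s_lidar, s_cam, max_ratio=1.5):
        # S_lidar and S_cam match when their ratio, taken in either
        # direction, does not exceed max_ratio.
        if s_lidar <= 0 or s_cam <= 0:
            return False
        return max(s_lidar, s_cam) / min(s_lidar, s_cam) <= max_ratio

    print(areas_match(1.2e4, 1.0e4))  # True: ratio 1.2
    print(areas_match(2.0e4, 1.0e4))  # False: ratio 2.0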
In an alternative embodiment, as shown in fig. 4, the step S103 specifically includes the following steps S103a to S103b:
step S103a, calculating a lateral angle difference according to the abscissa of the first center position and the abscissa of the second center position. And calculating a transverse angle difference value according to the difference value between the abscissa of the first central position and the abscissa of the second central position and the internal parameter of the camera.
Step S103b, calculating a longitudinal angle difference according to the ordinate of the first center position and the ordinate of the second center position. And calculating a longitudinal angle difference value according to the difference value between the ordinate of the first central position and the ordinate of the second central position and the internal reference of the camera.
Step S103c, determining whether the external parameters of the camera are abnormal according to the transverse angle difference value and the longitudinal angle difference value.
In a specific implementation, in order to improve the accuracy of the transverse angle difference and the longitudinal angle difference, the difference x_diff between the abscissa of the first center position and the abscissa of the second center position and the difference y_diff between the ordinate of the first center position and the ordinate of the second center position may be filtered, and the transverse angle difference and the longitudinal angle difference are then calculated from the filtered abscissa difference x_filter and the filtered ordinate difference y_filter. In a specific example, for the first center positions and second center positions corresponding to all target obstacles, statistical histograms with a fixed step size are constructed for all x_diff values and all y_diff values respectively; the bin containing the mode is located, the data in the other bins are marked as invalid, and the mean of all data in the mode bin is calculated, yielding the histogram-filtered abscissa difference x_filter and ordinate difference y_filter. In a specific example, the transverse angle difference x_angle and the longitudinal angle difference y_angle can be calculated using the following formulas:

x_angle = atan2(x_filter, fx) * 180.0/π;

y_angle = atan2(y_filter, fy) * 180.0/π;

where fx and fy are internal parameters of the camera.
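For illustration, the histogram filtering and angle computation above can be sketched in Python as follows; the bin step and example values are assumptions of the sketch.

    import math
    import numpy as np

    def mode_bin_mean(values, step=1.0):
        # Build fixed-step histogram bins, keep only the data falling in
        # the mode bin, and return the mean of the surviving data.
        values = np.asarray(values, float)
        edges = np.arange(values.min(), values.max() + step, step)
        idx = np.digitize(values, edges)
        mode = np.bincount(idx).argmax()
        return values[idx == mode].mean()

    def angle_diffs(x_diffs, y_diffs, fx, fy, step=1.0):
        # x_angle = atan2(x_filter, fx) * 180/pi, likewise for y_angle.
        x_filter = mode_bin_mean(x_diffs, step)
        y_filter = mode_bin_mean(y_diffs, step)
        return (math.atan2(x_filter, fx) * 180.0 / math.pi,
                math.atan2(y_filter, fy) * 180.0 / math.pi)

    x_diffs = [4.8, 5.1, 5.0, 30.0]  # one outlier pair
    y_diffs = [1.9, 2.1, 2.0, -15.0]
    print(angle_diffs(x_diffs, y_diffs, fx=1000.0, fy=1000.0))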
In this embodiment, a transverse angle difference and a longitudinal angle difference are calculated from the first center position and the second center position; the transverse angle difference represents the deviation of the external parameters of the camera in the transverse direction, the longitudinal angle difference represents the deviation in the longitudinal direction, and whether the external parameters of the camera are abnormal can thus be accurately determined from the two differences.
In an alternative embodiment, as shown in fig. 5, the step S103c specifically includes the following steps S501 to S503:
step S501, determining whether the transverse angle difference is smaller than a first threshold and the longitudinal angle difference is smaller than a second threshold, if yes, executing step S502, otherwise, executing step S503.
The first threshold and the second threshold may be set according to actual situations, and the first threshold and the second threshold may be set to be the same, for example, set to 0.3 degrees, or may be set to be different.
Step S502, determining that the external parameters of the camera are normal. A transverse angle difference smaller than the first threshold and a longitudinal angle difference smaller than the second threshold indicate that the deviations of the external parameters of the camera in the transverse and longitudinal directions are both small, so the external parameters of the camera are determined to be normal.
Step S503, determining that the external parameters of the camera are abnormal. A transverse angle difference greater than or equal to the first threshold indicates a large deviation of the external parameters of the camera in the transverse direction, and a longitudinal angle difference greater than or equal to the second threshold indicates a large deviation in the longitudinal direction; in either case, the external parameters of the camera are determined to be abnormal.
In this embodiment, whether the external parameters of the camera are normal or abnormal can be accurately determined by comparing the transverse angle difference with the first threshold and the longitudinal angle difference with the second threshold.
In a specific implementation, the abnormal conditions of the external parameters of the camera can be further subdivided. In a specific example, if the transverse angle difference is greater than the first threshold but less than twice the first threshold, and the longitudinal angle difference is greater than the second threshold but less than twice the second threshold, it is determined that the external parameters of the camera have a minor abnormality. If the transverse angle difference is greater than twice the first threshold or the longitudinal angle difference is greater than twice the second threshold, it is determined that the external parameters of the camera have a larger abnormality.
In an alternative embodiment, as shown in fig. 6, step S504 is further included after step S503 described above: and calibrating the external parameters of the camera according to the transverse angle difference value and the longitudinal angle difference value to obtain calibrated external parameters.
The transverse angle difference and the longitudinal angle difference may also be referred to as Euler angle differences. In a specific implementation, for convenience of calculation, the Euler angle differences can be converted into quaternion form to obtain a rotation increment q_incre, and the external parameters of the camera are then calibrated according to the following formula:

q_modify = q_initial * q_incre^(-1);

where q_modify is the calibrated external parameter and q_initial is the external parameter before calibration. In a specific implementation, the external parameter before calibration can be replaced with the calibrated external parameter, which can then be used when the camera subsequently collects image data.
In this embodiment, the external parameters of the camera may be calibrated in real time by using the transverse angle difference value and the longitudinal angle difference value, so that the accuracy of the external parameters of the camera may be improved, and further the reliability and safety of the automatic driving vehicle may be improved.
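A minimal sketch of this calibration step, converting the Euler angle differences into a rotation increment and applying q_modify = q_initial * q_incre^(-1); the mapping of the transverse and longitudinal differences onto yaw and pitch axes is an assumption of the sketch.

    from scipy.spatial.transform import Rotation

    def calibrate_extrinsic(q_initial, x_angle, y_angle):
        # Rotation increment from the angle differences (degrees); the
        # "zyx" convention and axis assignment are illustrative assumptions.
        q_incre = Rotation.from_euler("zyx", [x_angle, y_angle, 0.0],
                                      degrees=True)
        # q_modify = q_initial * q_incre^-1
        return q_initial * q_incre.inv()

    q_initial = Rotation.identity()  # extrinsic rotation before calibration
    q_modify = calibrate_extrinsic(q_initial, x_angle=0.5, y_angle=0.2)
    print(q_modify.as_quat())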
In a specific implementation, if the external parameter of the camera is determined to be abnormal, alarm information can be output.
According to an embodiment of the present disclosure, there is further provided an embodiment of a processing apparatus for camera external parameters, where fig. 7 is a schematic diagram of the processing apparatus for camera external parameters according to an embodiment of the present disclosure, and the apparatus includes a detection result obtaining module 701, a central position determining module 702, and an external parameter anomaly determining module 703. The detection result obtaining module 701 is configured to obtain a first obstacle detection result and a second obstacle detection result; the first obstacle detection result is a detection result of the obstacle based on the point cloud data of the radar, and the second obstacle detection result is a detection result of the obstacle based on the image data of the camera. The central position determining module 702 is configured to determine a first central position corresponding to a target obstacle according to a first target detection result, and determine a second central position corresponding to the target obstacle according to a second target detection result; the first target detection result is a detection result of the first obstacle detection result corresponding to the target obstacle, and the second target detection result is a detection result of the second obstacle detection result corresponding to the target obstacle. The external parameter anomaly determination module 703 is configured to determine whether external parameters of the camera are anomaly according to the first center position and the second center position.
It should be noted that the detection result obtaining module 701, the center position determining module 702, and the external parameter anomaly determining module 703 correspond to steps S101 to S103 in the above embodiment, and the three modules are the same as examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in the above embodiment.
In an optional implementation manner, the detection result corresponding to the target obstacle in the first obstacle detection result is at least one, and the detection result corresponding to the target obstacle in the second obstacle detection result is at least one, and the processing device further includes: and the detection result determining module is used for determining the first target detection result and the second target detection result according to the time stamp of the detection result corresponding to the target obstacle in the first obstacle detection result and the time stamp of the detection result corresponding to the target obstacle in the second obstacle detection result.
In an optional implementation manner, the central position determining module is specifically configured to convert three-dimensional coordinates of a contour point of the target obstacle in the first target detection result to a camera coordinate system to obtain two-dimensional coordinates of the contour point; and determining a first center position corresponding to the target obstacle according to the two-dimensional coordinates of the contour point.
In an optional implementation manner, the processing device further includes a speed compensation module, configured to perform speed compensation on three-dimensional coordinates of a contour point of the target obstacle in the first target detection result.
In an alternative embodiment, the processing device further includes: the positioning information acquisition module, used for acquiring positioning information, wherein the positioning information includes a pose; the pose interpolation module, used for interpolating according to the positioning information to respectively obtain the pose corresponding to the first target detection result and the pose corresponding to the second target detection result; and the pose compensation module, used for performing pose compensation on the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result according to the pose corresponding to the first target detection result and the pose corresponding to the second target detection result.
In an optional embodiment, the extrinsic parameter anomaly determination module is specifically configured to determine whether the extrinsic parameter of the camera is abnormal according to the first center position and the second center position when a difference between an abscissa of the first center position and an abscissa of the second center position is smaller than a first average value, and a difference between an ordinate of the first center position and an ordinate of the second center position is smaller than a second average value; the first average value is an average value of differences between abscissas of the first center positions and abscissas of the second center positions corresponding to all the target obstacles, and the second average value is an average value of differences between ordinates of the first center positions and ordinates of the second center positions corresponding to all the target obstacles.
In an optional implementation manner, the external parameter anomaly determination module is specifically configured to determine, when an area of a detection frame of the target obstacle in the first target detection result matches an area of a detection frame of the target obstacle in the second target detection result, whether the external parameter of the camera is abnormal according to the first center position and the second center position, where the detection frame of the target obstacle in the first target detection result is surrounded by two-dimensional coordinates of the contour point.
In an optional implementation manner, the external parameter anomaly determination module specifically includes: a first calculation unit configured to calculate a lateral angle difference value according to an abscissa of the first center position and an abscissa of the second center position; a second calculation unit for calculating a longitudinal angle difference value according to the ordinate of the first center position and the ordinate of the second center position; and the calibration determining unit is used for determining whether the external parameters of the camera are abnormal according to the transverse angle difference value and the longitudinal angle difference value.
In an alternative embodiment, the calibration determination unit is specifically configured to determine that the external parameters of the camera are normal when the lateral angle difference is smaller than a first threshold value and the longitudinal angle difference is smaller than a second threshold value, and to determine that the external parameters of the camera are abnormal otherwise.
In an alternative embodiment, the processing device further includes: and the external parameter calibration module is used for calibrating the external parameters of the camera according to the transverse angle difference value and the longitudinal angle difference value under the condition that the external parameters of the camera are determined to be abnormal, so as to obtain calibrated external parameters.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of users' personal information comply with the relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks. The device 800 may also include various sensors, such as radar, cameras, positioning systems, and the like.
The computing unit 801 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the methods and processes described above, for example, the camera external parameter processing method. For example, in some embodiments, the camera external parameter processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the camera external parameter processing method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the camera external parameter processing method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
According to an embodiment of the present disclosure, the disclosure further provides an autonomous vehicle, which includes the electronic device of the above embodiment, can execute the camera external parameter processing method of the embodiments of the present disclosure, and is suitable for various autonomous driving scenarios.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (24)

1. A processing method of camera external parameters comprises the following steps:
acquiring a first obstacle detection result and a second obstacle detection result; the first obstacle detection result is a detection result of the obstacle based on the point cloud data of the radar, and the second obstacle detection result is a detection result of the obstacle based on the image data of the camera;
Determining a first central position corresponding to a target obstacle according to a first target detection result, and determining a second central position corresponding to the target obstacle according to a second target detection result; the first target detection result is a detection result of the first obstacle detection result corresponding to the target obstacle, and the second target detection result is a detection result of the second obstacle detection result corresponding to the target obstacle;
and determining whether the external parameters of the camera are abnormal according to the first central position and the second central position.
2. The processing method according to claim 1, wherein at least one of the first obstacle detection results corresponds to the target obstacle, and at least one of the second obstacle detection results corresponds to the target obstacle, the processing method further comprising:
and determining the first target detection result and the second target detection result according to the time stamp of the detection result of the corresponding target obstacle in the first obstacle detection result and the time stamp of the detection result of the corresponding target obstacle in the second obstacle detection result.
3. The processing method according to claim 1, wherein the determining the first center position corresponding to the target obstacle according to the first target detection result includes:
converting the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result into a camera coordinate system to obtain two-dimensional coordinates of the contour points;
and determining a first center position corresponding to the target obstacle according to the two-dimensional coordinates of the contour point.
4. A processing method according to claim 3, wherein before said converting the three-dimensional coordinates of the contour point of the target obstacle in the first target detection result into the camera coordinate system, further comprising:
and carrying out speed compensation on the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result.
5. A processing method according to claim 3, further comprising:
acquiring positioning information; wherein the positioning information includes a pose;
interpolation is carried out according to the positioning information, and the pose corresponding to the first target detection result and the pose corresponding to the second target detection result are obtained respectively;
before the converting the three-dimensional coordinates of the contour point of the target obstacle in the first target detection result into the camera coordinate system, the method further comprises:
And performing pose compensation on the three-dimensional coordinates of the outline points of the target obstacle in the first target detection result according to the pose corresponding to the first target detection result and the pose corresponding to the second target detection result.
6. The processing method of claim 1, wherein the determining whether the external parameters of the camera are abnormal based on the first center position and the second center position comprises:
if the difference between the abscissa of the first center position and the abscissa of the second center position is smaller than a first average value, and the difference between the ordinate of the first center position and the ordinate of the second center position is smaller than a second average value, determining whether the external parameters of the camera are abnormal according to the first center position and the second center position;
the first average value is an average value of differences between abscissas of the first center positions and abscissas of the second center positions corresponding to all the target obstacles, and the second average value is an average value of differences between ordinates of the first center positions and ordinates of the second center positions corresponding to all the target obstacles.
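The averaging gate of claim 6, sketched over stacked center positions for all target obstacles:

```python
import numpy as np

def below_average_mask(first_centers, second_centers):
    """first_centers, second_centers: (N, 2) pixel centers for all target
    obstacles. Keeps the pairs whose per-axis deviation is below the mean
    deviation over all pairs, as the gate in claim 6 describes."""
    dx = np.abs(first_centers[:, 0] - second_centers[:, 0])
    dy = np.abs(first_centers[:, 1] - second_centers[:, 1])
    return (dx < dx.mean()) & (dy < dy.mean())
```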
7. The processing method according to claim 3, wherein the determining whether the external parameters of the camera are abnormal according to the first center position and the second center position comprises:
if the area of the detection frame of the target obstacle in the first target detection result matches the area of the detection frame of the target obstacle in the second target detection result, determining whether the external parameters of the camera are abnormal according to the first center position and the second center position;
wherein the detection frame of the target obstacle in the first target detection result is enclosed by the two-dimensional coordinates of the contour points.
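For the area matching in claim 7, the patent does not state what counts as "matched"; the ratio tolerance below is therefore an assumption:

```python
import numpy as np

def areas_match(uv_contour, camera_box, ratio_tol=0.5):
    """uv_contour: (2, N) projected contour pixels; camera_box:
    (x1, y1, x2, y2) from the camera detection."""
    w = uv_contour[0].max() - uv_contour[0].min()
    h = uv_contour[1].max() - uv_contour[1].min()
    area_radar = w * h
    x1, y1, x2, y2 = camera_box
    area_cam = (x2 - x1) * (y2 - y1)
    return min(area_radar, area_cam) / max(area_radar, area_cam) >= ratio_tol
```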
8. The processing method of claim 1, wherein the determining whether the external parameters of the camera are abnormal based on the first center position and the second center position comprises:
calculating a transverse angle difference value according to the abscissa of the first central position and the abscissa of the second central position;
calculating a longitudinal angle difference value according to the ordinate of the first central position and the ordinate of the second central position;
and determining whether the external parameters of the camera are abnormal according to the transverse angle difference value and the longitudinal angle difference value.
9. The processing method of claim 8, wherein the determining whether the external parameters of the camera are abnormal based on the lateral angle difference and the longitudinal angle difference comprises: if the transverse angle difference value is smaller than a first threshold value and the longitudinal angle difference value is smaller than a second threshold value, determining that the external parameters of the camera are normal, otherwise, determining that the external parameters of the camera are abnormal.
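A sketch of the angle computation and thresholds in claims 8 and 9, assuming a pinhole model; the formula and the 0.5 degree thresholds are illustrative, not from the patent:

```python
import numpy as np

def angle_differences(c1, c2, K):
    """Transverse and longitudinal angle differences (degrees) between the
    two center positions; K is the 3x3 intrinsic matrix (assumed input)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    d_trans = np.degrees(np.arctan((c1[0] - cx) / fx) - np.arctan((c2[0] - cx) / fx))
    d_long = np.degrees(np.arctan((c1[1] - cy) / fy) - np.arctan((c2[1] - cy) / fy))
    return d_trans, d_long

def extrinsics_normal(d_trans, d_long, trans_tol=0.5, long_tol=0.5):
    """Claim 9 style decision; both tolerances are illustrative."""
    return abs(d_trans) < trans_tol and abs(d_long) < long_tol
```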
10. The processing method according to claim 9, further comprising: if the external parameters of the camera are determined to be abnormal, calibrating the external parameters of the camera according to the transverse angle difference value and the longitudinal angle difference value to obtain calibrated external parameters.
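One plausible reading of the recalibration in claim 10: fold the measured angle differences back into the extrinsic rotation as small corrections. The update rule and sign convention are assumptions; the patent does not spell them out:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def recalibrate(T_cam_lidar, d_trans_deg, d_long_deg):
    """Apply small yaw/pitch corrections, derived from the transverse and
    longitudinal angle differences, to the extrinsic rotation."""
    fix = Rotation.from_euler("yx", [-d_trans_deg, -d_long_deg], degrees=True)
    T = np.array(T_cam_lidar, dtype=float)
    T[:3, :3] = fix.as_matrix() @ T[:3, :3]
    return T
```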
11. A processing device of camera external parameters, comprising:
the detection result acquisition module is used for acquiring a first obstacle detection result and a second obstacle detection result; the first obstacle detection result is a detection result of the obstacle based on the point cloud data of the radar, and the second obstacle detection result is a detection result of the obstacle based on the image data of the camera;
the center position determining module is used for determining a first center position corresponding to the target obstacle according to a first target detection result and determining a second center position corresponding to the target obstacle according to a second target detection result; wherein the first target detection result is the detection result in the first obstacle detection result that corresponds to the target obstacle, and the second target detection result is the detection result in the second obstacle detection result that corresponds to the target obstacle;
and the external parameter abnormality determining module is used for determining whether the external parameters of the camera are abnormal according to the first center position and the second center position.
12. The processing device of claim 11, wherein at least one of the first obstacle detection results corresponds to the target obstacle, and at least one of the second obstacle detection results corresponds to the target obstacle, the processing device further comprising:
and the detection result determining module is used for determining the first target detection result and the second target detection result according to the time stamp of the detection result corresponding to the target obstacle in the first obstacle detection result and the time stamp of the detection result corresponding to the target obstacle in the second obstacle detection result.
13. The processing device of claim 11, wherein the center position determining module is specifically configured to convert the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result into the camera coordinate system to obtain the two-dimensional coordinates of the contour points, and to determine the first center position corresponding to the target obstacle according to the two-dimensional coordinates of the contour points.
14. The processing device according to claim 13, further comprising a speed compensation module for speed compensating three-dimensional coordinates of a contour point of a target obstacle in the first target detection result.
15. The processing device of claim 13, further comprising:
the positioning information acquisition module is used for acquiring positioning information; wherein the positioning information includes a pose;
the pose interpolation module is used for interpolating according to the positioning information to respectively obtain a pose corresponding to the first target detection result and a pose corresponding to the second target detection result;
and the pose compensation module is used for performing pose compensation on the three-dimensional coordinates of the contour points of the target obstacle in the first target detection result according to the pose corresponding to the first target detection result and the pose corresponding to the second target detection result.
16. The processing device according to claim 11, wherein the external parameter abnormality determining module is specifically configured to determine whether the external parameters of the camera are abnormal according to the first center position and the second center position in the case that the difference between the abscissa of the first center position and the abscissa of the second center position is smaller than a first average value and the difference between the ordinate of the first center position and the ordinate of the second center position is smaller than a second average value;
the first average value is an average value of differences between abscissas of the first center positions and abscissas of the second center positions corresponding to all the target obstacles, and the second average value is an average value of differences between ordinates of the first center positions and ordinates of the second center positions corresponding to all the target obstacles.
17. The processing device of claim 13, wherein the external parameter abnormality determining module is specifically configured to determine whether the external parameters of the camera are abnormal according to the first center position and the second center position in the case that the area of the detection frame of the target obstacle in the first target detection result matches the area of the detection frame of the target obstacle in the second target detection result;
wherein the detection frame of the target obstacle in the first target detection result is enclosed by the two-dimensional coordinates of the contour points.
18. The processing device of claim 11, wherein the external parameter abnormality determining module specifically comprises:
a first calculation unit configured to calculate a lateral angle difference value according to an abscissa of the first center position and an abscissa of the second center position;
a second calculation unit for calculating a longitudinal angle difference value according to the ordinate of the first center position and the ordinate of the second center position;
and the calibration determining unit is used for determining whether the external parameters of the camera are abnormal according to the transverse angle difference value and the longitudinal angle difference value.
19. The processing device according to claim 18, wherein the calibration determining unit is specifically configured to determine that the external parameters of the camera are normal in the case that the transverse angle difference value is smaller than a first threshold value and the longitudinal angle difference value is smaller than a second threshold value, and otherwise to determine that the external parameters of the camera are abnormal.
20. The processing device of claim 19, further comprising: an external parameter calibration module, used for calibrating the external parameters of the camera according to the transverse angle difference value and the longitudinal angle difference value in the case that the external parameters of the camera are determined to be abnormal, so as to obtain calibrated external parameters.
21. An electronic device, comprising:
at least one radar for collecting point cloud data;
at least one camera for acquiring image data;
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the processing method of any one of claims 1-10.
22. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the processing method according to any one of claims 1-10.
23. A computer program product comprising a computer program which, when executed by a processor, implements the processing method according to any of claims 1-10.
24. An autonomous vehicle comprising the electronic device of claim 21.
CN202211738409.4A 2022-12-30 2022-12-30 Camera external parameter processing method and device, electronic equipment and automatic driving vehicle Pending CN116309858A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211738409.4A CN116309858A (en) 2022-12-30 2022-12-30 Camera external parameter processing method and device, electronic equipment and automatic driving vehicle

Publications (1)

Publication Number Publication Date
CN116309858A true CN116309858A (en) 2023-06-23

Family

ID=86802161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211738409.4A Pending CN116309858A (en) 2022-12-30 2022-12-30 Camera external parameter processing method and device, electronic equipment and automatic driving vehicle

Country Status (1)

Country Link
CN (1) CN116309858A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination