CN110458893B - Roll angle calibration method and system for advanced driving assistance visual perception sensor - Google Patents

Roll angle calibration method and system for advanced driving assistance visual perception sensor

Info

Publication number
CN110458893B
CN110458893B (application CN201910691585.9A)
Authority
CN
China
Prior art keywords
point
position information
pixel
optimal identification
identification point
Prior art date
Legal status
Active
Application number
CN201910691585.9A
Other languages
Chinese (zh)
Other versions
CN110458893A (en)
Inventor
王军德
李立
汤戈
Current Assignee
Wuhan Kotei Informatics Co Ltd
Original Assignee
Wuhan Kotei Informatics Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Kotei Informatics Co Ltd filed Critical Wuhan Kotei Informatics Co Ltd
Priority to CN201910691585.9A priority Critical patent/CN110458893B/en
Publication of CN110458893A publication Critical patent/CN110458893A/en
Application granted granted Critical
Publication of CN110458893B publication Critical patent/CN110458893B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The embodiment of the invention provides a roll angle calibration method and a roll angle calibration system for an advanced driver assistance visual perception sensor. The method comprises the following steps: acquiring attitude information of a camera and calibrating the pixel translation distance of the center of a target object projected onto the photosensitive device of the camera; calculating pixel position information of the identification points of the calibration target object through a convolution kernel, based on the pixel translation distance; screening the pixel position information to obtain a candidate point aggregation area; performing sub-pixel level optimization on the candidate point aggregation area to obtain the position information of the optimal identification points, which comprise a left optimal identification point and a right optimal identification point; and obtaining the roll angle from the position information of the left and right optimal identification points. The embodiment of the invention sets up a well-defined operating environment, performs centimeter-level coarse calibration of the position information in the external parameters, and then performs fine calibration of the roll angle in the external parameters, thereby achieving sub-pixel calibration accuracy.

Description

Roll angle calibration method and system for advanced driving assistance visual perception sensor
Technical Field
The invention relates to the technical field of driving, in particular to a roll angle calibration method and system of an advanced driving assistance visual perception sensor.
Background
Currently, more and more new passenger cars are equipped with advanced driver assistance systems. According to forecasts in an Advanced Driving Assistance System (ADAS) standardization research report based on standards released by the China Automotive Technology and Research Center, between 2015 and 2020 the combined market share in China of vehicles with Driving Assistance (DA) and Partial Automation (PA) was about 50%; between 2020 and 2025, the share of DA and PA vehicles is expected to remain stable while Highly Automated (HA) vehicles reach about 15%; and between 2025 and 2030, the market share of Fully Automated (FA) vehicles approaches 10%.
Advanced driver assistance and fully automatic driving functions rely heavily on vision sensors, which are often even the primary sensing modality, so the reliability and safety of the vision sensor are important. Calibration of the vision sensor refers to the process by which the system converts two-dimensional data from the vision sensor into three-dimensional data that can be used for vehicle action decision-making. Advanced driver assistance systems and fully automatic driving systems perform this calibration before reaching the end user, so as to ensure the reliability and safety of the system.
A general calibration system comprises external-parameter calibration and internal-parameter calibration, where external-parameter calibration covers both position information and attitude information. In the fields of advanced driver assistance and fully automatic driving, the internal parameters of a vision sensor are usually calibrated and provided by the manufacturer, while the external parameters are calibrated on the vehicle-mounted vision system itself. The external-parameter position information follows the designed mounting position of the vehicle-mounted system and has only a small influence on calibration accuracy. The attitude information in the external parameters, on the other hand, has a large influence on accuracy.
In the prior art, calibration of the external-parameter attitude information usually focuses on the yaw angle (Yaw) and the pitch angle (Pitch) and neglects the roll angle (Roll). As a consequence, measurement error that actually stems from the roll angle is projected onto the yaw and pitch angles, causing a certain degradation of reliability and safety. Recently, vision sensor resolution has improved greatly on the hardware side: resolutions have moved from VGA to 720p, 1080p and 2K as the mainstream, and the prospect of 4K and 8K lays a foundation for further technical upgrades. At the same time, application functions need to expand, such as visual real-time localization, high-precision map drawing, and precise pedestrian and vehicle tracking. Finally, the safety level of driver assistance needs to rise, for example from Lane Departure Warning (LDW) to Lane Keeping Assist (LKA): advanced driver assistance functions are moving from driver-oriented prompts to direct vehicle-body control, and the requirements on vision sensor calibration are increasingly strict. Therefore, in the calibration of the onboard camera, the need for accurate roll angle calibration becomes increasingly pressing.
Disclosure of Invention
To address the above problems, embodiments of the present invention provide a roll angle calibration method and system for an advanced driver assistance visual perception sensor that overcomes, or at least partially solves, the above problems.
According to a first aspect of the embodiments of the present invention, there is provided a roll angle calibration method for an advanced driver assistance visual perception sensor, the method including: acquiring attitude information of a camera and calibrating a pixel translation distance projected from the center of a target object to a photosensitive device of the camera; calculating pixel position information of the identification point of the calibration target object through a convolution kernel based on the pixel translation distance; screening pixel position information to obtain a candidate point gathering area; performing sub-pixel level optimization on the candidate point gathering area to obtain the position information of the optimal identification point; the optimal identification points comprise a left optimal identification point and a right optimal identification point; and obtaining the roll angle through the position information of the left optimal identification point and the right optimal identification point.
According to a second aspect of the embodiments of the present invention, there is provided a roll angle calibration system for an advanced driver assistance visual perception sensor, the system comprising: the acquisition module is used for acquiring the attitude information of the camera and calibrating the pixel translation distance of the center of the target object projected onto the photosensitive device of the camera; the calculation module is used for calculating the pixel position information of the identification point of the calibration target object through a convolution kernel based on the pixel translation distance; the screening module is used for screening the pixel position information to obtain a candidate point gathering area; the optimization module is used for performing sub-pixel level optimization on the candidate point aggregation area to obtain the position information of the optimal identification point; the optimal identification points comprise a left optimal identification point and a right optimal identification point; and the obtaining module is used for obtaining the roll angle through the position information of the left optimal identification point and the right optimal identification point.
According to a third aspect of the embodiments of the present invention, there is provided an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the roll angle calibration method for an advanced driving assistance visual perception sensor as provided in any one of the various possible implementations of the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a roll angle calibration method for an advanced driver assistance visual perception sensor as provided in any one of the various possible implementations of the first aspect.
According to the roll angle calibration method and system for the advanced driver assistance visual perception sensor provided by the embodiment of the invention, a well-defined operating environment is set up, the position information in the external parameters is coarsely calibrated at centimeter level, and the roll angle in the external parameters is then finely calibrated, achieving sub-pixel calibration accuracy. On the test vision sensor, the visual conversion error for the position of an obstacle at a distance of 50 m was reduced from 8 cm to within 2 cm.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from these without inventive effort.
Fig. 1 is a schematic flowchart of a roll angle calibration method for an advanced driver assistance visual perception sensor according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a calibration environment setting of a vehicle-mounted camera according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating an imaging principle of a vehicle-mounted camera according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of points A and B provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of an A-point convolution kernel according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a B-point convolution kernel according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a filtered candidate point area according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of peak calculation using values near the maximum coordinate according to an embodiment of the present invention;
FIG. 9 is a schematic flow chart of a method for calculating the peak value using the gravity center principle according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of peak calculation using the gravity center principle according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a one-dimensional quadratic model in other peak-calculating models provided by embodiments of the present invention;
FIG. 12 is a schematic diagram of peak calculation using the normal distribution model among the other peak calculation models provided by the embodiment of the present invention;
FIG. 13 is a schematic diagram illustrating roll angle calculation according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of a roll angle calibration system of an advanced driver assistance visual perception sensor according to an embodiment of the present invention;
fig. 15 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiments of the invention belong to the technical field of advanced driver assistance and automatic driving, and in particular relate to a calibration method for automatically adjusting and correcting the rotation angle in the external parameters of a visual perception sensor while improving the accuracy of that adjustment. The method is applied to parameter configuration before advanced driver assistance and automatic driving technologies are deployed in real driving scenes, so as to meet the reliability and safety requirements of those technologies.
Referring to fig. 1, an embodiment of the present invention provides a roll angle calibration method for an advanced driver assistance visual perception sensor, including:
step 101, according to a preset scene, obtaining translation posture information of a camera and calibrating a pixel translation distance of a center of a target object projected onto a photosensitive device of the camera.
As an alternative embodiment, acquiring the translational pose information of the camera includes: configuring an operating environment according to a preset scene, and calibrating the target object in that operating environment; and obtaining the translational pose information of the camera according to the position information of the calibration target object and the position information of the camera.
Referring to fig. 2, the working environment is arranged according to the designed scene, the target object is marked in the environment, the position of the camera is known, and the attitude information is to be solved. Specifically, the working environment is configured as follows: the vehicle centerline extends to the center of the calibration plate, and the position information of the calibration plate and of the vehicle is known. The camera is installed at the front of the vehicle interior, close to the vehicle centerline; its translation from the vehicle center is known, with a millimeter-level precision error. The rotational pose of the camera is unknown, i.e. it is the target result of the calculation.
Referring to fig. 3, according to the CMOS camera pinhole imaging principle, the pixel translation distance from the center of the target object projected onto the camera photosensitive device is calculated. Specifically, the abscissa and ordinate of the imaging center, denoted here (u0, v0), are determined; this coordinate lies on the camera's internal optical axis. Using fig. 3, the distance between the imaging center and the projection of the template's center point onto the picture can be calculated, i.e. the offset (Δu, Δv). The projected coordinates of the center of the target object on the picture are then (u0 + Δu, v0 + Δv). The focal length, in pixels, can be obtained from the camera parameters.
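As an illustrative sketch of this pinhole projection step (not part of the patent text — the helper name, the symbols, and the 2 µm pixel pitch are assumptions), the pixel offset of the target center from the imaging center follows from offset = f · X / Z:

```python
import math

def pixel_offset(lateral_m, height_m, forward_m, focal_px):
    """Pinhole model: a point at (X, Y, Z) in the camera frame projects at an
    offset of (f*X/Z, f*Y/Z) pixels from the imaging centre (u0, v0)."""
    du = focal_px * lateral_m / forward_m
    dv = focal_px * height_m / forward_m
    return du, dv

# Focal length in pixels from the 3.6395 mm lens noted in the description,
# assuming (hypothetically) a 2 um pixel pitch:
f_px = 3.6395e-3 / 2.0e-6            # ~1819.75 px

# Target centre 0.10 m right of and 0.05 m above the optical axis, 5 m ahead:
du, dv = pixel_offset(0.10, -0.05, 5.0, f_px)
```

The offsets (Δu, Δv) obtained this way are what the calibration compares against the measured projection of the calibration-plate center.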
Step 102, calculating the pixel position information of the identification points of the calibration target object through corresponding convolution kernels, based on the morphological design of the target object.
As an alternative embodiment, calculating the pixel position information of the identification points of the calibration target object through corresponding convolution kernels, based on the morphological design of the target object, includes: acquiring points A and B in the imaging of the calibration target object through different convolution kernels, where point A is used to determine the identification point and point B is used to confirm the correctness of point A.
As an alternative example, point A corresponds to a first 19 × 19 kernel in which the 8 × 8 blocks at the top left and bottom right are all -1, the 8 × 8 blocks at the top right and bottom left are all 1, and the remaining entries are 0; point B corresponds to a second 9 × 9 kernel in which the 3 × 3 blocks at the top left and bottom right are all 1 and the 3 × 3 blocks at the top right and bottom left are all -1.
Specifically, referring to figs. 4, 5 and 6, the pixel position information of the left and right marker points of the target object is calculated using different convolution kernels, by finding points A and B in the imaging. Point A is detected by two-dimensional convolution with a 19 × 19 kernel whose 8 × 8 blocks at the upper left and lower right are all -1, whose 8 × 8 blocks at the upper right and lower left are all 1, and whose other entries are 0. The convolution kernel for B is 9 × 9, with 3 × 3 blocks of all 1 at the top left and bottom right and 3 × 3 blocks of all -1 at the top right and bottom left. A, B combinations greater than the threshold are then found: point A gives the exact location, and the point B to the right or left of point A is used to confirm the correctness of point A.
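The two kernels described above can be sketched as follows (the helper name `corner_kernel` and the synthetic test patch are illustrative, not from the patent):

```python
import numpy as np

def corner_kernel(size, block, sign):
    """Checkerboard-corner kernel: the block x block corners at top-left and
    bottom-right get `sign`, those at top-right and bottom-left get `-sign`,
    and all other entries stay 0."""
    k = np.zeros((size, size))
    k[:block, :block] = sign        # top-left
    k[-block:, -block:] = sign      # bottom-right
    k[:block, -block:] = -sign      # top-right
    k[-block:, :block] = -sign      # bottom-left
    return k

kernel_a = corner_kernel(19, 8, sign=-1)   # point A: -1 at TL/BR, +1 at TR/BL
kernel_b = corner_kernel(9, 3, sign=1)     # point B: +1 at TL/BR, -1 at TR/BL

# Response of kernel A centred on an ideal A-type corner
# (dark top-left/bottom-right quadrants, bright top-right/bottom-left):
patch = np.ones((19, 19))
patch[:9, :9] = 0.0
patch[9:, 9:] = 0.0
response_a = float(np.sum(kernel_a * patch))   # strongly positive on a true corner
```

In a full pipeline the kernel would be slid over the whole image (e.g. with a 2-D convolution routine) and the per-pixel responses thresholded, as described in the next step.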
Step 103, screening the pixel position information to obtain a candidate point aggregation area.
As an alternative embodiment, the step of screening the pixel position information to obtain the candidate point aggregation area specifically includes: screening the pixel position information and collecting the points exceeding a preset threshold to form candidate points; and taking the area where the candidate points aggregate as the candidate point aggregation area, marked out with a black frame.
Specifically, referring to fig. 7, candidate points are formed after collecting points whose convolution kernel calculation results exceed a preset threshold, and they are used for subsequent calculation, and a candidate point aggregation area is marked with a black box. In other words, the convolution kernel results are screened, and those exceeding a preset threshold are candidate points for the optimization scheme, and these points are marked with black boxes.
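One way to sketch this screening step (the helper name, the 8-connectivity choice, and the threshold value are assumptions, not specified by the patent) is to threshold the convolution responses and group connected over-threshold pixels into bounding boxes, which play the role of the black-boxed candidate areas:

```python
from collections import deque

import numpy as np

def candidate_regions(response, threshold):
    """Group 8-connected over-threshold response pixels into candidate
    aggregation areas, returned as (min_x, min_y, max_x, max_y) boxes."""
    mask = response > threshold
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                q = deque([(sy, sx)])      # flood-fill one connected cluster
                seen[sy, sx] = True
                ys, xs = [], []
                while q:
                    y, x = q.popleft()
                    ys.append(y)
                    xs.append(x)
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                q.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

demo = np.zeros((10, 10))
demo[2:4, 2:4] = 5.0     # one cluster of over-threshold responses
demo[7, 7] = 5.0         # an isolated candidate
boxes = candidate_regions(demo, 1.0)   # two candidate areas
```

Each returned box then feeds the sub-pixel optimization of the next step.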
Step 104, performing sub-pixel level optimization on the candidate point aggregation area to obtain the position information of the optimal identification points; the optimal identification points comprise a left optimal identification point and a right optimal identification point.
Referring to fig. 8, an optimization algorithm is used to perform sub-pixel level optimization on the candidate point aggregation areas to obtain the optimal identification points. The exact position of point A is called the peak coordinate; its sources are the areas marked by the black boxes in fig. 7. The embodiment of the invention determines the peak coordinate by examining the variation law of these points, so that the peak coordinate accuracy is improved to sub-pixel level. Five or more points can be selected according to the actual operation and precision requirements.
Referring to figs. 9 and 10, the optimization algorithm may use five-point filtering based on center-of-gravity calculation; referring to figs. 11 and 12, fitting a one-dimensional quadratic curve or a normal distribution curve is also possible.
As an optional embodiment, performing sub-pixel level optimization on the candidate point aggregation regions to obtain location information of an optimal identification point includes: optimizing the A point at a sub-pixel level based on a gravity center five-point filtering method;
the optimization process comprises the steps of obtaining filtering results of five adjacent points, calculating peak coordinates, and obtaining position information of an optimal identification point; wherein, the adjacent five points specifically comprise five points with relative positions [ -2,2] on the abscissa and adjacent points with the ordinate on the relative positions [ -2,2 ]; wherein the exact position of point A is the peak coordinate.
Specifically, referring to figs. 9 and 10, the center-of-gravity five-point filtering method achieves sub-pixel level optimization of point A. The optimization takes the filtering results of the five adjacent points: the five samples at relative positions [-2, 2] along the abscissa and the five samples at relative positions [-2, 2] along the ordinate.
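A minimal sketch of this center-of-gravity five-point estimate (the helper name and the synthetic response are illustrative; the patent only specifies the [-2, 2] neighborhoods) weights the five samples along each axis by their filter responses:

```python
import numpy as np

def subpixel_peak_cog(resp, x0, y0):
    """Centre of gravity over the five samples at relative positions -2..+2
    along each axis, giving a sub-pixel estimate of the peak coordinate
    around the integer maximum (x0, y0)."""
    xs = np.arange(x0 - 2, x0 + 3, dtype=float)
    wx = resp[y0, x0 - 2:x0 + 3]           # row of five samples
    ys = np.arange(y0 - 2, y0 + 3, dtype=float)
    wy = resp[y0 - 2:y0 + 3, x0]           # column of five samples
    return float((xs * wx).sum() / wx.sum()), float((ys * wy).sum() / wy.sum())

resp = np.zeros((11, 11))
resp[5, 3:8] = [1.0, 2.0, 3.0, 2.0, 1.0]   # symmetric profile along the row
resp[3:8, 5] = [1.0, 2.0, 3.0, 2.0, 1.0]   # symmetric profile along the column
peak = subpixel_peak_cog(resp, 5, 5)        # symmetry keeps the peak at (5, 5)
```

With an asymmetric response the estimate shifts off the integer grid, which is exactly the sub-pixel refinement this step provides.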
As an optional embodiment, performing sub-pixel level optimization on the candidate point aggregation regions to obtain location information of an optimal identification point includes: optimizing the A point at a sub-pixel level based on a curve fitting method;
the optimization process is to fit the change rule of the point A by using a model of a unitary quadratic equation or a model of normal distribution; and after the fitted formula is solved, the coordinate of the maximum value is solved for the formula, and the coordinate is the position information of the optimal identification point.
Specifically, referring to figs. 11 and 12, other sub-pixel level optimization methods based on curve fitting are described. The variation law of these points is fitted using a one-variable quadratic model or a normal distribution model; after solving the fitted formula, the coordinate of its maximum is computed, which is the coordinate of point A.
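For the one-variable quadratic model, a common closed form (a sketch under my own notation — the patent does not give this formula explicitly) fits a parabola through the integer maximum and its two neighbors along one axis and takes the vertex:

```python
def parabola_vertex_offset(r_minus, r_0, r_plus):
    """Vertex of the parabola through (-1, r_minus), (0, r_0), (1, r_plus).
    Returns the sub-pixel offset from the integer maximum along that axis."""
    denom = r_minus - 2.0 * r_0 + r_plus
    if denom == 0.0:        # flat triple: no curvature, keep the integer peak
        return 0.0
    return 0.5 * (r_minus - r_plus) / denom

# Samples drawn from y = -(x - 0.3)^2, whose true peak sits at offset +0.3:
offset = parabola_vertex_offset(-(1.3 ** 2), -(0.3 ** 2), -(0.7 ** 2))
```

Applying this independently along the abscissa and ordinate refines the integer peak of the candidate area to sub-pixel precision; a normal-distribution fit proceeds analogously on the logarithm of the responses.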
Step 105, obtaining the roll angle through the position information of the left and right optimal identification points.
Specifically, referring to fig. 13, the roll angle attitude information is calculated from the left and right optimal identification points. Once the coordinates of the left and right A points are obtained, the final roll angle follows: the line connecting the left A point and the right A point forms the roll angle with the horizontal:
roll = arctan((y_right - y_left) / (x_right - x_left)), where (x_left, y_left) and (x_right, y_right) are the pixel coordinates of the left and right optimal identification points.
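This final step can be sketched directly (the helper name and the sample coordinates are illustrative; `atan2` is used so a vertical pair of points would not divide by zero):

```python
import math

def roll_angle_deg(left_point, right_point):
    """Angle, in degrees, between the left-to-right line through the two
    optimal identification points and the horizontal image axis. With image
    coordinates the y axis points down, so interpret the sign accordingly."""
    (xl, yl), (xr, yr) = left_point, right_point
    return math.degrees(math.atan2(yr - yl, xr - xl))

# Illustrative sub-pixel coordinates for the left and right A points:
roll = roll_angle_deg((412.3, 519.8), (1507.6, 521.1))   # small positive roll
```

A perfectly level camera yields 0 degrees; the computed value is the roll correction applied to the external parameters.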
in summary, the calibration method provided by the embodiment of the invention is applied to external parameter calibration, and the external parameter data is calculated by a series of calibration methods based on internal parameter data provided by a visual sensor manufacturer in a general external parameter data extraction process. The embodiment of the invention sets a definite operation environment, performs centimeter-level rough calibration on position information in the external reference, and then performs fine calibration on the roll angle in the external reference, thereby achieving sub-pixel-level calibration accuracy. On the test vision sensor, the vision conversion error of the position of an obstacle located at a distance of 50m is reduced from 8cm to within 2 cm. Note: the vision sensor resolution was 1920 x 1080, focal length 3.6395 mm.
Based on the content of the foregoing embodiments, an embodiment of the present invention provides a roll angle calibration system for an advanced driving assistance visual perception sensor, where the roll angle calibration system is used to execute a roll angle calibration method for the advanced driving assistance visual perception sensor in the foregoing method embodiments. Referring to fig. 14, the system includes: the acquiring module 201 is configured to acquire pose information of the camera and calibrate a pixel translation distance from a center of a target object projected onto a photosensitive device of the camera; the calculation module 202 is configured to calculate, based on the pixel translation distance, pixel position information of an identification point of the calibration target object through a convolution kernel; the screening module 203 is configured to screen the pixel position information to obtain a candidate point aggregation area; the optimization module 204 is configured to perform sub-pixel level optimization on the candidate point aggregation regions to obtain position information of an optimal identification point; the optimal identification points comprise a left optimal identification point and a right optimal identification point; an obtaining module 205, configured to obtain the roll angle through the position information of the left optimal identification point and the right optimal identification point.
An embodiment of the present invention provides an electronic device, as shown in fig. 15. The electronic device includes: a processor (processor) 501, a communication interface (Communications Interface) 502, a memory (memory) 503, and a communication bus 504, wherein the processor 501, the communication interface 502, and the memory 503 communicate with each other via the communication bus 504. The processor 501 may call a computer program stored in the memory 503 and operable on the processor 501 to execute the roll angle calibration method of the advanced driving assistance visual perception sensor provided by the above embodiments, for example, the method includes: acquiring attitude information of a camera and calibrating a pixel translation distance projected from the center of a target object to a photosensitive device of the camera; calculating pixel position information of the identification point of the calibration target object through a convolution kernel based on the pixel translation distance; screening pixel position information to obtain a candidate point gathering area; performing sub-pixel level optimization on the candidate point gathering area to obtain the position information of the optimal identification point; the optimal identification points comprise a left optimal identification point and a right optimal identification point; and obtaining the roll angle through the position information of the left optimal identification point and the right optimal identification point.
In addition, the logic instructions in the memory 503 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Embodiments of the present invention further provide a non-transitory computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the roll angle calibration method for an advanced driver assistance visual perception sensor provided in the foregoing embodiments, for example, the method includes: acquiring attitude information of a camera and calibrating a pixel translation distance projected from the center of a target object to a photosensitive device of the camera; calculating pixel position information of the identification point of the calibration target object through a convolution kernel based on the pixel translation distance; screening pixel position information to obtain a candidate point gathering area; performing sub-pixel level optimization on the candidate point gathering area to obtain the position information of the optimal identification point; the optimal identification points comprise a left optimal identification point and a right optimal identification point; and obtaining the roll angle through the position information of the left optimal identification point and the right optimal identification point.
The above-described embodiments of the electronic device and the like are merely illustrative, and units illustrated as separate components may or may not be physically separate, and components displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the various embodiments or some parts of the methods of the embodiments.
Finally, it should be noted that the above examples are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be replaced by equivalents, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A roll angle calibration method for an advanced driving assistance visual perception sensor, characterized by comprising the following steps:
acquiring attitude information of a camera and a pixel translation distance of the center of a calibration target object projected onto a photosensitive device of the camera;
calculating pixel position information of the identification point of the calibration target object through a convolution kernel based on the pixel translation distance;
screening the pixel position information to obtain a candidate point aggregation area;
performing sub-pixel level optimization on the candidate point aggregation area to obtain position information of an optimal identification point; the optimal identification point comprising a left optimal identification point and a right optimal identification point;
and obtaining the roll angle from the position information of the left optimal identification point and the right optimal identification point.
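The final step above, obtaining the roll angle from the two optimal identification points, reduces to the inclination of the line joining their pixel coordinates. A minimal sketch under assumed conventions (image coordinates (u, v) with v increasing downward; the function name and point format are hypothetical, not from the patent):

```python
import math

def roll_angle_deg(left_pt, right_pt):
    """Roll angle, in degrees, of the line through the left and right
    optimal identification points, given as (u, v) pixel tuples."""
    du = right_pt[0] - left_pt[0]
    dv = right_pt[1] - left_pt[1]
    return math.degrees(math.atan2(dv, du))

# Example: the right point sits 2 px lower than the left point, 200 px away,
# giving a small positive roll of about 0.57 degrees.
print(roll_angle_deg((100.0, 300.0), (300.0, 302.0)))
```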
2. The method of claim 1, wherein the acquiring of the attitude information of the camera comprises:
configuring an operating environment according to a preset scene, and calibrating the calibration target object in the operating environment;
and acquiring the attitude information of the camera according to the position information of the calibration target object and the position information of the camera.
3. The method according to claim 1, wherein calculating pixel position information of the identification point of the calibration target by a convolution kernel based on the pixel translation distance comprises:
acquiring a point A and a point B in the image of the calibration target object through different convolution kernels; wherein the point A is used for determining the identification point, and the point B is used for verifying the correctness of the point A.
4. The method of claim 3,
the point A corresponds to a first kernel of 19 x 19, in which the 8 x 8 sub-matrices at the upper left and lower right have the value -1, the 8 x 8 sub-matrices at the upper right and lower left have the value 1, and the remaining entries are 0;
and the point B corresponds to a second kernel of 9 x 9, in which the 3 x 3 sub-matrices at the upper left and lower right have the value 1 and the 3 x 3 sub-matrices at the upper right and lower left have the value -1.
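The two kernels described above can be constructed directly. The sketch below assumes the claimed block layout (-1 blocks at the upper left and lower right of the first kernel, the signs reversed in the second, zeros elsewhere); the helper name is hypothetical:

```python
import numpy as np

def make_kernel(size, block):
    """Checkerboard-corner kernel: -1 blocks at the upper-left and
    lower-right corners, +1 blocks at the upper-right and lower-left
    corners, zeros elsewhere."""
    k = np.zeros((size, size))
    k[:block, :block] = -1          # upper-left block
    k[-block:, -block:] = -1        # lower-right block
    k[:block, -block:] = 1          # upper-right block
    k[-block:, :block] = 1          # lower-left block
    return k

kernel_a = make_kernel(19, 8)       # first kernel, detects point A
kernel_b = -make_kernel(9, 3)       # second kernel: same layout, signs reversed
print(kernel_a.sum(), kernel_b.sum())  # both kernels are zero-sum
```

Because each kernel balances equal numbers of +1 and -1 entries, its response is invariant to uniform brightness, which is why such kernels are commonly convolved over the image to locate checkerboard-style identification points.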
5. The method of claim 1, wherein the screening of the pixel position information to obtain the candidate point aggregation area further comprises:
screening the pixel position information and aggregating points exceeding a preset threshold to form candidate points; taking the area where the candidate points aggregate as the candidate point aggregation area, and marking the candidate point aggregation area with a black frame.
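The thresholding and aggregation step can be sketched as follows; returning the bounding box of the above-threshold responses is one plausible reading of the black-frame marking (the function name and return format are assumptions):

```python
import numpy as np

def candidate_region(response, threshold):
    """Keep filter responses above the threshold and return the bounding
    box (top, left, bottom, right) of the candidate aggregation area,
    or None when no response exceeds the threshold."""
    ys, xs = np.nonzero(response > threshold)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

resp = np.zeros((10, 10))
resp[4:6, 3:7] = 5.0                # a small cluster of strong responses
print(candidate_region(resp, 1.0))  # (4, 3, 5, 6)
```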
6. The method of claim 4, wherein performing sub-pixel level optimization on the candidate point aggregation area to obtain the position information of the optimal identification point comprises: optimizing the point A at the sub-pixel level based on a center-of-gravity five-point filtering method;
the optimization process comprises obtaining the filtering results of five adjacent points and calculating a peak coordinate to obtain the position information of the optimal identification point; wherein the five adjacent points comprise five points at relative abscissa positions within [-2, 2] and adjacent points at relative ordinate positions within [-2, 2]; and wherein the exact position of the point A is the peak coordinate.
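One plausible reading of the center-of-gravity five-point refinement is a weighted mean of the filter responses at offsets -2..2 along each axis around the integer peak. The sketch below is an assumption about the exact formula, not a quotation of the patent's method:

```python
import numpy as np

def subpixel_centroid(response, x, y):
    """Center-of-gravity five-point refinement: along each axis, weight
    the relative offsets -2..2 by the filter responses around the
    integer peak (x, y) and return the sub-pixel peak coordinate."""
    offs = np.arange(-2, 3)
    row = response[y, x + offs]     # five horizontal neighbours
    col = response[y + offs, x]     # five vertical neighbours
    sx = x + (offs * row).sum() / row.sum()
    sy = y + (offs * col).sum() / col.sum()
    return float(sx), float(sy)

resp = np.zeros((9, 9))
resp[4, 2:7] = [1, 2, 4, 2, 1]      # symmetric horizontal profile
resp[2:7, 4] = [1, 2, 4, 2, 1]      # symmetric vertical profile
print(subpixel_centroid(resp, 4, 4))  # symmetric data: peak stays at (4.0, 4.0)
```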
7. The method of claim 4, wherein performing sub-pixel level optimization on the candidate point aggregation area to obtain the position information of the optimal identification point comprises: optimizing the point A at the sub-pixel level based on a curve fitting method;
the optimization process fits the variation of the point A using a univariate quadratic model or a normal distribution model; after the fitted formula is obtained, the coordinate of its maximum is solved, and this coordinate is the position information of the optimal identification point.
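For the quadratic variant of the curve-fitting claim, the maximum of a fitted y = ax^2 + bx + c lies at x = -b/(2a). A one-axis sketch under that assumption (synthetic data; names are hypothetical):

```python
import numpy as np

def quadratic_peak(xs, ys):
    """Fit y = a*x^2 + b*x + c to the responses around point A and
    return the abscissa of the maximum, x = -b / (2a)."""
    a, b, c = np.polyfit(xs, ys, 2)
    return float(-b / (2.0 * a))

xs = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
ys = -(xs - 4.2) ** 2 + 10.0        # synthetic responses peaking at x = 4.2
print(quadratic_peak(xs, ys))       # recovers the sub-pixel peak near 4.2
```

The same fit applied along the ordinate gives the second coordinate; a normal-distribution model can be handled identically by fitting the logarithm of the responses, since log of a Gaussian is a quadratic.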
8. A roll angle calibration system for advanced driver assistance visual perception sensors, comprising:
the acquisition module is used for acquiring the attitude information of the camera and the pixel translation distance of the center of the calibration target object projected onto the photosensitive device of the camera;
the calculation module is used for calculating the pixel position information of the identification point of the calibration target object through a convolution kernel based on the pixel translation distance;
the screening module is used for screening the pixel position information to obtain a candidate point aggregation area;
the optimization module is used for performing sub-pixel level optimization on the candidate point aggregation area to obtain the position information of the optimal identification point; the optimal identification points comprise a left optimal identification point and a right optimal identification point;
and the obtaining module is used for obtaining the roll angle through the position information of the left optimal identification point and the right optimal identification point.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program performs the steps of the roll angle calibration method for an advanced driver assistance visual perception sensor according to any one of claims 1 to 7.
10. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor, performs the steps of the roll angle calibration method for an advanced driver assistance visual perception sensor according to any one of claims 1 to 7.
CN201910691585.9A 2019-07-29 2019-07-29 Roll angle calibration method and system for advanced driving assistance visual perception sensor Active CN110458893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910691585.9A CN110458893B (en) 2019-07-29 2019-07-29 Roll angle calibration method and system for advanced driving assistance visual perception sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910691585.9A CN110458893B (en) 2019-07-29 2019-07-29 Roll angle calibration method and system for advanced driving assistance visual perception sensor

Publications (2)

Publication Number Publication Date
CN110458893A CN110458893A (en) 2019-11-15
CN110458893B true CN110458893B (en) 2021-09-24

Family

ID=68483893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910691585.9A Active CN110458893B (en) 2019-07-29 2019-07-29 Roll angle calibration method and system for advanced driving assistance visual perception sensor

Country Status (1)

Country Link
CN (1) CN110458893B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101793517A (en) * 2010-02-09 2010-08-04 北京航空航天大学 Online quick method for improving accuracy of attitude determination of airborne platform
WO2015160287A1 (en) * 2014-04-14 2015-10-22 Saab Vricon Systems Ab A method and system for estimating information related to a vehicle pitch and/or roll angle
CN108648241A (en) * 2018-05-17 2018-10-12 北京航空航天大学 A kind of Pan/Tilt/Zoom camera field calibration and fixed-focus method
EP3389015A1 (en) * 2017-04-13 2018-10-17 Continental Automotive GmbH Roll angle calibration method and roll angle calibration device
CN110057295A (en) * 2019-04-08 2019-07-26 河海大学 It is a kind of to exempt from the monocular vision plan range measurement method as control

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9563951B2 (en) * 2013-05-21 2017-02-07 Magna Electronics Inc. Vehicle vision system with targetless camera calibration


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic Calibration of Lidar and Camera Images using Normalized Mutual Information; Zachary Taylor et al.; Robotics and Automation (ICRA); Dec. 31, 2013; full text *
A calibration compensation scheme for a MEMS accelerometer based on an improved six-position method; Xiang Gaolin et al.; Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition); Feb. 28, 2017; Vol. 29, No. 1; full text *

Also Published As

Publication number Publication date
CN110458893A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
EP2329222B1 (en) Method and measuring assembly for determining the wheel or axle geometry of a vehicle
CN107230218B (en) Method and apparatus for generating confidence measures for estimates derived from images captured by vehicle-mounted cameras
US10552982B2 (en) Method for automatically establishing extrinsic parameters of a camera of a vehicle
CN112348902B (en) Method, device and system for calibrating installation deviation angle of road-end camera
JP6950170B2 (en) Information processing device, imaging device, device control system, information processing method, and program
CN109543493B (en) Lane line detection method and device and electronic equipment
CN111815713A (en) Method and system for automatically calibrating external parameters of camera
CN108596899B (en) Road flatness detection method, device and equipment
CN113256739B (en) Self-calibration method and device for vehicle-mounted BSD camera and storage medium
CN110766760A (en) Method, device, equipment and storage medium for camera calibration
CN114549654A (en) External parameter calibration method, device, equipment and storage medium for vehicle-mounted camera
CN114280582A (en) Calibration and calibration method and device for laser radar, storage medium and electronic equipment
US20190297314A1 (en) Method and Apparatus for the Autocalibration of a Vehicle Camera System
CN114550042A (en) Road vanishing point extraction method, vehicle-mounted sensor calibration method and device
CN113985405A (en) Obstacle detection method and obstacle detection equipment applied to vehicle
CN113256701B (en) Distance acquisition method, device, equipment and readable storage medium
CN110458893B (en) Roll angle calibration method and system for advanced driving assistance visual perception sensor
CN111382591A (en) Binocular camera ranging correction method and vehicle-mounted equipment
CN113496528A (en) Method and device for calibrating position of visual detection target in fixed traffic roadside scene
CN113514803A (en) Combined calibration method for monocular camera and millimeter wave radar
CN114140533A (en) Method and device for calibrating external parameters of camera
CN112835029A (en) Unmanned-vehicle-oriented multi-sensor obstacle detection data fusion method and system
KR20210003325A (en) Method and apparatus for carlibratiing a plurality of cameras
CN114290995B (en) Implementation method and device of transparent A column, automobile and medium
US10643077B2 (en) Image processing device, imaging device, equipment control system, equipment, image processing method, and recording medium storing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant