CN113077523B - Calibration method, calibration device, computer equipment and storage medium - Google Patents

Calibration method, calibration device, computer equipment and storage medium

Info

Publication number
CN113077523B
CN113077523B (application CN202110347822.7A)
Authority
CN
China
Prior art keywords
image
target
cloud data
point cloud
external parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110347822.7A
Other languages
Chinese (zh)
Other versions
CN113077523A (en)
Inventor
Ma Zheng (马政)
Yan Guohang (闫国行)
Shi Jianping (石建萍)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sensetime Group Ltd
Original Assignee
Sensetime Group Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sensetime Group Ltd
Priority to CN202110347822.7A
Publication of CN113077523A
Application granted
Publication of CN113077523B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present disclosure provides a calibration method, apparatus, computer device, and storage medium. The method comprises: acquiring point cloud data and a reference image synchronously collected for the same scene by a radar and an image acquisition device, respectively; determining an image to be matched in the reference image based on the values of the pixel points in the reference image, the image to be matched being an image corresponding to a target space region; determining, based on initial external parameters, the two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device, and screening out, based on these two-dimensional coordinates, the target point cloud data corresponding to the target space region; and matching the image to be matched with the target point cloud data under a plurality of external parameters to be screened, and determining the target external parameters from among them based on the matching results.

Description

Calibration method, calibration device, computer equipment and storage medium
Technical Field
The disclosure relates to the technical field of camera calibration, in particular to a calibration method, a calibration device, computer equipment and a storage medium.
Background
In environment perception, a fusion scheme combining a laser radar and a camera is adopted to improve system robustness and measurement accuracy. When fusing point cloud data with images, the accuracy of the external parameters between the laser radar and the camera determines the accuracy of environment perception. In practical applications, the positions and orientations of the laser radar and the camera change frequently, and the external parameters need to be recalibrated after every such change.
In the prior art, the external parameters of the radar and the camera are calibrated manually. However, this calibration approach is constrained by labor cost, so external parameter calibration is inefficient; moreover, it is limited by the varying skill of different calibration personnel, so the accuracy of the external parameter calibration may be low.
Disclosure of Invention
The embodiment of the disclosure at least provides a calibration method, a calibration device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a calibration method, including:
acquiring point cloud data and a reference image which are synchronously acquired by a radar and an image acquisition device aiming at the same scene respectively;
determining an image to be matched in the reference image based on the values of the pixel points in the reference image, wherein the image to be matched is an image corresponding to a target space region; and
determining, based on initial external parameters, two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device, and screening out target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of each radar scanning point;
and matching the images to be matched with the target point cloud data based on a plurality of external parameters to be screened, and determining target external parameters from the plurality of external parameters to be screened based on a matching result.
In a possible implementation manner, determining an image to be matched in the reference image based on the value of the pixel point in the reference image includes:
performing binarization processing on the reference image to obtain a binarized image corresponding to the reference image;
for each row in the binarized image, determining, as a first number, the number of pixel points in the row whose pixel value equals a first preset value; the first preset value is 255 or 0;
and determining images to be matched in the reference image based on the first number corresponding to each row in the binarized image.
In a possible implementation manner, the determining the image to be matched in the reference image based on the first number corresponding to each line in the binarized image includes:
taking a first target row, whose corresponding first number and position satisfy a first preset condition, as a dividing line of the binarized image, and dividing the binarized image into two area images based on the dividing line;
and taking, of the two area images, the area image containing more pixel points with a pixel value of 255 as the image to be matched.
In a possible implementation manner, the determining, based on the initial external parameters, two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device includes:
based on the initial external parameters, converting the first position coordinates of each radar scanning point under a radar coordinate system to a world coordinate system with the image acquisition device as a coordinate origin, and obtaining the second position coordinates of each radar scanning point;
converting the second position coordinates of each radar scanning point into an image coordinate system corresponding to the image acquisition device based on the internal reference of the image acquisition device to obtain third position coordinates of the radar scanning points;
and determining the two-dimensional coordinates of each radar scanning point under the pixel coordinate system based on the third position coordinates of the radar scanning points.
In a possible implementation manner, the selecting target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of each radar scanning point includes:
determining corresponding rows of each radar scanning point in the two-dimensional projection image of the point cloud data based on the two-dimensional coordinates of each radar scanning point;
for each row of the two-dimensional projection image, determining the number of radar scanning points in the row as a second number;
and screening out target point cloud data corresponding to the target space region from the point cloud data based on the second number corresponding to each row in the two-dimensional projection image.
In a possible implementation manner, the selecting, from the point cloud data, the target point cloud data corresponding to the target space region based on the second number corresponding to each line in the two-dimensional projection image includes:
taking a second target row, whose corresponding second number and position satisfy a second preset condition, as a dividing line of the two-dimensional projection image, and dividing the point cloud data into two parts of point cloud data based on the dividing line;
and taking, of the two parts of point cloud data, the part containing fewer point clouds as the target point cloud data.
In a possible embodiment, the method further comprises determining the plurality of external parameters to be screened according to the steps of:
and adjusting the initial external parameters for a plurality of times according to a preset adjustment scale, and obtaining an external parameter to be screened after adjusting the initial external parameters each time.
In a possible implementation manner, the matching the image to be matched with the target point cloud data based on a plurality of external parameters to be screened, and determining the target external parameters from the plurality of external parameters to be screened based on a matching result, includes:
for any external parameter to be screened, determining, based on that external parameter to be screened, the two-dimensional coordinates under the pixel coordinate system of each target radar scanning point in the target point cloud data under that external parameter;
based on the two-dimensional coordinates of the target radar scanning points, projecting the target radar scanning points to the image to be matched, and determining the target number of the target radar scanning points in a target position area formed by the pixel points with the pixel values being a second preset value; the second preset value is 255 or 0;
and determining target external parameters from the plurality of external parameters to be screened based on the target numbers corresponding to the plurality of external parameters to be screened.
In a second aspect, embodiments of the present disclosure further provide a calibration device, including:
the acquisition module is used for acquiring point cloud data and reference images which are synchronously acquired by the radar and the image acquisition device aiming at the same scene respectively;
the first determining module is used for determining an image to be matched in the reference image based on the value of the pixel point in the reference image, wherein the image to be matched is an image corresponding to a target space region;
the second determining module is used for determining two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device based on initial external parameters, and screening target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of each radar scanning point;
and the matching module is used for matching the image to be matched with the target point cloud data based on a plurality of external parameters to be screened, and determining target external parameters from the plurality of external parameters to be screened based on a matching result.
In a possible implementation manner, the first determining module is configured to, when determining an image to be matched in the reference image based on the value of the pixel point in the reference image:
Performing binarization processing on the reference image to obtain a binarized image corresponding to the reference image;
for each row in the binarized image, determining, as a first number, the number of pixel points in the row whose pixel value equals a first preset value; the first preset value is 255 or 0;
and determining images to be matched in the reference image based on the first number corresponding to each row in the binarized image.
In a possible implementation manner, the first determining module is configured to, when determining the image to be matched in the reference image based on the first number corresponding to each line in the binarized image:
taking a first target row, whose corresponding first number and position satisfy a first preset condition, as a dividing line of the binarized image, and dividing the binarized image into two area images based on the dividing line;
and taking, of the two area images, the area image containing more pixel points with a pixel value of 255 as the image to be matched.
In a possible implementation manner, the second determining module is configured to, when determining, based on the initial external parameter, two-dimensional coordinates of each radar scanning point in the point cloud data in a pixel coordinate system corresponding to the image acquisition device:
Based on the initial external parameters, converting the first position coordinates of each radar scanning point under a radar coordinate system to a world coordinate system with the image acquisition device as a coordinate origin, and obtaining the second position coordinates of each radar scanning point;
converting the second position coordinates of each radar scanning point into an image coordinate system corresponding to the image acquisition device based on the internal reference of the image acquisition device to obtain third position coordinates of the radar scanning points;
and determining the two-dimensional coordinates of each radar scanning point under the pixel coordinate system based on the third position coordinates of the radar scanning points.
In a possible implementation manner, the second determining module is configured to, when the target point cloud data corresponding to the target space region is screened out from the point cloud data based on the two-dimensional coordinates of the radar scanning points:
determining corresponding rows of each radar scanning point in the two-dimensional projection image of the point cloud data based on the two-dimensional coordinates of each radar scanning point;
for each row of the two-dimensional projection image, determining the number of radar scanning points in the row as a second number;
And screening out target point cloud data corresponding to the target space region from the point cloud data based on the second number corresponding to each row in the two-dimensional projection image.
In a possible implementation manner, the second determining module is configured to, when the target point cloud data corresponding to the target space region is screened from the point cloud data based on the second number corresponding to each line in the two-dimensional projection image:
taking a second target row, whose corresponding second number and position satisfy a second preset condition, as a dividing line of the two-dimensional projection image, and dividing the point cloud data into two parts of point cloud data based on the dividing line;
and taking, of the two parts of point cloud data, the part containing fewer point clouds as the target point cloud data.
In a possible implementation manner, the matching module is further configured to determine the plurality of external parameters to be screened according to the following steps:
and adjusting the initial external parameters for a plurality of times according to a preset adjustment scale, and obtaining an external parameter to be screened after adjusting the initial external parameters each time.
In a possible implementation manner, the matching module is configured to, when the matching is performed on the image to be matched and the target point cloud data based on the plurality of external parameters to be screened, determine a target external parameter from the plurality of external parameters to be screened based on a matching result:
for any external parameter to be screened, determining, based on that external parameter to be screened, the two-dimensional coordinates under the pixel coordinate system of each target radar scanning point in the target point cloud data under that external parameter;
based on the two-dimensional coordinates of the target radar scanning points, projecting the target radar scanning points to the image to be matched, and determining the target number of the target radar scanning points in a target position area formed by the pixel points with the pixel values being a second preset value; the second preset value is 255 or 0;
and determining target external parameters from the plurality of external parameters to be screened based on the target numbers corresponding to the plurality of external parameters to be screened.
In a third aspect, embodiments of the present disclosure further provide a computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect, or any of the possible implementations of the first aspect.
In a fourth aspect, the presently disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the first aspect, or any of the possible implementations of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. These drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to illustrate the technical solutions of the present disclosure. It should be understood that the following drawings show only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of a calibration method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of a specific method for determining an image to be matched in a reference image in a calibration method provided in an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a binarized image corresponding to a reference image in the calibration method according to the embodiment of the disclosure;
FIG. 4 is a flowchart of a specific method for determining an image to be matched in a reference image based on a first number in a calibration method provided in an embodiment of the present disclosure;
Fig. 5 shows a schematic diagram of an image to be matched in the calibration method provided by the embodiment of the disclosure;
FIG. 6 is a flowchart of a specific method for determining two-dimensional coordinates of a radar scan point in a pixel coordinate system in the calibration method provided by the embodiments of the present disclosure;
fig. 7 is a flowchart illustrating a specific method for screening target point cloud data from point cloud data in the calibration method according to the embodiment of the present disclosure;
fig. 8 is a flowchart illustrating a specific method for screening target point cloud data from point cloud data based on a second number in the calibration method according to the embodiment of the present disclosure;
FIG. 9 is a flowchart of a specific method for determining a target external parameter from a plurality of external parameters to be screened in the calibration method according to the embodiment of the present disclosure;
FIG. 10 shows a schematic diagram of a calibration device provided by an embodiment of the present disclosure;
fig. 11 shows a schematic diagram of a computer device provided by an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" is used herein to describe only one relationship, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Research shows that in existing external parameter calibration, the joint external parameters of the radar and the camera are generally calibrated manually. However, manual calibration is constrained by labor cost, so external parameter calibration is inefficient; it is also limited by the varying skill of different calibration personnel, so the calibration quality is uneven and a consistent calibration result cannot be achieved.
Based on the above study, the present disclosure provides a calibration method, apparatus, computer device, and storage medium. After the point cloud data and the reference image are acquired, the point cloud data can be screened and the reference image processed to obtain the target point cloud data corresponding to a target space region and the image to be matched. Then, under different external parameters to be screened, the target point cloud data and the image to be matched, which correspond to the same target space region, are matched, and the target external parameters are determined based on the matching results.
To facilitate understanding of the present embodiment, a calibration method disclosed in an embodiment of the present disclosure is first described in detail. The execution subject of the calibration method provided in the embodiments of the present disclosure is generally a computer device with a certain computing capability, for example: a terminal device, a server, or another processing device. The terminal device may be User Equipment (UE), a mobile device, a user terminal, a cellular telephone, a cordless telephone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc. In some possible implementations, the calibration method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a calibration method according to an embodiment of the disclosure is shown, where the method includes S101 to S104, where:
s101: and acquiring point cloud data and a reference image which are synchronously acquired by the radar and the image acquisition device aiming at the same scene.
S102: and determining an image to be matched in the reference image based on the value of the pixel point in the reference image, wherein the image to be matched is an image corresponding to a target space region.
S103: based on initial external parameters, determining two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device, and screening target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of each radar scanning point.
S104: and matching the images to be matched with the target point cloud data based on a plurality of external parameters to be screened, and determining target external parameters from the plurality of external parameters to be screened based on a matching result.
The following describes each step and corresponding implementation method in the embodiments of the present disclosure in detail.
For S101, in one possible implementation, in order to ensure the accuracy of external parameter calibration, the point cloud data collected by the laser radar and the reference image collected by the image acquisition device must be captured at the same moment. The laser radar and the image acquisition device may be deployed on the same vehicle and, while the vehicle is running, controlled to start collecting data simultaneously.
The image acquisition device can be a device with an image acquisition function, such as a camera, a video camera and the like. The image acquisition device can acquire images in real time; the laser radar can emit laser pulses according to a set frequency and receive the reflected laser pulses, so that the point cloud data are obtained.
To ensure that the reference image acquired by the image acquisition device corresponds to the moment the laser radar acquires the point cloud data, the time at which the laser radar receives the reflected laser pulse can be recorded and taken as the acquisition time of the point cloud data; the image captured by the image acquisition device at that acquisition time is then used as the reference image.
After receiving the reflected laser pulses, the laser radar generates point cloud data corresponding to the reflection positions. The larger an object is, the more laser beams it reflects and the more radar scanning points appear in the generated point cloud data. Radar scanning points are therefore denser for objects that reflect many laser beams, such as roads. If matching were performed directly on the point cloud data acquired by the laser radar and the reference image acquired by the image acquisition device, the matching results under different external parameters to be screened might not differ much. Hence, to highlight the differences between matching results under different external parameters to be screened, the reference image and the point cloud data can first be screened to obtain the target point cloud data and the image to be matched corresponding to a target space region, and matching is then performed on the target point cloud data and the image to be matched.
The target space region may include a first region and a second region, where the first region is a region where the object is capable of reflecting the laser pulse, and the second region is a region where the object is incapable of reflecting the laser pulse.
In a specific implementation, after acquiring point cloud data acquired by a laser radar and a reference image acquired by an image acquisition device when the laser radar acquires the point cloud data, an image to be matched in the reference image may be determined according to S102, and meanwhile, target point cloud data in the point cloud data is determined according to S103, where the image to be matched is an image corresponding to a target space region, and the target point cloud data is point cloud data corresponding to the target space region, which will be described in detail below in S102 and S103 respectively.
S102: and determining an image to be matched in the reference image based on the value of the pixel point in the reference image, wherein the image to be matched is an image corresponding to a target space region.
In a possible implementation manner, when determining the image to be matched in the reference image based on the value of the pixel point in the reference image, as shown in fig. 2, the following three steps may be performed:
S201, binarizing the reference image to obtain a binarized image corresponding to the reference image.
Here, binarization refers to setting the gray value of each pixel in the reference image to 0 (the minimum value) or 255 (the maximum value); after processing, pixels with a gray value of 0 appear black and pixels with a gray value of 255 appear white.
Specifically, a gray value threshold may be set: the gray value of each pixel whose gray value is smaller than the threshold is set to 0, and the gray value of each pixel whose gray value is greater than the threshold is set to 255.
For example, the binarized image corresponding to the reference image may be as shown in fig. 3, where the gray values of pixels in the black area are all 0, the gray values of pixels in the white area are all 255, and the contour of each object in the image is completely preserved.
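As an illustration of this binarization step, the following is a minimal sketch using OpenCV; the threshold value of 127 and the file name "reference.png" are assumptions for illustration, not values fixed by the disclosure.

```python
import cv2

# Load the reference image as a single-channel grayscale image
# ("reference.png" is a placeholder file name).
reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

# Pixels with gray value above the threshold become 255 (white),
# all others become 0 (black); the threshold 127 is an assumed value.
_, binarized = cv2.threshold(reference, 127, 255, cv2.THRESH_BINARY)
```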
S202, for each row in the binarized image, determining, as a first number, the number of pixel points in the row whose pixel value equals a first preset value; the first preset value is 255 or 0.
Here, when the first preset value is 255, the corresponding pixel points are white; when the first preset value is 0, the corresponding pixel points are black.
In a specific implementation, the rows of the binarized image may be traversed from bottom to top or from top to bottom; the number of pixel points in each row whose pixel value is 255 is determined and taken as the first number corresponding to that row. Alternatively, the number of pixel points in each row whose pixel value is 0 may be determined and taken as the first number corresponding to that row.
S203, determining images to be matched in the reference image based on the first number corresponding to each row in the binarized image.
In this way, by binarizing the reference image, the image to be matched corresponding to the target space region in the reference image can be determined accurately, so that the target external parameters determined after matching against the image to be matched are more accurate.
In a possible implementation manner, when determining the image to be matched in the reference image based on the first number corresponding to each line in the binarized image, as shown in fig. 4, the following two steps may be performed:
S401, taking a first target row, whose corresponding first number and position satisfy a first preset condition, as a dividing line of the binarized image, and dividing the binarized image into two area images based on the dividing line.
Here, when the first preset value is 255, the first preset condition may be: the row closest to the bottom of the binarized image whose corresponding first number is greater than the first preset number; or the row farthest from the top of the binarized image whose corresponding first number is greater than the first preset number.
When the first preset value is 0, the first preset condition may be: the row closest to the bottom of the binarized image whose corresponding first number is smaller than the first preset number; or the row farthest from the top of the binarized image whose corresponding first number is smaller than the first preset number.
In a specific implementation, each row of the binarized image may be examined in turn in order to determine the separation line.
For example, suppose the first preset value is 0 and the first preset condition is being closest to the bottom of the binarized image with a corresponding first number smaller than the first preset number. Given the first numbers corresponding to the rows of the binarized image from top to bottom, the first preset number may be set to 55; the fourth row, whose corresponding first number is 50, is then the row closest to the bottom of the binarized image whose first number is smaller than the first preset number, and it is taken as the separation line.
In a possible implementation, the first preset condition may instead be defined in terms of a first proportion and a first preset proportion in order to determine the separation line, where the first proportion is the first number divided by the total number of pixel points in the row.
When the first preset value is 255, the first preset condition may be: the row closest to the bottom of the binarized image whose corresponding first proportion is greater than the first preset proportion; or the row farthest from the top of the binarized image whose corresponding first proportion is greater than the first preset proportion.
When the first preset value is 0, the first preset condition may be: the row closest to the bottom of the binarized image whose corresponding first proportion is smaller than the first preset proportion; or the row farthest from the top of the binarized image whose corresponding first proportion is smaller than the first preset proportion.
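To make the per-row counting and separation-line search concrete, here is a hedged sketch. It assumes the first preset value is 255 and the first preset condition is "closest to the bottom with a first number greater than the first preset number" (one of the variants described above); the default preset number of 55 is only an example.

```python
import numpy as np

def find_separation_row(binarized: np.ndarray, preset_number: int = 55) -> int:
    """Return the index of the first target row, scanning from the bottom.

    Assumes the first preset value is 255 and the first preset condition is
    "closest to the bottom of the binarized image with a first number
    greater than the first preset number".
    """
    for row in range(binarized.shape[0] - 1, -1, -1):
        # First number of this row: pixels whose value equals 255.
        first_number = int((binarized[row] == 255).sum())
        if first_number > preset_number:
            return row  # early exit, as described later in the text
    return -1  # no qualifying row found

# Splitting into two area images and keeping the one with more white pixels:
# row = find_separation_row(binarized)
# upper, lower = binarized[:row], binarized[row:]
# image_to_match = upper if (upper == 255).sum() > (lower == 255).sum() else lower
```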
S402, of the two area images, taking the area image containing more pixel points with a pixel value of 255 as the image to be matched.
In one possible application scenario, the separation line may be the horizon, that is, the boundary between objects on the ground and the sky. Because there are fewer objects in the sky than on the ground, in this application scenario the image area containing more pixel points with a pixel value of 255 is generally the image area above the separation line.
For example, the image to be matched may be as shown in fig. 5, and in fig. 5, the area above the dividing line in fig. 3 is reserved.
Specifically, when the laser radar collects point cloud data, the ground contains reflecting objects such as roads, so in the collected radar data the radar scanning points on the ground are far denser than those from the sky, which contains few reflecting objects. Overly dense radar scanning points interfere with external parameter calibration that demands high precision and prevent the external parameters to be calibrated from being determined accurately. Therefore, the above steps are used to remove the point cloud data of regions where radar scanning points are densely distributed (e.g., the point cloud data below the separation line) and to retain only the more sparsely distributed point cloud data (e.g., the point cloud data above the separation line) as the target point cloud data, thereby improving the matching precision and, in turn, the precision of the finally determined target external parameters.
In a specific implementation, S102 may be executed simultaneously with S103, and the order in which S102 and S103 are executed has no influence on the final calibration result; this embodiment therefore does not limit it.
S103: based on initial external parameters, determining two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device, and screening target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of each radar scanning point.
In a possible implementation manner, when determining two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device based on initial external parameters, referring to fig. 6, the following three steps may be performed:
S601, based on the initial external parameters, converting the first position coordinates of each radar scanning point under a radar coordinate system to a world coordinate system with the image acquisition device as the coordinate origin, obtaining the second position coordinates of each radar scanning point.
For example, using the Zhang Zhengyou calibration method, in this step a rigid body transformation matrix may be used to transform the first position coordinates of each radar scanning point in the radar coordinate system to the world coordinate system with the image acquisition device as the coordinate origin, obtaining the second position coordinates of each radar scanning point.
The Zhang Zhengyou calibration method is based on the following formula:

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}}_{K_{3\times 3}} \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

The quantities in the formula are described in turn from left to right:
$s$ represents an unknown scale factor;
$u$ and $v$ represent the abscissa and ordinate, respectively, under the pixel coordinate system;
$K_{3\times 3}$ is the internal reference matrix of the image acquisition device, where $f$ represents the focal length of the image acquisition device, $f_x$ represents the length of the focal length in the x-axis direction described in pixels, $f_y$ represents the length of the focal length in the y-axis direction described in pixels, and $u_0$ and $v_0$ represent the actual position of the principal point;
$[R_{3\times 3} \; T_{3\times 1}]$ is the rigid body transformation matrix, where $R_{3\times 3}$ is an orthogonal rotation matrix and $T_{3\times 1}$ is a translation matrix;
$X_w$, $Y_w$, $Z_w$ are, in turn, the coordinate values on the X axis, Y axis and Z axis of the first position coordinates in the radar coordinate system.

Based on the above formula, the matrix corresponding to the first position coordinates is multiplied by the rigid body transformation matrix $[R_{3\times 3} \; T_{3\times 1}]$ to obtain the matrix corresponding to the second position coordinates, which is then converted into the second position coordinates through the corresponding relation.
S602, based on the internal reference of the image acquisition device, converting the second position coordinates of each radar scanning point into the image coordinate system corresponding to the image acquisition device, obtaining the third position coordinates of the radar scanning points.
After the second position coordinates are obtained, the matrix corresponding to the second position coordinates is multiplied by the perspective projection matrix to obtain the matrix corresponding to the third position coordinates, which is then converted into the third position coordinates through the corresponding relation.
S603, determining the two-dimensional coordinates of each radar scanning point under the pixel coordinate system based on the third position coordinates of the radar scanning points.
Following the previous step, on the basis of the obtained third position coordinates, multiplication by the internal reference matrix of the image acquisition device yields the two-dimensional coordinates of each radar scanning point under the pixel coordinate system. The distance unit of the pixel coordinate system is the pixel; it is mainly used to determine the position of each pixel in the image. For example, (3, 2) represents the pixel in the third row and the second column.
In this way, through the above conversion process, each radar scanning point is converted from three-dimensional coordinates to two-dimensional coordinates under the pixel coordinate system, and the radar scanning points in the target space region can be screened more conveniently under the pixel coordinate system.
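The conversions of S601 to S603 amount to the standard pinhole projection summarized in the sketch below; it assumes the external parameter is given as a rotation matrix R and a translation vector T, with K the internal reference matrix of the image acquisition device.

```python
import numpy as np

def project_points(points_lidar: np.ndarray, R: np.ndarray,
                   T: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project N radar scanning points (N, 3) to pixel coordinates (N, 2).

    R (3x3) and T (3,) are the external parameters; K (3x3) is the internal
    reference matrix. A sketch of steps S601-S603.
    """
    # S601: first position coordinates (radar coordinate system) -> second
    # position coordinates (origin at the image acquisition device).
    points_cam = points_lidar @ R.T + T
    # S602/S603: perspective projection with the internal reference matrix,
    # then division by depth to obtain two-dimensional pixel coordinates.
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]
```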
In a possible implementation manner, when the target point cloud data corresponding to the target space region is screened out from the point cloud data based on the two-dimensional coordinates of each radar scanning point, as shown in fig. 7, the following three steps may be performed:
And S701, determining corresponding rows of each radar scanning point in the two-dimensional projection image of the point cloud data based on the two-dimensional coordinates of each radar scanning point.
The two-dimensional projection image may be obtained by projecting the scene in which the radar scanning points were acquired into the pixel coordinate system; after projection, the radar scanning points are pixel points in the two-dimensional projection image. The two-dimensional coordinates of each radar scanning point are its UV coordinates under the pixel coordinate system, and the corresponding row can be found from the ordinate. For example, if the UV coordinates of four radar scanning points are (1, 2), (2, 2), (1, 1) and (2, 1), the radar scanning points corresponding to (1, 2) and (2, 2) are both in the second row. That is, the ordinate of a radar scanning point is taken as the basis for dividing rows: all radar scanning points with the same ordinate are regarded as one row, and the rows are ordered from top to bottom by ordinate, so that the specific row number of each radar scanning point is obtained and the row corresponding to each radar scanning point in the two-dimensional projection image of the point cloud data can be determined.
S702, determining the number of radar scanning points in each row of the two-dimensional projection image as a second number.
In a specific implementation, the rows of the two-dimensional projection image may be traversed from bottom to top or from top to bottom, and the number of radar scanning points in each row is determined as the second number.
And S703, screening out target point cloud data corresponding to the target space region from the point cloud data based on the second number corresponding to each row in the two-dimensional projection image.
In this way, the above steps remove the point cloud data of regions where radar scanning points are densely distributed (the point cloud data below the separation line) and retain only the more sparsely distributed point cloud data (the point cloud data above the separation line) as the target point cloud data, so that when matching is performed based on the target point cloud data and the image to be matched, the differences between matching results under different external parameters to be screened can be highlighted, further improving the accuracy of the determined target external parameters.
In a possible implementation manner, when the target point cloud data corresponding to the target space region is screened out from the point cloud data based on the second number corresponding to each line in the two-dimensional projection image, as shown in fig. 8, the following two steps may be performed:
S801, taking a second target row, whose corresponding second number and position satisfy a second preset condition, as a dividing line of the two-dimensional projection image, and dividing the point cloud data into two parts of point cloud data based on the dividing line.
Here, the second preset condition may be: the row closest to the bottom of the two-dimensional projection image whose corresponding second number is smaller than the second preset number; or the row farthest from the top of the two-dimensional projection image whose corresponding second number is smaller than the second preset number.
In a specific implementation, each row of the two-dimensional projection image may be examined in turn.
For example, suppose the second preset condition is being closest to the bottom of the two-dimensional projection image with a corresponding second number smaller than the second preset number. Given the second numbers corresponding to the rows of the two-dimensional projection image from top to bottom, the second preset number may be set to 55; the fourth row, whose corresponding second number is 50, is then the row closest to the bottom of the two-dimensional projection image whose second number is smaller than the second preset number, and it is determined as the second target row.
S802, taking partial point cloud data with fewer point clouds in the two partial point cloud data as the target point cloud data.
Here, similarly to step S402 described above, the part of the point cloud data containing fewer point clouds may be used as the target point cloud data, so that the target point cloud data and the image to be matched that are finally matched both correspond to the same space region, that is, the target space region.
In a possible implementation, when determining the values of the first number and the second number, the rows may be examined sequentially from bottom to top. After the first number (or second number) of a row is determined, whether it satisfies the preset condition can be judged immediately: if it does, the current row is determined as the first target row (or second target row) and the subsequent counting of first numbers (or second numbers) stops; if it does not, the first number (or second number) of the next row is determined, and so on, until the first target row (or second target row) is determined.
In this way, the judgment is performed while the first number (or second number) is being counted, which avoids invalid counting of the first number (or second number), saves computation time, and improves computational efficiency.
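The following sketch combines the row counting, the early-exit search, and the selection of the sparser part on the point cloud side (S801/S802). It assumes the second preset condition is "closest to the bottom with a second number smaller than the second preset number"; the guard requiring a non-empty row is an added assumption to skip rows containing no projected points, and the preset number 55 is taken from the example above.

```python
import numpy as np

def select_target_point_cloud(uv: np.ndarray, image_height: int,
                              preset_number: int = 55) -> np.ndarray:
    """Return a boolean mask over the radar scanning points selecting the
    target point cloud data, given their projected coordinates uv (N, 2)."""
    rows = uv[:, 1].astype(int)
    divider = 0
    # Scan rows bottom-up with early exit, as described above.
    for row in range(image_height - 1, -1, -1):
        second_number = int((rows == row).sum())
        # Added guard (assumption): ignore rows containing no points.
        if 0 < second_number < preset_number:
            divider = row
            break
    above = rows < divider   # part of the point cloud above the dividing line
    below = ~above           # part below the dividing line
    # S802: keep the part containing fewer point clouds as the target data.
    return above if above.sum() < below.sum() else below
```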
In specific implementation, after the above S102 and S103 are performed, the target external parameters may be determined according to S104, and S104 will be described in detail below.
S104: and matching the images to be matched with the target point cloud data based on a plurality of external parameters to be screened, and determining target external parameters from the plurality of external parameters to be screened based on a matching result.
In a possible implementation manner, the plurality of external parameters to be screened may be determined according to the following method:
and adjusting the initial external parameters for a plurality of times according to a preset adjustment scale, and obtaining an external parameter to be screened after adjusting the initial external parameters each time.
Specifically, the initial external parameters include a pitch angle, a yaw angle, a roll angle, and the distances along the three coordinate axes of the world coordinate system: six values in total, comprising three angles and three lengths. To allow targeted adjustment, an angle adjustment scale may be preset for the angles and a length adjustment scale for the distances.
For example, the angle adjustment scale may be preset to 6 degrees and the length adjustment scale to 1 meter. Adjustments are then performed at these scales; each adjustment of the initial external parameters yields one external parameter to be screened, so a plurality of external parameters to be screened can be obtained.
In this way, the system can automatically generate the corresponding external parameters to be screened according to the preset adjustment scales; once the scales are set, the whole process requires no manual operation, and the complex calculation and matching are handed over to the computer, making the whole parameter calibration process more automated and more efficient.
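A hedged sketch of the candidate generation follows. The disclosure does not fix how many adjustments are made, so perturbing each of the six values by one step in each direction is purely an illustrative assumption; the scales reuse the 6-degree and 1-meter example above.

```python
import numpy as np
from itertools import product

def external_params_to_screen(initial: np.ndarray,
                              angle_scale: float = np.deg2rad(6.0),
                              length_scale: float = 1.0):
    """Yield external parameters to be screened around the initial guess.

    `initial` is a 6-vector (pitch, yaw, roll, x, y, z). Each component is
    adjusted by -1, 0 or +1 step of its adjustment scale; this perturbation
    pattern is an assumption for illustration.
    """
    scales = np.array([angle_scale] * 3 + [length_scale] * 3)
    for steps in product((-1, 0, 1), repeat=6):
        yield initial + np.asarray(steps) * scales
```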
In a possible implementation manner, when matching the image to be matched with the target point cloud data based on a plurality of external parameters to be screened, and determining a target external parameter from the plurality of external parameters to be screened based on a matching result, as shown in fig. 9, the following three steps may be performed:
S901, for any external parameter to be screened, determining, based on that external parameter to be screened, the two-dimensional coordinates under the pixel coordinate system of each target radar scanning point in the target point cloud data under that external parameter.
Here, the specific process of determining these two-dimensional coordinates follows the three steps in fig. 6; the conversion steps are highly similar to S601, S602 and S603 in fig. 6 and are therefore not repeated here.
S902, based on the two-dimensional coordinates of the target radar scanning points, projecting the target radar scanning points to the image to be matched, and determining the target number of the target radar scanning points in a target position area formed by the pixel points with the pixel values of a second preset value; the second preset value is 255 or 0.
Here, the second preset value may be the same as the first preset value; when the first preset value is 255, the second preset value may also be 255. After the target radar scanning points are projected under each external parameter to be screened, a target number can be counted for each through the above operation. Since each target number corresponds one-to-one to an external parameter to be screened, the degree of matching between each external parameter to be screened and the current laser radar and image acquisition device can be determined by comparing the target numbers.
S903, determining target external parameters from the external parameters to be screened based on the target numbers corresponding to the external parameters to be screened.
When the second preset value is 0, after the target numbers corresponding to the radar scanning points are determined, the maximum target number can be found, i.e., the external parameter to be screened with the highest matching degree, and that external parameter is determined as the target external parameter; when the second preset value is 255, after the target numbers corresponding to the radar scanning points are determined, the minimum target number is found and its corresponding external parameter to be screened is determined as the target external parameter.
In this way, the plurality of external parameters to be screened are screened by comparing their corresponding target numbers, so that the finally determined target external parameter better fits the actual requirements and has higher accuracy; meanwhile, the high degree of automation of the whole process improves the efficiency of determining and calibrating the external parameters.
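Putting S901 to S903 together, a hedged scoring sketch might look as follows. It assumes a second preset value of 0 (so the candidate with the largest target number wins), reuses `project_points` from the earlier sketch, and composes the rotation with an X-Y-Z order that is an assumption, not fixed by the disclosure.

```python
import numpy as np

def euler_to_matrix(angles: np.ndarray) -> np.ndarray:
    """Compose a rotation matrix from (pitch, yaw, roll); the X-Y-Z
    rotation order is an assumption for illustration."""
    p, y, r = angles
    rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
    return rz @ ry @ rx

def target_number(extrinsic: np.ndarray, target_points: np.ndarray,
                  image_to_match: np.ndarray, K: np.ndarray) -> int:
    """Count target radar scanning points that project onto pixels whose
    value equals the second preset value (assumed to be 0 here)."""
    R, T = euler_to_matrix(extrinsic[:3]), extrinsic[3:]
    uv = project_points(target_points, R, T, K).astype(int)  # earlier sketch
    h, w = image_to_match.shape
    # Keep only projections that land inside the image bounds.
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[inside]
    return int((image_to_match[uv[:, 1], uv[:, 0]] == 0).sum())

# With a second preset value of 0, the external parameter to be screened
# with the maximum target number is taken as the target external parameter:
# best = max(candidates, key=lambda e: target_number(e, pts, img, K))
```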
In the above steps, the point cloud data refers to one frame of point cloud data. In another possible implementation, multiple frames of point cloud data and the reference image corresponding to each frame may be acquired; a target external parameter is then determined for each frame of point cloud data based on S101 to S104, and the mean of the determined target external parameters is taken as the finally determined target external parameter.
In this way, the influence of any single frame of point cloud data on the accuracy of the target external parameters can be avoided, improving the accuracy of the finally determined external parameters.
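As a brief sketch of this multi-frame variant: averaging the six-value external parameter vectors component-wise is a simplification that assumes the per-frame results lie close together (no angle wrap-around); the two vectors below are placeholders.

```python
import numpy as np

# Per-frame target external parameters (three angles, three distances),
# each determined via S101 to S104; placeholder values for illustration.
frame_extrinsics = [
    np.array([0.01, 0.02, 0.00, 1.0, 0.0, 1.5]),
    np.array([0.02, 0.01, 0.01, 1.1, 0.1, 1.4]),
]
final_extrinsic = np.mean(np.stack(frame_extrinsics), axis=0)
```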
According to the calibration method provided by the embodiments of the present disclosure, after the point cloud data and the reference image are acquired, the point cloud data can be screened and the reference image processed to obtain the target point cloud data corresponding to the target space region and the image to be matched. Then, under different external parameters to be screened, the target point cloud data and the image to be matched, which correspond to the same target space region, are matched, and the target external parameters are determined based on the matching results.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiments of the present disclosure further provide a calibration device corresponding to the calibration method, and since the principle of solving the problem of the device in the embodiments of the present disclosure is similar to that of the calibration method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 10, a schematic diagram of a calibration device according to an embodiment of the disclosure is provided, where the device includes: an acquisition module 1001, a first determination module 1002, a second determination module 1003, and a matching module 1004; wherein,
an acquisition module 1001, configured to acquire point cloud data and a reference image that are synchronously acquired by a radar and an image acquisition device for the same scene respectively;
a first determining module 1002, configured to determine an image to be matched in the reference image based on a value of a pixel point in the reference image, where the image to be matched is an image corresponding to a target spatial region;
a second determining module 1003, configured to determine two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device based on an initial external parameter, and screen target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of each radar scanning point;
the matching module 1004 is configured to match the image to be matched with the target point cloud data based on a plurality of external parameters to be screened, and determine a target external parameter from the plurality of external parameters to be screened based on a matching result.
In a possible implementation manner, the first determining module 1002 is configured to, when determining, based on the value of the pixel point in the reference image, an image to be matched in the reference image:
performing binarization processing on the reference image to obtain a binarized image corresponding to the reference image;
for each row in the binarized image, determining, as a first number, the number of pixel points in the row whose pixel value is a first preset value; the first preset value is 255 or 0;
and determining the image to be matched in the reference image based on the first number corresponding to each row in the binarized image.
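A possible sketch of this row-counting step, assuming a grayscale reference image and a hypothetical binarization threshold of 128 (the patent text does not fix the threshold):

```python
import numpy as np

def first_numbers_per_row(gray, first_preset_value=255, threshold=128):
    """Binarize the reference image, then count per row the pixels whose
    value equals the first preset value (255 or 0)."""
    binary = np.where(gray >= threshold, 255, 0)
    return (binary == first_preset_value).sum(axis=1)  # one first number per row
```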
In a possible implementation manner, the first determining module 1002 is configured to, when determining, based on the first number corresponding to each row in the binarized image, the image to be matched in the reference image:
taking, as a dividing line of the binarized image, a first target row whose corresponding first number and position meet a first preset condition, and dividing the binarized image into two region images based on the dividing line;
and taking, of the two region images, the region image with more pixel points whose pixel value is 255 as the image to be matched.
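This splitting step might look as follows, where dividing_row stands for the first target row selected by the (unspecified) first preset condition; the helper name is hypothetical:

```python
import numpy as np

def pick_image_to_match(binary, dividing_row):
    """Split the binarized image at the dividing line and keep, as the
    image to be matched, the region image with more pixels equal to 255."""
    top, bottom = binary[:dividing_row], binary[dividing_row:]
    return top if (top == 255).sum() > (bottom == 255).sum() else bottom
```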
In a possible implementation manner, the second determining module 1003 is configured to, when determining, based on the initial external parameter, the two-dimensional coordinates of each radar scanning point in the point cloud data under the pixel coordinate system corresponding to the image acquisition device:
based on the initial external parameters, converting the first position coordinates of each radar scanning point under a radar coordinate system to a world coordinate system with the image acquisition device as a coordinate origin, and obtaining the second position coordinates of each radar scanning point;
converting the second position coordinates of each radar scanning point into an image coordinate system corresponding to the image acquisition device based on the internal reference of the image acquisition device to obtain third position coordinates of the radar scanning points;
and determining the two-dimensional coordinates of each radar scanning point under the pixel coordinate system based on the third position coordinates of the radar scanning points.
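Under the usual pinhole model, this chain of transformations can be sketched as below, assuming a 4x4 extrinsic matrix from the radar frame to the camera-centered frame and a 3x3 intrinsic matrix; lens distortion is ignored for brevity:

```python
import numpy as np

def project_points(points_radar, extrinsic, intrinsic):
    """Project radar scanning points (N, 3) to two-dimensional pixel coordinates."""
    n = points_radar.shape[0]
    homo = np.hstack([points_radar, np.ones((n, 1))])   # homogeneous (N, 4)
    cam = (extrinsic @ homo.T).T[:, :3]                  # second position coordinates
    img = (intrinsic @ cam.T).T                          # third position coordinates
    return img[:, :2] / img[:, 2:3]                      # perspective division -> (u, v)
```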
In a possible implementation manner, the second determining module 1003 is configured to, when screening out the target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of the radar scanning points:
determining corresponding rows of each radar scanning point in the two-dimensional projection image of the point cloud data based on the two-dimensional coordinates of each radar scanning point;
for each row of the two-dimensional projection image, determining, as a second number, the number of radar scanning points in the row;
and screening out target point cloud data corresponding to the target space region from the point cloud data based on the second number corresponding to each row in the two-dimensional projection image.
In a possible implementation manner, the second determining module 1003 is configured to, when screening out the target point cloud data corresponding to the target space region from the point cloud data based on the second number corresponding to each row in the two-dimensional projection image:
taking, as a dividing line of the two-dimensional projection image, a second target row whose corresponding second number and position meet a second preset condition, and dividing the point cloud data into two parts of point cloud data based on the dividing line;
and taking, of the two parts of point cloud data, the part with fewer points as the target point cloud data.
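Combining the two steps above, a sketch of the point cloud screening might read as follows; dividing_row stands in for the second target row chosen by the (unspecified) second preset condition, and uv holds the projected pixel coordinates:

```python
import numpy as np

def target_point_cloud(points, uv, dividing_row):
    """Split the point cloud at the dividing line of its two-dimensional
    projection and keep the part with fewer points."""
    rows = np.round(uv[:, 1]).astype(int)
    # second number: radar scanning points per row of the projection image;
    # the second preset condition (not fixed by the text) would be evaluated on it
    second_number = np.bincount(rows[rows >= 0])
    above = rows < dividing_row
    part_a, part_b = points[above], points[~above]
    return part_a if len(part_a) < len(part_b) else part_b
```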
In a possible implementation manner, the matching module 1004 is further configured to determine the plurality of external parameters to be screened according to the following steps:
adjusting the initial external parameters a plurality of times according to a preset adjustment scale, one external parameter to be screened being obtained after each adjustment.
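The text leaves the number and pattern of adjustments open; one common realization is a grid around the initial value, sketched here under that assumption with a hypothetical half-width of one step per degree of freedom:

```python
import itertools
import numpy as np

def candidate_extrinsics(initial, scale, steps=1):
    """Yield external parameters to be screened by offsetting each of the
    six components of the initial extrinsic (a 6-vector) by multiples of
    the preset adjustment scale (3**6 = 729 candidates for steps=1)."""
    offsets = np.arange(-steps, steps + 1) * scale
    initial = np.asarray(initial, dtype=float)
    for delta in itertools.product(offsets, repeat=6):
        yield initial + np.asarray(delta)
```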
In a possible implementation manner, when matching the image to be matched with the target point cloud data based on the plurality of external parameters to be screened and determining a target external parameter from the plurality of external parameters to be screened based on a matching result, the matching module 1004 is configured to:
for any external parameter to be screened, determining, based on that external parameter to be screened, the two-dimensional coordinates of each target radar scanning point in the target point cloud data under the pixel coordinate system;
projecting the target radar scanning points onto the image to be matched based on the two-dimensional coordinates of the target radar scanning points, and determining the target number of target radar scanning points located in a target position area formed by pixel points whose pixel value is a second preset value; the second preset value is 255 or 0;
and determining the target external parameter from the plurality of external parameters to be screened based on the target numbers corresponding to the plurality of external parameters to be screened.
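Putting the pieces together, the per-candidate scoring could be sketched as follows, reusing the hypothetical project_points helper from earlier; the resulting target number is then fed to the max/min selection described above:

```python
import numpy as np

def target_number(extrinsic, target_points, image_to_match, intrinsic,
                  second_preset_value):
    """Project the target radar scanning points under one candidate
    extrinsic and count those landing on pixels whose value equals
    the second preset value."""
    uv = np.round(project_points(target_points, extrinsic, intrinsic)).astype(int)
    h, w = image_to_match.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[inside]
    return int((image_to_match[uv[:, 1], uv[:, 0]] == second_preset_value).sum())
```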
According to the calibration device provided by the embodiments of the present disclosure, after the point cloud data and the reference image are acquired, the point cloud data can be screened and the reference image processed to obtain the target point cloud data corresponding to the target space region and the image to be matched; then, under different external parameters to be screened, the target point cloud data and the image to be matched, both corresponding to the same target space region, are matched, and the target external parameter is determined based on the matching result.
For the processing flow of each module in the device and the interaction flow between the modules, reference may be made to the related descriptions in the above method embodiments; details are not repeated here.
Based on the same technical concept, the embodiments of the present disclosure also provide a computer device. Referring to fig. 11, a schematic structural diagram of a computer device 1100 according to an embodiment of the disclosure includes a processor 1101, a memory 1102, and a bus 1103. The memory 1102 is used for storing execution instructions and includes an internal memory 11021 and an external memory 11022. The internal memory 11021 is used for temporarily storing operation data in the processor 1101 and data exchanged with the external memory 11022 such as a hard disk; the processor 1101 exchanges data with the external memory 11022 through the internal memory 11021. When the computer device 1100 operates, the processor 1101 and the memory 1102 communicate through the bus 1103, so that the processor 1101 executes the following instructions:
acquiring point cloud data and a reference image which are synchronously acquired by a radar and an image acquisition device for the same scene respectively;
determining an image to be matched in the reference image based on the value of the pixel point in the reference image, wherein the image to be matched is an image corresponding to a target space region;
determining two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device based on initial external parameters, and screening target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of each radar scanning point;
and matching the image to be matched with the target point cloud data based on a plurality of external parameters to be screened, and determining target external parameters from the plurality of external parameters to be screened based on a matching result.
The disclosed embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the calibration method described in the method embodiments above. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure further provide a computer program product carrying program code, where the instructions included in the program code may be used to perform the steps of the calibration method described in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments, which are not repeated here.
The above-mentioned computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and device described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features. Such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall all be covered within its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A calibration method, comprising:
acquiring point cloud data and a reference image which are synchronously acquired by a radar and an image acquisition device for the same scene respectively;
determining an image to be matched in the reference image based on the value of the pixel point in the reference image, wherein the image to be matched is an image corresponding to a target space region;
determining two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device based on initial external parameters, and screening target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of each radar scanning point;
matching the images to be matched with the target point cloud data based on a plurality of external parameters to be screened, and determining target external parameters from the plurality of external parameters to be screened based on a matching result;
the step of matching the image to be matched with the target point cloud data based on the plurality of external parameters to be screened, and determining the target external parameters from the plurality of external parameters to be screened based on the matching result comprises the following steps:
for any external parameter to be screened, determining, based on that external parameter to be screened, the two-dimensional coordinates of each target radar scanning point in the target point cloud data under the pixel coordinate system; projecting the target radar scanning points onto the image to be matched based on the two-dimensional coordinates of the target radar scanning points, and determining the target number of target radar scanning points located in a target position area formed by pixel points whose pixel value is a second preset value, the second preset value being 255 or 0; and determining the target external parameter from the plurality of external parameters to be screened based on the target numbers corresponding to the plurality of external parameters to be screened.
2. The method of claim 1, wherein determining the image to be matched in the reference image based on the values of the pixels in the reference image comprises:
performing binarization processing on the reference image to obtain a binarized image corresponding to the reference image;
for each row in the binarized image, determining, as a first number, the number of pixel points in the row whose pixel value is a first preset value; the first preset value is 255 or 0;
and determining the image to be matched in the reference image based on the first number corresponding to each row in the binarized image.
3. The method according to claim 2, wherein determining the image to be matched in the reference image based on the first number corresponding to each row in the binarized image comprises:
taking, as a dividing line of the binarized image, a first target row whose corresponding first number and position meet a first preset condition, and dividing the binarized image into two region images based on the dividing line;
and taking, of the two region images, the region image with more pixel points whose pixel value is 255 as the image to be matched.
4. The method according to claim 1, wherein determining, based on the initial external parameters, two-dimensional coordinates of each radar scanning point in the point cloud data in a pixel coordinate system corresponding to the image acquisition device includes:
based on the initial external parameters, converting the first position coordinates of each radar scanning point in a radar coordinate system into a world coordinate system with the image acquisition device as a coordinate origin to obtain second position coordinates of each radar scanning point;
converting the second position coordinates of each radar scanning point into an image coordinate system corresponding to the image acquisition device based on the internal reference of the image acquisition device to obtain third position coordinates of the radar scanning points;
and determining the two-dimensional coordinates of each radar scanning point under the pixel coordinate system based on the third position coordinates of the radar scanning points.
5. The method according to claim 1 or 4, wherein the step of screening out target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of the respective radar scanning points includes:
determining corresponding rows of each radar scanning point in the two-dimensional projection image of the point cloud data based on the two-dimensional coordinates of each radar scanning point;
for each row of the two-dimensional projection image, determining, as a second number, the number of radar scanning points in the row;
and screening out target point cloud data corresponding to the target space region from the point cloud data based on the second number corresponding to each row in the two-dimensional projection image.
6. The method of claim 5, wherein screening out the target point cloud data corresponding to the target space region from the point cloud data based on the second number corresponding to each row in the two-dimensional projection image comprises:
taking, as a dividing line of the two-dimensional projection image, a second target row whose corresponding second number and position meet a second preset condition, and dividing the point cloud data into two parts of point cloud data based on the dividing line;
and taking, of the two parts of point cloud data, the part with fewer points as the target point cloud data.
7. The method of claim 1, further comprising determining the plurality of external parameters to be screened according to the steps of:
adjusting the initial external parameters a plurality of times according to a preset adjustment scale, one external parameter to be screened being obtained after each adjustment.
8. A calibration device, comprising:
the acquisition module is used for acquiring point cloud data and reference images which are synchronously acquired by the radar and the image acquisition device aiming at the same scene respectively;
the first determining module is used for determining an image to be matched in the reference image based on the value of the pixel point in the reference image, wherein the image to be matched is an image corresponding to a target space region;
the second determining module is used for determining two-dimensional coordinates of each radar scanning point in the point cloud data under a pixel coordinate system corresponding to the image acquisition device based on initial external parameters, and screening target point cloud data corresponding to the target space region from the point cloud data based on the two-dimensional coordinates of each radar scanning point;
the matching module is used for matching the image to be matched with the target point cloud data based on a plurality of external parameters to be screened, and for determining a target external parameter from the plurality of external parameters to be screened based on a matching result; when doing so, the matching module is used for:
for any external parameter to be screened, determining, based on that external parameter to be screened, the two-dimensional coordinates of each target radar scanning point in the target point cloud data under the pixel coordinate system; projecting the target radar scanning points onto the image to be matched based on the two-dimensional coordinates of the target radar scanning points, and determining the target number of target radar scanning points located in a target position area formed by pixel points whose pixel value is a second preset value, the second preset value being 255 or 0; and determining the target external parameter from the plurality of external parameters to be screened based on the target numbers corresponding to the plurality of external parameters to be screened.
9. A computer device, comprising: a processor, a memory and a bus, said memory storing machine readable instructions executable by said processor, said processor and said memory communicating over the bus when the computer device is running, said machine readable instructions when executed by said processor performing the steps of the calibration method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the calibration method according to any of claims 1 to 7.
CN202110347822.7A 2021-03-31 2021-03-31 Calibration method, calibration device, computer equipment and storage medium Active CN113077523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110347822.7A CN113077523B (en) 2021-03-31 2021-03-31 Calibration method, calibration device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113077523A CN113077523A (en) 2021-07-06
CN113077523B true CN113077523B (en) 2023-11-24

Family

ID=76614193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110347822.7A Active CN113077523B (en) 2021-03-31 2021-03-31 Calibration method, calibration device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113077523B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439561B (en) * 2022-10-25 2023-03-10 杭州华橙软件技术有限公司 Robot sensor calibration method, robot and storage medium
CN115840196B (en) * 2023-02-24 2023-07-21 新石器慧通(北京)科技有限公司 Laser radar inter-calibration method and device based on entity calibration

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104142157A (en) * 2013-05-06 2014-11-12 北京四维图新科技股份有限公司 Calibration method, device and equipment
CN109521403A (en) * 2017-09-19 2019-03-26 百度在线网络技术(北京)有限公司 The parameter calibration method and device of multi-line laser radar, equipment and readable medium
CN110021039A (en) * 2018-11-15 2019-07-16 山东理工大学 The multi-angle of view material object surface point cloud data initial registration method of sequence image constraint
CN110456330A (en) * 2019-08-27 2019-11-15 中国人民解放军国防科技大学 Method and system for automatically calibrating external parameter without target between camera and laser radar
CN111965624A (en) * 2020-08-06 2020-11-20 北京百度网讯科技有限公司 Calibration method, device and equipment for laser radar and camera and readable storage medium
CN112085801A (en) * 2020-09-08 2020-12-15 清华大学苏州汽车研究院(吴江) Calibration method for three-dimensional point cloud and two-dimensional image fusion based on neural network
WO2020259506A1 (en) * 2019-06-27 2020-12-30 华为技术有限公司 Method and device for determining distortion parameters of camera
KR20210022016A (en) * 2019-08-02 2021-03-02 네이버랩스 주식회사 Method and system for improving depth information of feature points using camera and lidar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on joint calibration method of lidar and camera based on point cloud center; Kang Guohua; Zhang Qi; Zhang Han; Xu Weizheng; Zhang Wenhao; Chinese Journal of Scientific Instrument (Issue 12); full text *

Also Published As

Publication number Publication date
CN113077523A (en) 2021-07-06

Similar Documents

Publication Publication Date Title
CN111179358B (en) Calibration method, device, equipment and storage medium
JP6722323B2 (en) System and method for imaging device modeling and calibration
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN110689581B (en) Structured light module calibration method, electronic device and computer readable storage medium
CN111062378B (en) Image processing method, model training method, target detection method and related device
Kersten et al. Image-based low-cost systems for automatic 3D recording and modelling of archaeological finds and objects
CN113592989B (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
JP3983573B2 (en) Stereo image characteristic inspection system
CN111383279B (en) External parameter calibration method and device and electronic equipment
CN107077743A (en) System and method for the dynamic calibration of array camera
WO2021098448A1 (en) Sensor calibration method and device, storage medium, calibration system, and program product
CN105654547B (en) Three-dimensional rebuilding method
CN113077523B (en) Calibration method, calibration device, computer equipment and storage medium
CN109658497B (en) Three-dimensional model reconstruction method and device
WO2022217988A1 (en) Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program
CN112816949A (en) Calibration method and device of sensor, storage medium and calibration system
CN110738703A (en) Positioning method and device, terminal and storage medium
CN112927307A (en) Calibration method, calibration device, electronic equipment and storage medium
Wenzel et al. High-resolution surface reconstruction from imagery for close range cultural Heritage applications
CN115359130A (en) Radar and camera combined calibration method and device, electronic equipment and storage medium
JP7432793B1 (en) Mapping methods, devices, chips and module devices based on three-dimensional point clouds
CN115546379A (en) Data processing method and device and computer equipment
US8818124B1 (en) Methods, apparatus, and systems for super resolution of LIDAR data sets
CN112446926A (en) Method and device for calibrating relative position of laser radar and multi-eye fisheye camera
CN117392237A (en) Robust laser radar-camera self-calibration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant