CN117315046A - Method and device for calibrating looking-around camera, electronic equipment and storage medium - Google Patents

Method and device for calibrating looking-around camera, electronic equipment and storage medium

Info

Publication number
CN117315046A
CN117315046A
Authority
CN
China
Prior art keywords
target
corner
original
image
corner points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311283921.9A
Other languages
Chinese (zh)
Inventor
Li Haibo (李海波)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Ningbo Geely Automobile Research and Development Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN202311283921.9A priority Critical patent/CN117315046A/en
Publication of CN117315046A publication Critical patent/CN117315046A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application discloses a method, a device, electronic equipment and a storage medium for calibrating a looking-around camera, relating to the technical field of intelligent driving and intended to improve the calibration precision, calibration efficiency and calibration success rate of the looking-around camera. The method comprises the following steps: firstly, acquiring an original image of a target vehicle, and performing a de-distortion operation on the original image to obtain a target image; then, identifying all target corner points in the target image based on the geometric features of the original corner points; further, calculating a target external parameter according to the pixel values of the target corner points, the prior world coordinate values and the internal parameter; and finally, calculating a re-projection error of the target image and a bird's-eye view seam error of an adjacent target camera based on the original image, the target external parameter and the internal parameter, and determining that the target camera is calibrated if the re-projection error and the bird's-eye view seam error meet preset requirements.

Description

Method and device for calibrating looking-around camera, electronic equipment and storage medium
Technical Field
The application mainly relates to the technical field of intelligent driving, in particular to a method and a device for calibrating a looking-around camera, electronic equipment and a storage medium.
Background
With the rapid development of intelligent driving automobiles, demand in the automobile industry for vision-based assisted intelligent driving functions has increased rapidly, so that multi-camera looking-around systems (360-degree, 540-degree and the like) have become a conventional configuration of intelligent automobiles. Parameter calibration of the multiple cameras is the basis of the looking-around system, and the quality of the parameter calibration directly affects the final effect of the driving-assistance system.
Currently, the process of calibrating the parameters of a looking-around camera is generally divided into two parts. The first is the conversion from the camera coordinate system to the image coordinate system, a conversion from three-dimensional points to two-dimensional points involving the internal parameters of the looking-around camera; the internal parameters describe the physical characteristics of the camera, such as focal length and resolution. The second is the conversion from the world coordinate system to the camera coordinate system, a conversion from three-dimensional points to three-dimensional points involving the external parameters of the looking-around camera; the external parameters determine the position and orientation of the camera in three-dimensional space, such as rotation parameters and translation parameters.
In the related art, the external parameters of the looking-around camera are usually calibrated with a checkerboard: checkerboards are arranged in a calibration workshop in the front, rear, left and right directions of the vehicle after the vehicle is stopped at a fixed position; the parameters of the checkerboard are recorded; and the external parameters of the looking-around camera are calibrated by combining parameters such as the physical coordinates of the checkerboard corner points.
Disclosure of Invention
The application provides a method, a device, electronic equipment and a storage medium for calibrating a looking-around camera, which are used for improving the precision and the success rate of calibrating the looking-around camera.
In a first aspect, the present application provides a method for calibrating a looking-around camera, including:
acquiring an original image of a target vehicle, and performing de-distortion operation on the original image to obtain a target image;
identifying all target corner points in the target image based on the geometric features of the original corner points; wherein the geometric features characterize the collinearity and symmetry relations of the original corner points;
calculating a target external parameter according to the pixel values of the target corner points, the prior world coordinate values and the internal parameter;
and calculating a re-projection error of the target image and a bird's-eye view seam error of an adjacent target camera based on the original image, the target external parameter and the internal parameter, and determining that the target camera is calibrated if the re-projection error and the bird's-eye view seam error meet preset requirements.
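Because the calibration targets lie on the ground plane (Z = 0), the extrinsic computation recited above reduces to planar pose estimation from pixel/world corner correspondences. The following NumPy sketch shows one standard way to do this, decomposing a DLT-estimated homography; it is an illustrative method, not necessarily the computation used in the application, and all function names are ours:

```python
import numpy as np

def homography_dlt(world_xy, pixels):
    """Estimate the 3x3 homography mapping ground-plane (X, Y) to pixels via DLT."""
    rows = []
    for (X, Y), (u, v) in zip(world_xy, pixels):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)          # null vector of the DLT system
    return H / H[2, 2]

def extrinsics_from_homography(H, K):
    """Decompose H = s * K [r1 r2 t] into a rotation R and translation t."""
    B = np.linalg.inv(K) @ H
    # fix the unknown scale; the camera looks at the plane, so t_z > 0
    lam = np.sign(B[2, 2]) / np.linalg.norm(B[:, 0])
    r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)       # project onto the nearest rotation matrix
    return U @ Vt, t
```

With noise-free correspondences the decomposition recovers the pose exactly; with real detections it would serve as the initial guess for a refinement step.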
In an alternative embodiment, identifying all target corner points in the target image based on the geometric features of the original corner points comprises:
for the target images respectively acquired from the front and rear directions of the target vehicle, the following operations are respectively performed:
acquiring all original corner points in a target image;
determining a plurality of parallel horizontal straight lines in the original corner points;
and taking the original corner points which are symmetrical about the center point on the straight line as target corner points.
In an alternative embodiment, identifying all target corner points in the target image based on the geometric features of the original corner points comprises:
for target images respectively acquired from left and right directions of a target vehicle, the following operations are respectively performed:
acquiring all original corner points in a target image;
detecting edge points of the original corner points, and obtaining all edge points in the target image;
obtaining a target rectangular area in the target image based on the edge points;
determining a plurality of parallel horizontal straight lines in a target rectangular area;
and taking the corner points positioned on the straight lines and the corner points at the vertices of the target rectangular area as target corner points.
In an alternative embodiment, calculating a reprojection error of the target image and a bird's-eye view seam error of an adjacent target camera based on the original image, the target external parameter and the internal parameter, and if the reprojection error and the bird's-eye view seam error meet preset requirements, determining that the target camera is calibrated, includes:
according to the target external parameter and the internal parameter, performing inverse perspective transformation on the original image to obtain a bird's-eye view of the target vehicle;
acquiring a first world coordinate of a target corner in a bird's eye view;
performing coordinate conversion according to the first world coordinate to obtain a first pixel coordinate of the target corner;
obtaining a second pixel coordinate of the target corner through a corner detection algorithm;
taking the difference between the second pixel coordinate and the first pixel coordinate to obtain a target pixel difference of the target corner point;
and if the target pixel difference is smaller than the preset threshold value, determining that the target camera calibration is completed.
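The re-projection check recited above (project the target corners with the calibrated parameters, compare with the detected pixel coordinates, and threshold the difference) can be sketched as follows; the camera parameters and the threshold are illustrative assumptions, not values from the application:

```python
import numpy as np

def project_points(world_xyz, K, R, t):
    """Project 3-D world points into the image with extrinsics (R, t) and intrinsics K."""
    cam = np.asarray(world_xyz, float) @ R.T + t   # world -> camera frame
    pix = cam @ K.T                                # camera frame -> homogeneous pixels
    return pix[:, :2] / pix[:, 2:]

def reprojection_check(world_xyz, detected_px, K, R, t, thr_px=1.0):
    """Mean pixel distance between reprojected and detected corners, plus a pass flag."""
    diff = project_points(world_xyz, K, R, t) - np.asarray(detected_px, float)
    err = float(np.linalg.norm(diff, axis=1).mean())
    return err, err < thr_px
```

A low mean error indicates that the calibrated external parameter explains the detected corners well; the threshold would be tuned to the required calibration precision.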
In an optional implementation manner, when calculating the reprojection error of the target image and the bird's-eye view seam error of the adjacent target camera based on the original image, the target external parameter and the internal parameter, and determining that the calibration of the target camera is completed if the reprojection error and the bird's-eye view seam error meet the preset requirements, the method further comprises:
according to the target external parameter and the internal parameter, performing inverse perspective transformation on the original image to obtain a bird's-eye view of the target vehicle;
acquiring a third pixel coordinate of a common view point corresponding to the common view area in the bird's-eye view; the common view area is the overlapping area of the acquisition ranges of two adjacent target cameras;
performing coordinate conversion according to the third pixel coordinate to obtain a second world coordinate of the common view point;
acquiring the Euclidean distance between the actually measured third world coordinate of the common view point and the second world coordinate;
and if the Euclidean distance is smaller than a preset threshold value, determining that the calibration of the target camera is completed; the Euclidean distance characterizes the bird's-eye view seam error of the adjacent target cameras.
In a second aspect, the present application provides a calibration device for a looking-around camera, comprising:
the processing module is used for acquiring an original image of the target vehicle, and performing de-distortion operation on the original image to obtain a target image;
the identification module is used for identifying all target corner points in the target image based on the geometric features of the original corner points; wherein the geometric features characterize the collinearity and symmetry relations of the original corner points;
the computing module is used for computing the target external parameter according to the pixel values of the target corner points, the prior world coordinate values and the internal parameter;
the determining module is used for calculating the reprojection error of the target image and the bird's-eye view seam error of the adjacent target camera based on the original image, the target external parameter and the internal parameter, and determining that the target camera is calibrated if the reprojection error and the seam error meet preset requirements.
In an alternative embodiment, when all target corner points are identified in the target image based on the geometric features of the original corner points, the identification module is specifically configured to:
for the target images respectively acquired from the front and rear directions of the target vehicle, the following operations are respectively performed:
acquiring all original corner points in the target image;
determining a plurality of parallel horizontal straight lines in the original corner points;
and taking the original corner points which are symmetrical about the center point on the straight line as target corner points.
In an alternative embodiment, when all target corner points are identified in the target image based on the geometric features of the original corner points, the identification module is specifically configured to:
for target images respectively acquired from the left and right directions of the target vehicle, the following operations are respectively performed:
acquiring all original corner points in a target image;
detecting edge points of the original corner points, and obtaining all edge points in the target image;
obtaining a target rectangular area in the target image based on the edge points;
determining a plurality of parallel horizontal straight lines in a target rectangular area;
and taking the corner points positioned on the horizontal lines and the corner points at the vertices of the target rectangular area as target corner points.
In an optional implementation manner, when calculating a reprojection error of the target image and a bird's-eye view seam error of an adjacent target camera based on the original image, the target external parameter and the internal parameter, and determining that the target camera is calibrated if the reprojection error and the bird's-eye view seam error meet preset requirements, the determining module is specifically configured to:
according to the target external parameter and the internal parameter, performing inverse perspective transformation on the original image to obtain a bird's-eye view of the target vehicle;
acquiring a first world coordinate of a target corner on a bird's eye view;
performing coordinate conversion according to the first world coordinate to obtain a first pixel coordinate of the target corner;
obtaining a second pixel coordinate of the target corner through a corner detection algorithm;
taking the difference between the second pixel coordinate and the first pixel coordinate to obtain the target pixel difference of the target corner point;
and if the target pixel difference is smaller than a preset threshold value, determining that the target camera calibration is completed.
In an optional implementation manner, when calculating the reprojection error of the target image and the bird's-eye view seam error of the adjacent target camera based on the original image, the target external parameter and the internal parameter, and determining that the calibration of the target camera is completed if the reprojection error and the bird's-eye view seam error meet preset requirements, the determining module is further configured to:
according to the target external parameter and the internal parameter, performing inverse perspective transformation on the original image to obtain a bird's-eye view of the target vehicle;
acquiring a third pixel coordinate of a common view point corresponding to the common view area in the aerial view; the common view area is an overlapping area of the acquisition range between two adjacent target cameras;
performing coordinate conversion according to the third pixel coordinate to obtain a second world coordinate of the common view point;
acquiring the Euclidean distance between the actually measured third world coordinate of the common view point and the second world coordinate;
and if the Euclidean distance is smaller than a preset threshold value, determining that the calibration of the target camera is completed; the Euclidean distance represents the bird's-eye view seam error of the adjacent target cameras.
In a third aspect, the present application provides an electronic device, including:
a memory for storing a computer program;
and the processor is used for implementing the steps of the above calibration method for the looking-around camera when executing the computer program stored in the memory.
In a fourth aspect, the present application provides a computer readable storage medium having a computer program stored therein which, when executed by a processor, implements the steps of the method for calibrating a looking-around camera as described above.
Through the technical scheme in the above-mentioned one or more embodiments of the present application, the embodiments of the present application have at least the following beneficial effects:
in the calibration method for the looking-around camera provided by the embodiment of the application, firstly, an original image of a target vehicle is obtained, and a de-distortion operation is performed on the original image to obtain the target image; the distorted lines and the like are restored through the de-distortion operation, so that the accuracy of subsequent image processing is improved. Then, all target corner points are identified in the target image based on the geometric features of the original corner points; the geometric features characterize the collinearity and symmetry relations of the original corner points. Because the selection of the target corner points is based on the geometric features of the original corner points, the problems in the related art that corner selection is inaccurate and few usable corner points are obtained, which are caused by selecting corner points manually or through connectivity, are avoided. Further, the target external parameter is calculated according to the target corner points and the internal parameter. Finally, the re-projection error of the target image is calculated based on the original image, the target external parameter and the internal parameter, and if the re-projection error meets the preset requirement, it is determined that the calibration of the target camera is completed.
By adopting the above mode, the method selects the target corner points based on the geometric features of the original corner points, and can avoid the problem that connectivity-based methods fail to identify corner points. When the target corner points are selected, their spatial distribution is automatically kept uniform and symmetrical based on the geometric features of the original corner points, which improves the calibration precision of the looking-around camera. No manual intervention is needed, so the inefficiency of incompletely automated manual corner selection is avoided, as is the problem that connectivity-based selection misses most usable corner points and causes calibration to fail. Corner points with low identification precision can also be automatically removed during selection, further improving the calibration precision. In addition, the re-projection error of the target image is calculated: through the mutual conversion between the world coordinate system and the pixel coordinate system, the difference between the converted coordinates and the actually detected coordinates fully quantifies the calibration precision of the looking-around camera.
For the technical effects that can be achieved by the second to fourth aspects and their possible implementations, reference is made to the description above of the technical effects of the first aspect and its possible implementations; the details are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It will be apparent that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort. In the drawings:
fig. 1 is a schematic implementation flow chart of a calibration method for a looking-around camera according to an embodiment of the present application;
FIG. 2 is a schematic diagram of determining a horizontal straight line according to an embodiment of the present application;
fig. 3 is a schematic view of a common view area of a looking-around camera according to an embodiment of the present application;
fig. 4 is a schematic diagram of a Delaunay triangulation network of a target corner according to an embodiment of the present application;
fig. 5 is a schematic illustration of missed detection of a target corner according to an embodiment of the present application;
fig. 6 is a schematic diagram of false detection of a target corner according to an embodiment of the present application;
fig. 7 is a schematic diagram of Delaunay triangulation network of target corner points under another target pattern according to an embodiment of the present application;
fig. 8 is a schematic diagram of a calibration device for a looking-around camera according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, embodiments of the technical solutions of the present application. All other embodiments obtained by a person of ordinary skill in the art without inventive effort, based on the embodiments described in the present application, fall within the scope of protection of the technical solutions of the present application.
It should be noted that "a plurality of" is understood as "at least two" in the description of the present application. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A exists alone, A and B exist together, or B exists alone. "A is connected with B" may represent two cases: A is directly connected with B, or A is connected with B through C. In addition, in the description of the present application, the words "first", "second" and the like are used merely for distinguishing between descriptions and cannot be construed as indicating or implying relative importance or order.
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
With the rapid development of intelligent driving automobiles, demand in the automobile industry for vision-based assisted intelligent driving functions has increased rapidly, so that multi-camera looking-around systems (360-degree, 540-degree and the like) have become a conventional configuration of intelligent automobiles. Parameter calibration of the multiple cameras is the basis of the looking-around system, and the quality of the parameter calibration directly affects the final effect of the driving-assistance system.
Currently, the process of calibrating the parameters of a looking-around camera is generally divided into two parts. The first is the conversion from the camera coordinate system to the image coordinate system, a conversion from three-dimensional points to two-dimensional points involving the internal parameters of the camera; the internal parameters describe the physical characteristics of the camera, such as focal length and resolution. The second is the conversion from the world coordinate system to the camera coordinate system, a conversion from three-dimensional points to three-dimensional points involving the external parameters of the looking-around camera; the external parameters determine the position and orientation of the camera in three-dimensional space, such as rotation parameters and translation parameters.
In the related art, the external parameters of the looking-around camera are usually calibrated with a checkerboard: checkerboards are arranged in a calibration workshop in the front, rear, left and right directions of the vehicle after the vehicle is stopped at a fixed position; parameters such as the physical coordinates of the checkerboard corner points are recorded; and the external parameters of the looking-around camera are calibrated accordingly.
In view of the above, in order to solve the above technical problems, the present application provides a calibration method for a looking-around camera, which includes: firstly, acquiring an original image of a target vehicle, and performing a de-distortion operation on the original image to obtain a target image; then, identifying all target corner points in the target image based on the geometric features of the original corner points; further, calculating a target external parameter according to the target corner points and the internal parameter; and finally, calculating the re-projection error of the target image based on the original image, the target external parameter and the internal parameter, and determining that the calibration of the target camera is completed if the re-projection error meets the preset requirement. By this method, usable corner points can be selected effectively without manual selection, which avoids the problem of inaccurate corner selection as well as the small number of usable corner points obtained by connectivity-based selection.
It should be noted that the following description of the preferred embodiments of the present application is given by way of illustration and explanation only and is not intended to limit the present application; the embodiments of the present application and the features in the embodiments may be combined with each other without conflict.
Referring to fig. 1, which is a schematic implementation flow chart of a calibration method for a looking-around camera according to an embodiment of the present application:
s1: and obtaining an original image of the target vehicle, and performing de-distortion operation on the original image to obtain the target image.
In the embodiment of the application, when performing external parameter calibration on each looking-around camera in the looking-around system of the target vehicle, a checkerboard (or another target pattern, such as round holes) may be arranged around the target vehicle.
Further, the vehicle body of the target vehicle is provided with looking-around cameras; for example, looking-around cameras are mounted in the four directions of the front, rear, left and right of the vehicle body, and original images of the target vehicle containing the checkerboard can be acquired by these cameras. It can be understood that the original images are acquired from the front, rear, left and right directions of the target vehicle respectively, so that images covering all four directions of the target vehicle are obtained.
Due to the manufacturing precision and assembly process of the looking-around camera, or due to the angle, rotation, scaling and the like at the time of shooting, the acquired original image often has a certain distortion. Therefore, the de-distortion operation needs to be performed on the original image acquired by each looking-around camera, and the distorted lines in each original image are restored to obtain the target image.
By the above method, the de-distortion operation on the original image realizes viewing-angle correction of the original image, which further improves the recognition accuracy of the corner points and thus the calibration accuracy of the external parameters of the looking-around camera.
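A de-distortion step of this kind can be illustrated with the equidistant fisheye model commonly used for surround-view cameras, in which the distorted radius is theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8), theta being the angle of the incoming ray to the optical axis. The inverse has no closed form, so theta is recovered by fixed-point iteration. This is a sketch under that model assumption, not the application's implementation:

```python
import numpy as np

def undistort_points_fisheye(pts, K, D, iters=20):
    """Undistort pixel points under the equidistant fisheye model.

    pts: Nx2 distorted pixels; K: 3x3 intrinsic matrix; D: (k1, k2, k3, k4).
    """
    k1, k2, k3, k4 = D
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts = np.asarray(pts, float)
    x = (pts[:, 0] - cx) / fx
    y = (pts[:, 1] - cy) / fy
    theta_d = np.hypot(x, y)          # distorted radius equals theta_d in this model
    theta = theta_d.copy()
    for _ in range(iters):            # fixed point: theta <- theta_d / (1 + k1*t^2 + ...)
        t2 = theta * theta
        theta = theta_d / (1 + k1 * t2 + k2 * t2**2 + k3 * t2**3 + k4 * t2**4)
    # pinhole (undistorted) radius is tan(theta); rescale the normalized coords
    scale = np.where(theta_d > 1e-9, np.tan(theta) / theta_d, 1.0)
    return np.column_stack([x * scale * fx + cx, y * scale * fy + cy])
```

Applying the corresponding remap to the whole image yields the de-distorted target image in which straight lines are restored.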
S2: all target corner points are identified in the target image based on the geometrical characteristics of the original corner points.
In the embodiment of the application, the geometric features characterize the collinearity and symmetry relations of the original corner points. Because the angles of view of the front, rear, left and right looking-around cameras are different, different methods are adopted for the looking-around cameras in different directions to acquire the target corner points in the original image.
In an alternative embodiment, the following operations are performed for target images respectively acquired from the front and rear directions of the target vehicle, respectively:
first, an original image is converted into a grayscale image, and then a region of interest (Region of Interest, ROI) is delineated on the grayscale image and a mask is set. Further, using corner detection operators (such as the Tomas operator and the Sobel operator) in the ROI area, the original corner and the coordinate pixel value of the original corner are detected. In order to improve the angular point identification precision, the sub-pixel angular point detection is carried out by setting the size of a Gaussian window, the minimum value of residual convergence and the maximum iteration number, and the sub-pixel purification is carried out on the pixel coordinates of the original angular point.
Next, a random sample consensus (Random Sample Consensus, RANSAC) algorithm is used to determine a plurality of mutually parallel horizontal lines from the processed original corner points. Referring to fig. 2, two original corner points are randomly selected and connected to form a hypothesised fitting line L; points within a certain distance threshold of L are then assigned to the line, yielding a complete line. When determining a horizontal line, the slope of the line also needs to be judged: if the slope is larger than a certain threshold value, the line does not meet the requirement. By this method, a plurality of parallel horizontal lines can be determined among the original corner points.
Further, center points are determined on the plurality of horizontal straight lines respectively, and original corner points symmetrical about the respective center points on the plurality of horizontal straight lines are taken as target corner points.
In the embodiment of the application, when selecting the center point, the intersection of each horizontal straight line with the vertical projection of the optical axis of the looking-around camera onto the ground is first determined, and the original corner point closest to this intersection is taken as the center point.
In an alternative embodiment, the following operations are performed for the target images acquired from the left and right directions of the target vehicle, respectively:
First, all original corner points in the target image are acquired. Specifically, to enhance the image information so that texture details, edges, and similar features show more clearly, the target image is first converted into a grayscale image. Then, the ROI is delineated on the grayscale image and a mask is set, which reduces processing time and improves processing accuracy. Further, image gradient calculation and gradient threshold judgment are performed on each pixel within the ROI of the grayscale image, thereby detecting the edge points within the ROI.
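The gradient-threshold edge detection just described can be sketched with central differences (the function name and threshold are illustrative; real pipelines typically use Sobel kernels with smoothing):

```python
import numpy as np

def gradient_edge_points(gray, thresh=50.0):
    """Central-difference gradient magnitude with a threshold, standing in
    for the edge-point detection performed inside the ROI."""
    g = np.asarray(gray, dtype=float)
    gx = np.zeros_like(g); gy = np.zeros_like(g)
    gx[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0
    gy[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0
    mag = np.hypot(gx, gy)                 # gradient magnitude per pixel
    ys, xs = np.nonzero(mag > thresh)      # gradient threshold judgment
    return np.stack([xs, ys], axis=1)      # (x, y) edge coordinates
```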
In addition, the ROI of the grayscale image is divided into a grid; straight lines are then fitted to the edge points within the grid cells, the intersection points of the two pairs of intersecting lines are obtained, and a target rectangular area is obtained in the target image from these intersection points.
In the embodiment of the present application, after the target rectangular area is obtained in the target image, a RANSAC algorithm is also adopted to determine a plurality of parallel horizontal lines in the target rectangular area.
After the target rectangular area and the plurality of horizontal straight lines are determined, four vertexes of the target rectangular area and original corner points positioned on the plurality of horizontal straight lines are taken as target corner points.
S3: calculating the target external parameters according to the pixel values of the target corner points, the prior world coordinate values, and the internal parameters.
In an alternative embodiment, referring to fig. 3, looking-around cameras are arranged in the front, rear, left and right directions of the target vehicle, where A is the front looking-around camera, B the rear, C the left, and D the right; in this embodiment, each looking-around camera may be a fisheye camera with a 180-degree, 360-degree or other field of view. Each of the four cameras A, B, C and D acquires its own field of view, and a common view region exists between every two adjacent cameras. For example, there is a common view region between camera A and camera C, and between camera A and camera D.
Further, the pixel values of the pixel coordinates corresponding to the target corner points and their prior world coordinate values are determined. In the embodiment of the application, when the target external parameters are calculated, the target corner points comprise the common-view corner points of the common view areas and the single-camera corner points of each field of view of the target camera. The target external parameters are then calculated based on the pixel values of the pixel coordinates corresponding to the target corner points, the prior world coordinate values, and the internal parameters of the looking-around camera provided by its manufacturer.
In the embodiment of the application, the calibration of the looking-around camera mainly performs pose calculation through the conversion of the target corner points between the world coordinate system and the pixel coordinate system.
Specifically, the conversion relation between the world coordinate system and the pixel coordinate system is as follows:

s · [u, v, 1]^T = K · [R | t] · [X_w, Y_w, Z_w, 1]^T

where K represents the internal parameters of the looking-around camera, [R | t] represents the external parameters of the looking-around camera, (u, v) is the pixel coordinate, (X_w, Y_w, Z_w) is the world coordinate, and s is a scale factor.
In the embodiment of the present application, after the pixel coordinates corresponding to the target corner points are obtained, knowing the coordinates of the target corner points in the world coordinate system, the pixel coordinates of the common-view corner points on the image plane, and the internal parameters, the target external parameters of the looking-around camera may be solved by PnP (Perspective-n-Point).
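Because the prior world coordinates of the target corner points lie on the ground plane (Z = 0), a planar special case of the PnP solution can be sketched: estimate the plane-to-image homography by DLT, then factor K⁻¹H into rotation columns and a translation. The function name and example values are illustrative; production code would use a full PnP solver followed by the refinement described next:

```python
import numpy as np

def planar_pnp(obj_xy, img_pts, K):
    """Camera pose from points on the Z = 0 ground plane: DLT homography,
    then factor K^-1 H = lam * [r1 r2 t]. Noise handling (orthogonalizing
    r1, r2) is omitted for brevity."""
    A = []
    for (X, Y), (u, v) in zip(obj_xy, img_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)              # homography up to scale/sign
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])   # fix the scale via ||r1|| = 1
    if M[2, 2] * lam < 0:                 # keep the target in front of the camera
        lam = -lam
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    R = np.stack([r1, r2, np.cross(r1, r2)], axis=1)
    return R, t
```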
In the embodiment of the application, after the target external parameters of the looking-around camera are obtained through the PnP solution, an optimizer can be further constructed to refine the target external parameters.
Specifically, the obtained target external parameters comprise three rotational degrees of freedom and three translational degrees of freedom. A loss function is constructed and iterated to convergence over these six degrees of freedom: the 3D points are transformed into the camera coordinate system, normalized, and projected into the image coordinate system, and the error between the projected points and the observed image points is calculated. Optimizer parameters and an optimization method (such as the Levenberg-Marquardt (LM) algorithm) are set, a solver is invoked for optimization, and the optimized parameters are taken as the final rotational and translational degrees of freedom.
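The project-and-compare loop above can be sketched with a small Gauss-Newton solver. To keep the example short, only the three translational degrees of freedom are updated (rotation held fixed); a real LM optimizer would update all six, with damping. Function name, step tolerances and the numerical Jacobian are illustrative:

```python
import numpy as np

def refine_translation(obj_pts, img_pts, K, R, t0, iters=50):
    """Gauss-Newton refinement of the translation part of the external
    parameters by minimizing the reprojection error."""
    t = np.asarray(t0, dtype=float).copy()
    img_pts = np.asarray(img_pts, dtype=float)
    for _ in range(iters):
        # project: world -> camera coords -> normalized -> pixel coords
        Xc = obj_pts @ R.T + t
        proj = Xc @ K.T
        proj = proj[:, :2] / proj[:, 2:3]
        r = (proj - img_pts).ravel()       # residual vector, length 2N
        # numerical Jacobian of the residuals w.r.t. the 3 translation terms
        J = np.zeros((r.size, 3))
        eps = 1e-6
        for k in range(3):
            tp = t.copy(); tp[k] += eps
            Xp = obj_pts @ R.T + tp
            pp = Xp @ K.T
            pp = pp[:, :2] / pp[:, 2:3]
            J[:, k] = ((pp - img_pts).ravel() - r) / eps
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        t += step
        if np.linalg.norm(step) < 1e-10:   # residual convergence test
            break
    return t
```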
S4: calculating a re-projection error of the target image and a bird's-eye-view seam error of adjacent target cameras based on the original image, the target external parameters and the internal parameters, and determining that the target camera calibration is completed if the re-projection error and the seam error meet preset requirements.
Since the internal parameters of the looking-around camera are calibrated before it is mounted on the target vehicle, after the target external parameters are obtained, an inverse perspective transformation and stitching operation can be performed according to the original images, the target external parameters and the internal parameters, so as to obtain a Bird's Eye View (BEV) of the target vehicle.
Then, the first world coordinates of the target corner points in the bird's-eye view are acquired and converted into first pixel coordinates through coordinate conversion. In the embodiment of the application, the second pixel coordinates (the actually measured pixel coordinates) of the target corner points may also be obtained using a corner detection algorithm. After the first and second pixel coordinates of a target corner point are obtained, the difference between them is taken as the target pixel difference of that corner point. It is then judged whether the target pixel difference is less than a preset threshold; for example, a target pixel difference of 1 pixel against a preset threshold of 2 pixels. If the target pixel difference is smaller than the preset threshold, the target camera calibration is determined to be completed. When determining the target pixel difference, the target corner points include the common-view corner points of the common view area of the target camera and the single-camera corner points of its own field of view.
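The pass/fail check just described reduces to a per-corner pixel distance against the preset threshold. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def reprojection_check(first_px, second_px, thresh=2.0):
    """Target pixel difference per corner: Euclidean distance between the
    first pixel coordinates (projected from world coordinates) and the
    second pixel coordinates (measured by corner detection). Calibration
    passes when every difference is below the preset threshold."""
    first = np.asarray(first_px, dtype=float)
    second = np.asarray(second_px, dtype=float)
    diff = np.linalg.norm(second - first, axis=1)
    return diff, bool(np.all(diff < thresh))
```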
In an alternative embodiment, after the bird's-eye view of the target vehicle is obtained, the third pixel coordinates of the common-view corner points in the bird's-eye view are acquired and converted into second world coordinates through coordinate conversion. The Euclidean distance between the actually measured third world coordinates of the common-view corner points and the second world coordinates is then taken; if this Euclidean distance difference is smaller than a preset threshold, that is, if the bird's-eye-view seam error of the adjacent target cameras is smaller than the preset threshold, the target camera calibration is determined to be completed. It should be noted that, when determining the seam error of adjacent target cameras, only the common-view corner points between the adjacent cameras are used.
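The seam-error computation can be sketched as follows. A simple affine BEV model (world = (pixel − origin) × scale) is assumed here purely for illustration; the patent's coordinate conversion depends on the actual BEV construction:

```python
import numpy as np

def bev_seam_error(px_cam_a, px_cam_b, origin, scale):
    """Seam error of the common-view corner points: map the BEV pixel
    coordinates of the same corners as seen by two adjacent cameras into
    ground-plane world coordinates, then return the Euclidean distance
    per corner."""
    origin = np.asarray(origin, dtype=float)
    wa = (np.asarray(px_cam_a, dtype=float) - origin) * scale
    wb = (np.asarray(px_cam_b, dtype=float) - origin) * scale
    return np.linalg.norm(wa - wb, axis=1)
```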
In this way, a mapping between 3D and 2D is established for the target corner points, and the precision of the looking-around camera calibration is quantified by the pixel difference and the Euclidean distance difference.
Similarly, after the target camera is calibrated, the other three looking-around cameras on the target vehicle are calibrated in the same way.
In the embodiment of the application, after the target angular point is determined based on the geometric features of the original angular point, the external parameters of the looking-around camera can be further calculated, and the accuracy of the looking-around camera calibration is quantified by calculating the pixel difference and the Euclidean distance difference of the target angular point.
Further, in the embodiment of the present application, it may also be determined whether any target corner point has been missed or falsely detected. Referring to fig. 4, after the target corner points are determined, a Delaunay triangulation network is constructed on the target image, and the two-dimensional pixel coordinate values of the final target corner points are then located in the Delaunay triangulation network according to the known corner numbering, row and column counts, and similar information.
By constructing the Delaunay triangulation network on the target image, parameter information of the preset target pattern can be obtained; for example, the rectangular blocks in the Delaunay triangulation network, together with the number and the row and column indices of each block, are known. Referring to fig. 5, if the rectangular block of a certain row and column is missing, for example the block in the first row, first column in fig. 5, this indicates that a target corner point has been missed.
Referring to fig. 6, if rectangular blocks appear outside the Delaunay triangulation network, that is, after the network is determined in the target image from the known corner numbering, row and column counts, and target corner points, other rectangular blocks lie outside it as shown in fig. 6, this indicates a false detection among the target corner points. When the target corner points are selected based on the linear relationship of the original corner points, the No. 1 corner point does not belong to the corner points symmetric about the symmetry point on the straight line and should not appear in the Delaunay triangulation network; therefore, the No. 1 corner point in fig. 6 is a falsely detected corner point. From the Delaunay triangulation network and the prior pixel differences, the No. 2 corner point in fig. 6 is also a false detection: the pixel difference between the No. 5 and No. 4 corner points equals that between the No. 6 and No. 3 corner points, while the pixel difference between the No. 4 (or No. 3) corner point and the No. 2 corner point is too small, so the No. 2 corner point can likewise be judged a falsely detected corner point.
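The patent performs this bookkeeping on a Delaunay triangulation with known numbering; the same missed/false-detection logic can be sketched more simply by snapping detected corners onto the known rows × cols target grid. The function name, tolerance, and the grid model are illustrative assumptions:

```python
import numpy as np

def check_grid_corners(corners, rows, cols, spacing, origin=(0.0, 0.0), tol=0.25):
    """Compare detected corner pixels against the known target grid: cells
    with no nearby corner are missed detections; corners that snap to no
    cell, or to an already occupied cell, are false detections."""
    origin = np.asarray(origin, dtype=float)
    occupied = {}
    false_det = []
    for idx, c in enumerate(np.asarray(corners, dtype=float)):
        g = (c - origin) / spacing          # grid coordinates (col, row)
        q, r = np.rint(g[0]), np.rint(g[1])
        # reject corners off the grid or too far from any grid node
        if not (0 <= r < rows and 0 <= q < cols) or \
                np.linalg.norm(g - np.array([q, r])) > tol:
            false_det.append(idx)
        elif (int(r), int(q)) in occupied:
            false_det.append(idx)           # duplicate hit on an occupied cell
        else:
            occupied[(int(r), int(q))] = idx
    missed = [(r, q) for r in range(rows) for q in range(cols)
              if (r, q) not in occupied]
    return missed, false_det
```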
In addition, when extracting the target corner points based on geometric features, the target pattern provided around the target vehicle is not limited to a checkerboard; other geometric shapes, for example circular holes, may also be used.
Therefore, if the target pattern consists of circular holes, a Delaunay triangulation network may likewise be constructed on the target image when extracting the target corner points, as shown in fig. 7, and missed or false detections of the target corner points may be judged in the manner described above.
In this way, both the low calibration accuracy caused by inaccurate manual corner selection, and the failure to select most of the available corner points when corners are selected by connectivity, can be avoided. Judging missed and false detections of the target corner points improves the success rate of the looking-around camera calibration.
Based on the same inventive concept, the embodiment of the present application further provides a calibration device for an looking-around camera, as shown in fig. 8, where the device includes: a processing module 801, an identification module 802, a calculation module 803, and a determination module 804; wherein,
the processing module 801 is configured to obtain an original image of a target vehicle, and perform a de-distortion operation on the original image to obtain a target image;
An identifying module 802, configured to identify all target corner points in the target image based on geometric features of the original corner points; wherein, the geometric features represent the linear and symmetrical relation of the original angular points;
the calculating module 803 is configured to calculate a target external parameter according to the pixel value of the target angular point, the prior world coordinate value and the internal parameter;
the determining module 804 is used for calculating the re-projection error of the target image and the bird's-eye-view seam error of the adjacent target camera based on the original image, the target external parameters and the internal parameters, and determining that the target camera calibration is completed if the re-projection error and the seam error meet preset requirements.
In an alternative embodiment, when all target corner points are identified in the target image based on the geometric features of the original corner points, the identifying module 802 is specifically configured to:
for the target images respectively acquired from the front and rear directions of the target vehicle, the following operations are respectively performed:
acquiring all original corner points in a target image;
determining a plurality of parallel horizontal straight lines in the original corner points;
and taking the original corner points which are symmetrical about the center point on the straight line as target corner points.
In an alternative embodiment, when all target corner points are identified in the target image based on the geometric features of the original corner points, the identifying module 802 is specifically configured to:
For target images respectively acquired from left and right directions of a target vehicle, the following operations are respectively performed:
acquiring all original corner points in a target image;
detecting edge points of the original corner points, and obtaining all edge points in the target image;
obtaining a target rectangular area in the target image based on the edge points;
determining a plurality of parallel horizontal straight lines in a target rectangular area;
and taking the corner points positioned on the straight lines and the corner points at the vertices of the target rectangular area as target corner points.
In an optional implementation manner, when calculating the re-projection error of the target image and the bird's-eye-view seam error of the adjacent target camera based on the original image, the target external parameters and the internal parameters, and determining that the target camera calibration is completed if the re-projection error and the seam error meet the preset requirements, the determining module 804 is specifically configured to:
according to the target external parameters and the internal parameters, performing inverse perspective transformation on the original image to obtain a bird's eye view of the target vehicle;
acquiring a first world coordinate of a target corner on a bird's eye view;
performing coordinate conversion according to the first world coordinate to obtain a first pixel coordinate of the target corner;
obtaining a second pixel coordinate of the target corner through a corner detection algorithm;
The pixel value of the second pixel coordinate is differenced with the pixel value of the first pixel coordinate, and the pixel difference of the target corner point is obtained;
and if the pixel difference is smaller than the preset threshold value, determining that the calibration of the target camera is completed.
In an alternative embodiment, when calculating the re-projection error of the target image based on the original image, the target external parameter, and the internal parameter, if the re-projection error meets a preset requirement, the determining module 804 is further configured to:
according to the external reference and the internal reference of the target, performing inverse perspective transformation on the original image to obtain a bird's eye view of the target vehicle;
acquiring a third pixel coordinate of a target corner in the aerial view;
performing coordinate conversion according to the third pixel coordinates to obtain second world coordinates of the target corner points;
acquiring the actually measured third world coordinate of the target corner point and the Euclidean distance difference value between the third world coordinate and the second world coordinate;
and if the Euclidean distance difference value is smaller than the preset threshold value, determining that the calibration of the target camera is completed.
It should be noted that the above device provided in this embodiment of the present application can implement all the method steps in the embodiment of the looking-around camera calibration method and achieve the same technical effects; detailed descriptions of the parts and beneficial effects identical to those in the method embodiment are omitted herein.
Based on the same inventive concept, the embodiment of the present application further provides an electronic device, where the electronic device may implement the function of the foregoing method for calibrating an looking-around camera, and referring to fig. 9, the electronic device includes:
at least one processor 901, and a memory 902 connected to the at least one processor 901, a specific connection medium between the processor 901 and the memory 902 is not limited in the embodiment of the present application, and in fig. 9, the processor 901 and the memory 902 are connected by a bus 900 as an example. Bus 900 is shown in bold lines in fig. 9, and the manner in which other components are connected is illustrated schematically and not by way of limitation. The bus 900 may be divided into an address bus, a data bus, a control bus, etc., and is represented by only one thick line in fig. 9 for convenience of representation, but does not represent only one bus or one type of bus. Alternatively, the processor 901 may also be referred to as a controller, and the names are not limited.
In the embodiment of the present application, the memory 902 stores instructions executable by the at least one processor 901, and the at least one processor 901 may perform the above-described looking-around camera calibration method by executing the instructions stored in the memory 902. The processor 901 may implement the functions of the respective modules in the apparatus shown in fig. 8.
The processor 901 is the control center of the apparatus and may connect various parts of the entire control device using various interfaces and lines; by running or executing the instructions stored in the memory 902 and invoking the data stored in the memory 902, it performs the various functions of the apparatus and processes data, thereby monitoring the apparatus as a whole.
In one possible design, the processor 901 may include one or more processing units, and may integrate an application processor, which primarily handles the operating system, user interfaces, application programs, and the like, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 901. In some embodiments, the processor 901 and the memory 902 may be implemented on the same chip; in some embodiments, they may be implemented separately on separate chips.
The processor 901 may be a general purpose processor such as a Central Processing Unit (CPU), digital signal processor, application specific integrated circuit, field programmable gate array or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the calibrating method for the looking-around camera disclosed in the embodiment of the application can be directly embodied and executed by a hardware processor or by a combination of hardware and software modules in the processor.
The memory 902 is a non-volatile computer-readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 902 may include at least one type of storage medium, for example flash memory, a hard disk, a multimedia card, card memory, random access memory (Random Access Memory, RAM), static random access memory (Static Random Access Memory, SRAM), programmable read-only memory (Programmable Read-Only Memory, PROM), read-only memory (Read-Only Memory, ROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), magnetic memory, a magnetic disk, or an optical disk. The memory 902 may also be, without limitation, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 902 in the embodiment of the present application may also be circuitry or any other device capable of implementing a storage function, for storing program instructions and/or data.
By programming the processor 901, codes corresponding to the method for calibrating the looking-around camera described in the foregoing embodiment may be cured into the chip, so that the chip can execute the steps of the method for calibrating the looking-around camera in the embodiment shown in fig. 1 during operation. How to design and program the processor 901 is a technology well known to those skilled in the art, and will not be described in detail herein.
Based on the same inventive concept, the embodiments of the present application also provide a storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the above-described looking-around camera calibration method.
In some possible embodiments, aspects of the looking-around camera calibration method provided herein may also be implemented in the form of a program product comprising program code; when the program product is run on a device, the program code causes the control apparatus to carry out the steps of the looking-around camera calibration method according to the various exemplary embodiments of the present application described herein.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (12)

1. A method for calibrating a looking-around camera, the method comprising:
acquiring an original image of a target vehicle, and performing de-distortion operation on the original image to obtain a target image;
identifying all target corner points in the target image based on the geometric characteristics of the original corner points; wherein the geometric features characterize the linear and symmetrical relationship of the original corner points;
Calculating to obtain a target external parameter according to the pixel value of the target angular point, the priori world coordinate value and the internal parameter;
and calculating a reprojection error of the target image and a bird's-eye view seam error of an adjacent target camera based on the original image, the target external reference and the internal reference, and determining that the target camera is calibrated if the reprojection error and the bird's-eye view seam error meet preset requirements.
2. The method of claim 1, wherein the identifying all target corner points in the target image based on the geometric features of the original corner points comprises:
for the target images respectively acquired from the front and rear directions of the target vehicle, the following operations are respectively performed:
acquiring all original corner points in the target image;
determining a plurality of parallel horizontal straight lines in the original corner points;
and taking the original corner points which are symmetrical about the center point on the straight line as target corner points.
3. The method of claim 1, wherein the identifying all target corner points in the target image based on the geometric features of the original corner points comprises:
for target images respectively acquired from the left and right directions of the target vehicle, the following operations are respectively performed:
Acquiring all original corner points in the target image;
detecting edge points of the original corner points, and obtaining all edge points in the target image;
obtaining a target rectangular area in the target image based on the edge points;
determining a plurality of horizontal straight lines parallel to each other in the target rectangular area;
and taking the corner point positioned on the straight line and the corner point of the vertex of the target rectangular area as target corner points.
4. The method of claim 1, wherein the calculating the re-projection error of the target image and the bird's-eye view seam error of the adjacent target camera based on the original image, the target external reference, and the internal reference, determining that the target camera calibration is completed if the re-projection error and the bird's-eye view seam error meet a preset requirement, comprises:
according to the target external parameters and the internal parameters, performing inverse perspective transformation on the original image to obtain a bird's eye view of the target vehicle;
acquiring a first world coordinate of a target corner in the aerial view;
performing coordinate transformation according to the first world coordinate to obtain a first pixel coordinate of the target corner;
obtaining a second pixel coordinate of the target corner through a corner detection algorithm;
The pixel value of the second pixel coordinate is differenced with the pixel value of the first pixel coordinate, and the target pixel difference of the target corner point is obtained;
and if the target pixel difference is smaller than a preset threshold value, determining that the target camera calibration is completed.
5. The method of claim 1, wherein when the re-projection error of the target image and the bird's-eye view seam error of the adjacent target camera are calculated based on the original image, the target external reference, and the internal reference, if the re-projection error and the bird's-eye view seam error satisfy a preset requirement, determining that the target camera calibration is completed, further comprises:
according to the target external parameters and the internal parameters, performing inverse perspective transformation on the original image to obtain a bird's eye view of the target vehicle;
acquiring a third pixel coordinate of a common view point corresponding to the common view area in the aerial view; the common view area is an overlapping area of an acquisition range between two adjacent target cameras;
performing coordinate conversion according to the third pixel coordinate to obtain a second world coordinate of the common view point;
acquiring a third world coordinate of the actually measured common view point and a Euclidean distance difference value of the third world coordinate and the second world coordinate;
If the Euclidean distance difference value is smaller than a preset threshold value, determining that the calibration of the target camera is completed; and the Euclidean distance difference value characterizes the aerial view seam error of the adjacent target cameras.
6. A looking-around camera calibration device, the device comprising:
a processing module configured to acquire an original image of the target vehicle and perform a de-distortion operation on the original image to obtain a target image;
an identification module configured to identify all target corner points in the target image based on geometric features of the original corner points; wherein the geometric features characterize the collinearity and symmetry relationships of the original corner points;
a calculating module configured to calculate the target external parameters according to pixel values of the target corner points, prior world coordinate values, and the internal parameters;
a determining module configured to calculate the reprojection error of the target image and the bird's-eye view seam error of the adjacent target cameras based on the original image, the target external parameters, and the internal parameters, and to determine that the target camera calibration is completed if the reprojection error and the bird's-eye view seam error satisfy a preset requirement.
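The four modules of claim 6 can be sketched as a plain Python class; all names are illustrative, and every stage except the threshold check of the determining module is left as a stub:

```python
class SurroundViewCalibrator:
    """Skeleton mirroring the four modules of claim 6; a real implementation
    would fill in the de-distortion, corner selection, and extrinsic solve."""

    def undistort(self, original_image):
        # Processing module: de-distort the original image into the target image.
        raise NotImplementedError

    def find_target_corners(self, target_image):
        # Identification module: select corners by collinearity and symmetry.
        raise NotImplementedError

    def solve_extrinsics(self, corner_pixels, prior_world_coords, intrinsics):
        # Calculating module: estimate the target external parameters.
        raise NotImplementedError

    def errors_within_tolerance(self, reprojection_error, seam_error,
                                reproj_thresh, seam_thresh):
        # Determining module: both error terms must meet the preset requirement.
        return reprojection_error < reproj_thresh and seam_error < seam_thresh

# Example: a 0.5 px reprojection error and 1 cm seam error pass
# thresholds of 1 px and 3 cm (all values illustrative).
calibrator = SurroundViewCalibrator()
print(calibrator.errors_within_tolerance(0.5, 0.01, 1.0, 0.03))  # True
```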
7. The apparatus of claim 6, wherein, when identifying all target corner points in the target image based on the geometric features of the original corner points, the identification module is specifically configured to:
for the target images acquired from the front and rear directions of the target vehicle respectively, perform the following operations for each:
acquiring all original corner points in the target image;
determining a plurality of mutually parallel horizontal straight lines from the original corner points;
and taking the original corner points that are symmetrical about the center point of each straight line as target corner points.
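A minimal sketch of this collinearity-and-symmetry selection for the front/rear cameras, assuming pixel-space tolerances and a mirror test about each line's horizontal centre (the function name and tolerance values are illustrative):

```python
import numpy as np

def symmetric_corners_on_lines(corners, row_tol=2.0, sym_tol=2.0):
    """Group detected corner pixels (x, y) into near-horizontal lines and keep
    only corners that have a mirror partner about the horizontal centre of
    their own line. row_tol and sym_tol are pixel tolerances (illustrative)."""
    corners = np.asarray(corners, dtype=float)
    kept = []
    used = np.zeros(len(corners), dtype=bool)
    for i in range(len(corners)):
        if used[i]:
            continue
        # All corners lying on (roughly) the same horizontal straight line.
        on_line = np.abs(corners[:, 1] - corners[i, 1]) < row_tol
        used |= on_line
        row = corners[on_line]
        if len(row) < 2:
            continue  # a lone point cannot form a symmetric pair
        cx = row[:, 0].mean()  # horizontal centre of the line
        for p in row:
            mirror_x = 2.0 * cx - p[0]
            # Keep p only if some corner on the same line mirrors it about cx.
            if np.any(np.abs(row[:, 0] - mirror_x) < sym_tol):
                kept.append(p.tolist())
    return kept

# Four symmetric corners on one line plus an outlier on another row.
pts = [[10, 100], [30, 100.5], [50, 99.8], [70, 100.2], [33, 300]]
print(symmetric_corners_on_lines(pts))  # the outlier at (33, 300) is dropped
```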
8. The apparatus of claim 6, wherein, when identifying all target corner points in the target image based on the geometric features of the original corner points, the identification module is specifically configured to:
for the target images acquired from the left and right directions of the target vehicle respectively, perform the following operations for each:
acquiring all original corner points in the target image;
performing edge-point detection on the original corner points to obtain all edge points in the target image;
obtaining a target rectangular area in the target image based on the edge points;
determining a plurality of mutually parallel horizontal straight lines in the target rectangular area;
and taking the corner points located on the straight lines and the corner points at the vertices of the target rectangular area as target corner points.
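A simplified sketch of the left/right-camera selection, assuming an axis-aligned bounding rectangle stands in for the target rectangular area (a real implementation would fit a rotated rectangle to the detected edges; all names and tolerances are illustrative):

```python
import numpy as np

def rectangle_and_corners(edge_points, candidate_corners, row_tol=2.0):
    """Bound the detected edge points with an axis-aligned rectangle (a
    simplification of the target rectangular area), then return the
    rectangle's four vertices together with the candidate corners that lie
    inside it and share a horizontal straight line with another inner corner."""
    edges = np.asarray(edge_points, dtype=float)
    x0, y0 = edges.min(axis=0)
    x1, y1 = edges.max(axis=0)
    vertices = [[float(x0), float(y0)], [float(x1), float(y0)],
                [float(x1), float(y1)], [float(x0), float(y1)]]
    corners = np.asarray(candidate_corners, dtype=float)
    inside = ((corners[:, 0] >= x0) & (corners[:, 0] <= x1) &
              (corners[:, 1] >= y0) & (corners[:, 1] <= y1))
    inner = corners[inside]
    # Keep inner corners lying on a common horizontal line (at least 2 per line).
    kept = [p.tolist() for p in inner
            if np.sum(np.abs(inner[:, 1] - p[1]) < row_tol) >= 2]
    return vertices, kept

# Edges outline a 100 x 50 pattern; one candidate corner falls outside it.
verts, kept = rectangle_and_corners(
    [[0, 0], [100, 0], [100, 50], [0, 50]],
    [[10, 10], [40, 10.5], [80, 9.7], [200, 10]])
print(verts, kept)
```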
9. The apparatus of claim 6, wherein, when calculating the reprojection error of the target image and the bird's-eye view seam error of the adjacent target cameras based on the original image, the target external parameters, and the internal parameters, and determining that the target camera calibration is completed if the reprojection error and the bird's-eye view seam error satisfy the preset requirement, the determining module is specifically configured to:
according to the target external parameters and the internal parameters, performing inverse perspective transformation on the original image to obtain a bird's-eye view of the target vehicle;
acquiring a first world coordinate of a target corner point on the bird's-eye view;
performing coordinate transformation according to the first world coordinate to obtain a first pixel coordinate of the target corner point;
obtaining a second pixel coordinate of the target corner point through a corner detection algorithm;
taking the difference between the second pixel coordinate and the first pixel coordinate to obtain the target pixel difference of the target corner point;
and if the target pixel difference is smaller than a preset threshold value, determining that the target camera calibration is completed.
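The reprojection-error term can be sketched with a plain pinhole model (no distortion term, since the target image is already de-distorted); the function name and the intrinsic values are illustrative:

```python
import numpy as np

def reprojection_error(world_pts, detected_px, K, R, t):
    """Project world points through the pinhole model s*(u, v, 1) = K(R X + t)
    and return the mean pixel distance to the detected corner positions."""
    X = np.asarray(world_pts, dtype=float)                    # (N, 3)
    cam = X @ np.asarray(R, dtype=float).T + np.asarray(t, dtype=float)
    uvw = cam @ np.asarray(K, dtype=float).T
    projected = uvw[:, :2] / uvw[:, 2:3]                      # perspective divide
    detected = np.asarray(detected_px, dtype=float)
    return float(np.linalg.norm(projected - detected, axis=1).mean())

# Identity pose: a point on the optical axis projects to the principal point.
K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
err = reprojection_error([[0.0, 0.0, 2.0]], [[320.0, 240.0]],
                         K, np.eye(3), [0.0, 0.0, 0.0])
print(err)  # 0.0
```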
10. The apparatus of claim 6, wherein, when calculating the reprojection error of the target image and the bird's-eye view seam error of the adjacent target cameras based on the original image, the target external parameters, and the internal parameters, and determining that the target camera calibration is completed if the reprojection error and the bird's-eye view seam error satisfy the preset requirement, the determining module is further configured to:
according to the target external parameters and the internal parameters, performing inverse perspective transformation on the original image to obtain a bird's-eye view of the target vehicle;
acquiring a third pixel coordinate of a common view point corresponding to a common view area in the bird's-eye view; wherein the common view area is the overlapping area of the acquisition ranges of two adjacent target cameras;
performing coordinate conversion according to the third pixel coordinate to obtain a second world coordinate of the common view point;
acquiring a third world coordinate of the actually measured common view point, and the Euclidean distance difference value between the third world coordinate and the second world coordinate;
and if the Euclidean distance difference value is smaller than a preset threshold value, determining that the target camera calibration is completed; wherein the Euclidean distance difference value characterizes the bird's-eye view seam error of the adjacent target cameras.
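For points on the ground plane, the inverse perspective transform used to build the bird's-eye view reduces to inverting a homography. The sketch below assumes a pinhole model on the de-distorted image; the function name, pose, and intrinsic values are illustrative:

```python
import numpy as np

def pixel_to_ground(px, K, R, t):
    """Invert the ground-plane homography H = K [r1 r2 t] to recover the
    world coordinate (X, Y) on the plane Z = 0 for a de-distorted pixel;
    this is the mapping used to render the bird's-eye view."""
    K = np.asarray(K, dtype=float)
    R = np.asarray(R, dtype=float)
    t = np.asarray(t, dtype=float)
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    w = np.linalg.inv(H) @ np.array([px[0], px[1], 1.0])
    return w[:2] / w[2]  # homogeneous normalisation

# Example: a camera 2 m above the ground, looking straight down.
K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R = np.diag([1.0, -1.0, -1.0])
t = np.array([0.0, 0.0, 2.0])
print(pixel_to_ground((570.0, 115.0), K, R, t))  # about (1.0, 0.5) in metres
```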
11. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1-5 when executing the computer program stored on said memory.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-5.
CN202311283921.9A 2023-09-28 2023-09-28 Method and device for calibrating looking-around camera, electronic equipment and storage medium Pending CN117315046A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311283921.9A CN117315046A (en) 2023-09-28 2023-09-28 Method and device for calibrating looking-around camera, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117315046A true CN117315046A (en) 2023-12-29

Family

ID=89296720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311283921.9A Pending CN117315046A (en) 2023-09-28 2023-09-28 Method and device for calibrating looking-around camera, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117315046A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117830439A (en) * 2024-03-05 2024-04-05 南昌虚拟现实研究院股份有限公司 Multi-camera system pose calibration method and device

Similar Documents

Publication Publication Date Title
CN109658454B (en) Pose information determination method, related device and storage medium
EP3678096A1 (en) Method for calculating a tow hitch position
CN110969662A (en) Fisheye camera internal reference calibration method and device, calibration device controller and system
CN110827361B (en) Camera group calibration method and device based on global calibration frame
CN117315046A (en) Method and device for calibrating looking-around camera, electronic equipment and storage medium
CN112686950B (en) Pose estimation method, pose estimation device, terminal equipment and computer readable storage medium
CN110490943B (en) Rapid and accurate calibration method and system of 4D holographic capture system and storage medium
CN114067001B (en) Vehicle-mounted camera angle calibration method, terminal and storage medium
CN112381847A (en) Pipeline end head space pose measuring method and system
CN111123242A (en) Combined calibration method based on laser radar and camera and computer readable storage medium
CN111681186A (en) Image processing method and device, electronic equipment and readable storage medium
CN115147499A (en) Calibration parameter determination method, hybrid calibration plate, device, equipment and medium
CN115830135A (en) Image processing method and device and electronic equipment
CN112308934A (en) Calibration detection method and device, storage medium and computing equipment
CN111699513B (en) Calibration plate, internal parameter calibration method, machine vision system and storage device
CN110807730A (en) Image geometric correction method and device and electronic equipment
CN110766731A (en) Method and device for automatically registering panoramic image and point cloud and storage medium
CN110458951B (en) Modeling data acquisition method and related device for power grid pole tower
CN115294277B (en) Three-dimensional reconstruction method and device of object, electronic equipment and storage medium
CN115601336A (en) Method and device for determining target projection and electronic equipment
CN115546314A (en) Sensor external parameter calibration method and device, equipment and storage medium
CN115131273A (en) Information processing method, ranging method and device
CN113255405A (en) Parking space line identification method and system, parking space line identification device and storage medium
CN110838147A (en) Camera module detection method and device
CN117115242B (en) Identification method of mark point, computer storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination