CN117274393A - Method, device, equipment and storage medium for determining camera external parameter calibration coefficient


Info

Publication number
CN117274393A
Authority
CN
China
Prior art keywords: matching point, points, point pairs, sampling, camera
Prior art date
Legal status
Pending
Application number
CN202311068997.XA
Other languages
Chinese (zh)
Inventor
张宇阳
蒲玉平
蓝海波
滕雨橦
陈晓炬
兰永亮
王帆帆
Current Assignee
Xi'an Thundersoft Co ltd
Original Assignee
Xi'an Thundersoft Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Thundersoft Co ltd filed Critical Xi'an Thundersoft Co ltd
Priority to CN202311068997.XA
Publication of CN117274393A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide a method, device, equipment and storage medium for determining camera external parameter calibration coefficients. The method includes: while a vehicle is driving, acquiring an image captured by a camera, the pixel points of the vehicle in the image, and the position points of the vehicle collected by a positioning device within the camera's acquisition area; registering and aligning the pixel points and the position points according to their respective acquisition times to obtain a plurality of matching point pairs; allocating the matching point pairs to different area blocks of the acquisition area according to their positions and number in the image, until the number of matching point pairs in each area block meets a preset allocation stop condition; and calculating, from the matching point pairs in each area block, a homography matrix between the pixel points and the position points of that block. The embodiments of the present application can obtain external parameter calibration coefficients of higher precision, and thereby improve the precision of camera external parameter calibration.

Description

Method, device, equipment and storage medium for determining camera external parameter calibration coefficient
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a method, a device, equipment and a storage medium for determining an external parameter calibration coefficient of a camera.
Background
With the rapid development of artificial intelligence and automatic driving technologies, cameras loaded with intelligent algorithms are becoming an indispensable part of vehicle-road coordination and intelligent transportation, because they can detect, track and locate traffic participants with high accuracy within their visual range.
In order to accurately determine the position and the traveling direction of a traffic participant, it is generally required to acquire the relative positioning information of the traffic participant under a camera coordinate system through a camera, and then convert the relative positioning information into absolute positioning information under a world coordinate system through an external parameter calibration method.
However, when external parameter calibration is performed on a camera by means of the prior art, the process is complicated, the calibration precision is low, and accurate positioning of traffic participants is difficult to obtain.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for determining the external parameter calibration coefficient of a camera, which can obtain the external parameter calibration coefficient with higher precision, thereby improving the precision of external parameter calibration of the camera.
In a first aspect, an embodiment of the present application provides a method for determining a calibration coefficient of a camera external parameter, where the method for determining the calibration coefficient of the camera external parameter includes: acquiring an image acquired by a camera, pixel points of the vehicle in the image and position points of the vehicle acquired by positioning equipment in an acquisition area of the camera in the running process of the vehicle; registering and aligning the pixel points and the position points according to the acquisition time of the pixel points and the acquisition time of the position points to obtain a plurality of matching point pairs; distributing the matching point pairs to different area blocks of the acquisition area according to the positions and the number of the matching point pairs in the image until the number of the matching point pairs in each area block meets the preset distribution stop condition; and respectively calculating homography matrixes between pixel points and position points in each area block according to the matching point pairs in each area block, wherein the homography matrixes are external parameter calibration coefficients of the camera.
According to an embodiment of the first aspect of the present application, registering and aligning the pixel points and the position points according to their acquisition times to obtain a plurality of matching point pairs includes: sampling the first data points according to their acquisition times to obtain N first sampling points; sampling the second data points M times according to their acquisition times to obtain M groups of second sampling points, where each group contains N second sampling points and the sampling time of the second sampling points in the m-th sampling is T_2i = T_1i + m·δt, where T_1i denotes the sampling time of the i-th first sampling point and δt denotes the step unit of the sampling time difference between the first and second sampling points; forming quasi-matching point pairs from the N first sampling points and the N second sampling points in each of the M groups to obtain M groups of quasi-matching point pairs, where within the same group the sampling time differences between the first and second sampling points forming each pair are equal; calculating the reprojection error of each group of quasi-matching point pairs; and taking the group of quasi-matching point pairs with the smallest reprojection error as the matching point pairs. One of the first data point and the second data point is a pixel point, and the other is a position point.
According to any of the foregoing embodiments of the first aspect of the present application, calculating the reprojection error of each group of quasi-matching point pairs includes: calculating, based on a direct linear transformation algorithm, a homography matrix between the first sampling points and the second sampling points in each group of quasi-matching point pairs; calculating the reprojection error of each quasi-matching point pair in each group from the homography matrix and the first and second sampling points forming that pair; and taking the average of the reprojection errors of the quasi-matching point pairs in each group as the reprojection error of that group.
According to any of the foregoing embodiments of the first aspect of the present application, allocating the matching point pairs to different area blocks of the acquisition area according to their positions and number in the image, until the number of matching point pairs in each area block meets the preset allocation stop condition, includes: determining, according to the positions of the plurality of matching point pairs in the image, first distances from the matching point pairs to a first target edge of the image, where the first target edge is any edge of the image; allocating the matching point pairs to different area blocks of the acquisition area based on the bisection method and the first distances; when there are area blocks whose number of matching point pairs does not meet the preset allocation stop condition, determining second distances from the matching point pairs in those area blocks to a second target edge of the area block; reallocating the matching point pairs to different area blocks of the acquisition area based on the bisection method and the second distances; and, when there are still area blocks whose number of matching point pairs does not meet the preset allocation stop condition, determining the distances from the matching point pairs in those area blocks to a third target edge of the area block as the first distances and returning to the step of allocating the matching point pairs based on the bisection method and the first distances, until the number of matching point pairs in each area block meets the preset allocation stop condition. The first target edge and the third target edge are parallel to each other, and the second target edge is perpendicular to both.
According to any one of the foregoing embodiments of the first aspect of the present application, calculating the homography matrix between the pixel points and the position points in each area block according to the matching point pairs in that block includes: inputting the matching point pairs of each area block into a direct linear transformation algorithm to obtain the homography matrix between the pixel points and the position points in each area block.
According to any one of the foregoing embodiments of the first aspect of the present application, after the homography matrix between the pixel points and the position points in each area block is calculated from the matching point pairs in that block, the method for determining the camera external parameter calibration coefficients further includes: performing iterative operation on the homography matrices based on a Gauss-Newton algorithm and/or a Levenberg-Marquardt algorithm to obtain higher-precision homography matrices between the pixel points and the position points in each area block.
According to any of the foregoing embodiments of the first aspect of the present application, the preset allocation stop condition is that, when the matching point pairs of an area block are being allocated, an area block whose number of matching point pairs is smaller than the target number appears; the target number is the minimum number of matching point pairs required to calculate a homography matrix.
In a second aspect, an embodiment of the present application provides a device for determining a calibration coefficient of a camera external parameter, where the device for determining a calibration coefficient of a camera external parameter includes: the acquisition module is used for acquiring an image acquired by the camera, pixel points of the vehicle in the image and position points of the vehicle acquired by the positioning equipment in an acquisition area of the camera in the running process of the vehicle; the registration alignment module is used for registering and aligning the pixel points and the position points according to the acquisition time of the pixel points and the acquisition time of the position points to obtain a plurality of matching point pairs; the distribution module is used for distributing the matching point pairs to different area blocks of the acquisition area according to the positions and the number of the matching point pairs in the image until the number of the matching point pairs in each area block meets the preset distribution stop condition; and the calculation module is used for respectively calculating homography matrixes between the pixel points and the position points in each area block according to the matching point pairs in each area block, wherein the homography matrixes are external parameter calibration coefficients of the camera.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program when executed by the processor performing the steps of the method of determining a camera extrinsic calibration coefficient as provided in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for determining a camera extrinsic calibration coefficient as provided in the first aspect.
According to the method, device, equipment and storage medium for determining the camera external parameter calibration coefficients, the coefficients are determined by combining the image captured by the camera, the pixel points of the vehicle in the image, and the position points of the vehicle collected by the positioning device within the camera's acquisition area. The pixel points and position points are registered and aligned according to their respective acquisition times, making the correspondence between them more accurate and yielding a plurality of matching point pairs. The matching point pairs are allocated to different area blocks of the acquisition area according to their positions and number in the image, so that the homography matrix of each area block can be calculated separately from the matching point pairs in that block. Because the road surface inside each area block spans a smaller extent than the road surface of the whole acquisition area, it more closely satisfies the planarity assumption made when calculating a homography matrix, so the homography matrix calculated for each area block has higher precision. The embodiments of the present application can thus improve the accuracy with which the external parameter calibration coefficients of the different area blocks of the camera's acquisition area are determined, and further improve the precision of camera external parameter calibration.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described, and it is possible for a person skilled in the art to obtain other drawings according to these drawings without inventive effort.
Fig. 1 is a flow chart of a method for determining calibration coefficients of camera external parameters according to an embodiment of the present application;
Fig. 2 is a flowchart of another method for determining camera external parameter calibration coefficients according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a device for determining calibration coefficients of external parameters of a camera according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below to make the objects, technical solutions and advantages of the present application more apparent, and to further describe the present application in conjunction with the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative of the application and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by showing examples of the present application.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article or apparatus that comprises the element.
It should be understood that the term "and/or" as used herein is merely one relationship describing the association of the associated objects, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Accordingly, the present application is intended to cover such modifications and variations as fall within the scope of the appended claims and their equivalents. The embodiments provided in the examples of the present application may be combined with each other without contradiction.
Before describing the technical solution provided by the embodiments of the present application, in order to facilitate understanding of the embodiments of the present application, the present application first specifically describes a problem existing in the prior art:
as described above, the inventors of the present application found that when external parameter calibration is performed on a camera by means of the prior art, the process is complicated, the calibration accuracy is low, and it is difficult to obtain accurate positioning of traffic participants.
In the related art, when performing external parameter calibration on a camera, a calibration person A holds a positioning device and moves within the acquisition area of the camera to measure the data of a plurality of position points, while another calibration person B acquires the images captured by the camera and the data of the pixel points of the positioning device held by person A in those images; the collected position point and pixel point data are then used to calculate the external parameter calibration coefficients of the camera.
Since the acquisition area of the camera generally consists of a road, calibration person A may face a considerable risk of traffic accidents while holding the positioning device in the acquisition area. Calculating the external parameter calibration coefficients of one camera generally requires the data of 30 to 50 position points and pixel points, and persons A and B usually need about 30 seconds to cooperatively complete the acquisition of one point; that is, the data acquisition alone takes about 15 to 20 minutes per camera. This greatly increases the labor and time cost of camera external parameter calibration, makes personnel difficult to deploy, and lowers calibration efficiency.
In addition, when the related art calculates the external parameter calibration coefficients of a camera from the collected data, the acquisition area of the camera is usually modeled as a single plane and the coefficients are calculated on that basis. However, the road surface in a real scene may consist of several different planes, so the modeled plane often fails to satisfy the planarity assumption, resulting in low precision of the external parameter calibration coefficients calculated with the related art.
In order to solve the problems in the prior art, the embodiment of the application provides a method, a device, equipment and a storage medium for determining the calibration coefficient of an external parameter of a camera.
The method for determining the calibration coefficient of the external parameters of the camera provided by the embodiment of the application is first described below.
Fig. 1 is a flowchart of a method for determining a calibration coefficient of an external parameter of a camera according to an embodiment of the present application. As shown in fig. 1, the method for determining the camera external parameter calibration coefficient may include the following steps S101 to S104.
S101, acquiring an image acquired by a camera, pixel points of the vehicle in the image and position points of the vehicle acquired by positioning equipment in an acquisition area of the camera in the running process of the vehicle.
S102, registering and aligning the pixel points and the position points according to the acquisition time of the pixel points and the acquisition time of the position points to obtain a plurality of matching point pairs.
S103, distributing the matching point pairs to different area blocks of the acquisition area according to the positions and the number of the matching point pairs in the image until the number of the matching point pairs in each area block meets a preset distribution stop condition.
S104, respectively calculating homography matrixes between pixel points and position points in each area block according to the matching point pairs in each area block, wherein the homography matrixes are external parameter calibration coefficients of the camera.
According to the method for determining the camera external parameter calibration coefficients provided by the embodiments of the present application, the coefficients are determined by combining the image captured by the camera, the pixel points of the vehicle in the image, and the position points of the vehicle collected by the positioning device within the camera's acquisition area. The pixel points and position points are registered and aligned according to their respective acquisition times, making the correspondence between them more accurate and yielding a plurality of matching point pairs. The matching point pairs are allocated to different area blocks of the acquisition area according to their positions and number in the image, so that the homography matrix of each area block can be calculated separately from the matching point pairs in that block. Because the road surface inside each area block spans a smaller extent than the road surface of the whole acquisition area, it more closely satisfies the planarity assumption made when calculating a homography matrix, so the homography matrix calculated for each area block has higher precision. The embodiments of the present application can thus improve the accuracy with which the external parameter calibration coefficients of the different area blocks of the camera's acquisition area are determined, and further improve the precision of camera external parameter calibration.
A specific implementation of each of the above steps is described below.
In S101, the camera may be a distortion-free camera loaded with an object detection algorithm. Cameras are typically installed at road intersections or roadsides; when a vehicle travels within the acquisition area of the camera, the camera photographs the vehicle and, based on the object detection algorithm, outputs the coordinates of the pixel point of the vehicle's center ground point in the image (the point representing the vehicle in the world coordinate system), that is, the two-dimensional coordinates of the center ground point in the camera coordinate system.
The positioning device can be a vehicle-mounted real-time dynamic (Real Time Kinematic, RTK) measuring device, the RTK device is mounted on a vehicle, and can collect longitude and latitude data and altitude data of each position point of the vehicle on an actual road in real time in the running process of the vehicle to obtain a three-dimensional coordinate formed by the longitude and latitude data and the altitude data, or a two-dimensional homogeneous coordinate corresponding to the longitude and latitude data, namely, a three-dimensional coordinate of a central grounding point of the vehicle under a world coordinate system.
The image collected by the camera, the pixel points of the vehicle in the image and the position points of the vehicle collected by the positioning equipment in the collecting area of the camera are all sent to a personal computer (Personal Computer, PC), and the PC records the data collected by the camera and the positioning equipment at the same time.
Because the RTK equipment is installed on the vehicle, when the vehicle loaded with the RTK equipment replaces a calibration person to hold the RTK equipment to collect data on a road, the risk of traffic accidents in the calibration process can be greatly reduced.
By utilizing the vehicle acquisition data loaded with the RTK equipment, the longitude and latitude data and the altitude data of a large number of vehicles when traveling in each area block in the camera acquisition area can be acquired in a short time, the labor and time cost required by camera external parameter calibration are saved, and the number of acquired position points is far greater than that of the position points acquired during manual calibration. For example, in a typical 200m by 50m scenario, a vehicle loaded with an RTK device generally only needs 5 to 10 minutes to complete the acquisition of vehicle position point data, the data acquisition efficiency is higher, and the result is more accurate when more position point data are used to calculate the external parameter calibration coefficient of the camera.
Because a plurality of cameras with different visual angles can be erected on the same road, the acquisition areas of the cameras can be overlapped, and therefore, data acquired by a vehicle loaded with RTK equipment can also be used for determining the external parameter calibration coefficients of the cameras, and data acquisition is not required to be carried out on each camera independently, so that the external parameter calibration efficiency of the cameras is further improved.
In S102, the data collected by the camera and the data collected by the RTK device are typically not aligned in time because of possible differences in the sampling frequency and clock source of the camera and the RTK device. In the process of calculating the external parameter calibration coefficient of the camera, a relatively precise corresponding relation between the three-dimensional coordinate under the world coordinate system and the two-dimensional coordinate under the camera coordinate system is needed, so that the pixel points and the position points are registered and aligned according to the acquisition time of the pixel points and the acquisition time of the position points.
As an implementation manner of S102, S102 may specifically include: sampling the first data points according to their acquisition times to obtain N first sampling points; sampling the second data points M times according to their acquisition times to obtain M groups of second sampling points, where each group contains N second sampling points and the sampling time of the second sampling points in the m-th sampling is T_2i = T_1i + m·δt, where T_1i denotes the sampling time of the i-th first sampling point and δt denotes the step unit of the sampling time difference between the first and second sampling points; forming quasi-matching point pairs from the N first sampling points and the N second sampling points in each of the M groups to obtain M groups of quasi-matching point pairs, where within the same group the sampling time differences between the first and second sampling points forming each pair are equal; calculating the reprojection error of each group of quasi-matching point pairs; and taking the group with the smallest reprojection error as the matching point pairs. One of the first data point and the second data point is a pixel point, and the other is a position point.
For example, when the first data points are pixel points, the second data points may be position points; when the first data points are position points, the second data points may be pixel points. Assuming that, in the actual scene, the time offset between a pixel point and a position point collected by the camera and the RTK device at the same moment is within 3 s, the step unit of the sampling time difference between the first and second sampling points can be preset to 10 ms, and the pixel points and position points are registered and aligned by exhaustive search.
As an example, take the first data points as pixel points and the second data points as position points. The pixel points are sampled according to their acquisition times to obtain N first sampling points at different moments, whose data can be expressed as Img_i = Img(T_1i). Taking 10 ms as the step unit, the position points are sampled 300 times to obtain 300 groups of second sampling points, each group containing N second sampling points at different moments; the data of the N second sampling points obtained in the m-th sampling can be expressed as RTK_i = RTK(T_2i) = RTK(T_1i + m·δt), where δt = 10 ms.
The N first sampling points are paired with the N second sampling points in each of the 300 groups to form 300 groups of quasi-matching point pairs, each group containing N quasi-matching point pairs. Within the same group, the sampling time difference between the first and second sampling points forming each pair is m·δt. For example, when the N first sampling points and the N second sampling points obtained with m = 1 form a group of quasi-matching point pairs, the sampling time difference of each pair is 10 ms; that is, the first and second sampling points form N quasi-matching point pairs in one-to-one correspondence, following the acquisition order of the pixel points and of the position points.
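To make the exhaustive search concrete, the following sketch builds the M groups of time-shifted second sampling points. It assumes the pixel detections and RTK fixes are available as arrays of [timestamp, x, y] rows; the names pixel_track, rtk_track and resample_track are illustrative rather than from the patent, and linear interpolation is one plausible way to sample a track at the shifted times.

import numpy as np

def resample_track(track, query_times):
    # Linearly interpolate a (T, 3) array of [t, x, y] rows at the query times.
    t, x, y = track[:, 0], track[:, 1], track[:, 2]
    return np.stack([np.interp(query_times, t, x),
                     np.interp(query_times, t, y)], axis=1)

def build_quasi_matches(pixel_track, rtk_track, delta_t=0.010, m_max=300):
    # Return {m: (img_pts, rtk_pts)}, sampling the RTK track at T_1i + m*delta_t.
    t1 = pixel_track[:, 0]            # acquisition times of the N first points
    img_pts = pixel_track[:, 1:3]     # the N first sampling points (pixels)
    return {m: (img_pts, resample_track(rtk_track, t1 + m * delta_t))
            for m in range(1, m_max + 1)}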
After the M groups of quasi-matching point pairs are obtained, the reprojection error of each group must be calculated so that the group with the smallest reprojection error can be determined as the final matching point pairs, which are then used for the subsequent allocation.
Before the reprojection error of each group of quasi-matching point pairs can be computed, a homography matrix for each group is needed. For the center ground point of a vehicle in a three-dimensional scene, the conversion between its two-dimensional coordinates in a distortion-free camera coordinate system and its three-dimensional coordinates in the world coordinate system can be expressed as the following formula (1):
p = K(RP + t) (1)
where p denotes the two-dimensional homogeneous coordinates of the vehicle's center ground point in the camera coordinate system, P denotes its three-dimensional coordinates in the world coordinate system, K denotes the intrinsic calibration coefficients of the camera, and R and t denote the rotation and translation of the world coordinate system relative to the camera coordinate system, respectively.
When the center ground point of the vehicle lies on a plane in the world coordinate system, the following formula (2) is satisfied:
nᵀP = 1 (2)
where n denotes the normal vector of the plane and nᵀ denotes the transpose of n. Substituting formula (2) into formula (1) yields the following formula (3):
p = K(RP + t·nᵀP) = K(R + t·nᵀ)P = H·P (3)
wherein H represents a homography matrix between two-dimensional homogeneous coordinates of a central grounding point of the vehicle under a camera coordinate system and three-dimensional coordinates under a world coordinate system, and conversion between coordinates of pixel points under the camera coordinate system and coordinates of position points under the world coordinate system can be realized by utilizing the homography matrix. Therefore, after obtaining M sets of quasi-matching point pairs, a homography matrix for each set of quasi-matching point pairs can be obtained according to the above formula (3).
According to some embodiments of the present application, optionally, calculating the reprojection error of each group of quasi-matching point pairs may specifically include: calculating, based on a direct linear transformation algorithm, a homography matrix between the first sampling points and the second sampling points in each group of quasi-matching point pairs; calculating the reprojection error of each quasi-matching point pair in each group from the homography matrix and the first and second sampling points forming that pair; and taking the average of the reprojection errors of the quasi-matching point pairs in each group as the reprojection error of that group.
Inputting the first sampling points and the second sampling points of each group of quasi-matching point pairs into a direct linear transformation (Direct Linear Transform, DLT) algorithm yields the homography matrix H_i corresponding to that group. From the calculated H_i and the N first sampling points and N second sampling points forming the quasi-matching point pairs of each group, the reprojection error of each quasi-matching point pair in the group is calculated according to the following formula (4):
e_i = H_i·RTK_i - Img_i (4)
This gives N reprojection error values, i.e. the reprojection error of each quasi-matching point pair in the group. The average of the N reprojection errors of the N quasi-matching point pairs in each group is taken as the reprojection error of that group, giving the respective reprojection errors of the M groups of quasi-matching point pairs.
The group of quasi-matching point pairs with the smallest reprojection error is determined as the final matching point pairs: the sampling time difference between the first and second sampling points forming this group is closest to the actual time offset between the pixel points and the position points collected by the camera and the RTK device in the real scene. The homography matrix H_i corresponding to this group is recorded, and the sampling time difference m·δt of its first and second sampling points is taken as the registration time difference between the pixel points and the position points, so that the pixel point data collected by the camera and the position point data collected by the RTK device can be aligned in time.
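As a hedged illustration of this selection step, the sketch below fits a homography to each quasi-matching group with OpenCV's DLT-based cv2.findHomography (method 0, i.e. a plain least-squares fit) and keeps the group with the smallest mean reprojection error. It assumes the quasi_sets dictionary produced by the previous sketch and treats the two-dimensional RTK coordinates as the planar world positions.

import cv2
import numpy as np

def reprojection_error(H, world_pts, img_pts):
    # Mean magnitude of e_i = H_i*RTK_i - Img_i over the group, per formula (4).
    P = np.hstack([world_pts, np.ones((len(world_pts), 1))])  # homogeneous coords
    q = (H @ P.T).T
    q = q[:, :2] / q[:, 2:3]                                  # dehomogenize
    return np.linalg.norm(q - img_pts, axis=1).mean()

def pick_best_shift(quasi_sets):
    # Return the shift index m and homography of the minimum-error group.
    best_m, best_H, best_err = None, None, np.inf
    for m, (img_pts, rtk_pts) in quasi_sets.items():
        H, _ = cv2.findHomography(rtk_pts, img_pts, 0)        # DLT, no RANSAC
        if H is None:
            continue
        err = reprojection_error(H, rtk_pts, img_pts)
        if err < best_err:
            best_m, best_H, best_err = m, H, err
    return best_m, best_H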
In S103, since the acquisition area of the camera is fixed, the image range captured by the camera is also fixed, and the matching point pairs are allocated to different area blocks of the acquisition area according to the positions and number of the plurality of matching point pairs within this fixed image range. Because calculating the homography matrix between the pixel points and the position points of an area block with the direct linear transformation algorithm requires the number of matching point pairs in the block to be no smaller than the minimum number needed for the calculation, i.e. the target number, the splitting of an area block stops as soon as allocating its matching point pairs would produce a block whose number of matching point pairs is smaller than the target number; allocation continues only while every resulting area block holds at least the target number of matching point pairs. Illustratively, the target number may be set to 10, which is not limited by the embodiments of the present application.
As an implementation manner of S103, S103 may specifically include: determining, according to the positions of the plurality of matching point pairs in the image, first distances from the matching point pairs to a first target edge of the image, where the first target edge is any edge of the image; allocating the matching point pairs to different area blocks of the acquisition area based on the bisection method and the first distances; when there are area blocks whose number of matching point pairs does not meet the preset allocation stop condition, determining second distances from the matching point pairs in those area blocks to a second target edge of the area block; reallocating the matching point pairs to different area blocks of the acquisition area based on the bisection method and the second distances; and, when there are still area blocks whose number of matching point pairs does not meet the preset allocation stop condition, determining the distances from the matching point pairs in those area blocks to a third target edge of the area block as the first distances and returning to the step of allocating the matching point pairs based on the bisection method and the first distances, until the number of matching point pairs in each area block meets the preset allocation stop condition.
For example, the first target edge and the third target edge are parallel to each other, and the second target edge is perpendicular to both. When the first target edge is the upper or lower edge of the image, the second target edge may be the left or right edge of an area block, and the third target edge may be the upper or lower edge of an area block; when the first target edge is the left or right edge of the image, the second target edge may be the upper or lower edge of an area block, and the third target edge may be the left or right edge of an area block.
Because cameras typically capture the traffic participants in their acquisition area from a downward-pitched angle, matching point pairs near the lower edge of the image are closer to the camera, and those near the upper edge are farther from it.
As an example, consider the case where the first target edge is the lower edge of the image, the second target edge is the left edge of an area block, and the third target edge is the lower edge of an area block. According to the positions of the matching point pairs in the image, the first distances d_1 from the matching point pairs to the lower edge of the image are calculated following the near-to-far relation to the camera, giving the first distance d_1max from the matching point pair farthest from the camera to the lower edge of the image. Based on the bisection method and d_1max/2, the matching point pairs are allocated from bottom to top into different area blocks of the acquisition area. If there are area blocks whose number of matching point pairs does not meet the preset allocation stop condition, those blocks are reallocated; otherwise the allocation of matching point pairs ends directly.
During reallocation, the second distances d_2 from the matching point pairs in a block to the left edge of the block are calculated following the left-to-right relation, giving the second distance d_2max from the matching point pair farthest from the left edge of the block. Based on the bisection method and d_2max/2, the matching point pairs are reallocated from left to right into different area blocks of the acquisition area. If there are still blocks whose number of matching point pairs does not meet the preset allocation stop condition, those blocks are allocated again; otherwise the allocation ends directly.
At this point, when a block is reallocated again, the third distances d_3 from the matching point pairs in the block to the lower edge of the block are calculated, giving the third distance d_3max from the matching point pair farthest from the lower edge of the block. d_3max is taken as the new d_1max, and the process returns to allocating the matching point pairs from bottom to top based on the bisection method and d_1max/2. These steps are repeated until the number of matching point pairs in each area block meets the preset allocation stop condition, completing the adaptive region division of the camera's acquisition area and the allocation of the matching point pairs.
As another example, when the first target edge is the upper edge of the image, the first distances d_1 from the matching point pairs to the upper edge of the image can be calculated following the far-to-near relation to the camera, giving the first distance d_1max from the matching point pair closest to the camera, and thus farthest from the upper edge, to the upper edge of the image; based on the bisection method and d_1max/2, the matching point pairs are allocated from top to bottom into different area blocks of the acquisition area. Similarly, when the second target edge is the right edge of an area block, the second distances d_2 from the matching point pairs in the block to the right edge of the block can be calculated following the corresponding distance relation, giving the second distance d_2max from the matching point pair farthest from the right edge; based on the bisection method and d_2max/2, the matching point pairs are reallocated from right to left into different area blocks of the acquisition area.
It should be noted that the foregoing allocation manner of the matching point pairs is only illustrative, and is not limited to the embodiments of the present application.
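For concreteness, the sketch below gives one reasonable reading of the alternating bisection just described. It assumes matches is an (N, 2) array of pixel coordinates with the origin at the top-left corner, so the distance of a pair to a block's lower edge is the block's maximum y minus the pair's y; the target number of 10 pairs and the recursion structure are assumptions consistent with the scheme above, not the patent's literal implementation.

import numpy as np

MIN_PAIRS = 10  # target number: minimum matching point pairs for a DLT fit

def split_block(idx, pts, axis=0):
    # axis 0 measures distance to the block's lower edge, axis 1 to its left edge.
    coord = pts[idx, 1] if axis == 0 else pts[idx, 0]
    dist = coord.max() - coord if axis == 0 else coord - coord.min()
    half = dist.max() / 2.0           # bisection threshold, d_max / 2
    near, far = idx[dist < half], idx[dist >= half]
    if len(near) < MIN_PAIRS or len(far) < MIN_PAIRS:
        return [idx]                  # stop: a child block would fall below target
    # alternate the split direction, mirroring the first/second/third target edges
    return split_block(near, pts, 1 - axis) + split_block(far, pts, 1 - axis)

# usage: blocks = split_block(np.arange(len(matches)), matches)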
In S104, according to the matching point pair in each area block, the homography matrix between the pixel point and the position point in each area block is calculated by combining with the direct linear transformation algorithm, and the homography matrix obtained by calculation can be used as the external parameter calibration coefficient of the camera.
According to some embodiments of the present application, optionally, calculating the homography matrix between the pixel points and the position points in each area block according to the matching point pairs in that block may specifically include: inputting the matching point pairs of each area block into a direct linear transformation algorithm to obtain the homography matrix between the pixel points and the position points in each area block.
After the matching point pairs in each divided area block are obtained, the matching point pairs of each block are input into the direct linear transformation algorithm, which directly yields the homography matrix corresponding to each area block, i.e. the homography matrix between the pixel points and the position points in that block.
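A minimal per-block version of this step, assuming the blocks index arrays from the splitting sketch and the aligned img_pts and rtk_pts arrays from the registration sketch, might look as follows.

import cv2

def block_homographies(blocks, rtk_pts, img_pts):
    # One DLT homography (external parameter calibration coefficient) per block.
    return [cv2.findHomography(rtk_pts[b], img_pts[b], 0)[0] for b in blocks]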
As another implementation manner of the method for determining the camera external parameter calibration coefficient of the present application, as shown in fig. 2, after S104, the method for determining the camera external parameter calibration coefficient may further include the following step S201.
S201, performing iterative operation on the homography matrices based on a Gauss-Newton algorithm and/or a Levenberg-Marquardt algorithm to obtain higher-precision homography matrices between the pixel points and the position points in each area block.
The homography matrices between the pixel points and position points in each area block calculated by the direct linear transformation algorithm are taken as initial homography matrices, and each initial homography matrix is input into a Gauss-Newton algorithm and/or a Levenberg-Marquardt algorithm for further optimization, yielding a higher-precision homography matrix between the pixel points and position points in each area block. The homography matrix corresponding to each area block can be used as the camera's external parameter calibration coefficient for that block, responsible for the conversion between pixel points and position points within it.
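As one plausible realization of this refinement, the sketch below seeds SciPy's Levenberg-Marquardt solver with the DLT result and minimizes the reprojection residuals over the eight free parameters of the homography, fixing H[2,2] = 1 to remove the scale ambiguity. The function name and parameterization are assumptions, not the patent's code.

import numpy as np
from scipy.optimize import least_squares

def refine_homography(H0, world_pts, img_pts):
    P = np.hstack([world_pts, np.ones((len(world_pts), 1))])  # homogeneous world pts

    def residuals(h):
        H = np.append(h, 1.0).reshape(3, 3)   # rebuild H with H[2,2] fixed at 1
        q = (H @ P.T).T
        return ((q[:, :2] / q[:, 2:3]) - img_pts).ravel()

    h0 = (H0 / H0[2, 2]).ravel()[:8]          # normalized DLT result as the seed
    sol = least_squares(residuals, h0, method="lm")  # Levenberg-Marquardt
    return np.append(sol.x, 1.0).reshape(3, 3)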
It should be noted that, the method for determining the camera external parameter calibration coefficient provided by the embodiment of the application can be applied to any scene in which external parameter calibration is required to be performed on a camera.
Based on the method for determining the camera external parameter calibration coefficient provided by the embodiment, correspondingly, the application also provides a specific implementation mode of the device for determining the camera external parameter calibration coefficient. Please refer to the following examples.
Referring first to fig. 3, a device 300 for determining a calibration coefficient of an external parameter of a camera according to an embodiment of the present application includes the following modules:
the acquiring module 301 is configured to acquire an image acquired by a camera, a pixel point of the vehicle in the image, and a position point of the vehicle acquired by a positioning device in an acquisition area of the camera during a driving process of the vehicle;
the registration alignment module 302 is configured to perform registration alignment on the pixel point and the position point according to the acquisition time of the pixel point and the acquisition time of the position point, so as to obtain a plurality of matching point pairs;
the allocation module 303 is configured to allocate the matching point pairs to different area blocks of the acquisition area according to positions and numbers of the plurality of matching point pairs in the image, until the number of the matching point pairs in each area block meets a preset allocation stop condition;
the calculating module 304 is configured to calculate, according to the matching point pairs in each area block, a homography matrix between the pixel points and the position points in each area block, where the homography matrix is an external parameter calibration coefficient of the camera.
The device for determining camera external parameter calibration coefficients combines the image captured by the camera, the pixel points of the vehicle in the image, and the position points of the vehicle collected by the positioning device within the camera's acquisition area to determine the camera's external parameter calibration coefficients. The pixel points and position points are registered and aligned according to their respective acquisition times, making the correspondence between them more accurate and yielding a plurality of matching point pairs. The matching point pairs are allocated to different area blocks of the acquisition area according to their positions and number in the image, so that the homography matrix of each area block can be calculated separately from the matching point pairs in that block. Because the road surface inside each area block spans a smaller extent than the road surface of the whole acquisition area, it more closely satisfies the planarity assumption made when calculating a homography matrix, so the homography matrix calculated for each area block has higher precision. The embodiments of the present application can thus improve the accuracy with which the external parameter calibration coefficients of the different area blocks of the camera's acquisition area are determined, and further improve the precision of camera external parameter calibration.
In some embodiments, the registration alignment module 302 is specifically configured to: sample the first data points according to their acquisition times to obtain N first sampling points; sample the second data points M times according to their acquisition times to obtain M groups of second sampling points, where each group contains N second sampling points and the sampling time of the second sampling points in the m-th sampling is T_2i = T_1i + m·δt, where T_1i denotes the sampling time of the i-th first sampling point and δt denotes the step unit of the sampling time difference between the first and second sampling points; form quasi-matching point pairs from the N first sampling points and the N second sampling points in each of the M groups to obtain M groups of quasi-matching point pairs, where within the same group the sampling time differences between the first and second sampling points forming each pair are equal; calculate the reprojection error of each group of quasi-matching point pairs; and take the group with the smallest reprojection error as the matching point pairs. One of the first data point and the second data point is a pixel point, and the other is a position point.
In some embodiments, the registration alignment module 302 may be further configured to: calculate, based on a direct linear transformation algorithm, a homography matrix between the first sampling points and the second sampling points in each group of quasi-matching point pairs; calculate the reprojection error of each quasi-matching point pair in each group from the homography matrix and the first and second sampling points forming that pair; and take the average of the reprojection errors of the quasi-matching point pairs in each group as the reprojection error of that group.
In some embodiments, the allocating module 303 is specifically configured to: determine, according to the positions of the plurality of matching point pairs in the image, first distances from the matching point pairs to a first target edge of the image, where the first target edge is any edge of the image; allocate the matching point pairs to different area blocks of the acquisition area based on the bisection method and the first distances; when there are area blocks whose number of matching point pairs does not meet the preset allocation stop condition, determine second distances from the matching point pairs in those area blocks to a second target edge of the area block; reallocate the matching point pairs to different area blocks of the acquisition area based on the bisection method and the second distances; and, when there are still area blocks whose number of matching point pairs does not meet the preset allocation stop condition, determine the distances from the matching point pairs in those area blocks to a third target edge of the area block as the first distances and return to allocating the matching point pairs based on the bisection method and the first distances, until the number of matching point pairs in each area block meets the preset allocation stop condition. The first target edge and the third target edge are parallel to each other, and the second target edge is perpendicular to both.
In some embodiments, the calculating module 304 is specifically configured to input the matching point pairs in each area block into a direct linear transformation algorithm to obtain the homography matrix between the pixel points and the position points in that block.
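Reusing the sketches above, the per-block computation then reduces to one DLT fit per area block; OpenCV's `cv2.findHomography` with its default least-squares method could equally stand in for `dlt_homography` here:

```python
import numpy as np

# `pairs` and `image_bounds` as in the allocation sketch above.
block_homographies = [
    (bounds,
     dlt_homography(np.array([p for p, _ in blk], dtype=float),   # pixels
                    np.array([q for _, q in blk], dtype=float)))  # positions
    for blk, bounds in split_block(pairs, image_bounds)
]
```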
In some embodiments, the apparatus 300 for determining a camera external parameter calibration coefficient may further include an operation module configured to iterate on the homography matrices based on the Gauss-Newton algorithm and/or the Levenberg-Marquardt algorithm, yielding higher-precision homography matrices between the pixel points and the position points in each area block.
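A hedged sketch of that refinement using SciPy's Levenberg-Marquardt solver (`refine_homography` is a hypothetical name; H[2,2] is pinned to 1 so eight parameters remain free, and a Gauss-Newton variant would have the same structure with a different update rule):

```python
import numpy as np
from scipy.optimize import least_squares

def refine_homography(H0, src, dst):
    """Polish an initial DLT homography by minimising the reprojection
    error over its eight free parameters with Levenberg-Marquardt."""
    def residuals(h):
        H = np.append(h, 1.0).reshape(3, 3)
        ones = np.ones((len(src), 1))
        proj = (H @ np.hstack([src, ones]).T).T
        proj = proj[:, :2] / proj[:, 2:3]
        return (proj - dst).ravel()   # 2 residuals per point pair
    h0 = (H0 / H0[2, 2]).ravel()[:8]  # normalise, drop the fixed entry
    res = least_squares(residuals, h0, method='lm')
    return np.append(res.x, 1.0).reshape(3, 3)
```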
In some embodiments, the preset distribution stop condition is met when a further allocation of the matching point pairs would produce an area block whose number of matching point pairs is smaller than the target number, the target number being the minimum number of matching point pairs required to calculate a homography matrix (four pairs, since a planar homography has eight degrees of freedom and each matching point pair contributes two constraints).
Each module of the apparatus shown in fig. 3 implements a corresponding step of fig. 1 and achieves the corresponding technical effect; for brevity, this is not repeated here.
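Tying the sketches together, an end-to-end flow over the four modules might look as follows (illustrative only; every helper comes from the sketches above, and the defaults for N, M, and the step unit are arbitrary):

```python
import numpy as np

def calibrate_extrinsics(pixels, pixel_t, positions, pos_t,
                         image_bounds, N=50, M=20, dt=0.01):
    """End-to-end sketch: align the two streams, bisect the image into
    area blocks, then fit and refine one homography per block."""
    (p1, p2), _ = align_by_time_shift(pixels, pixel_t, positions, pos_t,
                                      N, M, dt, mean_reproj_error)
    pairs = list(zip(map(tuple, p1), map(tuple, p2)))
    result = []
    for blk, bounds in split_block(pairs, image_bounds):
        src = np.array([p for p, _ in blk], dtype=float)
        dst = np.array([q for _, q in blk], dtype=float)
        H0 = dlt_homography(src, dst)            # initial DLT fit
        result.append((bounds, refine_homography(H0, src, dst)))
    return result   # one (block bounds, refined H) per area block
```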
Correspondingly, based on the method for determining the camera external parameter calibration coefficient provided by the above embodiments, the present application further provides a specific embodiment of an electronic device. Please refer to the following examples.
Fig. 4 shows a schematic hardware structure of an electronic device according to an embodiment of the present application.
The electronic device may comprise a processor 401 and a memory 402 in which computer program instructions are stored.
In particular, the processor 401 may include a central processing unit (Central Processing Unit, CPU) or an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 402 may include mass storage for data or instructions. By way of example and not limitation, memory 402 may comprise a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. In one example, memory 402 may include removable or non-removable (or fixed) media, or memory 402 may be a non-volatile solid-state memory. Memory 402 may be internal or external to the electronic device.
In one example, memory 402 may be read-only memory (ROM). In one example, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
Memory 402 may include read-only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, or electrical, optical, or other physical/tangible memory storage devices. In general, the memory thus includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions, and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to the methods of the present application.
The processor 401 reads and executes the computer program instructions stored in the memory 402 to implement the methods/steps S101 to S104 in the embodiment shown in fig. 1, and achieve the corresponding technical effects achieved by executing the methods/steps in the embodiment shown in fig. 1, which are not described herein for brevity.
In one example, the electronic device may also include a communication interface 403 and a bus 410. As shown in fig. 4, the processor 401, the memory 402, and the communication interface 403 are connected by a bus 410 and perform communication with each other.
The communication interface 403 is mainly used to implement communication between each module, device, unit and/or apparatus in the embodiments of the present application.
Bus 410 includes hardware, software, or both, coupling the components of the electronic device to one another. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 410 may include one or more buses, where appropriate. Although the embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
In addition, in combination with the method for determining the camera external parameter calibration coefficient in the above embodiments, the embodiments of the present application may provide a computer-readable storage medium for implementation. Computer program instructions are stored on the computer-readable storage medium; when executed by a processor, the computer program instructions implement the method for determining a camera external parameter calibration coefficient of any of the above embodiments. Examples of computer-readable storage media include non-transitory media such as electronic circuits, semiconductor memory devices, ROM, random access memory, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, and hard disks.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, a functional block may be, for example, an electronic circuit, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), suitable firmware, a plug-in, a function card, and so on. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be different from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, which are intended to be included in the scope of the present application.

Claims (10)

1. A method for determining a calibration coefficient of an external parameter of a camera, the method comprising:
acquiring, during the running of a vehicle, an image acquired by a camera, pixel points of the vehicle in the image, and position points of the vehicle acquired by positioning equipment in an acquisition area of the camera;
registering and aligning the pixel points and the position points according to the acquisition time of the pixel points and the acquisition time of the position points to obtain a plurality of matching point pairs;
distributing the matching point pairs to different area blocks of the acquisition area according to the positions and the number of the matching point pairs in the image until the number of the matching point pairs in each area block meets a preset distribution stop condition;
and calculating homography matrices between the pixel points and the position points in each area block according to the matching point pairs in each area block, wherein the homography matrices are the external parameter calibration coefficients of the camera.
2. The method of claim 1, wherein the registering and aligning the pixel points and the position points according to the acquisition times of the pixel points and the position points to obtain a plurality of matching point pairs comprises:
sampling the first data points according to the acquisition times of the first data points to obtain N first sampling points;
sampling the second data points M times according to the acquisition times of the second data points to obtain M groups of second sampling points, each group of second sampling points comprising N second sampling points, wherein the sampling time of a second sampling point in the m-th sampling is T₂ᵢ = T₁ᵢ + m·δt, where T₁ᵢ represents the sampling time of the corresponding first sampling point and δt represents a step unit of the sampling time difference between the first sampling point and the second sampling point;
forming quasi-matching point pairs from the N first sampling points and the N second sampling points of each of the M groups of second sampling points, to obtain M groups of quasi-matching point pairs, wherein, within the same group of quasi-matching point pairs, the sampling time differences of the first sampling points and the second sampling points forming the quasi-matching point pairs are equal;
calculating the reprojection error of each group of quasi-matching point pairs;
taking the group of quasi-matching point pairs with the minimum reprojection error as the matching point pairs;
wherein one of the first data point and the second data point is the pixel point, and the other is the position point.
3. The method of claim 2, wherein the calculating the reprojection error of each group of quasi-matching point pairs comprises:
calculating, based on a direct linear transformation algorithm, a homography matrix between the first sampling points and the second sampling points in each group of quasi-matching point pairs;
calculating the reprojection error of each quasi-matching point pair in a group according to the homography matrix and the first sampling point and the second sampling point forming that quasi-matching point pair;
and taking the average value of the reprojection errors of the quasi-matching point pairs in each group as the reprojection error of that group of quasi-matching point pairs.
4. The method of claim 1, wherein the distributing the matching point pairs to different area blocks of the acquisition area according to the positions and the number of the matching point pairs in the image until the number of matching point pairs in each area block satisfies the preset distribution stop condition comprises:
determining, according to the positions of the plurality of matching point pairs in the image, first distances from the respective matching point pairs to a first target edge of the image, wherein the first target edge is any edge of the image;
distributing the matching point pairs to different area blocks of the acquisition area based on the bisection method and the first distances;
in the case that there are area blocks in which the number of matching point pairs does not satisfy the preset distribution stop condition, determining second distances from the matching point pairs in those area blocks to a second target edge of the respective area block;
redistributing the matching point pairs to different area blocks of the acquisition area based on the bisection method and the second distances;
in the case that there are still area blocks in which the number of matching point pairs does not satisfy the preset distribution stop condition, determining the distances from the matching point pairs in those area blocks to a third target edge of the respective area block as the first distances, and returning to the step of distributing the matching point pairs to different area blocks of the acquisition area based on the bisection method and the first distances, until the number of matching point pairs in each area block satisfies the preset distribution stop condition;
wherein the first target edge and the third target edge are parallel to each other, and the second target edge is perpendicular to both the first target edge and the third target edge.
5. The method according to claim 1, wherein the calculating homography matrices between the pixel points and the position points in each area block according to the matching point pairs in each area block comprises:
inputting the matching point pairs in each area block into a direct linear transformation algorithm to obtain the homography matrix between the pixel points and the position points in that area block.
6. The method of claim 1, wherein after the calculating of homography matrices between the pixel points and the position points in each area block according to the matching point pairs in each area block, the method further comprises:
performing iterative operation on the homography matrices based on a Gauss-Newton algorithm and/or a Levenberg-Marquardt algorithm to obtain higher-precision homography matrices between the pixel points and the position points in each area block.
7. The method according to claim 1 or 4, wherein the preset distribution stop condition is that, when the matching point pairs are distributed among the area blocks, there is an area block in which the number of matching point pairs is smaller than a target number, the target number being the minimum number of matching point pairs required for calculating a homography matrix.
8. A device for determining a calibration coefficient of an external parameter of a camera, the device comprising:
an acquisition module, configured to acquire, during the running of a vehicle, an image acquired by a camera, pixel points of the vehicle in the image, and position points of the vehicle acquired by positioning equipment in an acquisition area of the camera;
a registration alignment module, configured to register and align the pixel points and the position points according to the acquisition times of the pixel points and the position points, to obtain a plurality of matching point pairs;
a distribution module, configured to distribute the matching point pairs to different area blocks of the acquisition area according to the positions and the number of the matching point pairs in the image, until the number of matching point pairs in each area block satisfies a preset distribution stop condition;
and a calculation module, configured to calculate homography matrices between the pixel points and the position points in each area block according to the matching point pairs in each area block, wherein the homography matrices are the external parameter calibration coefficients of the camera.
9. An electronic device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method for determining a calibration coefficient of an external parameter of a camera according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method for determining a calibration coefficient of an external parameter of a camera according to any one of claims 1 to 7.
CN202311068997.XA 2023-08-23 2023-08-23 Method, device, equipment and storage medium for determining camera external parameter calibration coefficient Pending CN117274393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311068997.XA CN117274393A (en) 2023-08-23 2023-08-23 Method, device, equipment and storage medium for determining camera external parameter calibration coefficient

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311068997.XA CN117274393A (en) 2023-08-23 2023-08-23 Method, device, equipment and storage medium for determining camera external parameter calibration coefficient

Publications (1)

Publication Number Publication Date
CN117274393A (en) 2023-12-22

Family

ID=89207181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311068997.XA Pending CN117274393A (en) 2023-08-23 2023-08-23 Method, device, equipment and storage medium for determining camera external parameter calibration coefficient

Country Status (1)

Country Link
CN (1) CN117274393A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination