CN113112551B - Camera parameter determining method and device, road side equipment and cloud control platform - Google Patents

Info

Publication number: CN113112551B
Authority: CN (China)
Prior art keywords: camera, detection, target camera, determining, target
Legal status: Active
Application number: CN202110429760.4A
Other languages: Chinese (zh); Other versions: CN113112551A
Inventor: 苑立彬
Current Assignee: Apollo Zhilian Beijing Technology Co Ltd
Original Assignee: Apollo Zhilian Beijing Technology Co Ltd
Events
  • Application filed by Apollo Zhilian Beijing Technology Co Ltd
  • Priority to CN202110429760.4A
  • Publication of CN113112551A
  • Application granted
  • Publication of CN113112551B
  • Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and a device for determining camera parameters, road side equipment and a cloud control platform, relating to the technical fields of artificial intelligence, computer vision and intelligent traffic. The specific implementation scheme is as follows: a detection error between at least one co-view camera and a target camera is determined as a detection error associated with the target camera, wherein an overlapping acquisition region exists between the target camera and the co-view camera; and the parameter accuracy of the target camera is determined according to the detection error associated with the target camera. According to the embodiments of the application, the accuracy of camera parameters can be improved.

Description

Camera parameter determining method and device, road side equipment and cloud control platform
Technical Field
The application relates to the field of image processing, in particular to artificial intelligence, computer vision and intelligent traffic technology, and specifically relates to a method and a device for determining camera parameters, road side equipment and a cloud control platform.
Background
Intelligent traffic systems are an important means of improving road transportation, and the calibration of camera parameters is a critical link in their image acquisition process.
Camera parameters can be obtained through experiment and calculation; the process of solving these parameters is called camera calibration. The quality of the calibration result directly influences the accuracy of the results produced by the camera sensor, and therefore affects all subsequent image processing.
Disclosure of Invention
The application provides a method and device for determining camera parameters, road side equipment and a cloud control platform.
According to an aspect of the present application, a method of determining camera parameters is provided, including:
determining a detection error between at least one co-view camera and a target camera as a detection error associated with the target camera; wherein an overlapping acquisition region exists between the target camera and the co-view camera;
and determining the parameter accuracy of the target camera according to the detection error associated with the target camera.
According to another aspect of the present application, there is provided a determining apparatus of camera parameters, including:
a camera detection error determining module, configured to determine a detection error between at least one common view camera and a target camera as a detection error associated with the target camera; wherein an overlapping acquisition region exists between the target camera and the co-view camera;
and a camera parameter accuracy detection module, used for determining the parameter accuracy of the target camera according to the detection error associated with the target camera.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining camera parameters described in any of the embodiments of the present application.
According to another aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of determining camera parameters according to any of the embodiments of the present application.
According to another aspect of the present application, there is provided a road side device, including an electronic device as described in any embodiment of the present application.
According to another aspect of the application, a cloud control platform is provided, including an electronic device as described in any embodiment of the application.
According to another aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method of determining camera parameters as described in any of the embodiments of the present application.
According to the embodiment of the application, the accuracy of the camera parameters can be improved.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a flowchart of a method of determining camera parameters according to an embodiment of the present application;
FIG. 2 is a flowchart of another method of determining camera parameters according to an embodiment of the present application;
FIG. 3 is a schematic illustration of an image captured by a front gun camera according to an embodiment of the present application;
FIG. 4 is a schematic illustration of an image captured by a fisheye camera according to an embodiment of the present application;
FIG. 5 is a schematic illustration of an image captured by a rear gun camera according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a detection location in an image captured by a target camera according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a detection position in an image captured by a common view camera according to an embodiment of the present application;
FIG. 8 is a schematic diagram of detection positions of a target camera and a co-view camera in a co-view camera captured image according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a camera parameter determination apparatus according to an embodiment of the present application;
fig. 10 is a block diagram of an electronic device for implementing a method of determining camera parameters according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flowchart of a method for determining camera parameters according to an embodiment of the present application; the embodiment is applicable to detecting the accuracy of camera parameters. The method of this embodiment can be executed by a camera parameter determining apparatus, which may be implemented in software and/or hardware and is configured in an electronic device with a certain data processing capability; the electronic device may be a client device, such as a mobile phone, a tablet computer, a vehicle-mounted terminal or a desktop computer, or may be a server-side device.
S101, determining a detection error between at least one common view camera and a target camera as a detection error associated with the target camera; wherein there is an overlapping acquisition region between the target camera and the co-view camera.
That an overlapping acquisition region exists between the target camera and the co-view camera means that the image acquisition range of the co-view camera overlaps the image acquisition range of the target camera. The overlapping acquisition region is the overlapping viewing region between the co-view camera and the target camera, i.e., the common-view region.
The detection error quantifies the difference between the detection result of one co-view camera and that of the target camera. Detection results must be unified in the same coordinate system before they can be compared. For a system with multiple cameras, the world coordinate system serves as the absolute coordinate system; pixel points on the images collected by the cameras are usually mapped to coordinates in the world coordinate system for subsequent processing, a practice widely applied in many fields, so the detection results can be converted into the world coordinate system and then compared. Accordingly, the detection error may refer to the error between the coordinates obtained by the co-view camera and the target camera when detecting the same position, where the coordinates are three-dimensional coordinates in the real three-dimensional world. That is, the detection error is a quantized value of the difference between the detection results of the co-view camera and the target camera in the world coordinate system.
In addition, the target camera has at least one co-view camera, and one detection error exists between each co-view camera and the target camera; accordingly, at least one detection error is associated with the target camera.
S102, determining the parameter accuracy of the target camera according to the detection error associated with the target camera.
The co-view camera and the target camera are calibrated cameras. In the embodiment of the application, camera parameters describe the correlation between the position of a point on a spatial object in the world coordinate system and the corresponding point in the image; they may refer to the model parameters of the camera's three-dimensional imaging model. The process of solving the camera parameters is called calibration of the camera parameters. Camera parameters may include at least one of internal parameters (intrinsics), external parameters (extrinsics), distortion coefficients, transformation relations between cameras, and the like. The parameters of the target camera may refer to the parameters obtained by calibrating the target camera and may, optionally, include the external parameters of the target camera.
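For illustration only (this container is not part of the original disclosure), the parameters discussed above can be grouped as follows; the field names are assumptions used by the sketches later in this description.

    from dataclasses import dataclass
    from typing import Optional
    import numpy as np

    @dataclass
    class CameraParams:
        """Calibrated parameters of one camera (illustrative field names)."""
        K: np.ndarray                      # 3x3 intrinsic matrix
        R: np.ndarray                      # 3x3 rotation matrix (world -> camera)
        t: np.ndarray                      # translation vector (world -> camera)
        ground: np.ndarray                 # ground-plane coefficients [a, b, c, d] in the camera frame
        dist: Optional[np.ndarray] = None  # distortion coefficients, if modelled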
The detection error associated with the target camera quantitatively reflects the actual distance error in the real three-dimensional world. The parameter accuracy of the target camera can therefore be determined quantitatively based on the detection error, which further improves the accuracy of that determination. It can be appreciated that the smaller the detection error associated with the target camera, the more accurate its parameters; the larger the detection error, the less accurate its parameters.
According to this technical scheme, the detection error between the target camera and at least one co-view camera is determined and quantized, which improves the accuracy and precision of detecting the error; the parameter accuracy of the target camera is then determined according to the detection error, so that it can be assessed against a concrete quantized value. This improves the accuracy of detecting the parameter accuracy of the target camera, provides a quantitative judgment of the detection error, and facilitates subsequent adjustment of the camera parameters, thereby improving the accuracy of the camera parameters.
Fig. 2 is a flowchart of another method for determining camera parameters according to an embodiment of the present application, further optimized and expanded on the basis of the above technical solution; it may be combined with the various alternative embodiments above. Determining the detection error between the co-view camera and the target camera is embodied as follows: acquiring an image collected by the target camera and an image collected by the co-view camera, determining the overlapping acquisition region between the co-view camera and the target camera, and determining detection points in the overlapping acquisition region; respectively acquiring the detection positions of the target camera and the co-view camera for each detection point; and calculating the distance between the detection position of the target camera and that of the co-view camera, and determining the distance as the detection error between the co-view camera and the target camera.
S201, acquiring an image acquired by a target camera and an image acquired by a common view camera, and determining an overlapping acquisition area between the common view camera and the target camera.
The target camera and the co-view camera each capture images of the same scene, yielding an image acquired by the target camera and an image acquired by the co-view camera. A search query may be performed on the two images, with the overlapping region determined in each image and taken as the overlapping acquisition region. Alternatively, the overlapping acquisition region may be determined by image matching: for example, the point set of the overlapping region may be obtained by polygon intersection, and the point set may then be mapped back to each image with a homography matrix to identify the overlapping region. Alternatively, the overlapping acquisition region can be determined by projecting a high-precision map onto the images acquired by both cameras and querying the same area range in the two images.
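A minimal sketch of the polygon-intersection approach mentioned above, assuming each camera's ground footprint is already available as world-frame corner points; the shapely library is one possible choice here, not something the original specifies.

    from shapely.geometry import Polygon

    def overlap_region(footprint_a, footprint_b):
        """Intersect two camera ground footprints, given as lists of (x, y) world points.

        Returns the common-view polygon, or None if the cameras do not overlap.
        """
        inter = Polygon(footprint_a).intersection(Polygon(footprint_b))
        return None if inter.is_empty else inter

The point set of the resulting polygon can then be mapped back into each image with the respective homography matrix, as described above.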
Optionally, the collection area of the target camera and the collection area of the co-view camera comprise the same intersection area.
In the embodiment of the application, among the multiple cameras capturing images of the scene formed by the same intersection and nearby lanes, there exist at least one target camera and, for each target camera, a corresponding co-view camera. For a given pair of target camera and co-view camera, the acquisition area of the target camera and the acquisition area of the co-view camera include the same intersection area.
For example, the target camera and the co-view camera are both front gun cameras, i.e., cameras of the same type that capture the scene at the center of the intersection, and the acquisition areas of both cameras are the central area of the intersection. The target camera and the co-view camera may refer to any two front gun cameras located on different poles at the same intersection.
As another example, the target camera and the co-view camera are adjacent cameras; the pair includes a front gun camera and a fisheye camera, or a fisheye camera and a rear gun camera. Illustratively, the target camera is a front gun camera and the co-view camera is a fisheye camera; or the target camera is a fisheye camera and the co-view camera is a front gun camera or a rear gun camera; or the target camera is a rear gun camera and the co-view camera is a fisheye camera. The acquisition area of the target camera is adjacent to that of the co-view camera, and the two cameras are located on the same object, for example on the same pole, i.e., their shooting positions are adjacent.
In the intersection area, a front gun camera, a fisheye camera and a rear gun camera are usually arranged on a monitoring pole. The front gun camera captures scene images facing toward the intersection, and the rear gun camera captures scene images facing away from the intersection. In a specific example, the front gun camera, fisheye camera and rear gun camera on the same monitoring pole are used for image acquisition: fig. 3 is an image acquired by the front gun camera, fig. 4 is an image acquired by the fisheye camera, and fig. 5 is an image acquired by the rear gun camera.
Applied to the traffic field, two cameras that acquire images of the same intersection area and share a common-view region are determined to be the target camera and the co-view camera. This can be used to check the parameters of the cameras monitoring the intersection, so that accurate road information is provided for the intelligent traffic system and the precision of vehicle-road cooperative sensing is improved.
S202, at least one detection point is determined in the overlapped acquisition area.
The detection points are preset points, typically points that are easily identifiable and lie on object boundaries. Determining detection points in the overlapping acquisition region means querying the same detection point in the images acquired by the two cameras: at least one detection point is determined in the overlapping acquisition region of the image acquired by the target camera, and the same detection point is determined in the overlapping acquisition region of the image acquired by the co-view camera, so that the same detection point is detected by both cameras, its detection results obtained, and the detection error calculated.
The number of detection points is at least one. When there are multiple detection points, the target camera and the co-view camera each detect the corresponding detection position for every detection point. Optionally, the number of detection points is 2.
Optionally, the detection points include corner points on a marker line in the overlapping acquisition region.
The detection point is a pre-configured real-world point located in the overlapping acquisition region between the target camera and the co-view camera. Projecting the overlapping acquisition region onto the image acquired by the target camera determines the overlapping acquisition region in that image; likewise, projecting the overlapping acquisition region onto the image acquired by the co-view camera determines the overlapping acquisition region there.
The marking line may be a line that can be clearly distinguished from other lines, or a line dedicated to a specific application scenario. Illustratively, in the intelligent transportation field, marking lines may include lane lines, guide lines or guide arrows, etc.
Corner points are distinguished from other points and may be called extreme points, i.e., points whose attributes are particularly pronounced in some way. For example, a corner point may be an isolated point where some attribute reaches a maximum or minimum, the intersection of two lines, a point on two adjacent objects with different principal directions, the end point of a line segment, the point of greatest local curvature on a curve, a vertex of a polygon, or the center of a circle, etc.
By configuring the detection points as corner points on marking lines in the overlapping acquisition region, the detection points can be quickly found in both images, which improves detection efficiency; and since such points are distinctive, the probability of mistakenly detecting other points is reduced, improving the accuracy of position detection of the detection points.
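A minimal corner-extraction sketch, assuming OpenCV is used (an assumption, not part of the original); the mask restricts detection to the overlapping acquisition region projected into the image, and the quality and distance thresholds are placeholder values.

    import cv2
    import numpy as np

    def detect_corners(image, overlap_mask, max_corners=10):
        """Find strong corners (e.g. on lane markings) inside the overlap mask."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        corners = cv2.goodFeaturesToTrack(
            gray, maxCorners=max_corners, qualityLevel=0.05,
            minDistance=20, mask=overlap_mask)
        if corners is None:
            return np.empty((0, 2))
        return corners.reshape(-1, 2)  # (u, v) pixel coordinates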
S203, for each detection point, respectively acquiring detection positions of the target camera and the common view camera for the detection point.
The detection position may refer to the position of the detection point, detected in the image acquired by a camera and projected into a unified coordinate system, which may be the world coordinate system. The target camera detects the detection point, yielding a detection position computed from the parameters of the target camera; the co-view camera detects the same detection point, yielding a detection position computed from the parameters of the co-view camera. The two detection positions are thus calculated with different camera parameters. When the parameters of the target camera and the parameters of the co-view camera are both correct, the two detection positions are the same. The target camera obtains a corresponding detection position for each detection point, and so does the co-view camera.
Optionally, the acquiring the detection position of the target camera for the detection point includes: in the overlapping acquisition area, acquiring two-dimensional pixel coordinates of the detection point, and converting the two-dimensional pixel coordinates into three-dimensional camera coordinates under a three-dimensional camera coordinate system corresponding to the target camera; and converting the three-dimensional camera coordinates into world camera coordinates in a world coordinate system according to the external parameters of the target camera, and determining the world camera coordinates as a detection position of the target camera for the detection point.
The overlapping acquisition area refers to an area in the image, and pixel points in the overlapping acquisition area are coordinate points in a two-dimensional coordinate system. The target camera detects the detection point in the overlapping acquisition area to obtain two-dimensional pixel coordinates, which are converted into three-dimensional camera coordinates in the three-dimensional camera coordinate system corresponding to the target camera, based on the internal parameters of the target camera. Based on the external parameters of the target camera, the three-dimensional camera coordinates are converted into world camera coordinates in the world coordinate system, and the world camera coordinates are determined as the detection position of the target camera for the detection point. Specifically, with homogeneous pixel coordinates m = [u, v, 1]^T, the two-dimensional pixel coordinates may be converted into three-dimensional camera coordinates P_c by

P_c = s K^{-1} m,

where the scale factor s is fixed by requiring P_c to satisfy the ground-plane equation a X_c + b Y_c + c Z_c + d = 0, whose coefficients [a, b, c, d] are denoted GROUND_COEFF. The three-dimensional camera coordinates are converted into world camera coordinates P_w in the world coordinate system by

P_w = R^{-1} (P_c - t) = R^T (P_c - t),

where m denotes the homogeneous form of the two-dimensional pixel coordinates, P_c the three-dimensional camera coordinates in the camera coordinate system, P_w the world camera coordinates in the world coordinate system, K the camera intrinsic matrix, R the rotation matrix, t the translation vector, and GROUND_COEFF the ground equation expressed in the camera coordinate system. The external parameters of the camera comprise R, t, the ground equation, and the like. The camera internal parameters convert the two-dimensional pixel coordinates into three-dimensional camera coordinates in the camera coordinate system corresponding to the camera; the external parameters of the camera convert the three-dimensional camera coordinates into world camera coordinates in the world coordinate system.
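A numpy sketch of this conversion, under the reconstructed relations above and using the illustrative CameraParams container from earlier; the function name and field names are assumptions.

    import numpy as np

    def pixel_to_world(params, u, v):
        """Back-project pixel (u, v) onto the ground plane, then map into the world frame."""
        m = np.array([u, v, 1.0])             # homogeneous pixel coordinates
        ray = np.linalg.inv(params.K) @ m     # viewing ray in the camera frame
        a, b, c, d = params.ground            # GROUND_COEFF: aX + bY + cZ + d = 0
        s = -d / (np.array([a, b, c]) @ ray)  # scale placing the point on the ground
        P_c = s * ray                         # three-dimensional camera coordinates
        P_w = params.R.T @ (P_c - params.t)   # world camera coordinates (R orthonormal)
        return P_w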
Further, the detection position of the co-view camera for the detection point can be acquired by referring to the same steps described above.
In the embodiment of the application, the internal parameters of the cameras are assumed correct by default, so any difference between detection positions is attributed to the external parameters of the cameras; the detection error is therefore used to determine the accuracy of the external parameters of the target camera.
The two-dimensional pixel coordinates of the detection points are determined in the overlapping acquisition region, converted into three-dimensional camera coordinates, and finally converted into world camera coordinates according to the external parameters of the target camera. Because the detection position is determined by the external parameters of the target camera, the detection error provides a quantitative judgment of the accuracy of those external parameters and facilitates their subsequent adjustment, thereby improving the accuracy of the external parameters of the camera.
In a specific example, fig. 6 is an image collected by the target camera, in which the black circle marks the detection position of the detection point. Fig. 7 is an image collected by the co-view camera, in which the black circle marks the detection position of the detection point. Fig. 8 is the image collected by the co-view camera with both results drawn: one black circle is the target camera's detection position mapped into the co-view image, and the other black circle is the co-view camera's detection position of the same detection point. The two black circles do not coincide, and the distance between the corresponding detection positions is the detection error of the target camera and the co-view camera for this detection point.
S204, for each detection point, calculating the distance between the detection position of the target camera and the detection position of the co-view camera, and determining the distance as the detection error between the co-view camera and the target camera, i.e., a detection error associated with the target camera; wherein an overlapping acquisition region exists between the target camera and the co-view camera.
Each detection point corresponds to one detection error, calculated from the detection position of that point in the target camera and its detection position in the co-view camera. The distance between the two detection positions can be calculated by the two-point distance formula, d = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2). The number of detection errors equals the number of detection points. Both detection positions are projected into the real three-dimensional world, so the distance between them is a length in the real three-dimensional world, and the detection error is therefore an actual error in the real three-dimensional world.
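In code, building on the pixel_to_world sketch above, the detection error for one detection point is simply the distance between the two world-frame positions:

    import numpy as np

    def detection_error(target_params, coview_params, pt_target, pt_coview):
        """Detection error of one point: the distance between the two world positions."""
        p_target = pixel_to_world(target_params, *pt_target)  # (u, v) in the target image
        p_coview = pixel_to_world(coview_params, *pt_coview)  # same point in the co-view image
        return float(np.linalg.norm(p_target - p_coview))     # a length in the real world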
In intelligent traffic application scenarios, some camera-parameter detection methods project marker points from a high-precision map onto the image acquired by the camera and compute the pixel-level overlap error; this evaluates only at the pixel level on the image and amounts to a qualitative check of the camera parameters. By contrast, calculating the detection error associated with the target camera quantitatively reflects the actual distance error in the real three-dimensional world: the detection error is converted into a length in the real three-dimensional world, which improves the accuracy of the detection error, and thereby the accuracy of using it to check the camera parameters and the accuracy of the final parameter-accuracy detection result.
S205, determining the parameter accuracy of the target camera according to the detection error associated with the target camera.
Optionally, the determining the parameter accuracy of the target camera according to the detection error associated with the target camera includes: comparing the detection error associated with the target camera with an error threshold; and determining the parameter accuracy of the target camera according to at least one comparison result.
The error threshold is used for judging whether the parameters of the target camera are accurate. And determining whether the parameters of the target camera are accurate according to the magnitude relation between the detection error and the error threshold.
The target camera has at least one co-view camera, and with each co-view camera the target camera has at least one detection error for at least one detection point; accordingly, the number of detection errors associated with the target camera is at least one. Each detection error can be compared with the error threshold, yielding at least one comparison result. The comparison results can then be aggregated to determine the parameter accuracy of the target camera.
In the case where there is a single comparison result: if the detection error is smaller than the error threshold, the parameters of the target camera are determined to be accurate; if the detection error is greater than or equal to the error threshold, the parameters are determined to be inaccurate. For example, with a detection error of 0.24 m and an error threshold of 0.6 m, 0.24 < 0.6, so the detection error is below the threshold and the parameters of the target camera are determined to be accurate.
In the case of multiple comparison results, a first number of comparison results in which the detection error is smaller than the error threshold and a second number in which the detection error is greater than or equal to the error threshold may, for example, be counted: if the first number is greater than the second, the parameters of the target camera are determined to be accurate; if the first number is less than or equal to the second, the parameters are determined to be inaccurate. Alternatively, if the second number is 0, the parameters are determined to be accurate, and if the second number is not 0, they are determined to be inaccurate. Other rules are possible and may be set as needed; this is not specifically limited here. Both rules are sketched below.
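The two decision rules might be sketched as follows; the 0.6 m threshold is taken from the example above, and the strict variant corresponds to the alternative discussed next.

    ERROR_THRESHOLD = 0.6  # metres, per the example above

    def params_accurate_majority(errors):
        """Majority rule: accurate when more errors fall below the threshold than not."""
        below = sum(e < ERROR_THRESHOLD for e in errors)
        return below > len(errors) - below

    def params_accurate_strict(errors):
        """Strict rule: accurate only when every detection error is below the threshold."""
        return all(e < ERROR_THRESHOLD for e in errors)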
By configuring an error threshold and comparing it with the detection errors, the parameter accuracy of the target camera is determined from at least one comparison result; this allows the parameters to be checked by quantitative comparison and, by considering multiple comparison results together, further improves the accuracy of parameter detection.
Optionally, the determining the parameter accuracy of the target camera according to at least one comparison result includes: in the case that every detection error is smaller than the error threshold, determining that the parameters of the target camera are accurate; and in the case that any detection error is greater than or equal to the error threshold, determining that the parameters of the target camera are inaccurate.
If the detection errors between the target camera and every co-view camera are all smaller than the error threshold, the detection position determined from the parameters of the target camera is very close to the true value, so the parameters of the target camera are determined to be accurate. If the detection error between the target camera and at least one co-view camera is greater than or equal to the error threshold, the detection position determined from the parameters of the target camera differs substantially from the true value, so the parameters of the target camera are determined to be inaccurate.
In practice, when the parameters of the target camera are accurate, its detection errors with every co-view camera are very small, that is, its detection positions are very close to those of every co-view camera. Accordingly, whether the parameters of the target camera are accurate can be judged by checking whether its detection positions are close to those of the co-view cameras.
The detection errors between the target camera and each co-view camera can be recorded as in Tables 1 and 2 below.
TABLE 1
TABLE 2
Here, cameras 1-4 are four cameras sharing the same overlapping acquisition area with one another, and cameras 1-4 also have overlapping acquisition areas with cameras 5-8, respectively. The target camera and each co-view camera compute detection errors for two detection points. In Tables 1 and 2, the first column represents the target camera and the first row represents the co-view camera, and there are two detection errors between each target camera and co-view camera pair. In Table 1, there is no detection error between camera 1 and itself, camera 2 and itself, camera 3 and itself, or camera 4 and itself; these cells are marked with a horizontal line.
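The bookkeeping behind such tables might look like the following sketch; the camera identifiers and the pair-keyed dictionary layout are assumptions, and no error values from the original tables are reproduced here.

    def print_error_table(detection_errors):
        """detection_errors maps (target_id, coview_id) -> list of per-point errors.

        Prints a Table-1-style matrix; '-' marks a camera paired with itself
        or a pair without a common view.
        """
        ids = sorted({i for pair in detection_errors for i in pair})
        print("\t".join(["target\\co-view"] + [str(i) for i in ids]))
        for tgt in ids:
            cells = [str(tgt)]
            for cv in ids:
                errs = detection_errors.get((tgt, cv))
                cells.append("-" if tgt == cv or errs is None
                             else ",".join(f"{e:.2f}" for e in errs))
            print("\t".join(cells))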
The parameters of the target camera are determined to be accurate only when its detection errors with every co-view camera are all below the error threshold; this raises the standard for judging camera parameters accurate and thus improves the accuracy of parameter detection.
In addition, when the external parameters of the target camera are determined to be inaccurate, the method may further include: subjecting the inaccurate external parameters of the target camera to further inspection or recalibration.
According to this technical scheme, the images acquired by the target camera and the co-view camera are obtained, the overlapping acquisition region is determined, the corresponding camera parameters are used within the overlapping region to compute the detection positions of the detection points, and the distance between the two detection positions is calculated and determined as the detection error. The detection error is thereby expressed as a physical length, which improves its accuracy, improves the accuracy of using it to check the camera parameters, and ultimately improves the accuracy of the parameter-accuracy detection result.
Fig. 9 is a block diagram of a camera parameter determination apparatus according to an embodiment of the present application, applicable to detecting the accuracy of camera parameters. The apparatus is implemented in software and/or hardware and is configured in an electronic device with a certain data processing capability.
The apparatus 300 for determining camera parameters shown in fig. 9 includes: a camera detection error determination module 301 and a camera parameter accuracy detection module 302; wherein,
a camera detection error determining module 301, configured to determine a detection error between at least one co-view camera and a target camera, as a detection error associated with the target camera; wherein an overlapping acquisition region exists between the target camera and the co-view camera;
The camera parameter accuracy detection module 302 is configured to determine parameter accuracy of the target camera according to a detection error associated with the target camera.
According to this technical scheme, the detection error between the target camera and at least one co-view camera is determined and quantized, which improves the accuracy and precision of detecting the error; the parameter accuracy of the target camera is then determined according to the detection error, so that it can be assessed against a concrete quantized value. This improves the accuracy of detecting the parameter accuracy of the target camera, provides a quantitative judgment of the detection error, and facilitates subsequent adjustment of the camera parameters, thereby improving the accuracy of the camera parameters.
Further, the camera detection error determining module 301 includes: the overlapping acquisition area determining unit is used for acquiring the image acquired by the target camera and the image acquired by the common view camera and determining an overlapping acquisition area between the common view camera and the target camera; a detection point determining unit for determining at least one detection point in the overlapping acquisition region; a position detection unit for acquiring detection positions of the target camera and the common view camera for the detection points, respectively, for each of the detection points; and a detection error calculation unit configured to calculate, for each detection point, a distance between a detection position of the target camera and a detection position of the co-view camera, and determine the distance as a detection error between the co-view camera and the target camera.
Further, the position detection unit includes: the three-dimensional camera coordinate determining subunit is used for acquiring the two-dimensional pixel coordinates of the detection point in the overlapping acquisition area and converting the two-dimensional pixel coordinates into three-dimensional camera coordinates under a three-dimensional camera coordinate system corresponding to the target camera; and the world camera coordinate determining subunit is used for converting the three-dimensional camera coordinate into a world camera coordinate under a world coordinate system according to the external parameters of the target camera and determining the world camera coordinate as a detection position of the target camera for the detection point.
Further, the detection points comprise corner points on the marker lines in the overlapping acquisition areas.
Further, the camera parameter accuracy detection module 302 includes: an error threshold comparing unit for comparing the detection error associated with the target camera with an error threshold; and the comparison analysis unit is used for determining the parameter accuracy of the target camera according to at least one comparison result.
Further, the comparison analysis unit includes: an accurate detection subunit, configured to determine that parameters of the target camera are accurate when each of the detection errors is smaller than the error threshold; and the inaccuracy detection subunit is used for determining inaccuracy of the parameters of the target camera under the condition that the detection error is larger than or equal to the error threshold value.
Further, the acquisition area of the target camera and the acquisition area of the common view camera comprise the same intersection area.
The above apparatus can execute the method for determining camera parameters provided by any embodiment of the present application, and has the corresponding functional modules and beneficial effects of executing that method.
According to embodiments of the present application, there are also provided an electronic device, a readable storage medium, a road side device, a cloud control platform, and a computer program product.
As shown in fig. 10, a block diagram of an electronic device for implementing the method of determining camera parameters according to an embodiment of the present application is illustrated. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 10, the apparatus 400 includes a computing unit 401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In RAM 403, various programs and data required for the operation of device 400 may also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Various components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, etc.; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, etc.; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 401 performs the respective methods and processes described above, for example, a determination method of camera parameters. For example, in some embodiments, the method of determining camera parameters may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by the computing unit 401, one or more steps of the above-described method of determining camera parameters may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the method of determining camera parameters by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The electronic device provided by any embodiment of the application can be applied to the intelligent traffic system or a platform for providing services for the intelligent traffic system.
Optionally, the road side device may include an electronic device provided in any embodiment of the present application.
The road side equipment comprises electronic equipment, communication components and the like, and the electronic equipment can be integrated with the communication components or arranged in a split mode. The electronic device may acquire data, such as pictures and videos, of a perception device (e.g., a roadside camera) for image video processing and data computation. Optionally, the electronic device itself may also have a perceived data acquisition function and a communication function, such as an artificial intelligence (Artificial Intelligence, AI) camera, and the electronic device may perform image video processing and data computation directly based on the acquired perceived data.
The road side unit (RSU) is the core of the intelligent road system; it connects road side facilities, transmits road information to vehicle-mounted terminals and the cloud, and can implement background communication, information broadcasting, high-precision positioning foundation enhancement, and other functions.
By configuring the electronic device provided by any embodiment of the present application in the road side device, the road side device can detect the parameter accuracy of its cameras, improving the accuracy of camera-parameter detection. The road side device can then perform subsequent operations on the accurate images acquired by the cameras, improving the accuracy of those operations, for example the detection accuracy for objects such as pedestrians or vehicles.
Optionally, the cloud control platform may include the electronic device provided in any embodiment of the present application.
The cloud control platform performs processing at the cloud; the electronic device included in the cloud control platform can acquire data such as pictures and videos from sensing devices (e.g., roadside cameras) and perform image and video processing and data computation. The cloud control platform may also be called a vehicle-road cooperation management platform, an edge computing platform, a cloud computing platform, a central system, or a cloud server.
By configuring the electronic device provided by any embodiment of the present application in the cloud control platform, the cloud control platform can detect the parameter accuracy of the cameras, improving the accuracy of camera-parameter detection. The cloud control platform then transmits accurate target detection results to the devices that need them for subsequent operations, improving the accuracy of those operations, such as obstacle-avoidance accuracy and the safety of planned routes.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in the cloud computing service system that overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS services.
According to this technical scheme, the detection error between the target camera and at least one co-view camera is determined and quantized, which improves the accuracy and precision of detecting the error; the parameter accuracy of the target camera is then determined according to the detection error, so that it can be assessed against a concrete quantized value. This improves the accuracy of detecting the parameter accuracy of the target camera, provides a quantitative judgment of the detection error, and facilitates subsequent adjustment of the camera parameters, thereby improving the accuracy of the camera parameters.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (14)

1. A method of determining camera parameters, comprising:
determining a detection error between at least one co-view camera and a target camera as a detection error associated with the target camera; wherein an overlapping acquisition region exists between the target camera and the co-view camera;
determining the parameter accuracy of the target camera according to the detection error associated with the target camera;
the determining the detection error between the at least one co-view camera and the target camera comprises:
acquiring an image acquired by the target camera and an image acquired by the co-view camera, and determining the overlapping acquisition region between the co-view camera and the target camera;
determining at least one detection point in the overlapping acquisition region;
for each detection point, respectively acquiring a detection position of the target camera and a detection position of the co-view camera for the detection point;
for each detection point, calculating a distance between the detection position of the target camera and the detection position of the co-view camera, and determining the distance as a detection error between the co-view camera and the target camera;
the acquiring the detection position of the target camera for the detection point comprises:
acquiring, in the overlapping acquisition region, two-dimensional pixel coordinates of the detection point, and converting the two-dimensional pixel coordinates into three-dimensional camera coordinates in a three-dimensional camera coordinate system corresponding to the target camera;
and converting the three-dimensional camera coordinates into world camera coordinates in a world coordinate system according to extrinsic parameters of the target camera, and determining the world camera coordinates as the detection position of the target camera for the detection point.
2. The method of claim 1, wherein the detection points comprise corner points on a marker line in the overlapping acquisition region.
3. The method of claim 1, wherein the determining the accuracy of the parameters of the target camera based on the detection error associated with the target camera comprises:
comparing the detection error associated with the target camera with an error threshold;
and determining the parameter accuracy of the target camera according to at least one comparison result.
4. A method according to claim 3, wherein said determining the parameter accuracy of the target camera based on at least one comparison result comprises:
under the condition that each detection error is smaller than the error threshold, determining that the parameters of the target camera are accurate;
and under the condition that any detection error is larger than or equal to the error threshold, determining that the parameters of the target camera are inaccurate.
5. The method of claim 1, wherein the acquisition region of the target camera and the acquisition region of the co-view camera comprise the same intersection region.
6. A camera parameter determination apparatus, comprising:
a camera detection error determining module, configured to determine a detection error between at least one co-view camera and a target camera as a detection error associated with the target camera; wherein an overlapping acquisition region exists between the target camera and the co-view camera;
the camera parameter accuracy detection module is used for determining the parameter accuracy of the target camera according to the detection error associated with the target camera;
the camera detection error determining module comprises:
an overlapping acquisition region determining unit, configured to acquire the image acquired by the target camera and the image acquired by the co-view camera, and determine the overlapping acquisition region between the co-view camera and the target camera;
a detection point determining unit, configured to determine at least one detection point in the overlapping acquisition region;
a position detection unit, configured to acquire, for each detection point, a detection position of the target camera and a detection position of the co-view camera for the detection point;
a detection error calculation unit, configured to calculate, for each detection point, a distance between the detection position of the target camera and the detection position of the co-view camera, and to determine the distance as a detection error between the co-view camera and the target camera;
the position detection unit comprises:
a three-dimensional camera coordinate determining subunit, configured to acquire, in the overlapping acquisition region, the two-dimensional pixel coordinates of the detection point and convert the two-dimensional pixel coordinates into three-dimensional camera coordinates in a three-dimensional camera coordinate system corresponding to the target camera;
and a world camera coordinate determining subunit, configured to convert the three-dimensional camera coordinates into world camera coordinates in a world coordinate system according to extrinsic parameters of the target camera, and determine the world camera coordinates as the detection position of the target camera for the detection point.
7. The apparatus of claim 6, wherein the detection points comprise corner points on a marker line in the overlapping acquisition region.
8. The apparatus of claim 6, wherein the camera parameter accuracy detection module comprises:
an error threshold comparing unit for comparing the detection error associated with the target camera with an error threshold;
and the comparison analysis unit is used for determining the parameter accuracy of the target camera according to at least one comparison result.
9. The apparatus of claim 8, wherein the comparative analysis unit comprises:
an accurate detection subunit, configured to determine that the parameters of the target camera are accurate when each detection error is smaller than the error threshold;
and an inaccuracy detection subunit, configured to determine that the parameters of the target camera are inaccurate when any detection error is larger than or equal to the error threshold.
10. The apparatus of claim 6, wherein the acquisition region of the target camera and the acquisition region of the co-view camera comprise the same intersection region.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining camera parameters of any one of claims 1-5.
12. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of determining camera parameters of any one of claims 1-5.
13. A roadside device comprising the electronic device of claim 11.
14. A cloud control platform comprising the electronic device of claim 11.
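As a concrete, non-authoritative sketch of the conversion recited in claim 1, the snippet below back-projects a 2D pixel to world coordinates and measures the per-point detection error. The claims do not state how depth is recovered; here we assume, as is common for roadside cameras, that detection points lie on the ground plane (world z = 0). That assumption, and all names below, are ours rather than the patent's.

```python
import numpy as np

def pixel_to_world_on_ground(uv, K, R, t):
    """Back-project pixel (u, v) to world coordinates, assuming the point
    lies on the world ground plane z = 0 (our assumption, not the claims').

    K    -- 3x3 intrinsic matrix
    R, t -- extrinsic parameters mapping world to camera: x_cam = R @ x_world + t
    """
    # Ray through the pixel in the camera coordinate system.
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    # Camera center and ray direction expressed in world coordinates.
    ray_world = R.T @ ray_cam
    center_world = -R.T @ t
    # Intersect the ray with the plane z = 0: center_z + s * ray_z = 0.
    s = -center_world[2] / ray_world[2]
    return center_world + s * ray_world

def point_detection_error(uv_target, uv_coview, target_cam, coview_cam):
    """Distance between the world positions that the target camera and a
    co-view camera assign to the same detection point, where each camera
    is a (K, R, t) tuple."""
    p_target = pixel_to_world_on_ground(uv_target, *target_cam)
    p_coview = pixel_to_world_on_ground(uv_coview, *coview_cam)
    return float(np.linalg.norm(p_target - p_coview))
```

If the extrinsic parameters of both cameras are accurate, the two back-projected positions coincide and the error is near zero; a miscalibrated target camera shifts its back-projection, and the resulting distance is the quantity compared against the error threshold in claims 3 and 4.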
CN202110429760.4A 2021-04-21 2021-04-21 Camera parameter determining method and device, road side equipment and cloud control platform Active CN113112551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110429760.4A CN113112551B (en) 2021-04-21 2021-04-21 Camera parameter determining method and device, road side equipment and cloud control platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110429760.4A CN113112551B (en) 2021-04-21 2021-04-21 Camera parameter determining method and device, road side equipment and cloud control platform

Publications (2)

Publication Number Publication Date
CN113112551A (en) 2021-07-13
CN113112551B (en) 2023-12-19

Family

ID=76719150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110429760.4A Active CN113112551B (en) 2021-04-21 2021-04-21 Camera parameter determining method and device, road side equipment and cloud control platform

Country Status (1)

Country Link
CN (1) CN113112551B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592951A (en) * 2021-07-14 2021-11-02 阿波罗智联(北京)科技有限公司 Method and device for calibrating external parameters of vehicle-road cooperative middle-road side camera and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3040941B1 (en) * 2014-12-29 2017-08-02 Dassault Systèmes Method for calibrating a depth camera

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231754A (en) * 2008-02-03 2008-07-30 四川虹微技术有限公司 Multi-visual angle video image depth detecting method and depth estimating method
CN102743184A (en) * 2012-05-14 2012-10-24 清华大学 Geometrical parameter calibration method of X-ray cone beam computed tomography system
CN104766291A (en) * 2014-01-02 2015-07-08 株式会社理光 Method and system for calibrating multiple cameras
CN106228564A (en) * 2016-07-29 2016-12-14 国网河南省电力公司郑州供电公司 The outer parameter two step associating online calibration method of many mesh camera and system
CN109360245A (en) * 2018-10-26 2019-02-19 魔视智能科技(上海)有限公司 The external parameters calibration method of automatic driving vehicle multicamera system
CN111435539A (en) * 2019-01-15 2020-07-21 苏州沃迈智能科技有限公司 Multi-camera system external parameter calibration method based on joint optimization
CN112381889A (en) * 2020-11-19 2021-02-19 北京百度网讯科技有限公司 Camera inspection method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Joint calibration of a time-of-flight depth camera and a color camera; Zhou Jie et al.; Signal Processing; Vol. 33, No. 01; pp. 69-77 *

Also Published As

Publication number Publication date
CN113112551A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
KR102581429B1 (en) Method and apparatus for detecting obstacle, electronic device, storage medium and program
CN113989450B (en) Image processing method, device, electronic equipment and medium
CN113657224B (en) Method, device and equipment for determining object state in vehicle-road coordination
CN112598750B (en) Road side camera calibration method and device, electronic equipment and storage medium
CN111241988B (en) Method for detecting and identifying moving target in large scene by combining positioning information
CN112560684B (en) Lane line detection method, lane line detection device, electronic equipment, storage medium and vehicle
CN113029128B (en) Visual navigation method and related device, mobile terminal and storage medium
WO2022217988A1 (en) Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN112967344A (en) Method, apparatus, storage medium, and program product for camera external reference calibration
CN112967345A (en) External parameter calibration method, device and system of fisheye camera
CN113112551B (en) Camera parameter determining method and device, road side equipment and cloud control platform
CN113344906A (en) Vehicle-road cooperative camera evaluation method and device, road side equipment and cloud control platform
CN114429631B (en) Three-dimensional object detection method, device, equipment and storage medium
CN115375774A (en) Method, apparatus, device and storage medium for determining external parameters of a camera
CN116129422A (en) Monocular 3D target detection method, monocular 3D target detection device, electronic equipment and storage medium
CN113470103B (en) Method and device for determining camera acting distance in vehicle-road cooperation and road side equipment
CN115272482A (en) Camera external reference calibration method and storage medium
CN114757824A (en) Image splicing method, device, equipment and storage medium
CN112991463A (en) Camera calibration method, device, equipment, storage medium and program product
CN111950420A (en) Obstacle avoidance method, device, equipment and storage medium
CN113379591B (en) Speed determination method, speed determination device, electronic device and storage medium
CN113870365B (en) Camera calibration method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant