CN113409405B - Method, device, equipment and storage medium for evaluating camera calibration position - Google Patents

Method, device, equipment and storage medium for evaluating camera calibration position

Info

Publication number
CN113409405B
CN113409405B CN202110814910.3A
Authority
CN
China
Prior art keywords
camera
image
determining
coordinate system
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110814910.3A
Other languages
Chinese (zh)
Other versions
CN113409405A (en)
Inventor
王少博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Puhengnuo Information Technology Co ltd
Original Assignee
Jiangsu Puhengnuo Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Puhengnuo Information Technology Co ltd filed Critical Jiangsu Puhengnuo Information Technology Co ltd
Priority to CN202110814910.3A priority Critical patent/CN113409405B/en
Publication of CN113409405A publication Critical patent/CN113409405A/en
Application granted granted Critical
Publication of CN113409405B publication Critical patent/CN113409405B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides methods, apparatus, devices, and storage media for evaluating camera calibration positions of a target object. The present disclosure relates to the field of intelligent transportation. The method comprises the following steps: acquiring the position of a reference region under an object coordinate system, wherein the object coordinate system is established based on a target object; acquiring a first image and a second image with a first camera and a second camera, respectively, the first image and the second image being captured for the reference region, respectively; determining a first image region corresponding to the reference region in the first image and a second image region corresponding to the reference region in the second image, respectively, based on the calibration positions of the first camera and the second camera in the object coordinate system and the position of the reference region; and evaluating the calibration positions of the first camera and the second camera based on the difference between the first image region and the second image region. According to this scheme, the accuracy of the camera calibration positions can be conveniently and intuitively evaluated, so that the accuracy of driving data is ensured.

Description

Method, device, equipment and storage medium for evaluating camera calibration position
Technical Field
The present disclosure relates to the field of intelligent transportation, and more particularly, to a method, apparatus, electronic device, and computer-readable storage medium for evaluating a camera calibration position of a target object.
Background
In the field of intelligent transportation, several cameras need to be installed on a vehicle to assist its operation. For example, a lane keeping assist system installed in a vehicle is equipped with a plurality of cameras, and an alarm is issued to remind the driver to adjust the driving route when the images captured by the cameras show that the vehicle is not following the lane markings. In addition, in the unmanned-driving field, one important source of the data processed by the vehicle is the information captured by the cameras. Therefore, the position of the camera relative to the vehicle is critical, and effectively evaluating the accuracy of the calibrated position of the camera relative to the vehicle is a goal that designers desire to achieve.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, and computer-readable storage medium for evaluating a camera calibration position of a target object.
According to a first aspect of the present disclosure, a method for evaluating a camera calibration position of a target object is provided. The method comprises the following steps: acquiring a position of a reference region under an object coordinate system, wherein the object coordinate system is established based on a target object, the target object comprising at least a first camera and a second camera, the first camera and the second camera being at different positions of the target object; acquiring a first image and a second image with the first camera and the second camera, respectively, wherein the first image and the second image are captured for the reference region, respectively; determining a first image region corresponding to the reference region in the first image and a second image region corresponding to the reference region in the second image, respectively, based on the calibration positions of the first camera and the second camera in the object coordinate system and the position of the reference region; and evaluating the calibration positions of the first camera and the second camera based on the difference of the first image area and the second image area.
According to a second aspect of the present disclosure, an apparatus for evaluating a camera calibration position of a target object is provided. The device comprises: a position acquisition module configured to acquire a position of a reference region under an object coordinate system, wherein the object coordinate system is established based on a target object, the target object comprising at least a first camera and a second camera, the first camera and the second camera being at different positions of the target object; an image acquisition module configured to acquire a first image and a second image with the first camera and the second camera, respectively, wherein the first image and the second image are captured for the reference region, respectively; an image region determination module configured to determine a first image region corresponding to the reference region in the first image and a second image region corresponding to the reference region in the second image, respectively, based on calibration positions of the first camera and the second camera in the object coordinate system and the position of the reference region; and a calibration evaluation module configured to evaluate the calibration positions of the first camera and the second camera based on a difference of the first image area and the second image area.
According to a third aspect of the present disclosure, there is provided an electronic device comprising one or more processors; and storage means for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement a method according to the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method according to the first aspect of the present disclosure.
According to a fifth aspect of the present disclosure there is provided a computer program product comprising computer program instructions for implementing the method of the first aspect of the present disclosure by a processor.
When the camera calibration position of the target object is evaluated, the same reference area is captured by different cameras, so that the accuracy of the camera calibration position is quantified.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which various embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flowchart of a method for assessing camera calibration position of a target object according to an exemplary embodiment of the present disclosure;
FIG. 3 shows a schematic view of images of the same reference area taken by different cameras according to an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a block diagram of an apparatus for assessing camera calibration position of a target object in accordance with an exemplary embodiment of the present disclosure; and
FIG. 5 illustrates a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As described above, in the field of intelligent transportation, whether the position of the camera relative to the vehicle is accurate directly affects the experience of intelligent-transportation users. If the camera's position relative to the vehicle is inaccurate, the results calculated from these data for intelligent traffic operations may also be inaccurate. In particular, in the unmanned-driving field, using such inaccurate results may induce serious safety accidents.
There are various methods of calibrating the position of a camera relative to a vehicle in the prior art. However, after calibration, these calibration data may become inaccurate while the vehicle is running, due to vibration and other factors. In existing solutions, the user of the vehicle is required to go to the vehicle's after-sales department periodically to have the calibrated position of the camera evaluated. This adds time cost for the user and greatly degrades the user experience.
In view of the foregoing, embodiments of the present disclosure provide a solution for evaluating a camera calibration position of a target object. Exemplary embodiments of the present disclosure will be described in detail below in conjunction with fig. 1 to 5.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which various embodiments of the present disclosure may be implemented. As shown in fig. 1, in the environment 100, a vehicle 110 is located at a site. For example, the vehicle 110 may be traveling on a road of a traffic network or parked in a parking lot. In the environment of fig. 1, the calibration position of the cameras on the vehicle 110 may be evaluated.
In the context of the present disclosure, the term "vehicle" may take various forms. The vehicle 110 may be an electric vehicle, a fuel-powered vehicle, or a hybrid vehicle. In some embodiments, vehicle 110 may be a car, truck, trailer, motorcycle, bus, farm vehicle, recreational vehicle, or construction vehicle, among others. In some embodiments, vehicle 110 may take the form of, for example, a ship, an aircraft, or a train. In some embodiments, vehicle 110 may be a household vehicle, a commercial passenger vehicle, or a commercial freight vehicle, among others. In some embodiments, vehicle 110 may be a vehicle equipped with certain autopilot capabilities, which may include, but are not limited to, assisted driving capabilities, semi-autonomous driving capabilities, highly automated driving capabilities, or fully autonomous driving capabilities.
As shown in fig. 1, a plurality of cameras are provided on the vehicle 110, such as a left camera 132 located on the left side of the vehicle 110, a front camera 134 located on the front side of the vehicle 110, a right camera 136 located on the right side of the vehicle 110, and a rear camera 138 located on the rear side of the vehicle 110. It should be understood that the number and location of cameras is merely illustrative and not limiting. More or fewer cameras may be provided on the vehicle 110 and some of these cameras may be located on the same side of the vehicle 110. The present disclosure is not limited in this respect.
In some embodiments, the camera may be a fisheye camera having a view angle of approximately or exactly 180°. Of course, other types of cameras may be used, as long as the camera captures a clear and usable image.
Fig. 2 illustrates a flow chart of a method 200 for evaluating the calibration position of a camera according to some embodiments of the present disclosure. The method 200 may be performed by various types of computing devices in the vehicle 110.
At block 202, the position of the reference region 120 is acquired under an object coordinate system O-xyz, wherein the object coordinate system is established based on the target object.
In some embodiments, the target object may be the vehicle 110 shown in fig. 1. The object coordinate system O-xyz may be established at the center of the rear axle of the vehicle 110. That is, the origin O of the object coordinate system O-xyz may be set at the center of the rear axle of the vehicle 110. However, it should be understood that such an arrangement is merely illustrative. The object coordinate system O-xyz may be set at any position of the vehicle 110 according to different usage scenarios. Embodiments of the present disclosure are not limited in this regard. Although the object coordinate system O-xyz is shown in a two-dimensional form in the top view of FIG. 1, it should be understood that the object coordinate system O-xyz is a three-dimensional coordinate system established on the vehicle 110. The z-axis of the three-dimensional coordinate system may be a direction parallel to the direction of gravity, and the x-axis and the y-axis of the three-dimensional coordinate system may be parallel to the width direction and the length direction of the vehicle. It should be understood that this is also merely illustrative and that the particular manner in which the object coordinate system O-xyz is established is not limited by embodiments of the present disclosure.
In some embodiments, the reference region 120 may be a virtual region selected under the object coordinate system O-xyz, such as a cube. For example, in the embodiment of FIG. 1, the reference region 120 is a cube having sides parallel to the axes of the object coordinate system O-xyz. It should be understood that this is for ease of calculation only. In other embodiments, the reference region 120 may take other forms, such as a cylinder, a sphere, etc.
In other embodiments, the reference area 120 may also be some actual reference object located in the same space as the vehicle 110, such as a roadblock, a road sign, a street lamp, a flagpole, a person, another vehicle, and so on. The specific form of the reference region 120 is not limited by the embodiments of the present disclosure.
In some embodiments, as shown in fig. 1, the first and second cameras may be cameras located on adjacent sides of the vehicle 110, such as a left camera 132 located on the left side and a front camera 134 located on the front side of the vehicle 110, respectively. Or the first camera may be, for example, a right camera 136 located on the right side of the vehicle 110 and the second camera may be, for example, a rear camera 138 located on the rear side of the vehicle 110. However, it should be understood that this is merely illustrative and not limiting. In other embodiments, the first camera and the second camera may also be cameras located on the same side of the vehicle 110. For example, if a plurality of cameras are mounted on the left side of the vehicle 110, the first camera and the second camera may also be cameras mounted at different positions on the left side of the vehicle 110. For another example, the first camera and the second camera may also be mounted on different non-adjacent sides of the vehicle 110, e.g., the first camera may be the left camera 132 of the vehicle and the second camera may be the right camera 136 of the vehicle, so long as such first camera and second camera may simultaneously capture the reference region 120 of the object coordinate system O-xyz.
For illustrative purposes only, embodiments of the present disclosure will be described further below with respect to the left camera 132 and the front camera 134 as examples.
Referring back to fig. 2, at block 204, a first image 332 and a second image 334, respectively, are acquired for the reference region 120 using the left camera 132 and the front camera 134. Referring to fig. 1, the reference area 120 may be disposed at the left front of the vehicle 110 so that the left camera 132 and the front camera 134 can simultaneously photograph the reference area 120.
Fig. 3 shows a schematic view of a first image 332 and a second image 334 of the reference area 120 captured by the left camera 132 and the front camera 134 according to an exemplary embodiment of the present disclosure.
Referring back to FIG. 2, at block 206, a first image region 322 corresponding to the reference region 120 in the first image 332 and a second image region 324 corresponding to the reference region 120 in the second image 334 are determined based on the nominal positions of the left camera 132 and the front camera 134 in the object coordinate system O-xyz and the position of the reference region 120 in the object coordinate system O-xyz, respectively.
The position of the left camera 132 relative to the object coordinate system O-xyz may be calibrated in various ways. That is, a transformation matrix from the left camera coordinate system established on the left camera 132 to the object coordinate system O-xyz is known. Similarly, the position of the front camera 134 relative to the object coordinate system O-xyz may be calibrated in a variety of ways, and thus a transformation matrix from the front camera coordinate system established on the front camera 134 to the object coordinate system O-xyz is also known. Thus, referring to FIG. 1, from the nominal position of the left camera 132 and the acquired position of the reference region 120 relative to the object coordinate system O-xyz, a first image region 322 of the reference region 120 in a first image 332 captured by the left camera 132 may be determined. Similarly, based on the calibration position of the front camera 134 and the acquired position of the reference region 120 relative to the object coordinate system O-xyz, a second image region 324 of the reference region 120 in a second image 334 taken by the front camera 134 may be determined.
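As a purely illustrative sketch of this projection step (block 206), the corners of the reference region 120 can be mapped into a camera image using the calibrated camera pose and a pinhole intrinsic matrix. The function names, the pinhole model, and the omission of lens distortion below are assumptions for illustration and are not prescribed by the present disclosure.

```python
import numpy as np

def project_reference_region(corners_obj, T_cam_from_obj, K):
    # corners_obj:    (N, 3) corners of the reference region 120 in O-xyz
    # T_cam_from_obj: (4, 4) homogeneous transform from the object coordinate
    #                 system to the camera coordinate system (derived from the
    #                 camera's calibration position)
    # K:              (3, 3) pinhole intrinsic matrix of the camera
    ones = np.ones((corners_obj.shape[0], 1))
    pts_cam = (T_cam_from_obj @ np.hstack([corners_obj, ones]).T)[:3, :]
    uv = (K @ pts_cam) / pts_cam[2, :]   # perspective division
    return uv[:2, :].T                   # (N, 2) pixel coordinates

def bounding_image_region(uv):
    # Axis-aligned bounding box of the projected corners, e.g. region 322 or 324
    u_min, v_min = uv.min(axis=0)
    u_max, v_max = uv.max(axis=0)
    return int(u_min), int(v_min), int(u_max), int(v_max)
```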
As shown in fig. 3, in some embodiments, the first image region 322 may be a rectangular region in the first image 332. Similarly, the second image area 324 may be a rectangular area in the second image 334. Although the first image area 322 and the second image area 324 are shown as having the same size, in other embodiments, both may have different sizes.
Referring back to fig. 2, at block 208, the nominal positions of the left camera 132 and the front camera 134 are evaluated based on the difference between the first image area 322 and the second image area 324.
In some embodiments, pixel values for each pixel point in the first image region 322 and the second image region 324 may be acquired separately. Based on the difference between the pixel values obtained from the two regions, a corresponding evaluation score is given.
Various methods may be used to measure the difference between the first image region 322 and the second image region 324. In some embodiments, certain metrics of the pixels obtained from the two regions may be considered, and based on these metrics a value characterizing each region is synthesized. For example, referring to fig. 3, if the first image region 322 corresponding to the reference region 120 in the first image 332 is a 4×4 region, the pixel values of the 16 pixels are acquired in sequence. In some embodiments, the pixel values here may be the RGB values of the pixel points. After the pixel values are obtained, statistics of the pixel values may be computed and used as indicators of the pixel values in the first image area 322, such as a maximum value Vmax, a minimum value Vmin, an average value Vmean, a mean square error Vmse, and so on.
In some embodiments, these indicators may each be given a corresponding weight and combined into the feature value C1 of the first image region 322. For example, the feature value C1 may be calculated using formula (1):
C1 = w1·Vmax + w2·Vmin + w3·Vmean + w4·Vmse (1)
where w1, w2, w3 and w4 are the weights corresponding to the maximum Vmax, the minimum Vmin, the average Vmean, and the mean square error Vmse, respectively. It should be appreciated that although only the maximum Vmax, minimum Vmin, average Vmean, and mean square error Vmse are considered when calculating the feature value C1 of the first image area 322, more or fewer indicators may be included depending on the implementation scenario. The specific number and meaning of the indicators are not limited by the embodiments of the present disclosure. In addition, the specific weight wi corresponding to each indicator can also be adjusted according to the specific usage scenario.
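By way of a non-authoritative illustration of formula (1), the feature value could be computed as in the Python sketch below; the helper name, the equal placeholder weights, and the assumption that the pixel values have already been reduced to scalars (e.g. grayscale) are not specified by the present disclosure.

```python
import numpy as np

def feature_value(region_pixels, weights=(0.25, 0.25, 0.25, 0.25)):
    # region_pixels: pixel values of an image region (e.g. the 16 values of a
    #                4x4 region); weights are (w1, w2, w3, w4) for Vmax, Vmin,
    #                Vmean and Vmse as in formula (1)
    v = np.asarray(region_pixels, dtype=float).ravel()
    v_max, v_min, v_mean = v.max(), v.min(), v.mean()
    v_mse = ((v - v_mean) ** 2).mean()   # mean square error about the mean
    w1, w2, w3, w4 = weights
    return w1 * v_max + w2 * v_min + w3 * v_mean + w4 * v_mse
```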
Similar to the first image area 322, the feature value C2 associated with the second image area 324 may also be acquired. By calculating the difference between the characteristic value C1 of the first image area 322 and the characteristic value C2 of the second image area 324, the degree of difference between the left camera 132 and the front camera 134 installed at different positions of the vehicle 110 can be evaluated to evaluate whether the calibration position of the camera is accurate.
For example, if the difference between the first characteristic value C1 and the second characteristic value C2 is below a certain threshold, indicating that the calibration positions of the left camera 132 and the front camera 134 are relatively accurate, a higher evaluation score may be assigned. If the difference between the first characteristic value C1 and the second characteristic value C2 is higher than a certain threshold value, indicating that the calibration positions of the left camera 132 and the front camera 134 are relatively inaccurate, a lower evaluation score may be assigned. In some embodiments, if the difference between the first characteristic value C1 and the second characteristic value C2 is too large, an alert may be provided to a user or maintainer of the vehicle 110, thereby ensuring safe driving of the vehicle 110.
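A minimal sketch of this scoring logic is given below; the single threshold, the two score constants, and the alert flag are illustrative assumptions, since the disclosure only requires that a smaller difference yield a higher evaluation score and that an excessive difference may trigger an alert.

```python
def evaluate_calibration(c1, c2, threshold, high_score=1.0, low_score=0.0):
    # c1, c2: feature values of the first and second image regions
    difference = abs(c1 - c2)
    score = high_score if difference < threshold else low_score
    alert = difference >= threshold   # e.g. notify the driver or maintainer
    return score, alert
```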
According to the embodiment of the disclosure, whether the calibration position of the camera is accurate or not can be effectively quantized, so that the accuracy of the calibration position can be intuitively evaluated.
In some embodiments, when the first image 332 and the second image 334 are acquired, the captured images are preprocessed to be converted into top-view images. Specifically, the reference area 120 is photographed using the left camera 132 and the front camera 134, thereby acquiring a first original image and a second original image, respectively. The first and second original images are then transformed to determine top-view images for the reference region 120 in the first and second original images, respectively. The preprocessing of the image may be achieved using various methods known or developed in the future. The particular manner of preprocessing is not limited by the embodiments of the present disclosure. According to such an implementation, for images capturing the same reference area 120, distortion caused by the cameras being at different positions of the vehicle 110 can be eliminated, thereby further improving the usability of the captured images.
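One possible way to obtain such top-view images, assuming the four pixel corners of the reference region's ground footprint are already known and any fisheye distortion has been removed beforehand, is a simple perspective warp as sketched below; the function name, the output size, and the use of OpenCV are assumptions, not requirements of the present disclosure.

```python
import cv2
import numpy as np

def to_top_view(raw_image, src_corners, dst_size=(400, 400)):
    # src_corners: four pixel corners of the reference region's footprint in the
    #              raw image, ordered top-left, top-right, bottom-right, bottom-left
    w, h = dst_size
    dst_corners = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(np.float32(src_corners), dst_corners)
    return cv2.warpPerspective(raw_image, H, dst_size)
```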
In some embodiments, determining a first image region 322 in the first image 332 that corresponds to the reference region 120 includes: a spatial transformation matrix T1 of the left camera coordinate system of the left camera 132 with respect to the object coordinate system O-xyz is determined and the first image area 322 is calculated from the spatial transformation matrix T1, the parameters of the left camera 132 and the position of the reference area 120 in the object coordinate system O-xyz. In some embodiments, determining a second image region 324 in the second image 334 that corresponds to the reference region 120 includes: a spatial transformation matrix T2 of the front camera coordinate system of the front camera 134 relative to the object coordinate system O-xyz is determined and a second image area 324 is calculated from the spatial transformation matrix T2, the parameters of the front camera 134 and the position of the reference area 120 in the object coordinate system O-xyz.
In some embodiments, the parameters of the left camera 132 and/or the front camera 134 include one or more of focal length, distortion, thickness of the lens. It should be understood that the parameters of the cameras listed herein are merely illustrative and not limiting.
In a further embodiment, if the difference between the determined characteristic value C1 of the first image area 322 and the characteristic value C2 of the second image area 324 exceeds a preset threshold, indicating that the positions calibrated for the left camera 132 and the front camera 134 are inaccurate, the calibration may also be optimized by fine tuning.
When fine tuning is desired, an attempted adjustment may first be made to the calibration position of the left camera 132 and/or the front camera 134. In some embodiments, only one of the left camera 132 and the front camera 134 may be adjusted. In other embodiments, the left camera 132 and the front camera 134 may be adjusted simultaneously. Subsequently, based on the adjusted calibration position in the object coordinate system O-xyz and the position of the reference region, an adjusted first image region 322 'and second image region 324' are obtained. Similar to the steps mentioned above, the assessment of the nominal positions of the left camera 132 and the front camera 134 is updated based on the differences between the adjusted first image area 322 'and the second image area 324'. For example, the updated assessment score may be determined based on the difference between the adjusted first image region 322 'and the second image region 324'.
If the updated evaluation score is higher than the original evaluation score, i.e. the difference between the adjusted first image area 322 'and the second image area 324' is smaller than the difference between the original first image area 322 and the second image area 324, this means that an attempted adjustment of the calibration positions of the left camera 132 and the front camera 134 is meaningful, in which case the original calibration position can be replaced with the updated calibration position and the updated calibration position used for further processing. Such adjustment is advantageous to further optimize the calibration between the left camera 132 and the front camera 134.
If the updated evaluation score is lower than the original evaluation score, i.e. the difference between the adjusted first image area 322' and second image area 324' is greater than the difference between the original first image area 322 and second image area 324, this means that the attempted adjustment of the calibration positions of the left camera 132 and the front camera 134 was unsuccessful, in which case the original calibration positions are not updated or replaced, and the next attempted adjustment can be started.
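The accept/reject logic of this fine-tuning loop can be sketched as follows; how the perturbation is proposed and how the score function is built from the image-region difference are left open by the present disclosure, so the names and structure here are illustrative assumptions.

```python
def try_adjustment(current_poses, propose, score_fn):
    # current_poses: mapping from camera name to its calibration position/pose
    # propose:       returns a slightly perturbed copy of the poses (hypothetical)
    # score_fn:      maps poses to an evaluation score derived from the difference
    #                between the corresponding image regions
    candidate = propose(current_poses)
    if score_fn(candidate) > score_fn(current_poses):
        return candidate        # adjustment helped: keep the updated calibration
    return current_poses        # adjustment did not help: keep the original calibration
```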
According to such an implementation, the calibration position of the cameras on the vehicle 110 may be optimized. Such optimization may occur at any time during the use of the vehicle 110, for example when the vehicle 110 is parked in a parking lot or traveling on a road, so real-time optimization can be effectively achieved. This means that the user of the vehicle 110 does not need to spend additional time adjusting the calibration position of the camera, thereby greatly improving the user experience.
Although the aspects of the present disclosure are described above with reference to the left camera 132 and the front camera 134, it should be understood that three or more cameras on the vehicle 110 may also be considered when evaluating the calibration positions of the cameras, as long as those cameras can simultaneously capture images of the same reference area 120. The specific manner is not described in detail here.
Fig. 4 schematically illustrates a block diagram of an apparatus 400 for assessing camera calibration positions of a target object according to an exemplary embodiment of the present disclosure. Specifically, the apparatus 400 includes: a position acquisition module 402 configured to acquire a position of the reference region under an object coordinate system, wherein the object coordinate system is established based on a target object, the target object comprising at least a first camera and a second camera, the first camera and the second camera being at different positions of the target object; an image acquisition module 404 configured to acquire a first image and a second image with the first camera and the second camera, respectively, wherein the first image and the second image are captured for a reference region, respectively; an image region determination module 406 configured to determine a first image region corresponding to the reference region in the first image and a second image region corresponding to the reference region in the second image, respectively, based on calibration positions of the first camera and the second camera in the object coordinate system and the position of the reference region; and a calibration evaluation module 408 configured to evaluate the calibration positions of the first camera and the second camera based on the difference of the first image area and the second image area.
In some embodiments, the calibration evaluation module is further configured to: determining a first characteristic value of the first image area and a second characteristic value of the second image area respectively based on pixel values of pixel points in the first image area and the second image area; and evaluating the calibration positions of the first camera and the second camera based on the difference between the first characteristic value and the second characteristic value.
In some embodiments, determining the first characteristic value comprises: determining a plurality of first indicators of the first plurality of pixel values based on a first plurality of pixel values of a plurality of pixel points in the first image region; and determining a first feature value based on the plurality of first indicators and weights corresponding to the plurality of first indicators; and determining the second characteristic value comprises: determining a plurality of second indicators of the second plurality of pixel values based on the second plurality of pixel values of the plurality of pixel points in the second image region; and determining a second characteristic value based on the plurality of second indices and weights corresponding to the plurality of second indices.
In some embodiments, acquiring the first image and the second image comprises: photographing a reference area with a first camera and a second camera to determine a first original image and a second original image; and transforming the first and second original images to determine top-view images in the first and second original images, respectively, for the reference region.
In some embodiments, determining the first image region includes: determining a first spatial transformation matrix of the first camera relative to the object coordinate system; and determining a first image region based on the first spatial transformation matrix, the first parameter of the first camera, and the position of the reference region in the object coordinate system; determining the second image region includes: determining a second spatial transformation matrix of the second camera relative to the object coordinate system; and determining a second image region based on the second spatial transformation matrix, a second parameter of the second camera, and a position of the reference region in the object coordinate system.
In some embodiments, evaluating the calibration positions of the first camera and the second camera includes: determining a first evaluation score in response to the difference being below a first threshold; and responsive to the difference being above the first threshold, determining a second evaluation score, wherein the second evaluation score is lower than the first evaluation score.
In some embodiments, the apparatus 400 further comprises: an adjustment module configured to adjust calibration positions of the first camera and the second camera in the object coordinate system in response to the difference being above a first threshold and below a second threshold; an image region adjustment module configured to obtain adjusted first and second image regions based on adjusted calibration positions of the first and second cameras and positions of the reference region in the object coordinate system; and an assessment updating module configured to update an assessment of the calibration positions of the first camera and the second camera based on the adjusted differences of the first image area and the second image area.
In some embodiments, the first parameter comprises one or more of a focal length, a distortion, a thickness of a lens of the first camera; and the second parameter includes one or more of a focal length, a distortion, a thickness of a lens of the second camera.
In some embodiments, the first plurality of indicators of the first plurality of pixel values includes one or more of an average value, a maximum value, a minimum value, and a mean square error of the first plurality of pixel values, and the second plurality of indicators of the second plurality of pixel values includes one or more of an average value, a maximum value, a minimum value, and a mean square error of the second plurality of pixel values.
According to the technical solution of the embodiments of the present disclosure, the same reference area 120 is photographed by at least two cameras on the vehicle 110, and according to the difference between the captured images, the accuracy of the calibration positions of the cameras is quantified in real time and the calibration positions can be adjusted, so that the accuracy of driving data is ensured. The scheme of the disclosure can be used not only for manually driven vehicles but also for unmanned vehicles.
In the technical scheme of the present disclosure, the acquisition, storage, application and the like of the relevant user personal information all conform to the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the apparatus 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, ROM 502, and RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into RAM 503 and executed by computing unit 501, one or more steps of method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the method 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (18)

1. A method for assessing camera calibration position of a target object, comprising:
Acquiring the position of a reference area under an object coordinate system, wherein the object coordinate system is established based on a target object, the target object comprises at least a first camera and a second camera, and the first camera and the second camera are positioned at different positions of the target object;
Acquiring a first image and a second image with the first camera and the second camera, respectively, wherein the first image and the second image are captured for the reference region, respectively;
determining a first image region corresponding to the reference region in the first image and a second image region corresponding to the reference region in the second image, respectively, based on the calibration positions of the first camera and the second camera in the object coordinate system and the positions of the reference region; and
Evaluating the nominal positions of the first and second cameras based on differences in the first and second image areas;
Wherein evaluating the nominal positions of the first camera and the second camera comprises:
determining a first characteristic value of the first image area and a second characteristic value of the second image area respectively based on pixel values of pixel points in the first image area and the second image area; and
The calibration positions of the first camera and the second camera are evaluated based on the difference of the first characteristic value and the second characteristic value.
2. The method according to claim 1,
Wherein determining the first characteristic value comprises:
determining a plurality of first indicators of a first plurality of pixel values based on the first plurality of pixel values of a plurality of pixel points in the first image region; and
Determining the first feature value based on the plurality of first indicators and weights corresponding to the plurality of first indicators; and
Wherein determining the second characteristic value comprises:
determining a plurality of second indicators of a second plurality of pixel values based on a second plurality of pixel values of a plurality of pixel points in the second image region; and
The second feature value is determined based on the plurality of second indices and weights corresponding to the plurality of second indices.
3. The method of claim 1, wherein acquiring the first image and the second image comprises:
photographing the reference region with the first camera and the second camera to determine a first original image and a second original image; and
The first and second original images are transformed to determine top-view images in the first and second original images, respectively, for the reference region.
4. The method according to claim 1,
Wherein determining the first image region comprises:
Determining a first spatial transformation matrix of the first camera relative to the object coordinate system; and
Determining the first image region based on the first spatial transformation matrix, a first parameter of the first camera, and the position of the reference region in the object coordinate system; and
Wherein determining the second image region comprises:
Determining a second spatial transformation matrix of the second camera relative to the object coordinate system; and
The second image region is determined based on the second spatial transformation matrix, a second parameter of the second camera, and the position of the reference region in the object coordinate system.
5. The method of claim 1, wherein evaluating the nominal positions of the first and second cameras comprises:
determining a first evaluation score in response to the difference being below a first threshold; and
A second evaluation score is determined in response to the difference being above the first threshold, wherein the second evaluation score is lower than the first evaluation score.
6. The method of claim 5, further comprising:
responsive to the difference being above the first threshold and below a second threshold, adjusting the nominal positions of the first and second cameras in the object coordinate system;
Obtaining adjusted first and second image areas based on the adjusted calibration positions of the first and second cameras and the positions of the reference area in the object coordinate system; and
Based on the adjusted differences of the first and second image areas, an assessment of the nominal positions of the first and second cameras is updated.
7. The method of claim 4, wherein
The first parameter comprises one or more of a focal length, distortion, thickness of a lens of the first camera; and
The second parameter includes one or more of a focal length, distortion, thickness of a lens of the second camera.
8. The method of claim 2, wherein
The plurality of first indices of the first plurality of pixel values includes one or more of an average, a maximum, a minimum, and a mean square error of the first plurality of pixel values, and
The plurality of second indices of the second plurality of pixel values includes one or more of an average, a maximum, a minimum, and a mean square error of the second plurality of pixel values.
9. An apparatus for evaluating a camera calibration position of a target object, comprising:
a position acquisition module configured to acquire a position of a reference region under an object coordinate system, wherein the object coordinate system is established based on a target object, the target object comprising at least a first camera and a second camera, the first camera and the second camera being at different positions of the target object;
An image acquisition module configured to acquire a first image and a second image with the first camera and the second camera, respectively, wherein the first image and the second image are captured for the reference region, respectively;
an image region determination module configured to determine a first image region corresponding to the reference region in the first image and a second image region corresponding to the reference region in the second image, respectively, based on calibration positions of the first camera and the second camera in the object coordinate system and the positions of the reference region; and
A calibration evaluation module configured to evaluate the calibration positions of the first camera and the second camera based on a difference of the first image area and the second image area;
Wherein the calibration evaluation module is further configured to:
determining a first characteristic value of the first image area and a second characteristic value of the second image area respectively based on pixel values of pixel points in the first image area and the second image area; and
The calibration positions of the first camera and the second camera are evaluated based on the difference of the first characteristic value and the second characteristic value.
10. An apparatus according to claim 9,
Wherein determining the first characteristic value comprises:
determining a plurality of first indicators of a first plurality of pixel values based on the first plurality of pixel values of a plurality of pixel points in the first image region; and
Determining the first feature value based on the plurality of first indicators and weights corresponding to the plurality of first indicators; and
Wherein determining the second characteristic value comprises:
determining a plurality of second indicators of a second plurality of pixel values based on a second plurality of pixel values of a plurality of pixel points in the second image region; and
The second feature value is determined based on the plurality of second indices and weights corresponding to the plurality of second indices.
11. The apparatus of claim 9, wherein acquiring the first image and the second image comprises:
photographing the reference region with the first camera and the second camera to determine a first original image and a second original image; and
The first and second original images are transformed to determine top-view images in the first and second original images, respectively, for the reference region.
12. An apparatus according to claim 9,
Wherein determining the first image region comprises:
Determining a first spatial transformation matrix of the first camera relative to the object coordinate system; and
Determining the first image region based on the first spatial transformation matrix, a first parameter of the first camera, and the position of the reference region in the object coordinate system; and
Wherein determining the second image region comprises:
Determining a second spatial transformation matrix of the second camera relative to the object coordinate system; and
The second image region is determined based on the second spatial transformation matrix, a second parameter of the second camera, and the position of the reference region in the object coordinate system.
13. The apparatus of claim 9, wherein evaluating the nominal positions of the first and second cameras comprises:
determining a first evaluation score in response to the difference being below a first threshold; and
A second evaluation score is determined in response to the difference being above the first threshold, wherein the second evaluation score is lower than the first evaluation score.
14. The apparatus of claim 13, further comprising:
an adjustment module configured to adjust the nominal positions of the first and second cameras in the object coordinate system in response to the difference being above the first threshold and below a second threshold;
an image region adjustment module configured to obtain adjusted first and second image regions based on the adjusted calibration positions of the first and second cameras and the positions of the reference region in the object coordinate system; and
An evaluation update module configured to update an evaluation of the calibration positions of the first camera and the second camera based on the adjusted differences of the first image area and the second image area.
15. The apparatus of claim 12, wherein
The first parameter comprises one or more of a focal length, distortion, thickness of a lens of the first camera; and
The second parameter includes one or more of a focal length, distortion, thickness of a lens of the second camera.
16. The apparatus of claim 10, wherein
The plurality of first indices of the first plurality of pixel values includes one or more of an average, a maximum, a minimum, and a mean square error of the first plurality of pixel values, and
The plurality of second indices of the second plurality of pixel values includes one or more of an average, a maximum, a minimum, and a mean square error of the second plurality of pixel values.
17. An electronic device, comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-8.
CN202110814910.3A 2021-07-19 2021-07-19 Method, device, equipment and storage medium for evaluating camera calibration position Active CN113409405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110814910.3A CN113409405B (en) 2021-07-19 2021-07-19 Method, device, equipment and storage medium for evaluating camera calibration position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110814910.3A CN113409405B (en) 2021-07-19 2021-07-19 Method, device, equipment and storage medium for evaluating camera calibration position

Publications (2)

Publication Number Publication Date
CN113409405A CN113409405A (en) 2021-09-17
CN113409405B true CN113409405B (en) 2024-07-05

Family

ID=77686912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110814910.3A Active CN113409405B (en) 2021-07-19 2021-07-19 Method, device, equipment and storage medium for evaluating camera calibration position

Country Status (1)

Country Link
CN (1) CN113409405B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758150B (en) * 2023-05-15 2024-04-30 阿里云计算有限公司 Position information determining method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766760A (en) * 2019-10-21 2020-02-07 北京百度网讯科技有限公司 Method, device, equipment and storage medium for camera calibration
CN111951335A (en) * 2020-08-13 2020-11-17 珠海格力电器股份有限公司 Method, device, processor and image acquisition system for determining camera calibration parameters

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435300B (en) * 2019-08-26 2024-06-04 华为云计算技术有限公司 Positioning method and device
CN112840374A (en) * 2020-06-30 2021-05-25 深圳市大疆创新科技有限公司 Image processing method, image acquisition device, unmanned aerial vehicle system and storage medium
CN111815719B (en) * 2020-07-20 2023-12-22 阿波罗智能技术(北京)有限公司 External parameter calibration method, device and equipment of image acquisition equipment and storage medium
CN112738487B (en) * 2020-12-24 2022-10-11 阿波罗智联(北京)科技有限公司 Image projection method, device, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766760A (en) * 2019-10-21 2020-02-07 北京百度网讯科技有限公司 Method, device, equipment and storage medium for camera calibration
CN111951335A (en) * 2020-08-13 2020-11-17 珠海格力电器股份有限公司 Method, device, processor and image acquisition system for determining camera calibration parameters

Also Published As

Publication number Publication date
CN113409405A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
JP6494719B2 (en) Traffic signal map creation and detection
US20180316905A1 (en) Camera parameter set calculation method, recording medium, and camera parameter set calculation apparatus
JP6473571B2 (en) TTC measuring device and TTC measuring program
JP2019024196A (en) Camera parameter set calculating apparatus, camera parameter set calculating method, and program
JP2019096072A (en) Object detection device, object detection method and program
JP2019008460A (en) Object detection device and object detection method and program
WO2018120040A1 (en) Obstacle detection method and device
CN112967344B (en) Method, device, storage medium and program product for calibrating camera external parameters
CN113706704B (en) Method and equipment for planning route based on high-precision map and automatic driving vehicle
CN109345591B (en) Vehicle posture detection method and device
CN113409405B (en) Method, device, equipment and storage medium for evaluating camera calibration position
WO2018149539A1 (en) A method and apparatus for estimating a range of a moving object
WO2021175119A1 (en) Method and device for acquiring 3d information of vehicle
WO2021129073A1 (en) Distance measurement method and device
CN116486351A (en) Driving early warning method, device, equipment and storage medium
CN117612132A (en) Method and device for complementing bird's eye view BEV top view and electronic equipment
CN116091567A (en) Registration method and device of automatic driving vehicle, electronic equipment and vehicle
CN112477868B (en) Collision time calculation method and device, readable storage medium and computer equipment
JP6507590B2 (en) Image conversion apparatus and image conversion method
CN113689552A (en) Vehicle-mounted all-round-view model adjusting method and device, electronic equipment and storage medium
CN116109711A (en) Driving assistance method and device and electronic equipment
CN114565681B (en) Camera calibration method, device, equipment, medium and product
CN112215033B (en) Method, device and system for generating panoramic looking-around image of vehicle and storage medium
CN112970029A (en) Deep neural network processing for sensor blind detection in autonomous machine applications
CN115345919B (en) Depth determination method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240612

Address after: 211135 2, B unit 300, Zhihui Road, Kirin science and Technology Innovation Park, Jiangning District, Nanjing, Jiangsu.

Applicant after: Jiangsu Puhengnuo Information Technology Co.,Ltd.

Country or region after: China

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100094

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant