WO2022048493A1 - Method and apparatus for calibrating external parameters of a camera - Google Patents

Method and apparatus for calibrating external parameters of a camera

Info

Publication number
WO2022048493A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
reference object
parameters
precision map
calibration reference
Prior art date
Application number
PCT/CN2021/114890
Other languages
English (en)
French (fr)
Inventor
任小荣
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21863558.9A priority Critical patent/EP4198901A4/en
Publication of WO2022048493A1 publication Critical patent/WO2022048493A1/zh
Priority to US18/177,930 priority patent/US20230206500A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/97 - Determining parameters from multiple pictures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 - Local feature extraction by matching or filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30244 - Camera pose
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle

Definitions

  • the present application relates to the field of data processing, in particular to a method and device for calibrating external parameters of a camera.
  • Camera calibration represents the process of obtaining camera parameters.
  • the camera parameters include internal and external parameters, the internal parameters are the parameters of the camera itself, and the external parameters are parameters related to the installation position of the camera, such as pitch angle, rotation angle and yaw angle.
  • camera calibration is divided into two categories: traditional camera calibration method and camera self-calibration method.
  • the traditional camera calibration method uses a calibration plate for calibration, but it is only suitable for scenes where the camera is still.
  • the external parameters of the camera may change when the vehicle vibrates because of road conditions, so the camera parameters need to be dynamically calibrated.
  • the existing camera dynamic calibration method is the camera self-calibration method, which uses lane lines for calibration, but this method requires the vehicle to drive centered in the lane, which is highly subjective and results in low accuracy of external parameter calibration.
  • in addition, this method is only suitable for specific roads, such as level and straight roads.
  • the present application provides a method and device for calibrating external parameters of a camera. By using a high-precision map to calibrate the external parameters of the camera, the calibration accuracy of the external parameters of the camera can be improved.
  • in a first aspect, a method for calibrating external parameters of a camera is provided, the method comprising: acquiring a captured image of a camera, where the captured image is an image captured by the camera with the calibration reference object as the shooting object; and acquiring external parameters of the camera according to the captured image and a high-precision map, where the high-precision map includes the calibration reference object.
  • the high-precision map includes the calibration reference object, which means that the high-precision map has location information of the calibration reference object.
  • the external parameters of the camera are acquired through the actual captured image of the calibration reference object and the high-precision map, so that the operation of measuring the three-dimensional coordinates of the calibration reference object relative to the camera is not required in the process of obtaining the external parameters.
  • in the existing method, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measurement.
  • the accuracy of measuring the three-dimensional coordinates of the calibration reference object relative to the camera is low, resulting in low accuracy of the external parameter calibration of the camera.
  • in the present application, the external parameters of the camera are obtained through the captured image of the calibration reference object and the high-precision map, and there is no need to measure the three-dimensional coordinates of the calibration reference object relative to the camera, so the calibration accuracy of the external parameters is no longer limited by the measurement accuracy and can therefore be improved.
  • the calibration reference object may be an object around the camera.
  • the calibration reference may be a road feature.
  • the calibration reference object may be any one of the following road features: lane lines, signboards, pole-like objects, road signs, and traffic lights.
  • the signboard is, for example, a traffic signboard or a pole-mounted signboard
  • the pole-like object is, for example, a street light pole and the like.
  • the acquiring of the external parameters of the camera according to the captured image and the high-precision map includes: acquiring the two-dimensional coordinates of the calibration reference object on the captured image; determining the position of the camera on the high-precision map according to the positioning information of the camera, and obtaining the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map; obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map; and calculating the external parameters of the camera according to the two-dimensional coordinates and the three-dimensional coordinates.
  • acquiring the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map includes: obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
  • the high-precision map has a function of generating the relative position of two positioning points on the map; wherein obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map includes: generating, by using the high-precision map, the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map.
  • the positioning information of the camera can be obtained by any one or a combination of the following positioning technologies: real-time kinematic (RTK) carrier-phase differential technology based on satellite positioning, and vision-based or lidar-based matching positioning technology.
  • in the existing method, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measurement, so the external parameter calibration accuracy of the camera depends on the measurement accuracy of the three-dimensional coordinates.
  • in the present application, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by using a high-precision map rather than by measurement, so the calibration accuracy of the external parameters of the camera is no longer limited by the measurement accuracy.
  • in this way, the accuracy of the three-dimensional coordinates of the calibration reference object relative to the camera can be improved, so that the external parameter calibration accuracy of the camera can be improved.
  • the calibration reference object is a road feature; wherein acquiring the external parameters of the camera according to the captured image and the high-precision map includes: acquiring multiple sets of camera parameters, where each set of camera parameters includes internal and external parameters; generating multiple road feature projection images by using the high-precision map according to the multiple sets of camera parameters and the positioning information of the camera; obtaining, from the multiple road feature projection images, a matched road feature projection image with the highest matching degree with the captured image; and obtaining the external parameters of the camera according to the set of camera parameters corresponding to the matched road feature projection image.
  • the acquiring multiple sets of camera parameters includes: taking the initial value of the rotation matrix of the camera as a reference, and using a preset step size to generate multiple sets of rotation matrix simulation values;
  • and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values, respectively.
  • the obtaining of multiple sets of camera parameters includes: taking the rotation matrix and the translation matrix of the camera as benchmarks and using corresponding step sizes, generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values.
  • the shape of the road features in the high-precision map is a binary image; obtaining, from the plurality of road feature projection images, the matched road feature projection image with the highest matching degree with the captured image includes: acquiring a binary image of the captured image; and obtaining, from the plurality of road feature projection images, the matched road feature projection image with the highest matching degree with the binary image of the captured image.
  • in the existing method, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measurement, so the external parameter calibration accuracy of the camera depends on the measurement accuracy of the three-dimensional coordinates.
  • in the present application, the external parameters of the camera are obtained by using the road feature projection function of the high-precision map rather than by measuring the three-dimensional coordinates of the calibration reference object relative to the camera, so the calibration accuracy of the external parameters of the camera is no longer limited by the measurement accuracy.
  • high-precision camera extrinsic parameter calibration can be achieved.
  • the camera is a vehicle-mounted camera, and the vehicle on which the camera is carried may be in a stationary state or in a moving state.
  • in a second aspect, an apparatus for calibrating external parameters of a camera is provided, including: an acquisition unit configured to acquire a captured image of a camera, where the captured image is an image captured by the camera with the calibration reference object as the shooting object; and a processing unit configured to acquire the external parameters of the camera according to the captured image and a high-precision map, where the high-precision map includes the calibration reference object.
  • the processing unit is configured to acquire the external parameters of the camera through the following operations: acquiring the two-dimensional coordinates of the calibration reference object on the captured image; determining the position of the camera on the high-precision map according to the positioning information of the camera, and obtaining the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map; obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map; and calculating the external parameters of the camera according to the two-dimensional coordinates and the three-dimensional coordinates.
  • the processing unit obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
  • the high-precision map has a function of generating the relative position of two positioning points on the map; the processing unit obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: generating, by using the high-precision map, the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map.
  • the calibration reference object is a road feature
  • the processing unit is configured to acquire the external parameters of the camera through the following operations: acquiring multiple sets of camera parameters, where each set of camera parameters includes internal and external parameters; generating multiple road feature projection images by using the high-precision map according to the multiple sets of camera parameters and the positioning information of the camera; obtaining, from the multiple road feature projection images, the matched road feature projection image with the highest matching degree with the captured image; and acquiring the external parameters of the camera according to the set of camera parameters corresponding to the matched road feature projection image.
  • the processing unit is configured to obtain the multiple sets of camera parameters through the following operations: taking the initial value of the rotation matrix of the camera as a benchmark and using a preset step size, generating multiple sets of rotation matrix simulation values; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values, respectively.
  • the processing unit is configured to obtain the multiple sets of camera parameters through the following operations: respectively taking the rotation matrix and the translation matrix of the camera as benchmarks and using corresponding step sizes, generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values.
  • the processing unit is configured to obtain the matched road feature projection image through the following operations: acquiring a binary image of the captured image; and obtaining, from the plurality of road feature projection images, the matched road feature projection image with the highest matching degree with the binary image of the captured image.
  • the camera is a vehicle-mounted camera, and the vehicle on which the camera is carried may be in a stationary state or in a moving state.
  • in a third aspect, an apparatus for calibrating external parameters of a camera is provided. The apparatus includes a processor coupled with a memory, where the memory is used for storing computer programs or instructions, and the processor is used for executing the computer programs or instructions stored in the memory, so that the method of the first aspect is performed.
  • the apparatus includes one or more processors.
  • the apparatus may further include a memory coupled to the processor.
  • the device may include one or more memories.
  • the memory may be integrated with the processor, or provided separately.
  • the apparatus may also include a data interface.
  • in a fourth aspect, a computer-readable medium is provided, storing program code for execution by a device, where the program code includes instructions for performing the method in the first aspect above.
  • in a fifth aspect, a computer program product comprising instructions is provided, which, when run on a computer, causes the computer to perform the method of the first aspect above.
  • in a sixth aspect, a chip is provided, including a processor and a data interface, where the processor reads instructions stored in a memory through the data interface and executes the method in the first aspect.
  • the chip may further include a memory in which instructions are stored; the processor is configured to execute the instructions stored in the memory, and when the instructions are executed, the processor is configured to perform the method in the first aspect above.
  • by obtaining the external parameters of the camera through the high-precision map and the captured image of the calibration reference object, the present application can improve the calibration accuracy of the external parameters of the camera.
  • FIG. 1 is a schematic flowchart of a method for calibrating external parameters of a camera provided by an embodiment of the present application.
  • FIG. 2 is another schematic flowchart of a method for calibrating external parameters of a camera provided by an embodiment of the present application.
  • FIG. 3 is another schematic flowchart of a method for calibrating external parameters of a camera provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of using a captured image to match a plurality of road feature projection images in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a captured image and a binary image thereof in an embodiment of the present application.
  • FIG. 6 is a schematic block diagram of an apparatus for calibrating external parameters of a camera provided by an embodiment of the present application.
  • FIG. 7 is another schematic block diagram of an apparatus for calibrating external parameters of a camera provided by an embodiment of the present application.
  • Camera calibration is also referred to as video camera calibration.
  • M represents the transformation matrix between the three-dimensional space point X_W and the two-dimensional image point X_P, which can be called a projection matrix.
  • Some elements of the projection matrix M characterize the parameters of the camera; the goal of camera calibration is to obtain the projection matrix M.
  • the parameters of the camera include internal and external parameters.
  • the internal parameters are the parameters of the camera itself, such as the focal length, etc.
  • the external parameters are parameters related to the installation position of the camera, such as pitch angle, rotation angle and yaw angle.
  • the transformation matrix corresponding to the internal parameters may be referred to as the internal parameter transformation matrix M_1,
  • and the transformation matrix corresponding to the external parameters may be referred to as the external parameter transformation matrix M_2.
  • the relationship between the three-dimensional space point X_W and the two-dimensional image point X_P can also be expressed as: X_P = M·X_W = M_1·M_2·X_W.
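  • By way of illustration only, the following minimal numpy sketch applies the relationship X_P = M_1·M_2·X_W to one point; every numeric value is a placeholder assumption, not a parameter from this application.

```python
import numpy as np

# Illustrative pinhole projection X_P = M_1 * M_2 * X_W in homogeneous
# coordinates. All numeric values are placeholders.
M1 = np.array([[1000.0,    0.0, 640.0],    # internal parameter matrix M_1:
               [   0.0, 1000.0, 360.0],    # focal lengths and principal point
               [   0.0,    0.0,   1.0]])
R = np.eye(3)                              # assumed rotation (external parameter)
t = np.array([[0.0], [0.0], [0.0]])        # assumed translation (external parameter)
M2 = np.hstack([R, t])                     # external parameter matrix M_2 (3x4)

X_W = np.array([2.0, 1.0, 10.0, 1.0])      # homogeneous 3D point in the world frame
x = M1 @ M2 @ X_W                          # projection matrix M = M_1 * M_2
X_P = x[:2] / x[2]                         # normalize to pixel coordinates
print(X_P)                                 # -> [840. 460.]
```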
  • Camera calibration generally requires a calibration reference (also referred to as a calibration or a reference).
  • the calibration reference object represents the object shot by the camera during the camera calibration process.
  • the three-dimensional space point X_W may be the coordinates of the calibration reference object in the world coordinate system,
  • and the two-dimensional image point X_P may be the two-dimensional coordinates of the calibration reference object on the image plane of the camera.
  • Input of camera calibration: the two-dimensional coordinates (ie, image point coordinates) of the calibration reference object on the image plane of the camera, and the three-dimensional coordinates (ie, three-dimensional space coordinates) of the calibration reference object relative to the camera.
  • the two-dimensional coordinates of the calibration reference object on the image plane of the camera may correspond to the two-dimensional image point X_P in the above example;
  • the three-dimensional coordinates of the calibration reference object relative to the camera may correspond to the three-dimensional space point X_W in the above example, or to a rigid-body transformation of the three-dimensional space point X_W.
  • Output of camera calibration: the camera parameters, including internal and external parameters.
  • camera calibration is a critical step, and the accuracy of its calibration results directly affects the accuracy of the results produced by systems that use the camera.
  • camera calibration can be divided into two categories: traditional camera calibration method (static) and camera self-calibration method (dynamic).
  • the traditional camera calibration method is a static camera calibration method. Specifically, in an environment where the camera is still, a calibration plate (ie, a calibration reference) is used to obtain the input of the camera calibration, thereby calculating the internal and external parameters of the camera.
  • the two-dimensional coordinates of the calibration reference object on the image plane of the camera are obtained from the imaging results of the calibration plate in different orientations relative to the camera, and the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measuring the calibration plate.
  • the disadvantage of the traditional camera calibration method is that it can only be applied in environments where the camera is still, the placement requirements for the calibration board are high, the calibration process is cumbersome, and the efficiency is low, which is difficult to satisfy in many application scenarios.
  • for example, in a vehicle-mounted camera system, the vehicle vibrates during driving because of road conditions, which causes the external parameters of the camera to change; if the camera is not calibrated in real time at this point, the accuracy of subsequent operations of the vehicle-mounted camera system is further affected.
  • the camera self-calibration method is a dynamic camera calibration method, which does not need to be calibrated with a calibration board.
  • the camera self-calibration method is to use the distance between the vehicle and the lane line (ie, the calibration reference object) and the vanishing point to calibrate the camera to obtain the external parameters of the camera.
  • the three-dimensional coordinates of the lane line relative to the camera are obtained by measurement,
  • and the two-dimensional coordinates of the lane line on the image plane of the camera are obtained from the image of the lane line captured by the camera;
  • the external parameters of the camera are then calculated from the three-dimensional coordinates of the lane line relative to the camera and the two-dimensional coordinates of the lane line on the image plane of the camera.
  • the disadvantage of the camera self-calibration method is that many conditions are required; for example, the vehicle is required to drive centered in the lane, which is highly subjective and results in low camera calibration accuracy.
  • in addition, the current camera self-calibration method is only suitable for specific roads, such as level, straight roads, and therefore has low generality.
  • High-precision map (also referred to as an HD map)
  • The high-precision map is one of the core technologies of unmanned driving; it is a high-precision electronic map.
  • the maps we use every day for navigation and querying geographic information belong to traditional maps, and their main service targets are human drivers. Different from traditional maps, the main service object of high-precision maps is driverless cars, or machine drivers.
  • a high-precision map is a vector map of road features.
  • High-precision maps include the geometry and location information of road features.
  • Road features include, but are not limited to, lane lines, signboards (eg, traffic signboards or pole-mounted signboards), pole-like objects (eg, light poles), pavement markings, and traffic lights.
  • high-precision maps can provide accurate road geometry and outline and position information of road facilities (position in the world coordinate system, ie absolute coordinate position).
  • High-precision maps also contain geometric descriptions of various road features. For example, the high-precision location information of the geometric corner points of road features can be queried in the high-precision map.
  • current high-precision maps in shapefile format support range queries and vector projection.
  • the shape of road features in high-precision maps is represented as a binary image.
  • the high-precision map may have the function of projecting images of road features. For example, given the parameters of the camera (including internal and external parameters) and the positioning information of the camera, the high-precision map can output a projected image of road features based on the geometric model of camera imaging.
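  • As a rough illustration of how such a projection function could be realized, vector features queried from the map can be projected with the camera's parameters and rasterized into a binary image, as in the sketch below; the map query step that would supply the feature polylines is assumed, not an actual high-precision map API.

```python
import numpy as np
import cv2

# Hypothetical sketch of a road-feature projection function: given camera
# parameters and pose, rasterize vector map features (3D polylines) into a
# binary projection image. The feature query that supplies `polylines_3d`
# is assumed to exist in the map software.
def project_road_features(polylines_3d, rvec, tvec, K, dist, size=(720, 1280)):
    img = np.zeros(size, dtype=np.uint8)
    for pts in polylines_3d:
        # project the 3D feature vertices onto the image plane
        pix, _ = cv2.projectPoints(pts.astype(np.float64), rvec, tvec, K, dist)
        pix = pix.reshape(-1, 2).astype(np.int32)
        cv2.polylines(img, [pix], isClosed=False, color=255, thickness=2)
    return img
```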
  • the high-precision map involved in the embodiments of the present application refers to a high-precision electronic map in the field of unmanned driving technology, rather than a traditional map.
  • the external parameter calibration accuracy of the existing camera dynamic calibration method is low.
  • the present application provides a method and device for calibrating external parameters of a camera, by using a high-precision map to obtain the external parameters of the camera, so as to improve the calibration accuracy of the external parameters of the camera.
  • FIG. 1 is a schematic flowchart of a method 100 for calibrating external parameters of a camera according to an embodiment of the present application.
  • the method 100 includes steps S110 and S120.
  • S110 Acquire a captured image of the camera, where the captured image is an image captured by the camera with a calibration reference object as the shooting object.
  • the calibration reference object is included in the captured image.
  • the captured image is an actual captured image of the calibration reference object.
  • the calibration reference object can be an object around the camera.
  • the calibration reference may be a road feature.
  • the calibration reference may be any of the following road features: lane lines, signage, pole-like objects, road markings, traffic lights.
  • the identification plate is, for example, a traffic identification plate or a pole plate
  • the pole-like object is, for example, a street light pole and the like.
  • a calibration reference may also be referred to as a calibration or reference.
  • S120 Acquire external parameters of the camera according to the captured image and the high-precision map, and the high-precision map includes a calibration reference object.
  • the high-precision map includes a calibration reference object, which means that the high-precision map has the position information of the calibration reference object.
  • a high-resolution map is a vector map of road features.
  • the high-precision map includes the geometric shape and position information of road features, that is, the high-precision map can provide accurate road geometry and outline and position information (absolute coordinate position) of road facilities. Therefore, the high-precision position information of the geometric corner points of the road features can be queried in the high-precision map.
  • the high-precision map includes the position information of the calibration reference object
  • the high-precision position information of the calibration reference object can be queried in the high-precision map.
  • the external parameters of the camera are acquired through the actual captured image of the calibration reference object and the high-precision map, so that the operation of measuring the three-dimensional coordinates of the calibration reference object relative to the camera does not need to be performed in the process of obtaining the external parameters.
  • the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measurement.
  • the accuracy of measuring and calibrating the three-dimensional coordinates of the reference object relative to the camera is low, resulting in low accuracy of the external parameter calibration of the camera.
  • the external parameters of the camera are obtained through the captured image of the calibration reference object and the high-precision map, and there is no need to perform the operation of measuring the three-dimensional coordinates of the calibration reference object relative to the camera, so that the calibration accuracy of the external parameters of the camera can no longer be achieved. Limited by measurement accuracy.
  • the camera in this embodiment of the present application may be in a static state or in a moving state.
  • the method for calibrating external parameters of a camera provided by the embodiments of the present application can be applied to a vehicle-mounted camera system.
  • the camera in the embodiment of the present application is a vehicle-mounted camera, and the vehicle where the camera is located may be in a stationary state or in a moving state.
  • step S120 the implementation manner of acquiring the external parameters of the camera according to the captured image and the high-precision map may include implementation manner 1 and implementation manner 2 to be described below.
  • step S120 further includes steps S121 to S124.
  • S121 Acquire the two-dimensional coordinates of the calibration reference object on the image plane of the camera.
  • the manner of obtaining the two-dimensional coordinates of the calibration reference object on the image plane of the camera belongs to the prior art and is not described in detail in this embodiment of the present application.
  • S122 Determine the position of the camera on the high-precision map according to the positioning information of the camera, and obtain the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map.
  • the position of the camera can be located on the high-precision map based on the positioning information of the camera, and then the position of the calibration reference object on the high-precision map can be found according to the positioning of the camera on the high-precision map .
  • the acquisition of the positioning information of the camera can be realized by using any one or a combination of the following positioning technologies: real-time kinematic (RTK) carrier-phase differential technology based on satellite positioning, and vision-based or lidar-based matching positioning technology.
  • the positioning information of the camera is the absolute position of the camera (that is, the coordinates of the camera in the world coordinate system).
  • step S122 the following steps 1) and 2) may be used to obtain the position of the calibration reference object on the high-precision map.
  • Step 1) according to the position of the camera on the high-precision map and the captured image of the camera obtained in step S110, determine the target road feature used as the calibration reference in the high-precision map.
  • step 2) the position of the target road feature in the high-precision map is determined as the position of the calibration reference object on the high-precision map.
  • step 1) may further include the following sub-step 1) and sub-step 2).
  • Sub-step 1) according to the position of the camera on the high-precision map, obtain candidate target road features on the high-precision map. For example, road features on the high-precision map whose distance from the position of the camera on the high-precision map is less than a certain value may be used as candidate target road features.
  • Sub-step 2) extract the geometric features of the calibration reference object from the captured image obtained in step S110, and compare them against the geometric features of each road feature among the candidate target road features; the road feature with the best comparison result (for example, the highest geometric feature matching degree) is taken as the target road feature used as the calibration reference object.
  • step 1) other feasible comparison methods can also be used, and the target road feature used as the calibration reference object is determined in the high-precision map by using the actual captured image of the calibration reference object by the camera.
  • the accuracy of locating the calibration reference object on the high-precision map can be improved.
  • step S123 the high-precision map can be used to obtain the three-dimensional coordinates of the calibration reference object relative to the camera in various ways.
  • step S123 includes: obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
  • the absolute position of the calibration reference object and the absolute position of the camera represent the coordinates of the calibration reference object and the camera in the same coordinate system.
  • the absolute position of the calibration reference object is the coordinates of the calibration reference object in the world coordinate system
  • the absolute position of the camera is the coordinates of the camera in the world coordinate system.
  • the absolute position of the camera can be obtained based on the positioning information of the camera.
  • the positioning information of the camera may itself be the absolute position of the camera.
  • the absolute position of the calibration reference object can be obtained based on the position of the calibration reference object on the high-precision map.
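  • A minimal sketch of this variant follows, assuming both absolute positions are expressed in a common Cartesian world frame and that R_wc, the camera-to-world rotation provided by the positioning system, is available; both assumptions go beyond what is stated above.

```python
import numpy as np

# Sketch: 3D coordinates of the calibration reference object relative to the
# camera, computed from two absolute positions in a common Cartesian world
# frame. R_wc (camera-to-world rotation) is assumed to come from the
# positioning system.
def relative_coords(p_ref_world, p_cam_world, R_wc):
    delta_world = np.asarray(p_ref_world) - np.asarray(p_cam_world)
    return R_wc.T @ delta_world   # rotate the world-frame offset into the camera frame

# usage with placeholder values
print(relative_coords([105.2, 33.7, 2.1], [100.0, 30.0, 1.5], np.eye(3)))
```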
  • the high-precision map has the function of generating the relative position of two positioning points on the map; in this case, step S123 includes: based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map, generating the three-dimensional coordinates of the calibration reference object relative to the camera by using the high-precision map.
  • the three-dimensional coordinates of the calibration reference object relative to the camera can be generated by using the high-precision map.
  • S124 Calculate the external parameters of the camera according to the two-dimensional coordinates of the calibration reference object on the captured image and the three-dimensional coordinates of the calibration reference object relative to the camera.
  • the external parameters of the camera can be calculated based on the geometric model of camera imaging, according to the two-dimensional coordinates of the calibration reference object on the image plane of the camera and the three-dimensional coordinates of the calibration reference object relative to the camera.
  • the specific algorithm belongs to the prior art and is neither limited nor described in detail in this application.
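  • Although the application leaves the algorithm to the prior art, one standard prior-art choice for computing extrinsics from 2D-3D correspondences is a Perspective-n-Point (PnP) solver; the following OpenCV sketch uses placeholder correspondences and an assumed intrinsic matrix.

```python
import numpy as np
import cv2

# Placeholder 3D coordinates of calibration reference corners (as would be
# obtained from the high-precision map) and matching 2D image coordinates.
object_points = np.array([[0.0, 0.0, 10.0], [1.5, 0.0, 10.0],
                          [1.5, 0.8, 10.0], [0.0, 0.8, 10.0]])
image_points = np.array([[640.0, 360.0], [790.0, 362.0],
                         [788.0, 282.0], [642.0, 280.0]])
K = np.array([[1000.0, 0.0, 640.0],   # assumed internal parameter matrix
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                    # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)            # rotation matrix of the external parameters
print(ok, R, tvec)                    # rotation and translation of the camera
```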
  • in the existing method, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measurement, so the external parameter calibration accuracy of the camera depends on the measurement accuracy of the three-dimensional coordinates.
  • in this embodiment, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by using a high-precision map rather than by measurement, so the calibration accuracy of the external parameters of the camera is no longer limited by the measurement accuracy.
  • in this way, the accuracy of the three-dimensional coordinates of the calibration reference object relative to the camera can be improved, so that the external parameter calibration accuracy of the camera can be improved.
  • the calibration reference object is a road feature.
  • step S120 further includes steps S125 to S128.
  • S125 Acquire multiple sets of camera parameters, where each set of camera parameters includes internal and external parameters.
  • each set of camera parameters includes camera internal parameters, distortion parameters and external parameters.
  • the external parameters include translation matrix and rotation matrix.
  • the current camera parameters can be used as a benchmark, and a preset step size can be used to simulate and generate multiple sets of camera parameters.
  • the camera's internal parameters, distortion parameters, and translation matrix change with low probability or within a small range, so these parameters can be assumed to remain at their initial values, while the camera's rotation matrix is more likely to change. Therefore, multiple rotation matrix simulation values can be generated based on the current rotation matrix of the camera, thereby generating multiple sets of camera parameters.
  • step S125 includes: taking the initial value of the rotation matrix of the camera as a reference and using a preset step size, generating multiple sets of rotation matrix simulation values; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values, respectively.
  • for example, starting from the initial value of the rotation matrix, the rotation matrix is changed toward two opposite rotation directions (for example, left rotation and right rotation), thereby generating multiple rotation matrix simulation values (for example, 8,000 rotation matrix simulation values). Then, multiple sets of camera parameters are generated based on these rotation matrix simulation values. That is, the rotation matrices of different sets among the multiple sets of camera parameters are different, and the remaining parameters (internal parameters, distortion parameters, and translation matrix) can be the same.
  • the preset step size may be specifically determined according to application requirements.
  • the preset step size is 0.2 degrees (0.2°).
  • the number of sets of camera parameters may also be specifically determined according to application requirements.
  • step S125 includes: taking the rotation matrix and the translation matrix of the camera as references and using corresponding step sizes, generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values. That is, the rotation matrices and translation matrices of different sets among the multiple sets of camera parameters are different, and the remaining parameters (internal parameters and distortion parameters) may be the same. A sketch of generating such candidate rotations is shown below.
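  • The following sketch generates candidate rotation matrices around an initial rotation with a 0.2 degree step; sweeping 20 values per axis yields 20^3 = 8,000 candidates, matching the count mentioned above, though the per-axis ranges are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Sketch of step S125: candidate rotation matrices around the camera's
# initial rotation, perturbed per axis with a preset step (0.2 degrees).
def candidate_rotations(R_init, step_deg=0.2, n_per_axis=20):
    offsets = (np.arange(n_per_axis) - n_per_axis // 2) * step_deg
    candidates = []
    for yaw in offsets:
        for pitch in offsets:
            for roll in offsets:
                dR = Rotation.from_euler('zyx', [yaw, pitch, roll],
                                         degrees=True).as_matrix()
                candidates.append(dR @ R_init)   # perturb the initial rotation
    return candidates

print(len(candidate_rotations(np.eye(3))))       # -> 8000
```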
  • S126 According to the multiple sets of camera parameters and the positioning information of the camera, use a high-precision map to generate multiple road feature projection images.
  • for each set of camera parameters, the position of the camera is queried in the high-precision map; then, based on the camera's internal parameters, distortion parameters, and external parameters, the road features are projected in the high-precision map to form a road feature projection image (a binary image) according to the geometric model of camera imaging.
  • multiple road feature projection images are shown in FIG. 4 .
  • S127 Obtain a matching road feature projection image with the highest matching degree with the captured image from the plurality of road feature projection images.
  • the actual captured image of the calibration reference object is matched against each of the plurality of road feature projection images.
  • a method for matching the captured image and the road feature projection image may be to calculate the average pixel deviation of the two images.
  • the average pixel deviation of the captured image and each of the plurality of road feature projection images is calculated separately.
  • the road feature projection image with the smallest average pixel deviation is used as the matched road feature projection image.
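  • One plausible reading of "average pixel deviation" is the mean absolute per-pixel difference between the two binary images; a minimal sketch under that reading follows.

```python
import numpy as np

# Sketch of the matching in step S127: pick the road feature projection image
# with the smallest average pixel deviation from the binarized captured image.
# "Average pixel deviation" is read here as mean absolute difference, which is
# an assumption, not necessarily the application's exact metric.
def average_pixel_deviation(img_a, img_b):
    return np.mean(np.abs(img_a.astype(np.float32) - img_b.astype(np.float32)))

def best_match(captured_binary, projection_images):
    deviations = [average_pixel_deviation(captured_binary, p)
                  for p in projection_images]
    idx = int(np.argmin(deviations))   # smallest deviation = highest matching degree
    return idx, deviations[idx]
```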
  • step S127 includes: processing the captured image of the camera acquired in step S110 into an image of a first form, where the first form represents the form of the road feature projection image supported by the high-precision map;
  • the matching road feature projection image with the highest matching degree with the image of the first form of the captured image is obtained from the plurality of road feature projection images.
  • the road feature projection images usually supported by high-precision maps are binary images.
  • the image of the first form mentioned in this embodiment is a binary image.
  • step S127 includes: acquiring a binary image of a captured image of the camera; and acquiring a matching road feature projection image with the highest matching degree with the binary image of the captured image from a plurality of road feature projection images.
  • the method for obtaining a binary image of a captured image includes two steps: first, a neural network (NN) inference model is used to perform pixel-level semantic segmentation of the captured image, so as to segment the road features (lane lines, signs, and the like);
  • second, the contours of the road features are extracted from the segmented image to generate a binary image.
  • the captured image and the binary image extracted from the captured image are shown as the left and right images in FIG. 5 , respectively.
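  • A minimal sketch of this two-step binarization follows; the semantic segmentation model is a placeholder for any network that returns a per-pixel road feature mask, and only the contour step uses actual OpenCV calls.

```python
import numpy as np
import cv2

def binarize_captured_image(image, segmentation_model):
    # Step 1 (assumed interface): pixel-level semantic segmentation that
    # returns a uint8 mask in which road feature pixels are non-zero.
    mask = segmentation_model(image)

    # Step 2: extract the contours of the road features and draw them
    # into a binary image.
    contours, _ = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    binary = np.zeros(mask.shape, dtype=np.uint8)
    cv2.drawContours(binary, contours, -1, color=255, thickness=1)
    return binary
```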
  • if the high-precision map supports road feature projection images of another form, the actual captured image of the camera can also be processed into an image of that form before matching.
  • S128 Acquire the external parameters of the camera according to a set of camera parameters corresponding to the projected image of the matched road feature.
  • if each set of camera parameters includes internal parameters, distortion parameters, a translation matrix, and a rotation matrix, then the translation matrix and the rotation matrix of the camera can be obtained from the set of camera parameters corresponding to the matched road feature projection image, that is, the camera external parameters to be calibrated can be obtained.
  • if only the rotation matrix differs among the sets of camera parameters, the rotation matrix of the camera can be obtained from the set of camera parameters corresponding to the matched road feature projection image, that is, the camera external parameters to be calibrated can be obtained.
  • in this embodiment, the external parameters of the camera are obtained by using the road feature projection function of the high-precision map rather than by measuring the three-dimensional coordinates of the calibration reference object relative to the camera, thereby avoiding the calibration accuracy of the external parameters of the camera being limited by the measurement accuracy.
  • high-precision camera extrinsic parameter calibration can be achieved.
  • in conventional use, the high-precision map generates a projected image of road features according to already calibrated camera parameters, so as to guide the unmanned vehicle to drive safely.
  • in this embodiment, the road feature projection function of the high-precision map is applied in reverse, which ingeniously improves the calibration accuracy of the external parameters of the camera.
  • the embodiments of the present application may be applicable to dynamic camera calibration, and may also be applicable to camera static calibration.
  • the camera in the embodiment of the present application is a vehicle-mounted camera, and the vehicle on which the camera is carried is in a moving state.
  • the camera calibration solution provided by the present application can improve the calibration accuracy of the camera's external parameters by using a high-precision map to calibrate the external parameters of the camera.
  • the calibration reference object can be any type of road feature, and is not strictly limited to be the lane line (in the existing camera self-calibration method, the calibration reference object is limited to the lane line).
  • the calibration reference object in the camera calibration solution provided by this application may be any one of the following road features: lane lines, signboards, pole-like objects, road signs, and traffic lights.
  • the signboard is, for example, a traffic signboard or a pole-mounted signboard
  • the pole-like object is, for example, a street light pole and the like.
  • the camera calibration solution provided in this application can be applied to both dynamic camera calibration and static camera calibration.
  • in the scenario of dynamic camera calibration, the camera calibration solution provided by the present application is not limited to specific roads and does not require the vehicle to drive centered in the lane. Therefore, the camera calibration solution provided in this application has good versatility.
  • the camera calibration solution provided in this application can be applied to the camera parameter calibration link of the assembly line of the autonomous driving vehicle, and is not necessarily limited to a fixed calibration workshop, and can calibrate all cameras at the same time, saving calibration time.
  • the camera calibration solution provided in this application can also be applied to scenarios where the external parameters change during use after the vehicle leaves the factory, and real-time online correction or periodic calibration is required.
  • the camera calibration solution provided by the present application can greatly reduce the dependence on the calibration workshop, and realize the high-precision calibration of the external parameters of the vehicle camera anytime and anywhere (ie, online in real time).
  • the camera calibration solution provided in this application can also be applied to the calibration field of other sensors (eg, lidar).
  • the location information of the calibration reference object can also be obtained by using a high-precision map.
  • FIG. 6 is an apparatus 600 for calibrating external parameters of a camera provided by an embodiment of the present application.
  • the apparatus 600 includes an acquisition unit 610 and a processing unit 620 .
  • the acquiring unit 610 is configured to acquire a photographed image of the camera, where the photographed image is an image photographed by the camera with the calibration reference object as the photographing object.
  • the processing unit 620 is configured to acquire the external parameters of the camera according to the captured image and the high-precision map, and the high-precision map includes a calibration reference object.
  • the processing unit 620 is configured to acquire the external parameters of the camera through the following operations: acquiring the two-dimensional coordinates of the calibration reference object on the captured image; determining the position of the camera on the high-precision map according to the positioning information of the camera; obtaining the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map; obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map; and calculating the external parameters of the camera according to the two-dimensional coordinates and the three-dimensional coordinates.
  • the processing unit 620 obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
  • the high-precision map has the function of generating the relative positions of two positioning points on the map; the processing unit 620 obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: The position on the map and the position of the calibration reference object on the high-precision map are used to generate the three-dimensional coordinates of the calibration reference object relative to the camera using the high-precision map.
  • the calibration reference object is a road feature
  • the processing unit 620 is configured to obtain the external parameters of the camera through the following operations: obtaining multiple sets of camera parameters, where each set of camera parameters includes internal and external parameters; generating multiple road feature projection images by using the high-precision map according to the multiple sets of camera parameters and the positioning information of the camera; obtaining, from the multiple road feature projection images, the matched road feature projection image with the highest matching degree with the captured image; and obtaining the external parameters of the camera according to the set of camera parameters corresponding to the matched road feature projection image.
  • the processing unit 620 is configured to obtain multiple sets of camera parameters through the following operations: taking the initial value of the rotation matrix of the camera as a benchmark and using a preset step size, generating multiple sets of rotation matrix simulation values; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values, respectively.
  • the processing unit 620 is configured to obtain multiple sets of camera parameters through the following operations: respectively taking the rotation matrix and the translation matrix of the camera as benchmarks and using corresponding step sizes, generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values.
  • the processing unit 620 is configured to obtain the matched road feature projection image through the following operations: obtaining a binary image of the captured image; and obtaining, from the plurality of road feature projection images, the matched road feature projection image with the highest matching degree with the binary image of the captured image.
  • the camera is a vehicle-mounted camera, and the vehicle on which the camera is carried may be in a stationary state or in a moving state.
  • an embodiment of the present application further provides an apparatus 700 for calibrating external parameters of a camera.
  • the apparatus 700 includes a processor 710, where the processor 710 is coupled with a memory 720, the memory 720 is used for storing computer programs or instructions, and the processor 710 is used for executing the computer programs or instructions stored in the memory 720, so that the method 100 in the foregoing method embodiment is performed.
  • the apparatus 700 may further include a memory 720 .
  • the apparatus 700 may further include a data interface 730, and the data interface 730 is used for data transmission with the outside world.
  • Embodiments of the present application further provide a computer-readable medium, where the computer-readable medium stores program code for execution by a device, and the program code includes instructions for performing the method of the foregoing embodiments.
  • the embodiments of the present application also provide a computer program product containing instructions, when the computer program product is run on a computer, the computer is made to execute the method of the above embodiment.
  • An embodiment of the present application further provides a chip, the chip includes a processor and a data interface, and the processor reads an instruction stored in a memory through the data interface, and executes the method of the above embodiment.
  • the chip may further include a memory, the memory stores instructions, the processor is configured to execute the instructions stored in the memory, and when the instructions are executed, the processor is configured to execute the methods in the foregoing embodiments.
  • the disclosed systems, devices and methods may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present application relates to the field of artificial intelligence, and specifically to the field of autonomous driving, and provides a method and apparatus for calibrating extrinsic parameters of a camera. The method includes: obtaining a captured image of a camera, the captured image being an image captured by the camera with a calibration reference object as the photographed subject; and obtaining the extrinsic parameters of the camera according to the captured image and a high-precision map, the high-precision map containing the calibration reference object. Obtaining the extrinsic parameters of the camera through the image of the calibration reference object captured by the camera and the high-precision map can improve the calibration precision of the extrinsic parameters. The present application can be applied to intelligent vehicles, connected vehicles, new energy vehicles, or autonomous vehicles.

Description

Method and Apparatus for Calibrating Extrinsic Parameters of a Camera
This application claims priority to Chinese Patent Application No. 202010919175.8, filed with the Chinese Patent Office on September 4, 2020 and entitled "Method and Apparatus for Calibrating Extrinsic Parameters of a Camera", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of data processing, and in particular, to a method and apparatus for calibrating extrinsic parameters of a camera.
Background
Camera calibration refers to the process of obtaining camera parameters. Camera parameters include intrinsic parameters and extrinsic parameters: the intrinsic parameters are parameters of the camera itself, while the extrinsic parameters are parameters related to the mounting position of the camera, such as the pitch angle, rotation angle, and yaw angle.
At present, camera calibration falls into two categories: traditional camera calibration methods and camera self-calibration methods. Traditional camera calibration methods use a calibration board for calibration, but are only applicable to scenarios in which the camera is stationary. In a vehicle-mounted camera system, the vehicle may vibrate due to road conditions, causing the extrinsic parameters of the camera to change, so the camera parameters need to be calibrated dynamically. The existing dynamic camera calibration method is a camera self-calibration method that uses lane lines for calibration; however, this method requires the vehicle to drive in the center of the lane, a requirement that is highly subjective and results in low calibration precision of the extrinsic parameters. In addition, this method is only applicable to specific roads, for example, level and straight roads.
Therefore, the precision of dynamic camera calibration needs to be improved, especially the dynamic calibration precision of the extrinsic parameters of the camera.
Summary
The present application provides a method and apparatus for calibrating extrinsic parameters of a camera. By calibrating the extrinsic parameters of the camera using a high-precision map, the calibration precision of the extrinsic parameters can be improved.
According to a first aspect, a method for calibrating extrinsic parameters of a camera is provided, the method including: obtaining a captured image of a camera, the captured image being an image captured by the camera with a calibration reference object as the photographed subject; and obtaining the extrinsic parameters of the camera according to the captured image and a high-precision map, the high-precision map containing the calibration reference object.
That the high-precision map contains the calibration reference object means that the high-precision map has position information of the calibration reference object.
The extrinsic parameters of the camera are obtained through the actually captured image of the calibration reference object and the high-precision map, so that the operation of measuring the three-dimensional coordinates of the calibration reference object relative to the camera does not need to be performed in the process of obtaining the extrinsic parameters.
In the existing dynamic camera calibration methods, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained through measurement. However, while the camera is moving, the precision of measuring the three-dimensional coordinates of the calibration reference object relative to the camera is low, resulting in low calibration precision of the extrinsic parameters of the camera.
In the present application, the extrinsic parameters of the camera are obtained through the captured image of the calibration reference object and the high-precision map, without performing the operation of measuring the three-dimensional coordinates of the calibration reference object relative to the camera, so that the calibration precision of the extrinsic parameters of the camera is no longer limited by the measurement precision and can therefore be improved.
The calibration reference object may be an object around the camera. For example, the calibration reference object may be a road feature. As an example, the calibration reference object may be any one of the following road features: a lane line, a sign, a pole-like object, a road surface marking, or a traffic light, where the sign is, for example, a traffic sign or a pole-mounted sign, and the pole-like object is, for example, a street lamp pole.
With reference to the first aspect, in a possible implementation, the obtaining the extrinsic parameters of the camera according to the captured image and a high-precision map includes: obtaining the two-dimensional coordinates of the calibration reference object on the captured image; determining the position of the camera on the high-precision map according to the positioning information of the camera, and obtaining the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map; obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map; and calculating the extrinsic parameters of the camera according to the two-dimensional coordinates and the three-dimensional coordinates.
Optionally, in an implementation, the obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map includes: obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
Optionally, in another implementation, the high-precision map has a function of generating the relative position of two positioning points on the map, and the obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map includes: generating, by using the high-precision map, the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map.
The positioning information of the camera may be obtained through any one or a combination of the following positioning technologies: real-time kinematic (RTK) carrier-phase differential positioning based on satellite positioning, and matching-based positioning based on vision or lidar.
In the existing dynamic camera calibration methods, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained through measurement, so the calibration precision of the extrinsic parameters of the camera depends on the measurement precision of the three-dimensional coordinates.
In this implementation, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by using the high-precision map rather than through measurement, which prevents the calibration precision of the extrinsic parameters of the camera from being limited by the measurement precision. In addition, because of the high-precision property of the high-precision map, the precision of the three-dimensional coordinates of the calibration reference object relative to the camera can be increased, thereby improving the calibration precision of the extrinsic parameters of the camera.
With reference to the first aspect, in a possible implementation, the calibration reference object is a road feature, and the obtaining the extrinsic parameters of the camera according to the captured image and a high-precision map includes: obtaining multiple sets of camera parameters, each set of camera parameters including intrinsic parameters and extrinsic parameters; generating multiple road feature projection images by using the high-precision map according to the multiple sets of camera parameters and the positioning information of the camera; obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the captured image; and obtaining the extrinsic parameters of the camera according to the set of camera parameters corresponding to the matching road feature projection image.
Optionally, in an implementation, the obtaining multiple sets of camera parameters includes: generating multiple sets of rotation matrix simulation values at a preset step size by using the initial value of the rotation matrix of the camera as a benchmark; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values.
Optionally, in another implementation, the obtaining multiple sets of camera parameters includes: generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values at corresponding step sizes by using the rotation matrix and the translation matrix of the camera, respectively, as benchmarks; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values.
It should be understood that, in the actual process of obtaining the multiple sets of camera parameters, which types of parameters remain unchanged and which types of parameters are changed may be specifically determined according to application requirements.
Optionally, in some implementations, the road features in the high-precision map are in the form of binary images, and the obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the captured image includes: obtaining a binary image of the captured image; and obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the binary image of the captured image.
By processing the image of the calibration reference object captured by the camera into a binary image, the image is made consistent in form with the road features in the high-precision map, that is, consistent in form with the multiple road feature projection images. This helps improve the image matching accuracy and thereby achieves high-precision calibration of the extrinsic parameters.
In the existing dynamic camera calibration methods, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained through measurement, so the calibration precision of the extrinsic parameters of the camera depends on the measurement precision of the three-dimensional coordinates.
In this embodiment, the extrinsic parameters of the camera are obtained by using the road feature projection function of the high-precision map rather than by measuring the three-dimensional coordinates of the calibration reference object relative to the camera, which prevents the calibration precision of the extrinsic parameters of the camera from being limited by the measurement precision. In addition, because of the high-precision property of the high-precision map, high-precision calibration of the extrinsic parameters of the camera can be achieved.
Optionally, the camera is a vehicle-mounted camera, and the vehicle carrying the camera may be in a stationary state or in a moving state.
According to a second aspect, an apparatus for calibrating extrinsic parameters of a camera is provided, including: an obtaining unit configured to obtain a captured image of a camera, the captured image being an image captured by the camera with a calibration reference object as the photographed subject; and a processing unit configured to obtain the extrinsic parameters of the camera according to the captured image and a high-precision map, the high-precision map containing the calibration reference object.
With reference to the second aspect, in a possible implementation, the processing unit is configured to obtain the extrinsic parameters of the camera through the following operations: obtaining the two-dimensional coordinates of the calibration reference object on the captured image; determining the position of the camera on the high-precision map according to the positioning information of the camera, and obtaining the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map; obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map; and calculating the extrinsic parameters of the camera according to the two-dimensional coordinates and the three-dimensional coordinates.
Optionally, in an implementation, the processing unit obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
Optionally, in another implementation, the high-precision map has a function of generating the relative position of two positioning points on the map, and the processing unit obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: generating, by using the high-precision map, the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map.
With reference to the second aspect, in a possible implementation, the calibration reference object is a road feature, and the processing unit is configured to obtain the extrinsic parameters of the camera through the following operations: obtaining multiple sets of camera parameters, each set of camera parameters including intrinsic parameters and extrinsic parameters; generating multiple road feature projection images by using the high-precision map according to the multiple sets of camera parameters and the positioning information of the camera; obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the captured image; and obtaining the extrinsic parameters of the camera according to the set of camera parameters corresponding to the matching road feature projection image.
Optionally, in an implementation, the processing unit is configured to obtain the multiple sets of camera parameters through the following operations: generating multiple sets of rotation matrix simulation values at a preset step size by using the initial value of the rotation matrix of the camera as a benchmark; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values.
Optionally, in another implementation, the processing unit is configured to obtain the multiple sets of camera parameters through the following operations: generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values at corresponding step sizes by using the rotation matrix and the translation matrix of the camera, respectively, as benchmarks; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values.
Optionally, in some implementations, if the road features in the high-precision map are in the form of binary images, the processing unit is configured to obtain the matching road feature projection image through the following operations: obtaining a binary image of the captured image; and obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the binary image of the captured image.
Optionally, the camera is a vehicle-mounted camera, and the vehicle carrying the camera may be in a stationary state or in a moving state.
According to a third aspect, an apparatus for calibrating extrinsic parameters of a camera is provided. The apparatus includes a processor coupled with a memory, the memory is used for storing a computer program or instructions, and the processor is used for executing the computer program or instructions stored in the memory, so that the method in the first aspect is executed.
Optionally, the apparatus includes one or more processors.
Optionally, the apparatus may further include the memory coupled with the processor.
Optionally, the apparatus may include one or more memories.
Optionally, the memory may be integrated with the processor or disposed separately.
Optionally, the apparatus may further include a data interface.
According to a fourth aspect, a computer-readable medium is provided, where the computer-readable medium stores program code for device execution, and the program code includes instructions for executing the method in the first aspect.
According to a fifth aspect, a computer program product containing instructions is provided; when the computer program product runs on a computer, the computer is caused to execute the method in the first aspect.
According to a sixth aspect, a chip is provided. The chip includes a processor and a data interface, where the processor reads, through the data interface, instructions stored in a memory to execute the method in the first aspect.
Optionally, in an implementation, the chip may further include a memory storing instructions, and the processor is configured to execute the instructions stored in the memory; when the instructions are executed, the processor is configured to execute the method in the first aspect.
Based on the foregoing description, the present application obtains the extrinsic parameters of the camera by using the high-precision map and the captured image of the calibration reference object, which can improve the calibration precision of the extrinsic parameters of the camera.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for calibrating extrinsic parameters of a camera according to an embodiment of the present application.
FIG. 2 is another schematic flowchart of the method for calibrating extrinsic parameters of a camera according to an embodiment of the present application.
FIG. 3 is still another schematic flowchart of the method for calibrating extrinsic parameters of a camera according to an embodiment of the present application.
FIG. 4 is a schematic diagram of matching multiple road feature projection images with a captured image in an embodiment of the present application.
FIG. 5 is a schematic diagram of a captured image and its binary image in an embodiment of the present application.
FIG. 6 is a schematic block diagram of an apparatus for calibrating extrinsic parameters of a camera according to an embodiment of the present application.
FIG. 7 is another schematic block diagram of an apparatus for calibrating extrinsic parameters of a camera according to an embodiment of the present application.
Detailed Description
To facilitate understanding of the embodiments of the present application, several concepts involved in the embodiments of the present application are first introduced below.
1. Camera calibration
1) Definition of camera calibration
Based on the imaging principle of a camera, there is a correspondence between the three-dimensional space points in the geometric imaging model of the camera and the two-dimensional image points on the image plane, and this correspondence is determined by the parameters of the camera. The process of obtaining the parameters of the camera is called camera calibration. The imaging principle of a camera is the prior art and is not described in detail herein.
As an example, assume that a three-dimensional space point in the geometric imaging model of the camera is denoted as X_W, and a two-dimensional image point on the image plane in the geometric imaging model of the camera is denoted as X_P. The relationship between the three-dimensional space point X_W and the two-dimensional image point X_P can be expressed as follows:
X_P = M X_W
where M denotes the transformation matrix between the three-dimensional space point X_W and the two-dimensional image point X_P, which may be called the projection matrix. Some elements of the projection matrix M characterize the parameters of the camera. Camera calibration is precisely the process of obtaining this projection matrix M.
The parameters of the camera include intrinsic parameters and extrinsic parameters. The intrinsic parameters are parameters of the camera itself, for example, the focal length. The extrinsic parameters are parameters related to the mounting position of the camera, for example, the pitch angle, rotation angle, and yaw angle.
The transformation matrix corresponding to the intrinsic parameters may be called the intrinsic parameter transformation matrix M_1, and the transformation matrix corresponding to the extrinsic parameters may be called the extrinsic parameter transformation matrix M_2. In the above example, the relationship between the three-dimensional space point X_W and the two-dimensional image point X_P can also be expressed as:
X_P = M_1 M_2 X_W = M X_W
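To make the imaging model concrete, the following Python sketch (an illustration, not part of the original disclosure; the intrinsic values and camera pose are assumptions) projects a three-dimensional space point X_W through M = M_1 M_2:

```python
import numpy as np

# Intrinsic parameter matrix M_1 (illustrative focal lengths and principal point)
M1 = np.array([[800.0,   0.0, 640.0],
               [  0.0, 800.0, 360.0],
               [  0.0,   0.0,   1.0]])

# Extrinsic parameter matrix M_2 = [R | t] (identity rotation and a 1.5 m
# translation along the optical axis, both illustrative assumptions)
R = np.eye(3)
t = np.array([[0.0], [0.0], [1.5]])
M2 = np.hstack([R, t])                         # 3x4

X_W = np.array([[2.0], [0.5], [10.0], [1.0]])  # homogeneous 3D space point

X_P_h = M1 @ M2 @ X_W                          # X_P = M_1 M_2 X_W
X_P = X_P_h[:2] / X_P_h[2]                     # normalize to pixel coordinates
print(X_P.ravel())
```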
Camera calibration generally requires a calibration reference object (which may also be called a calibration object or a reference object). The calibration reference object is the subject photographed by the camera during camera calibration.
For example, in the above example, the three-dimensional space point X_W may be the coordinates of the calibration reference object in the world coordinate system, and the two-dimensional image point X_P may be the two-dimensional coordinates of the calibration reference object on the image plane of the camera.
2) Input and output of camera calibration
Input of camera calibration: the two-dimensional coordinates of the calibration reference object on the image plane of the camera (that is, the image point coordinates), and the three-dimensional coordinates of the calibration reference object relative to the camera (that is, the three-dimensional space coordinates).
For example, the two-dimensional coordinates of the calibration reference object on the image plane of the camera may correspond to the two-dimensional image point X_P in the above example, and the three-dimensional coordinates of the calibration reference object relative to the camera may correspond to the three-dimensional space point X_W in the above example, or to a rigid transformation of the three-dimensional space point X_W.
Output of camera calibration: the parameters of the camera, including the intrinsic parameters and the extrinsic parameters.
Whether in image measurement or in machine vision applications, camera calibration is a crucial step, and the precision of the calibration result directly affects the accuracy of the results produced by the camera.
Broadly speaking, camera calibration can currently be divided into two categories: traditional camera calibration methods (static) and camera self-calibration methods (dynamic).
3) Traditional camera calibration methods (static)
The traditional camera calibration method is a static camera calibration method. Specifically, in an environment where the camera is stationary, a calibration board (that is, the calibration reference object) is used to obtain the input of camera calibration, and the intrinsic and extrinsic parameters of the camera are then calculated. The two-dimensional coordinates of the calibration reference object on the image plane of the camera are obtained from the imaging results of the calibration board at different orientations relative to the camera, and the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measuring the calibration board.
The disadvantage of traditional camera calibration methods is that they are only applicable to environments where the camera is stationary; moreover, they impose strict requirements on the placement of the calibration board, and the calibration process is cumbersome and inefficient, making it difficult to implement in many application scenarios. For example, in a vehicle-mounted camera system, the vehicle vibrates while driving due to road conditions, causing the extrinsic parameters of the camera to change; if the camera is not calibrated in real time, the accuracy of subsequent operations of the vehicle-mounted camera system is further affected.
4) Camera self-calibration methods (dynamic)
The camera self-calibration method is a dynamic camera calibration method that does not require a calibration board. At present, the camera self-calibration method performs camera calibration by using the distance between the vehicle and the lane lines (that is, the calibration reference object) and the vanishing point, so as to obtain the extrinsic parameters of the camera. As an example, the three-dimensional coordinates of the lane lines relative to the camera are obtained through measurement, the two-dimensional coordinates of the lane lines on the image plane of the camera are obtained from the image of the lane lines captured by the camera, and the extrinsic parameters of the camera are then calculated from the three-dimensional coordinates of the lane lines relative to the camera and the two-dimensional coordinates of the lane lines on the image plane of the camera.
However, the disadvantage of camera self-calibration methods is that many conditions are required; for example, the vehicle is required to drive in the center of the lane, a requirement that is highly subjective and results in low camera calibration precision. In addition, current camera self-calibration methods are only applicable to specific roads, for example, level and straight roads, and thus have low generality.
2. High-precision maps (also called HD maps)
The high-precision map is one of the core technologies of autonomous driving; it is a high-precision electronic map. The maps we use daily for navigation and for querying geographic information are traditional maps, whose primary users are human drivers. Unlike traditional maps, the primary users of high-precision maps are autonomous vehicles, or in other words, machine drivers.
An important feature distinguishing high-precision maps from traditional maps is precision. Traditional maps can only achieve meter-level precision, which is completely insufficient for vehicles. High-precision maps achieve centimeter-level precision, which is crucial for ensuring the safety of autonomous driving.
A high-precision map is a vector map of road features. The high-precision map includes the geometric shapes and position information of road features. Road features include, but are not limited to, lane lines, signs (for example, traffic signs or pole-mounted signs), pole-like objects (for example, street lamp poles), road surface markings, and traffic lights.
In other words, a high-precision map can provide precise road geometry and the contour and position information of road facilities (positions in the world coordinate system, that is, absolute coordinate positions).
A high-precision map also contains geometric descriptions of various road features. For example, the high-precision position information of the geometric corner points of road features can be queried in the high-precision map. For example, current high-precision maps in the shapefile format can support range queries and vector projection.
At present, the road features in high-precision maps are in the form of binary images.
A high-precision map may have a function of projecting road feature images. For example, given the camera parameters (including the intrinsic parameters and the extrinsic parameters) and the positioning information of the camera, the high-precision map can output a road feature projection image based on the geometric imaging model of the camera.
It should be noted that the high-precision map involved in the embodiments of the present application refers to the high-precision electronic map in the field of autonomous driving technology, not a traditional map.
As described above, the existing dynamic camera calibration methods have low calibration precision for the extrinsic parameters.
The present application provides a method and apparatus for calibrating extrinsic parameters of a camera, in which the extrinsic parameters of the camera are obtained by using a high-precision map, so as to improve the calibration precision of the extrinsic parameters of the camera.
The technical solutions in the present application are described below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of a method 100 for calibrating extrinsic parameters of a camera according to an embodiment of the present application. The method 100 includes steps S110 and S120.
S110: Obtain a captured image of a camera, the captured image being an image captured by the camera with a calibration reference object as the photographed subject. In other words, the captured image contains the calibration reference object.
It should be understood that the captured image is an actually captured image of the calibration reference object.
The calibration reference object may be an object around the camera. For example, the calibration reference object may be a road feature. As an example, the calibration reference object may be any one of the following road features: a lane line, a sign, a pole-like object, a road surface marking, or a traffic light, where the sign is, for example, a traffic sign or a pole-mounted sign, and the pole-like object is, for example, a street lamp pole.
The calibration reference object may also be called a calibration object or a reference object.
S120: Obtain the extrinsic parameters of the camera according to the captured image and a high-precision map, the high-precision map containing the calibration reference object.
That the high-precision map contains the calibration reference object means that the high-precision map has the position information of the calibration reference object.
As described above, a high-precision map is a vector map of road features. The high-precision map includes the geometric shapes and position information of road features; that is, the high-precision map can provide precise road geometry and the contour and position information of road facilities (absolute coordinate positions). Therefore, the high-precision position information of the geometric corner points of road features can be queried in the high-precision map.
It should be understood that, as long as the high-precision map contains the position information of the calibration reference object, the high-precision position information of the calibration reference object can be queried in the high-precision map.
The extrinsic parameters of the camera are obtained through the actually captured image of the calibration reference object and the high-precision map, so that the operation of measuring the three-dimensional coordinates of the calibration reference object relative to the camera does not need to be performed in the process of obtaining the extrinsic parameters.
As described above, in the existing dynamic camera calibration methods, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained through measurement. However, while the camera is moving, the precision of measuring the three-dimensional coordinates of the calibration reference object relative to the camera is low, resulting in low calibration precision of the extrinsic parameters of the camera.
In the embodiments of the present application, the extrinsic parameters of the camera are obtained through the captured image of the calibration reference object and the high-precision map, without performing the operation of measuring the three-dimensional coordinates of the calibration reference object relative to the camera, so that the calibration precision of the extrinsic parameters of the camera is no longer limited by the measurement precision.
The camera in the embodiments of the present application may be in a stationary state or in a moving state.
The method for calibrating extrinsic parameters of a camera provided in the embodiments of the present application is applicable to vehicle-mounted camera systems. For example, the camera in the embodiments of the present application is a vehicle-mounted camera, and the vehicle on which the camera is located may be in a stationary state or in a moving state.
It should be understood that the embodiments of the present application can be applied to both static calibration and dynamic calibration of the extrinsic parameters of a camera.
In step S120, the implementations of obtaining the extrinsic parameters of the camera according to the captured image and the high-precision map may include Implementation 1 and Implementation 2 described below.
Implementation 1
As shown in FIG. 2, step S120 further includes steps S121 to S124.
S121: Obtain the two-dimensional coordinates of the calibration reference object on the captured image.
That is, obtain the two-dimensional coordinates of the calibration reference object on the image plane of the camera. The manner of obtaining the two-dimensional coordinates of the calibration reference object on the image plane of the camera is the prior art and is not described in detail in the embodiments of the present application.
S122: Determine the position of the camera on the high-precision map according to the positioning information of the camera, and obtain the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map.
For example, after the positioning information of the camera is obtained, the position of the camera can be located on the high-precision map based on the positioning information of the camera, and the position of the calibration reference object on the high-precision map can then be found according to the position of the camera on the high-precision map.
The positioning information of the camera may be obtained through any one or a combination of the following positioning technologies: real-time kinematic (RTK) carrier-phase differential positioning based on satellite positioning, and matching-based positioning based on vision or lidar.
It should be understood that other feasible positioning technologies may also be used to obtain the positioning information of the camera.
For example, the positioning information of the camera is the absolute position of the camera (that is, the coordinates of the camera in the world coordinate system).
Optionally, in step S122, the position of the calibration reference object on the high-precision map may be obtained through the following steps 1) and 2).
Step 1): Determine, in the high-precision map, the target road feature serving as the calibration reference object according to the position of the camera on the high-precision map and the captured image of the camera obtained in step S110.
Step 2): Determine the position of the target road feature in the high-precision map as the position of the calibration reference object on the high-precision map.
Step 1) may further include the following sub-steps 1) and 2).
Sub-step 1): Obtain candidate target road features on the high-precision map according to the position of the camera on the high-precision map. For example, road features on the high-precision map whose distance from the position of the camera on the high-precision map is less than a certain value may be used as the candidate target road features.
Sub-step 2): Extract the geometric features of the calibration reference object from the captured image obtained in step S110, compare the geometric features of the calibration reference object against the geometric features of each of the candidate target road features, and regard the road feature with the best comparison result (for example, the highest degree of geometric feature matching) as the target road feature serving as the calibration reference object.
It should be understood that, in step 1), other feasible comparison methods may also be used to determine, in the high-precision map, the target road feature serving as the calibration reference object by using the actually captured image of the calibration reference object.
Obtaining the position of the calibration reference object on the high-precision map according to the position of the camera on the high-precision map and the captured image of the camera obtained in step S110 can improve the accuracy of locating the calibration reference object on the high-precision map.
S123: Obtain the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map.
In step S123, the three-dimensional coordinates of the calibration reference object relative to the camera can be obtained in multiple ways by using the high-precision map.
Optionally, in an implementation, step S123 includes: obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
The absolute position of the calibration reference object and the absolute position of the camera are the coordinates of the calibration reference object and the camera, respectively, in the same coordinate system. For example, the absolute position of the calibration reference object is the coordinates of the calibration reference object in the world coordinate system, and the absolute position of the camera is the coordinates of the camera in the world coordinate system.
It should be understood that the absolute position of the camera can be obtained based on the positioning information of the camera; for example, the positioning information of the camera may itself be the absolute position of the camera. The absolute position of the calibration reference object can be obtained based on the position of the calibration reference object on the high-precision map.
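As a minimal sketch of this calculation (an illustration assuming both absolute positions are expressed in the same Cartesian world frame, for example an ENU frame; the numeric values are placeholders):

```python
import numpy as np

def relative_coords(ref_abs, cam_abs):
    """Three-dimensional coordinates of the calibration reference object
    relative to the camera, expressed in the world-aligned frame centered
    at the camera position."""
    return np.asarray(ref_abs, dtype=float) - np.asarray(cam_abs, dtype=float)

# Illustrative values: reference object 12 m ahead and 3 m to the left of
# a camera mounted 1.5 m above the ground
print(relative_coords([12.0, 3.0, 0.0], [0.0, 0.0, 1.5]))
```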
Optionally, in another implementation, the high-precision map has a function of generating the relative position of two positioning points on the map. In this case, step S123 includes: generating, by using the high-precision map, the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map.
It should be understood that, given the positioning point of the camera on the high-precision map and the positioning point of the calibration reference object on the high-precision map, the three-dimensional coordinates of the calibration reference object relative to the camera can be generated by using the high-precision map.
S124: Calculate the extrinsic parameters of the camera according to the two-dimensional coordinates of the calibration reference object on the captured image and the three-dimensional coordinates of the calibration reference object relative to the camera.
Referring to the foregoing introduction to camera calibration, the extrinsic parameters of the camera can be calculated based on the geometric imaging model of the camera, according to the two-dimensional coordinates of the calibration reference object on the image plane of the camera and the three-dimensional coordinates of the calibration reference object relative to the camera. The specific algorithm is the prior art, which the present application neither limits nor describes in detail.
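One standard way to carry out this 2D-3D computation is a Perspective-n-Point solver; the sketch below uses OpenCV's solvePnP as an illustrative choice (the point coordinates and intrinsic matrix are assumptions, and this is not necessarily the specific algorithm contemplated here):

```python
import cv2
import numpy as np

# 3D coordinates of reference object points relative to the camera, e.g.,
# geometric corner points queried from the high-precision map (meters,
# OpenCV camera convention: x right, y down, z forward)
object_points = np.array([[-3.0, 1.0, 12.0],
                          [ 3.0, 1.0, 12.0],
                          [-3.0, 1.0, 20.0],
                          [ 3.0, 1.0, 20.0]], dtype=np.float64)

# Matching 2D coordinates of the same points on the captured image (pixels)
image_points = np.array([[440.0, 427.0],
                         [840.0, 427.0],
                         [520.0, 400.0],
                         [760.0, 400.0]], dtype=np.float64)

K = np.array([[800.0,   0.0, 640.0],   # illustrative intrinsic parameters
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)                      # assume distortion already corrected

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)              # rotation matrix of the extrinsics
print(ok, R, tvec)                      # extrinsic parameters of the camera
```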
In the existing dynamic camera calibration methods, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained through measurement, so the calibration precision of the extrinsic parameters of the camera depends on the measurement precision of the three-dimensional coordinates.
In the embodiments of the present application, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by using the high-precision map rather than through measurement, which prevents the calibration precision of the extrinsic parameters of the camera from being limited by the measurement precision. In addition, because of the high-precision property of the high-precision map, the precision of the three-dimensional coordinates of the calibration reference object relative to the camera can be increased, thereby improving the calibration precision of the extrinsic parameters of the camera.
Implementation 2
In Implementation 2, the calibration reference object is a road feature.
As shown in FIG. 3, step S120 further includes steps S125 to S128.
S125: Obtain multiple sets of camera parameters, each set of camera parameters including intrinsic parameters and extrinsic parameters.
For example, each set of camera parameters includes the intrinsic parameters, distortion parameters, and extrinsic parameters of the camera, where the extrinsic parameters include a translation matrix and a rotation matrix.
For example, multiple sets of camera parameters may be generated by simulation at a preset step size, using the current camera parameters as a benchmark.
It should be understood that, while the camera is moving, the intrinsic parameters, distortion parameters, and translation matrix of the camera are unlikely to change, or change only slightly, so these parameters may be assumed to keep their initial values, whereas the rotation matrix of the camera may change. Therefore, multiple rotation matrix simulation values can be generated based on the current rotation matrix of the camera, thereby generating multiple sets of camera parameters.
Optionally, in an implementation, step S125 includes: generating multiple sets of rotation matrix simulation values at a preset step size by using the initial value of the rotation matrix of the camera as a benchmark; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values.
As an example, using the current rotation matrix of the camera as a benchmark, the rotation matrix is changed at the preset step size in two opposite rotation directions (for example, rotating left and rotating right), thereby generating multiple rotation matrix simulation values (for example, 8,000 rotation matrix simulation values). Multiple sets of camera parameters are then generated based on these rotation matrix simulation values. That is, the rotation matrices differ between different sets among the multiple sets of camera parameters, while the remaining parameters (intrinsic parameters, distortion parameters, and translation matrix) may be the same.
The preset step size may be specifically determined according to application requirements; for example, the preset step size is 0.2 degrees (0.2°).
The number of sets of camera parameters may also be specifically determined according to application requirements.
Alternatively, in another implementation, step S125 includes: generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values at corresponding step sizes by using the rotation matrix and the translation matrix of the camera, respectively, as benchmarks; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values. That is, the rotation matrices and translation matrices differ between different sets among the multiple sets of camera parameters, while the remaining parameters (intrinsic parameters and distortion parameters) may be the same.
It should be understood that, in the actual process of obtaining the multiple sets of camera parameters, which types of parameters remain unchanged and which types of parameters are changed may be specifically determined according to application requirements; a sketch of this parameter sweep is given below.
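The following Python sketch illustrates the rotation sweep described above (an illustration under stated assumptions, not the exact procedure of this application: the initial parameter values are placeholders, and scipy's Rotation class is used for the angle-to-matrix conversion). The intrinsic parameters, distortion parameters, and translation are held at their initial values while yaw, pitch, and roll are perturbed at a 0.2° step in both directions:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def simulate_camera_params(R_init, K, dist, t, step_deg=0.2, n_steps=10):
    """Generate candidate parameter sets by perturbing the rotation matrix
    around its initial value; all other parameters keep their initial values."""
    candidates = []
    offsets = np.arange(-n_steps, n_steps + 1) * step_deg  # both directions
    for d_yaw in offsets:
        for d_pitch in offsets:
            for d_roll in offsets:
                dR = Rotation.from_euler('zyx', [d_yaw, d_pitch, d_roll],
                                         degrees=True).as_matrix()
                candidates.append({'K': K, 'dist': dist, 't': t,
                                   'R': dR @ R_init})
    return candidates

params = simulate_camera_params(np.eye(3), np.eye(3), np.zeros(5),
                                np.zeros(3), step_deg=0.2, n_steps=3)
print(len(params))  # (2*3+1)**3 = 343 candidate parameter sets
```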
S126: Generate multiple road feature projection images by using the high-precision map according to the multiple sets of camera parameters and the positioning information of the camera.
For example, first, the position of the camera on the high-precision map is determined according to the positioning information of the camera; then, based on the position of the camera on the high-precision map, the multiple sets of camera parameters obtained in step S125, and the geometric imaging model of the camera, projection is performed in the high-precision map to generate the multiple road feature projection images corresponding respectively to the multiple sets of camera parameters.
In other words, the position of the camera in the high-precision map is queried in the high-precision map according to the positioning information of the camera. Then, based on the intrinsic parameters, distortion parameters, and extrinsic parameters of the camera, projection is performed in the high-precision map to form a road feature projection image (a binary image) corresponding to the geometric imaging model of the camera. As an example, multiple road feature projection images are shown in FIG. 4.
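What such a projection step could look like is sketched below (an illustration: `query_map_points` is a hypothetical placeholder for the map query interface, which this document does not specify, and the rasterization with small circles is likewise an assumption). Each candidate parameter set from step S125 yields one binary projection image:

```python
import cv2
import numpy as np

def project_road_features(map_points_3d, params, image_size=(720, 1280)):
    """Render one binary road feature projection image for one candidate
    parameter set {'R', 't', 'K', 'dist'}; map_points_3d is an Nx3 array of
    road feature points expressed in the map frame."""
    rvec, _ = cv2.Rodrigues(params['R'])
    pts_2d, _ = cv2.projectPoints(np.asarray(map_points_3d, dtype=np.float64),
                                  rvec, np.asarray(params['t'], dtype=np.float64),
                                  params['K'], params['dist'])
    binary = np.zeros(image_size, dtype=np.uint8)
    for (u, v) in pts_2d.reshape(-1, 2):
        if 0 <= int(u) < image_size[1] and 0 <= int(v) < image_size[0]:
            cv2.circle(binary, (int(u), int(v)), 2, 255, -1)
    return binary

# One projection per candidate parameter set from step S125:
# projections = [project_road_features(query_map_points(camera_position), p)
#                for p in params]
```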
S127: Obtain, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the captured image.
As an example, as shown in FIG. 4, the actually captured image of the calibration reference object is matched against each of the multiple road feature projection images.
The method for matching the captured image with a road feature projection image may be to calculate the average pixel deviation of the two images.
For example, the average pixel deviation between the captured image and each of the multiple road feature projection images is calculated, and the road feature projection image with the smallest average pixel deviation is finally used as the matching road feature projection image.
Other feasible methods may also be used to perform the image matching operation.
It should also be understood that other feasible image matching methods may also be used to match the captured image of the camera against the multiple road feature projection images generated on the high-precision map.
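A minimal sketch of the average-pixel-deviation matching described above follows (the exact deviation metric is not specified beyond this description, so mean absolute difference is used as an illustrative choice):

```python
import numpy as np

def average_pixel_deviation(img_a, img_b):
    """Mean absolute per-pixel difference between two images of equal size."""
    return np.mean(np.abs(img_a.astype(np.float64) - img_b.astype(np.float64)))

def best_match(captured_binary, projections):
    """Return the index of the projection with the smallest deviation,
    i.e., the highest degree of matching with the captured image."""
    deviations = [average_pixel_deviation(captured_binary, p)
                  for p in projections]
    return int(np.argmin(deviations))

# idx = best_match(captured_binary, projections)
# R, t = params[idx]['R'], params[idx]['t']   # extrinsics, see step S128
```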
Optionally, in some embodiments, step S127 includes: processing the captured image of the camera obtained in step S110 into an image of a first form, where the first form is the form of the road feature projection images supported by the high-precision map; and obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the image of the first form of the captured image.
For example, with current technology, the road feature projection images typically supported by high-precision maps are binary images. In this scenario, the image of the first form mentioned in this embodiment is a binary image.
Optionally, in an implementation, step S127 includes: obtaining a binary image of the captured image of the camera; and obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the binary image of the captured image.
For example, the method for obtaining the binary image of the captured image includes two steps. First, a neural network (NN) inference model is used to perform semantic pixel-level segmentation on the captured image, so as to recognize and extract the road features (lane lines, signs, street lamp poles, and so on). Second, the contours of the road features are extracted on the segmented image to generate the binary image. As an example, a captured image and the binary image extracted from it are shown in the left and right parts of FIG. 5, respectively.
It should be understood that, by processing the image of the calibration reference object captured by the camera into a binary image, the image is made consistent in form with the road features in the high-precision map, that is, consistent in form with the multiple road feature projection images. This helps improve the image matching accuracy and thereby achieves high-precision calibration of the extrinsic parameters.
It should be noted that, as the technology evolves, if high-precision maps can support road feature projection images of other forms, the actually captured image of the camera may also be processed into an image of that other form in step S127 before matching is performed. A sketch of the two-step binary image extraction is given below.
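In the following sketch of the two-step extraction, `segmentation_model` is a hypothetical placeholder for the unnamed NN inference model, and the contour extraction uses OpenCV as an illustrative choice:

```python
import cv2
import numpy as np

def extract_binary_map(captured_bgr, segmentation_model, road_feature_ids):
    """Step 1: semantic pixel-level segmentation of road features.
    Step 2: contour extraction on the segmented mask into a binary image."""
    labels = segmentation_model(captured_bgr)           # HxW class-id map
    mask = np.isin(labels, road_feature_ids).astype(np.uint8) * 255

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    binary = np.zeros_like(mask)
    cv2.drawContours(binary, contours, -1, 255, thickness=1)
    return binary
```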
S128: Obtain the extrinsic parameters of the camera according to the set of camera parameters corresponding to the matching road feature projection image.
For example, if each set of camera parameters includes the intrinsic parameters, distortion parameters, translation matrix, and rotation matrix, then the translation matrix and rotation matrix of the camera can be obtained from the set of camera parameters corresponding to the matching road feature projection image; that is, the extrinsic parameters of the camera to be calibrated are obtained.
It should be understood that, if the goal of dynamic camera calibration is to calibrate only the rotation matrix of the camera, only the rotation matrix of the camera may be obtained from the set of camera parameters corresponding to the matching road feature projection image; that is, the extrinsic parameters of the camera to be calibrated are obtained.
In this embodiment, the extrinsic parameters of the camera are obtained by using the road feature projection function of the high-precision map rather than by measuring the three-dimensional coordinates of the calibration reference object relative to the camera, which prevents the calibration precision of the extrinsic parameters of the camera from being limited by the measurement precision. In addition, because of the high-precision property of the high-precision map, high-precision calibration of the extrinsic parameters of the camera can be achieved.
In the prior art, the high-precision map is used as follows: according to the calibrated camera parameters, the high-precision map is used to generate projection images of road features, so as to guide the autonomous vehicle to drive safely.
The embodiments of the present application apply the road feature projection function of the high-precision map in reverse, thereby ingeniously improving the calibration precision of the extrinsic parameters of the camera.
It should be noted that, in addition to Implementation 1 and Implementation 2 described above, any solution that obtains the extrinsic parameters of the camera by using a high-precision map falls within the protection scope of the present application.
The embodiments of the present application are applicable to both dynamic camera calibration and static camera calibration.
For example, the camera in the embodiments of the present application is a vehicle-mounted camera, and the vehicle carrying the camera is in a moving state.
Based on the foregoing description, the camera calibration solution provided in the present application calibrates the extrinsic parameters of the camera by using a high-precision map and can therefore improve the calibration precision of the extrinsic parameters of the camera.
In the camera calibration solution provided in the present application, the calibration reference object may be any type of road feature and is not strictly limited to lane lines (the existing camera self-calibration methods limit the calibration reference object to lane lines). For example, the calibration reference object in the camera calibration solution provided in the present application may be any one of the following road features: a lane line, a sign, a pole-like object, a road surface marking, or a traffic light, where the sign is, for example, a traffic sign or a pole-mounted sign, and the pole-like object is, for example, a street lamp pole.
In addition, the camera calibration solution provided in the present application is applicable to both dynamic camera calibration and static camera calibration. Moreover, in the scenario of dynamic camera calibration, the solution is not limited to specific roads and does not require the vehicle to drive in the center of the lane. Therefore, the camera calibration solution provided in the present application has good generality.
It should be understood that the camera calibration solution provided in the present application can be applied to the camera parameter calibration stage when autonomous vehicles roll off the assembly line; it is not restricted to a fixed calibration workshop, and all cameras can be calibrated simultaneously, saving calibration time.
It should also be understood that the camera calibration solution provided in the present application can also be applied to scenarios in which, after the vehicle leaves the factory, the extrinsic parameters change during use and require real-time online correction or periodic calibration.
It should also be understood that the camera calibration solution provided in the present application can greatly reduce the dependence on calibration workshops and achieve high-precision calibration of the extrinsic parameters of vehicle-mounted cameras anytime and anywhere (that is, online and in real time).
It should also be understood that the camera calibration solution provided in the present application can also be applied to the calibration of other sensors (such as lidar). For example, when the parameters of other sensors (such as lidar) are calibrated, a high-precision map may also be used to obtain the position information of the calibration reference object.
The embodiments described herein may be independent solutions or may be combined according to their internal logic, and all of these solutions fall within the protection scope of the present application.
The method embodiments provided in the present application are described above, and the apparatus embodiments provided in the present application are described below. It should be understood that the descriptions of the apparatus embodiments correspond to the descriptions of the method embodiments; therefore, for content not described in detail, reference may be made to the foregoing method embodiments, and details are not repeated here for brevity.
FIG. 6 shows an apparatus 600 for calibrating extrinsic parameters of a camera according to an embodiment of the present application. The apparatus 600 includes an obtaining unit 610 and a processing unit 620.
The obtaining unit 610 is configured to obtain a captured image of a camera, the captured image being an image captured by the camera with a calibration reference object as the photographed subject.
The processing unit 620 is configured to obtain the extrinsic parameters of the camera according to the captured image and a high-precision map, the high-precision map containing the calibration reference object.
Optionally, in an embodiment, the processing unit 620 is configured to obtain the extrinsic parameters of the camera through the following operations: obtaining the two-dimensional coordinates of the calibration reference object on the captured image; determining the position of the camera on the high-precision map according to the positioning information of the camera, and obtaining the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map; obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map; and calculating the extrinsic parameters of the camera according to the two-dimensional coordinates and the three-dimensional coordinates.
Optionally, in an implementation, the processing unit 620 obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
Optionally, in another implementation, the high-precision map has a function of generating the relative position of two positioning points on the map, and the processing unit 620 obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: generating, by using the high-precision map, the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map.
Optionally, in another embodiment, the calibration reference object is a road feature, and the processing unit 620 is configured to obtain the extrinsic parameters of the camera through the following operations: obtaining multiple sets of camera parameters, each set of camera parameters including intrinsic parameters and extrinsic parameters; generating multiple road feature projection images by using the high-precision map according to the multiple sets of camera parameters and the positioning information of the camera; obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the captured image; and obtaining the extrinsic parameters of the camera according to the set of camera parameters corresponding to the matching road feature projection image.
Optionally, in an implementation, the processing unit 620 is configured to obtain the multiple sets of camera parameters through the following operations: generating multiple sets of rotation matrix simulation values at a preset step size by using the initial value of the rotation matrix of the camera as a benchmark; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values.
Optionally, in another implementation, the processing unit 620 is configured to obtain the multiple sets of camera parameters through the following operations: generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values at corresponding step sizes by using the rotation matrix and the translation matrix of the camera, respectively, as benchmarks; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values.
Optionally, in some implementations, if the road features in the high-precision map are in the form of binary images, the processing unit 620 is configured to obtain the matching road feature projection image through the following operations: obtaining a binary image of the captured image; and obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the binary image of the captured image.
Optionally, the camera is a vehicle-mounted camera, and the vehicle carrying the camera may be in a stationary state or in a moving state.
As shown in FIG. 7, an embodiment of the present application further provides an apparatus 700 for calibrating extrinsic parameters of a camera. The apparatus 700 includes a processor 710, the processor 710 is coupled with a memory 720, the memory 720 is used for storing computer programs or instructions, and the processor 710 is used for executing the computer programs or instructions stored in the memory 720, so that the method 100 in the foregoing method embodiments is executed.
Optionally, as shown in FIG. 7, the apparatus 700 may further include the memory 720.
Optionally, as shown in FIG. 7, the apparatus 700 may further include a data interface 730, and the data interface 730 is used for transmitting data with the outside.
An embodiment of the present application further provides a computer-readable medium, where the computer-readable medium stores program code for device execution, and the program code includes instructions for executing the methods of the foregoing embodiments.
An embodiment of the present application further provides a computer program product containing instructions; when the computer program product runs on a computer, the computer is caused to execute the methods of the foregoing embodiments.
An embodiment of the present application further provides a chip, where the chip includes a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory to execute the methods of the foregoing embodiments.
Optionally, in an implementation, the chip may further include a memory storing instructions, and the processor is configured to execute the instructions stored in the memory; when the instructions are executed, the processor is configured to execute the methods of the foregoing embodiments.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as those commonly understood by those skilled in the technical field to which the present application belongs. The terms used in the specification of the present application are only for the purpose of describing specific embodiments and are not intended to limit the present application.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are only illustrative. For example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

  1. A method for calibrating extrinsic parameters of a camera, comprising:
    obtaining a captured image of a camera, the captured image being an image captured by the camera with a calibration reference object as the photographed subject; and
    obtaining the extrinsic parameters of the camera according to the captured image and a high-precision map, the high-precision map containing the calibration reference object.
  2. The method according to claim 1, wherein the obtaining the extrinsic parameters of the camera according to the captured image and a high-precision map comprises:
    obtaining two-dimensional coordinates of the calibration reference object on the captured image;
    determining the position of the camera on the high-precision map according to positioning information of the camera, and obtaining the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map;
    obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map;
    obtaining three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera; and
    calculating the extrinsic parameters of the camera according to the two-dimensional coordinates and the three-dimensional coordinates.
  3. The method according to claim 1, wherein the calibration reference object is a road feature, and
    the obtaining the extrinsic parameters of the camera according to the captured image and a high-precision map comprises:
    obtaining multiple sets of camera parameters, each set of camera parameters comprising intrinsic parameters and extrinsic parameters;
    generating multiple road feature projection images by using the high-precision map according to the multiple sets of camera parameters and positioning information of the camera;
    obtaining, from the multiple road feature projection images, a matching road feature projection image with the highest degree of matching with the captured image; and
    obtaining the extrinsic parameters of the camera according to the set of camera parameters corresponding to the matching road feature projection image.
  4. The method according to claim 3, wherein the obtaining multiple sets of camera parameters comprises:
    generating multiple sets of rotation matrix simulation values at a preset step size by using the initial value of the rotation matrix of the camera as a benchmark; and
    generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values.
  5. The method according to claim 3 or 4, wherein the road features in the high-precision map are in the form of binary images, and
    the obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the captured image comprises:
    obtaining a binary image of the captured image; and
    obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the binary image of the captured image.
  6. The method according to any one of claims 1 to 5, wherein the camera is a vehicle-mounted camera, and the vehicle carrying the camera is in a moving state.
  7. An apparatus for calibrating extrinsic parameters of a camera, comprising:
    an obtaining unit configured to obtain a captured image of a camera, the captured image being an image captured by the camera with a calibration reference object as the photographed subject; and
    a processing unit configured to obtain the extrinsic parameters of the camera according to the captured image and a high-precision map, the high-precision map containing the calibration reference object.
  8. The apparatus according to claim 7, wherein the processing unit is configured to:
    obtain two-dimensional coordinates of the calibration reference object on the captured image;
    determine the position of the camera on the high-precision map according to positioning information of the camera, and obtain the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map;
    obtain the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map;
    obtain three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera; and
    calculate the extrinsic parameters of the camera according to the two-dimensional coordinates and the three-dimensional coordinates.
  9. The apparatus according to claim 7, wherein the calibration reference object is a road feature, and the processing unit is configured to:
    obtain multiple sets of camera parameters, each set of camera parameters comprising intrinsic parameters and extrinsic parameters;
    generate multiple road feature projection images by using the high-precision map according to the multiple sets of camera parameters and positioning information of the camera;
    obtain, from the multiple road feature projection images, a matching road feature projection image with the highest degree of matching with the captured image; and
    obtain the extrinsic parameters of the camera according to the set of camera parameters corresponding to the matching road feature projection image.
  10. The apparatus according to claim 9, wherein the processing unit is configured to obtain the multiple sets of camera parameters through the following operations:
    generating multiple sets of rotation matrix simulation values at a preset step size by using the initial value of the rotation matrix of the camera as a benchmark; and
    generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values.
  11. The apparatus according to claim 9 or 10, wherein the road features in the high-precision map are in the form of binary images, and
    the processing unit is configured to obtain the matching road feature projection image through the following operations:
    obtaining a binary image of the captured image; and
    obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest degree of matching with the binary image of the captured image.
  12. The apparatus according to any one of claims 7 to 11, wherein the camera is a vehicle-mounted camera, and the vehicle carrying the camera is in a moving state.
  13. An apparatus for calibrating extrinsic parameters of a camera, comprising:
    a processor configured to execute computer instructions stored in a memory, so that the apparatus executes the method according to any one of claims 1 to 6.
  14. A computer storage medium having a computer program stored thereon, wherein, when the computer program is executed by a computer, the method according to any one of claims 1 to 6 is implemented.
PCT/CN2021/114890 2020-09-04 2021-08-27 Method and apparatus for calibrating extrinsic parameters of a camera WO2022048493A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21863558.9A EP4198901A4 (en) 2020-09-04 2021-08-27 METHOD AND APPARATUS FOR CALIBRATING EXTRINSIC PARAMETERS OF A CAMERA
US18/177,930 US20230206500A1 (en) 2020-09-04 2023-03-03 Method and apparatus for calibrating extrinsic parameter of a camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010919175.8A 2020-09-04 Method and apparatus for calibrating extrinsic parameters of a camera
CN202010919175.8 2020-09-04

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/177,930 Continuation US20230206500A1 (en) 2020-09-04 2023-03-03 Method and apparatus for calibrating extrinsic parameter of a camera

Publications (1)

Publication Number Publication Date
WO2022048493A1 true WO2022048493A1 (zh) 2022-03-10

Family

ID=80438664

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114890 WO2022048493A1 (zh) 2020-09-04 2021-08-27 摄像头外参标定的方法与装置

Country Status (4)

Country Link
US (1) US20230206500A1 (zh)
EP (1) EP4198901A4 (zh)
CN (1) CN114140533A (zh)
WO (1) WO2022048493A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116047440A (zh) * 2023-03-29 2023-05-02 陕西欧卡电子智能科技有限公司 End-to-end method for calibrating extrinsic parameters of a millimeter-wave radar and a camera

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063490A (zh) * 2022-06-30 2022-09-16 Apollo Intelligent Technology (Beijing) Co., Ltd. Method and apparatus for calibrating extrinsic parameters of a vehicle camera, electronic device, and storage medium
CN116958271B (zh) * 2023-06-06 2024-07-16 Alibaba (China) Co., Ltd. Method and apparatus for determining calibration parameters

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101617943A (zh) * 2008-07-04 2010-01-06 Toshiba Corporation X-ray imaging apparatus, X-ray imaging method, and image processing apparatus
CN109214980A (zh) * 2017-07-04 2019-01-15 Baidu Online Network Technology (Beijing) Co., Ltd. Three-dimensional pose estimation method, apparatus, device, and computer storage medium
CN110148164A (zh) * 2019-05-29 2019-08-20 Beijing Baidu Netcom Science and Technology Co., Ltd. Transformation matrix generation method and apparatus, server, and computer-readable medium
WO2019221349A1 (ko) * 2018-05-17 2019-11-21 SK Telecom Co., Ltd. Apparatus and method for calibrating a camera for a vehicle
CN110728720A (zh) * 2019-10-21 2020-01-24 Beijing Baidu Netcom Science and Technology Co., Ltd. Method, apparatus, device, and storage medium for camera calibration

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822939A (zh) * 2017-07-06 2021-12-21 Huawei Technologies Co., Ltd. Method and device for calibrating external parameters of vehicle-mounted sensors
KR102022388B1 (ko) * 2018-02-27 2019-09-18 (주)캠시스 Camera tolerance correction system and method using real-world object information
CN110751693B (zh) * 2019-10-21 2023-10-13 Beijing Baidu Netcom Science and Technology Co., Ltd. Method, apparatus, device, and storage medium for camera calibration
CN111553956A (zh) * 2020-05-20 2020-08-18 Beijing Baidu Netcom Science and Technology Co., Ltd. Calibration method and apparatus for a photographing device, electronic device, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101617943A (zh) * 2008-07-04 2010-01-06 Toshiba Corporation X-ray imaging apparatus, X-ray imaging method, and image processing apparatus
CN109214980A (zh) * 2017-07-04 2019-01-15 Baidu Online Network Technology (Beijing) Co., Ltd. Three-dimensional pose estimation method, apparatus, device, and computer storage medium
WO2019221349A1 (ko) * 2018-05-17 2019-11-21 SK Telecom Co., Ltd. Apparatus and method for calibrating a camera for a vehicle
CN110148164A (zh) * 2019-05-29 2019-08-20 Beijing Baidu Netcom Science and Technology Co., Ltd. Transformation matrix generation method and apparatus, server, and computer-readable medium
CN110728720A (zh) * 2019-10-21 2020-01-24 Beijing Baidu Netcom Science and Technology Co., Ltd. Method, apparatus, device, and storage medium for camera calibration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4198901A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116047440A (zh) * 2023-03-29 2023-05-02 陕西欧卡电子智能科技有限公司 End-to-end method for calibrating extrinsic parameters of a millimeter-wave radar and a camera
CN116047440B (zh) * 2023-03-29 2023-06-09 陕西欧卡电子智能科技有限公司 End-to-end method for calibrating extrinsic parameters of a millimeter-wave radar and a camera

Also Published As

Publication number Publication date
US20230206500A1 (en) 2023-06-29
EP4198901A4 (en) 2024-02-21
EP4198901A1 (en) 2023-06-21
CN114140533A (zh) 2022-03-04

Similar Documents

Publication Publication Date Title
CN109461211B Semantic vector map construction method and apparatus based on visual point clouds, and electronic device
WO2022048493A1 Method and apparatus for calibrating extrinsic parameters of a camera
CN109993793B Visual positioning method and apparatus
CN110176032B Three-dimensional reconstruction method and apparatus
CN113657224B Method, apparatus, and device for determining object states in vehicle-road cooperation
US11227395B2 Method and apparatus for determining motion vector field, device, storage medium and vehicle
WO2018120040A1 Obstacle detection method and apparatus
CN110969064B Image detection method and apparatus based on monocular vision, and storage device
CN112288825B Camera calibration method and apparatus, electronic device, storage medium, and roadside device
CN108519102B Binocular visual odometry method based on secondary projection
CN112967344B Method, device, storage medium, and program product for calibrating extrinsic parameters of a camera
CN113989450A Image processing method and apparatus, electronic device, and medium
CN110766760B Method, apparatus, device, and storage medium for camera calibration
CN112232275B Obstacle detection method, system, device, and storage medium based on binocular recognition
CN112700486B Method and apparatus for estimating the depth of road lane lines in an image
CN110766761B Method, apparatus, device, and storage medium for camera calibration
US20240062415A1 Terminal device localization method and related device therefor
CN114037762B Real-time high-precision positioning method based on registration of images and high-precision maps
CN115410167A Target detection and semantic segmentation method, apparatus, device, and storage medium
CN114662587B Lidar-based three-dimensional target perception method, apparatus, and system
CN111950428A Target obstacle recognition method, apparatus, and vehicle
CN110348351B Image semantic segmentation method, terminal, and readable storage medium
CN111833443A Landmark position reconstruction in autonomous machine applications
CN116740681B Target detection method and apparatus, vehicle, and storage medium
CN114648639B Target vehicle detection method, system, and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21863558

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021863558

Country of ref document: EP

Effective date: 20230315

NENP Non-entry into the national phase

Ref country code: DE