WO2022048493A1 - Method and apparatus for calibrating extrinsic parameters of a camera - Google Patents

Method and apparatus for calibrating extrinsic parameters of a camera

Info

Publication number
WO2022048493A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
reference object
parameters
precision map
calibration reference
Prior art date
Application number
PCT/CN2021/114890
Other languages
English (en)
Chinese (zh)
Inventor
任小荣
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21863558.9A (EP4198901A4)
Publication of WO2022048493A1
Priority to US18/177,930 (US20230206500A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/771 Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • the present application relates to the field of data processing, in particular to a method and device for calibrating external parameters of a camera.
  • Camera calibration represents the process of obtaining camera parameters.
  • the camera parameters include internal and external parameters, the internal parameters are the parameters of the camera itself, and the external parameters are parameters related to the installation position of the camera, such as pitch angle, rotation angle and yaw angle.
  • camera calibration is divided into two categories: traditional camera calibration method and camera self-calibration method.
  • the traditional camera calibration method uses a calibration plate for calibration, but it is only suitable for scenes where the camera is still.
  • the external parameters of the camera may change due to the vibration of the vehicle due to road reasons, so the camera parameters need to be dynamically calibrated.
  • the existing camera dynamic calibration method is the camera self-calibration method, which uses lane lines for calibration; however, this method requires the vehicle to drive centered in the lane, which is highly subjective, resulting in low accuracy of external parameter calibration.
  • in addition, this method is only suitable for specific roads, such as horizontal and straight roads.
  • the present application provides a method and device for calibrating external parameters of a camera. By using a high-precision map to calibrate the external parameters of the camera, the calibration accuracy of the external parameters of the camera can be improved.
  • a method for calibrating external parameters of a camera, comprising: acquiring a captured image of a camera, where the captured image is an image captured by the camera using a calibration reference object as the shooting object; and acquiring external parameters of the camera according to the captured image and a high-precision map, where the high-precision map includes the calibration reference object.
  • the high-precision map includes the calibration reference object, which means that the high-precision map has location information of the calibration reference object.
  • the external parameters of the camera are acquired through the actual captured image of the calibration reference object and the high-precision map, so that the operation of measuring the three-dimensional coordinates of the calibration reference object relative to the camera is not required in the process of obtaining the external parameters.
  • the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measurement.
  • the accuracy of measuring the three-dimensional coordinates of the calibration reference object relative to the camera is low, resulting in low accuracy of the external parameter calibration of the camera.
  • the external parameters of the camera are obtained through the captured image of the calibration reference object and the high-precision map, and there is no need to perform the operation of measuring the three-dimensional coordinates of the calibration reference object relative to the camera, so that the calibration accuracy of the external parameters of the camera is no longer limited by the measurement accuracy; therefore, the external parameter calibration accuracy of the camera can be improved.
  • the calibration reference object may be an object around the camera.
  • the calibration reference may be a road feature.
  • the calibration reference object may be any one of the following road features: lane lines, signboards, pole-like objects, road signs, and traffic lights.
  • the signboard is, for example, a traffic signboard or a sign mounted on a pole
  • the pole-like object is, for example, a street light pole and the like.
  • the acquiring of the external parameters of the camera according to the captured image and the high-precision map includes: acquiring the two-dimensional coordinates of the calibration reference object on the captured image; determining the position of the camera on the high-precision map according to the positioning information of the camera; and, based on the position of the camera on the high-precision map, obtaining the position of the calibration reference object on the high-precision map.
  • acquiring the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map includes: acquiring the absolute position of the calibration reference object according to its position on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
  • the high-precision map has a function of generating the relative positions of two positioning points on the map; wherein obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map includes: based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map, using the high-precision map to generate the three-dimensional coordinates of the calibration reference object relative to the camera.
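  • the absolute-to-relative conversion described above can be sketched as follows. The helper and its frame conventions are illustrative assumptions, not part of the application: absolute positions are taken in a local metric frame (e.g. ENU), and the camera heading is assumed to come from the positioning information.

```python
import numpy as np

def relative_coords(obj_abs, cam_abs, cam_yaw_rad):
    """Hypothetical helper: turn the absolute map position of a calibration
    reference object into 3-D coordinates relative to the camera.

    obj_abs, cam_abs: absolute (x, y, z) positions from the high-precision
    map, assumed to be in a local metric frame such as ENU.
    cam_yaw_rad: camera heading about the vertical axis, from positioning.
    """
    delta = np.asarray(obj_abs, float) - np.asarray(cam_abs, float)
    c, s = np.cos(cam_yaw_rad), np.sin(cam_yaw_rad)
    # Rotate the world-frame offset into the camera's horizontal frame.
    R = np.array([[ c,  s, 0.0],
                  [-s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ delta

# A reference object 10 m "east" of a camera that faces east (yaw = 0)
# ends up 10 m straight ahead of the camera.
rel = relative_coords((10.0, 0.0, 1.5), (0.0, 0.0, 1.5), 0.0)
```

  • the same call with a camera facing "north" (yaw = pi/2) and an object 10 m north of it yields the same straight-ahead result, which is the point of working in the camera-relative frame.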
  • the positioning information of the camera can be obtained by any one or a combination of the following positioning technologies: real-time kinematic (RTK) carrier-phase differential technology based on satellite positioning, or matching-based positioning technology based on vision or lidar.
  • the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measurement, resulting in that the external parameter calibration accuracy of the camera depends on the measurement accuracy of the three-dimensional coordinates.
  • the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by using a high-precision map rather than by measurement, thereby avoiding the calibration accuracy of the external parameters of the camera being limited by the measurement accuracy.
  • the accuracy of the three-dimensional coordinates of the calibration reference object relative to the camera can be improved, so that the external parameter calibration accuracy of the camera can be improved.
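  • once the two-dimensional image coordinates and the map-derived three-dimensional coordinates are paired, the step of calculating the camera's parameters from them can be illustrated with a direct linear transform, a standard textbook technique rather than the application's specific algorithm; all numbers below are synthetic.

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    # Direct linear transform: estimate the 3x4 projection matrix M
    # (intrinsics times extrinsics) from >= 6 non-coplanar 2-D/3-D
    # correspondences. A sketch only: production code would normalize
    # the coordinates and refine the result nonlinearly.
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = [X, Y, Z, 1.0]
        A.append([*Xh, 0.0, 0.0, 0.0, 0.0, *(-u * c for c in Xh)])
        A.append([0.0, 0.0, 0.0, 0.0, *Xh, *(-v * c for c in Xh)])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)   # null-space vector, up to scale

def project(M, pt3d):
    x = M @ np.append(pt3d, 1.0)
    return x[:2] / x[2]

# Synthetic check: build a ground-truth camera, project points, re-estimate.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), [[0.1], [0.0], [0.0]]])   # extrinsics [R | t]
M_true = K @ Rt
pts3d = [(0, 0, 4), (1, 0, 5), (0, 1, 6), (1, 1, 4),
         (-1, 0, 5), (0, -1, 6), (2, 1, 7), (3, -2, 8)]
pts2d = [project(M_true, p) for p in pts3d]
M_est = dlt_projection_matrix(pts3d, pts2d)
```

  • in practice the estimated matrix would then be decomposed, using the known internal parameters, into the internal and external parameter transformation matrices.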
  • the calibration reference object is a road feature; wherein acquiring the external parameters of the camera according to the captured image and the high-precision map includes: acquiring multiple sets of camera parameters, where each set of camera parameters includes internal and external parameters; generating multiple road feature projection images using the high-precision map according to the multiple sets of camera parameters and the positioning information of the camera; obtaining, from the multiple road feature projection images, the matched road feature projection image with the highest matching degree with the captured image; and acquiring the external parameters of the camera according to the set of camera parameters corresponding to the matched road feature projection image.
  • the acquiring of multiple sets of camera parameters includes: taking the initial value of the rotation matrix of the camera as a reference and using a preset step size, generating multiple sets of rotation matrix simulation values; and generating the multiple sets of camera parameters from the multiple sets of rotation matrix simulation values.
  • the obtaining of multiple sets of camera parameters includes: taking the rotation matrix and the translation matrix of the camera as benchmarks and using corresponding step sizes, generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values.
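  • the "preset step size" search described above can be sketched as a grid of candidate rotations around the initial value. The angle parameterization, step size, and range below are illustrative assumptions; the application does not fix them.

```python
import itertools

def candidate_rotations(init_angles, step_deg=0.2, n_steps=2):
    """Generate simulated rotation values around the initial
    (pitch, yaw, roll) angles in degrees, stepping each angle in
    [-n_steps, n_steps] * step_deg. All parameters are illustrative."""
    offsets = [k * step_deg for k in range(-n_steps, n_steps + 1)]
    for dp, dy, dr in itertools.product(offsets, repeat=3):
        yield (init_angles[0] + dp, init_angles[1] + dy, init_angles[2] + dr)

# 5 offsets per angle -> 5**3 = 125 candidate rotation values,
# including the unperturbed initial value itself.
cands = list(candidate_rotations((1.0, 0.0, 0.0)))
```

  • each candidate rotation (optionally combined with candidate translations) yields one set of camera parameters to be scored against the captured image.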
  • the shape of the road features in the high-precision map is a binary image; obtaining, from the multiple road feature projection images, the matched road feature projection image with the highest matching degree with the captured image includes: acquiring a binary map of the captured image; and obtaining, from the multiple road feature projection images, the matched road feature projection image with the highest matching degree with the binary map of the captured image.
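  • one way to score the "matching degree" between binary images is intersection-over-union of foreground pixels; this particular metric is an assumption for illustration, since the application does not specify one.

```python
import numpy as np

def match_score(proj_bin, captured_bin):
    # Matching degree between a road-feature projection image and the
    # binary map of the captured image: intersection-over-union of the
    # foreground (True) pixels. One reasonable choice among several.
    inter = np.logical_and(proj_bin, captured_bin).sum()
    union = np.logical_or(proj_bin, captured_bin).sum()
    return inter / union if union else 0.0

def best_match(projections, captured_bin):
    # Index of the projection image with the highest matching degree.
    return max(range(len(projections)),
               key=lambda i: match_score(projections[i], captured_bin))

# Toy example: a vertical lane-line stripe in column 2; candidate
# projections place the stripe in columns 1, 2, 3.
captured = np.zeros((5, 5), bool)
captured[:, 2] = True
projs = []
for shift in (1, 2, 3):
    p = np.zeros((5, 5), bool)
    p[:, shift] = True
    projs.append(p)
idx = best_match(projs, captured)   # the aligned candidate wins
```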
  • the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measurement, resulting in that the external parameter calibration accuracy of the camera depends on the measurement accuracy of the three-dimensional coordinates.
  • the external parameters of the camera are obtained by using the road feature projection function of the high-precision map, rather than by measuring the three-dimensional coordinates of the calibration reference object relative to the camera, thereby avoiding the calibration accuracy of the external parameters of the camera being limited by the measurement accuracy.
  • high-precision camera extrinsic parameter calibration can be achieved.
  • the camera is a vehicle-mounted camera, and the vehicle on which the camera is carried may be in a stationary state or in a moving state.
  • an apparatus for calibrating external parameters of a camera, including: an acquisition unit configured to acquire a captured image of a camera, where the captured image is an image captured by the camera using a calibration reference object as the shooting object; and a processing unit configured to acquire the external parameters of the camera according to the captured image and a high-precision map, where the high-precision map includes the calibration reference object.
  • the processing unit is configured to acquire the external parameters of the camera through the following operations: acquiring the two-dimensional coordinates of the calibration reference object on the captured image;
  • determining the position of the camera on the high-precision map according to the positioning information of the camera, and obtaining the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map;
  • acquiring the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map; and calculating the external parameters of the camera according to the two-dimensional coordinates and the three-dimensional coordinates.
  • the processing unit obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: acquiring the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
  • the high-precision map has a function of generating the relative positions of two positioning points on the map; the processing unit obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operation: based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map, using the high-precision map to generate the three-dimensional coordinates of the calibration reference object relative to the camera.
  • the calibration reference object is a road feature
  • the processing unit is configured to acquire the external parameters of the camera through the following operations: acquiring multiple sets of camera parameters, where each set of camera parameters includes internal and external parameters; generating multiple road feature projection images using the high-precision map according to the multiple sets of camera parameters and the positioning information of the camera; obtaining, from the multiple road feature projection images, the matched road feature projection image with the highest matching degree with the captured image; and acquiring the external parameters of the camera according to the set of camera parameters corresponding to the matched road feature projection image.
  • the processing unit is configured to obtain the multiple sets of camera parameters through the following operations: based on the initial value of the rotation matrix of the camera, using a preset step size to generate multiple sets of rotation matrix simulation values; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values.
  • the processing unit is configured to obtain the multiple sets of camera parameters through the following operations: respectively taking the rotation matrix and the translation matrix of the camera as benchmarks, and using corresponding step sizes, generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values; and generating the multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values.
  • the processing unit is configured to obtain the matched road feature projection image through the following operations: acquiring the binary map of the captured image; and obtaining, from the multiple road feature projection images, the matched road feature projection image with the highest matching degree with the binary map of the captured image.
  • the camera is a vehicle-mounted camera, and the vehicle on which the camera is carried may be in a stationary state or in a moving state.
  • an apparatus for calibrating external parameters of a camera includes a processor coupled with a memory, where the memory is used for storing computer programs or instructions, and the processor is used for executing the computer programs or instructions stored in the memory, so that the method of the first aspect is performed.
  • the apparatus includes one or more processors.
  • the apparatus may further include a memory coupled to the processor.
  • the device may include one or more memories.
  • the memory may be integrated with the processor, or provided separately.
  • the apparatus may also include a data interface.
  • a computer-readable medium storing program code for execution by a device, the program code comprising instructions for performing the method in the first aspect above.
  • a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect above.
  • in a sixth aspect, a chip is provided, which includes a processor and a data interface; the processor reads instructions stored in a memory through the data interface and executes the method in the first aspect.
  • the chip may further include a memory in which instructions are stored, the processor being configured to execute the instructions stored in the memory; when the instructions are executed, the processor is configured to perform the method in the first aspect above.
  • the present application can improve the calibration accuracy of the external parameters of the camera by using the high-precision map and the captured image of the calibration reference object to obtain the external parameters of the camera.
  • FIG. 1 is a schematic flowchart of a method for calibrating external parameters of a camera provided by an embodiment of the present application.
  • FIG. 2 is another schematic flowchart of a method for calibrating external parameters of a camera provided by an embodiment of the present application.
  • FIG. 3 is another schematic flowchart of a method for calibrating external parameters of a camera provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of using a captured image to match a plurality of road feature projection images in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a captured image and a binary image thereof in an embodiment of the present application.
  • FIG. 6 is a schematic block diagram of an apparatus for calibrating external parameters of a camera provided by an embodiment of the present application.
  • FIG. 7 is another schematic block diagram of an apparatus for calibrating external parameters of a camera provided by an embodiment of the present application.
  • Camera calibration, also known as camera resectioning, is the process of solving for the parameters of a camera.
  • M represents the transformation matrix between the three-dimensional space point X_W and the two-dimensional image point X_P, and can be called the projection matrix.
  • Some elements in the projection matrix M characterize the parameters of the camera. Camera calibration is to obtain the projection matrix M.
  • the parameters of the camera include internal and external parameters.
  • the internal parameters are the parameters of the camera itself, such as the focal length, etc.
  • the external parameters are parameters related to the installation position of the camera, such as pitch angle, rotation angle and yaw angle.
  • the transformation matrix corresponding to the internal parameters may be referred to as the internal parameter transformation matrix M_1
  • the transformation matrix corresponding to the external parameters may be referred to as the external parameter transformation matrix M_2.
  • the relationship between the three-dimensional space point X_W and the two-dimensional image point X_P can also be expressed as: X_P = M X_W = M_1 M_2 X_W.
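  • as a concrete numerical check of this relationship, the sketch below projects a world point through illustrative intrinsic and extrinsic matrices; every number is invented for illustration only.

```python
import numpy as np

# Illustrative pinhole model: a 3-D world point X_W maps to a 2-D image
# point X_P through M = M_1 * M_2 (intrinsics times extrinsics).
M1 = np.array([[800.0, 0.0, 320.0],     # fx, 0,  cx
               [0.0, 800.0, 240.0],     # 0,  fy, cy
               [0.0, 0.0, 1.0]])
M2 = np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])  # [R | t], camera at origin
X_W = np.array([1.0, 0.5, 10.0, 1.0])   # homogeneous world point, 10 m ahead
x = M1 @ M2 @ X_W
X_P = x[:2] / x[2]                      # divide out the homogeneous scale
```

  • the division by the third homogeneous coordinate is what makes the relation hold only up to scale, which is why calibration recovers M up to a scale factor.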
  • Camera calibration generally requires a calibration reference object (also referred to as a calibration object or a reference object).
  • the calibration reference object represents the object shot by the camera during the camera calibration process.
  • the three-dimensional space point X_W may be the coordinates of the calibration reference object in the world coordinate system
  • the two-dimensional image point X_P may be the two-dimensional coordinates of the calibration reference object on the image plane of the camera.
  • The input of camera calibration is the two-dimensional coordinates (that is, image point coordinates) of the calibration reference object on the image plane of the camera, and the three-dimensional coordinates (that is, three-dimensional space coordinates) of the calibration reference object relative to the camera.
  • the two-dimensional coordinates of the calibration reference object on the image plane of the camera may correspond to the two-dimensional image point X_P in the above example;
  • the three-dimensional coordinates of the calibration reference object relative to the camera may correspond to the three-dimensional space point X_W in the above example, or to a rigid-body transformation of the three-dimensional space point X_W.
  • The output of camera calibration is the camera parameters, including internal and external parameters.
  • camera calibration is a very critical link, and the accuracy of its calibration results directly affects the accuracy of the results produced by systems that use the camera.
  • camera calibration can be divided into two categories: traditional camera calibration method (static) and camera self-calibration method (dynamic).
  • the traditional camera calibration method is a static camera calibration method. Specifically, in an environment where the camera is still, a calibration plate (ie, a calibration reference) is used to obtain the input of the camera calibration, thereby calculating the internal and external parameters of the camera.
  • the two-dimensional coordinates of the calibration reference object on the image plane of the camera are obtained from the imaging results of the calibration plate in different orientations relative to the camera, and the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measuring the calibration plate.
  • the disadvantage of the traditional camera calibration method is that it can only be applied in environments where the camera is still; moreover, the requirements on the placement of the calibration board are relatively high, the calibration process is cumbersome, and the efficiency is low, which makes it difficult to use in many application scenarios.
  • the vehicle will vibrate during driving due to road conditions, which will cause the external parameters of the camera to change; at this time, if the camera is not calibrated in real time, the accuracy of subsequent operations of the vehicle camera system will be affected.
  • the camera self-calibration method is a dynamic camera calibration method, which does not need to be calibrated with a calibration board.
  • the camera self-calibration method is to use the distance between the vehicle and the lane line (ie, the calibration reference object) and the vanishing point to calibrate the camera to obtain the external parameters of the camera.
  • specifically, the three-dimensional coordinates of the lane line relative to the camera are obtained by measurement;
  • the two-dimensional coordinates of the lane line on the image plane of the camera are obtained from the image of the lane line captured by the camera;
  • the external parameters of the camera are then calculated from the three-dimensional coordinates of the lane line relative to the camera and the two-dimensional coordinates of the lane line on the image plane of the camera.
  • the disadvantage of the camera self-calibration method is that many conditions are required; for example, the vehicle is required to drive centered in the lane, which is highly subjective, resulting in low camera calibration accuracy.
  • the current camera self-calibration method is only suitable for specific roads, such as horizontal, straight roads, and has low generality.
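  • for illustration of the kind of geometry such self-calibration relies on (a common textbook formulation, not necessarily the exact method discussed here): with straight, parallel lane lines on a flat road and zero roll, the vanishing point of the lane lines constrains the camera's pitch and yaw.

```python
import math

def angles_from_vanishing_point(u, v, fx, fy, cx, cy):
    """Recover approximate camera pitch and yaw (radians) from the
    vanishing point (u, v) of straight, parallel lane lines, assuming
    zero roll and a flat road -- exactly the restrictive conditions
    that limit the self-calibration method to specific roads."""
    yaw = math.atan2(u - cx, fx)
    pitch = math.atan2(v - cy, fy)   # sign convention is arbitrary here
    return pitch, yaw

# When the vanishing point sits at the principal point, the optical axis
# is aligned with the road direction: pitch = yaw = 0.
pitch, yaw = angles_from_vanishing_point(320.0, 240.0, 800.0, 800.0,
                                         320.0, 240.0)
```

  • the dependence on a single vanishing point is precisely why the method degrades on curved or sloped roads.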
  • High-precision map, also referred to as HD map.
  • High-precision map is one of the core technologies of unmanned driving, which is a high-precision electronic map.
  • the maps we use every day for navigation and querying geographic information belong to traditional maps, and their main service targets are human drivers. Different from traditional maps, the main service object of high-precision maps is driverless cars, or machine drivers.
  • a high-precision map is a vector map of road features.
  • High-precision maps include the geometry and location information of road features.
  • Road features include, but are not limited to, lane lines, signs (eg, traffic signs or poles), pole-like objects (eg, light poles), pavement markings, traffic lights.
  • high-precision maps can provide accurate road geometry and outline and position information of road facilities (position in the world coordinate system, ie absolute coordinate position).
  • High-precision maps also contain geometric descriptions of various road features; for example, the high-precision location information of the geometric corner points of road features can be queried in the high-precision map.
  • current high-precision maps in the shapefile format support range queries and vector projection.
  • the shape of road features in high-precision maps is a binary map.
  • the high-precision map may have the function of projecting images of road features; for example, given the parameters of the camera (including internal and external parameters) and the positioning information of the camera, the high-precision map can output a projected image of road features based on the geometric imaging model of the camera.
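  • such a projection function can be sketched as follows. The function, its camera parameters, and the lane-line sample points are all invented for illustration, and the feature points are assumed to already be expressed relative to the camera; a real map API would also handle the map query itself.

```python
import numpy as np

def project_road_feature(points, M1, M2, shape=(480, 640)):
    # Sketch of the road-feature projection function: given camera
    # parameters (M1 intrinsics, M2 extrinsics) and 3-D feature points,
    # rasterize their projections into a binary image.
    img = np.zeros(shape, bool)
    for p in points:
        x = M1 @ M2 @ np.append(p, 1.0)
        if x[2] <= 0:                         # behind the camera: skip
            continue
        u = int(x[0] / x[2] + 0.5)            # round to nearest pixel
        v = int(x[1] / x[2] + 0.5)
        if 0 <= v < shape[0] and 0 <= u < shape[1]:
            img[v, u] = True
    return img

M1 = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
M2 = np.hstack([np.eye(3), np.zeros((3, 1))])
# Sampled points along a lane line 10-30 m ahead, 1 m right, 1.5 m below.
lane = [(1.0, 1.5, float(z)) for z in range(10, 31, 5)]
proj = project_road_feature(lane, M1, M2)
```

  • dense sampling of the feature geometry (rather than five points) would yield the kind of binary projection image that is matched against the binary map of the captured image.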
  • the high-precision map involved in the embodiments of the present application refers to a high-precision electronic map in the field of unmanned driving technology, rather than a traditional map.
  • the external parameter calibration accuracy of the existing camera dynamic calibration method is low.
  • the present application provides a method and device for calibrating external parameters of a camera, by using a high-precision map to obtain the external parameters of the camera, so as to improve the calibration accuracy of the external parameters of the camera.
  • FIG. 1 is a schematic flowchart of a method 100 for calibrating external parameters of a camera according to an embodiment of the present application.
  • the method 100 includes steps S110 and S120.
  • S110 Acquire a photographed image of the camera, where the photographed image is an image photographed by the camera with a calibration reference object as a photographing object.
  • the calibration reference object is included in the captured image.
  • the captured image is an actual captured image of the calibration reference object.
  • the calibration reference object can be an object around the camera.
  • the calibration reference may be a road feature.
  • the calibration reference may be any of the following road features: lane lines, signage, pole-like objects, road markings, traffic lights.
  • the signboard is, for example, a traffic signboard or a sign mounted on a pole
  • the pole-like object is, for example, a street light pole and the like.
  • a calibration reference object may also be referred to as a calibration object or a reference object.
  • S120 Acquire external parameters of the camera according to the captured image and the high-precision map, and the high-precision map includes a calibration reference object.
  • the high-precision map includes a calibration reference object, which means that the high-precision map has the position information of the calibration reference object.
  • a high-resolution map is a vector map of road features.
  • the high-precision map includes the geometric shape and position information of road features, that is, the high-precision map can provide accurate road geometry and outline and position information (absolute coordinate position) of road facilities. Therefore, the high-precision position information of the geometric corner points of the road features can be queried in the high-precision map.
  • the high-precision map includes the position information of the calibration reference object
  • the high-precision position information of the calibration reference object can be queried in the high-precision map.
  • the external parameters of the camera are acquired from the actual captured image of the calibration reference object and the high-precision map, so that the operation of measuring the three-dimensional coordinates of the calibration reference object relative to the camera does not need to be performed in the process of obtaining the external parameters.
  • in the prior art, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measurement.
  • the accuracy of measuring the three-dimensional coordinates of the calibration reference object relative to the camera is low, resulting in low accuracy of the extrinsic calibration of the camera.
  • in this application, the external parameters of the camera are obtained from the captured image of the calibration reference object and the high-precision map, with no need to measure the three-dimensional coordinates of the calibration reference object relative to the camera, so that the calibration accuracy of the external parameters of the camera is no longer limited by the measurement accuracy.
  • the camera in this embodiment of the present application may be in a static state or in a moving state.
  • the method for calibrating external parameters of a camera provided by the embodiments of the present application can be applied to a vehicle-mounted camera system.
  • the camera in the embodiment of the present application is a vehicle-mounted camera, and the vehicle where the camera is located may be in a stationary state or in a moving state.
  • in step S120, the implementation of acquiring the external parameters of the camera according to the captured image and the high-precision map may include Implementation 1 and Implementation 2, described below.
  • step S120 further includes steps S121 to S124.
  • S121 Obtain the two-dimensional coordinates of the calibration reference object on the image plane of the camera.
  • the manner of obtaining the two-dimensional coordinates of the calibration reference object on the image plane of the camera belongs to the prior art and is not described in detail in this embodiment of the present application.
  • S122 Determine the position of the camera on the high-precision map according to the positioning information of the camera, and obtain the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map.
  • the position of the camera can be located on the high-precision map based on the positioning information of the camera, and then the position of the calibration reference object on the high-precision map can be found according to the position of the camera on the high-precision map.
  • the acquisition of the positioning information of the camera can be realized by using any one or a combination of the following positioning technologies: carrier-phase differential (real-time kinematic, RTK) technology based on satellite positioning, vision-based or lidar-based matching positioning technology.
  • the positioning information of the camera is the absolute position of the camera (that is, the coordinates of the camera in the world coordinate system).
  • in step S122, the following steps 1) and 2) may be used to obtain the position of the calibration reference object on the high-precision map.
  • Step 1): according to the position of the camera on the high-precision map and the captured image of the camera obtained in step S110, determine the target road feature used as the calibration reference object in the high-precision map.
  • Step 2): determine the position of the target road feature in the high-precision map as the position of the calibration reference object on the high-precision map.
  • step 1) may further include the following sub-steps 1) and 2).
  • Sub-step 1): according to the position of the camera on the high-precision map, obtain candidate target road features on the high-precision map. For example, road features on the high-precision map whose distance from the position of the camera on the high-precision map is less than a certain value may be used as candidate target road features.
  • Sub-step 2): extract the geometric features of the calibration reference object from the captured image obtained in step S110, compare them with the geometric features of each of the candidate target road features, and take the road feature with the best comparison result (for example, the highest degree of geometric feature matching) as the target road feature used as the calibration reference object.
  • in step 1), other feasible comparison methods may also be used to determine, in the high-precision map, the target road feature used as the calibration reference object based on the camera's actual captured image of the calibration reference object.
  • in this way, the accuracy of locating the calibration reference object on the high-precision map can be improved.
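Sub-step 1) above can be sketched as a simple radius query. The feature-record layout (a `position` field), the function name, and the 50 m threshold are illustrative assumptions, not values taken from this application:

```python
import numpy as np

def candidate_road_features(cam_pos, features, radius=50.0):
    """Sub-step 1): keep the map road features whose distance to the camera's
    position on the map is below a threshold (the 'certain value')."""
    cam = np.asarray(cam_pos, dtype=float)
    return [f for f in features
            if np.linalg.norm(np.asarray(f['position'], dtype=float) - cam) < radius]
```

The surviving candidates would then be compared against the geometric features extracted from the captured image in sub-step 2).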
  • in step S123, the high-precision map can be used to obtain the three-dimensional coordinates of the calibration reference object relative to the camera in various ways.
  • step S123 includes: obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
  • the absolute position of the calibration reference object and the absolute position of the camera represent the coordinates of the calibration reference object and the camera in the same coordinate system.
  • the absolute position of the calibration reference object is the coordinates of the calibration reference object in the world coordinate system
  • the absolute position of the camera is the coordinates of the camera in the world coordinate system.
  • the absolute position of the camera can be obtained based on the positioning information of the camera.
  • the positioning information of the camera may itself be the absolute position of the camera.
  • the absolute position of the calibration reference object can be obtained based on the position of the calibration reference object on the high-precision map.
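The calculation described above can be illustrated by a minimal sketch: the map point's world coordinates are differenced against the camera's world coordinates and rotated into the camera's local frame. The yaw-only rotation and the function name are illustrative assumptions; a full implementation would use the camera's complete orientation (roll, pitch, and yaw) from the positioning system.

```python
import numpy as np

def relative_to_camera(p_ref_world, p_cam_world, yaw_cam):
    """Express a map point in the camera's local frame, given the camera's
    absolute (world) position and its heading angle yaw_cam in radians.
    Illustrative simplification: only the yaw rotation is applied."""
    c, s = np.cos(yaw_cam), np.sin(yaw_cam)
    R_world_to_cam = np.array([[ c,  s, 0.0],
                               [-s,  c, 0.0],
                               [0.0, 0.0, 1.0]])
    return R_world_to_cam @ (np.asarray(p_ref_world, dtype=float)
                             - np.asarray(p_cam_world, dtype=float))
```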
  • alternatively, the high-precision map has the function of generating the relative position of two points located on the map; in this case, step S123 includes: generating the three-dimensional coordinates of the calibration reference object relative to the camera using the high-precision map, based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map.
  • the three-dimensional coordinates of the calibration reference object relative to the camera can be generated by using the high-precision map.
  • S124 Calculate the external parameters of the camera according to the two-dimensional coordinates of the calibration reference object on the captured image and the three-dimensional coordinates of the calibration reference object relative to the camera.
  • the external parameters of the camera can be calculated based on the geometric model of camera imaging, according to the two-dimensional coordinates of the calibration reference object on the image plane of the camera and the three-dimensional coordinates of the calibration reference object relative to the camera.
  • the specific algorithm belongs to the prior art and is neither limited nor described in detail in this application.
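As one illustration of such a prior-art algorithm, the following sketch estimates the extrinsic parameters from 2D-3D correspondences with a direct linear transform (DLT), assuming known intrinsics and no lens distortion; all names and values are illustrative, and a real system would typically refine the result with an iterative PnP solver.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D points X (N, 3) through intrinsics K and extrinsics [R|t]."""
    Xc = X @ R.T + t                      # world/reference frame -> camera frame
    x = Xc @ K.T                          # apply the pinhole intrinsics
    return x[:, :2] / x[:, 2:3]           # perspective division

def extrinsics_from_correspondences(K, X, x):
    """Estimate [R|t] from >= 6 exact 3D-2D correspondences via DLT."""
    n = X.shape[0]
    A = np.zeros((2 * n, 12))
    for i in range(n):
        Xi = np.append(X[i], 1.0)         # homogeneous 3D point
        u, v = x[i]
        A[2 * i, 0:4] = Xi
        A[2 * i, 8:12] = -u * Xi
        A[2 * i + 1, 4:8] = Xi
        A[2 * i + 1, 8:12] = -v * Xi
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)              # projection matrix, up to scale
    Rt = np.linalg.inv(K) @ P             # [R|t], still up to scale and sign
    Rt /= np.linalg.norm(Rt[2, :3])       # third rotation row must be unit length
    if np.linalg.det(Rt[:, :3]) < 0:      # resolve the sign ambiguity
        Rt = -Rt
    return Rt[:, :3], Rt[:, 3]
```

With noisy measurements the SVD solution is only a least-squares fit, which is why iterative refinement is the usual follow-up step.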
  • in the prior art, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by measurement, so that the extrinsic calibration accuracy of the camera depends on the measurement accuracy of the three-dimensional coordinates.
  • in this application, the three-dimensional coordinates of the calibration reference object relative to the camera are obtained by using the high-precision map rather than by measurement, thereby preventing the calibration accuracy of the external parameters of the camera from being limited by the measurement accuracy.
  • in this way, the accuracy of the three-dimensional coordinates of the calibration reference object relative to the camera can be improved, so that the extrinsic calibration accuracy of the camera can be improved.
  • the calibration reference object is a road feature.
  • step S120 further includes steps S125 to S128.
  • S125 Acquire multiple sets of camera parameters, where each set of camera parameters includes internal and external parameters.
  • each set of camera parameters includes camera internal parameters, distortion parameters and external parameters.
  • the external parameters include translation matrix and rotation matrix.
  • the current camera parameters can be used as a baseline, and multiple sets of camera parameters can be generated by simulation using a preset step size.
  • the camera's internal parameters, distortion parameters, and translation matrix change with low probability or within a small range, so it can be assumed that these parameters remain at their initial values, while the camera's rotation matrix is more likely to change. Therefore, simulation values of multiple rotation matrices can be generated based on the current rotation matrix of the camera, thereby generating multiple sets of camera parameters.
  • step S125 includes: taking the initial value of the rotation matrix of the camera as a baseline, generating multiple sets of rotation matrix simulation values using a preset step size; and generating multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values.
  • for example, using the preset step size, the rotation matrix is varied in each of two opposite rotation directions (for example, left rotation and right rotation), thereby generating multiple rotation matrix simulation values (for example, 8000 rotation matrix simulation values). Then, multiple sets of camera parameters are generated based on these rotation matrix simulation values. That is, the rotation matrices of different sets among the multiple sets of camera parameters are different, while the remaining parameters (internal parameters, distortion parameters, and translation matrix) can be the same.
  • the preset step size may be specifically determined according to application requirements.
  • the preset step size is 0.2 degrees (0.2°).
  • the number of groups of the parameters of the multiple groups of cameras may also be specifically determined according to application requirements.
  • alternatively, step S125 includes: taking the rotation matrix and the translation matrix of the camera as baselines respectively, and using corresponding step sizes, generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values; and generating multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values. That is, the rotation matrices and translation matrices of different sets among the multiple sets of camera parameters are different, while the remaining parameters (internal parameters and distortion parameters) may be the same.
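The rotation-only variant of step S125 can be sketched as follows; the Euler-angle sweep, the half range, and the function name are illustrative assumptions rather than the application's prescribed method (with 20 offsets per axis such a sweep would yield the 8000 simulation values mentioned above):

```python
import numpy as np
from itertools import product

def rotation_candidates(R0, half_range_deg=1.0, step_deg=0.2):
    """Generate rotation-matrix simulation values around the initial value R0
    by sweeping yaw/pitch/roll offsets in both directions with a fixed step."""
    offsets = np.deg2rad(np.arange(-half_range_deg,
                                   half_range_deg + 1e-9, step_deg))

    def rot(axis, a):
        c, s = np.cos(a), np.sin(a)
        if axis == 'x':
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
        if axis == 'y':
            return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    # Each candidate perturbs R0 by a small roll/pitch/yaw combination.
    for ax, ay, az in product(offsets, repeat=3):
        yield rot('z', az) @ rot('y', ay) @ rot('x', ax) @ R0
```

Each candidate rotation would then be combined with the unchanged intrinsics, distortion parameters, and translation matrix to form one set of camera parameters.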
  • S126 According to the multiple sets of camera parameters and the positioning information of the camera, use the high-precision map to generate multiple road feature projection images.
  • specifically, the position of the camera is queried in the high-precision map according to the positioning information; then, based on each set of the camera's internal parameters, distortion parameters, and external parameters, the road features in the high-precision map are projected to form a road feature projection image (a binary image) that conforms to the geometric model of camera imaging.
  • multiple road feature projection images are shown in FIG. 4 .
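Step S126 can be sketched as a simple point rasteriser. In this sketch the map's road-feature points are assumed to be already transformed into the camera frame by one set of extrinsic simulation values, lens distortion is omitted, and the function name is illustrative:

```python
import numpy as np

def render_projection(map_points_cam, K, shape):
    """Rasterise 3D road-feature points (already in the camera frame) into a
    binary road feature projection image for one set of camera parameters."""
    pts = map_points_cam[map_points_cam[:, 2] > 0]   # keep points in front of the camera
    uv = pts @ K.T
    uv = (uv[:, :2] / uv[:, 2:3]).round().astype(int)
    h, w = shape
    img = np.zeros(shape, dtype=np.uint8)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    img[uv[ok, 1], uv[ok, 0]] = 1                    # mark projected feature pixels
    return img
```

Repeating this once per candidate parameter set yields the plurality of projection images to be matched in step S127.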
  • S127 Obtain a matching road feature projection image with the highest matching degree with the captured image from the plurality of road feature projection images.
  • each of the plurality of road feature projection images is matched against the actual captured image of the calibration reference object.
  • one method of matching the captured image with a road feature projection image is to calculate the average pixel deviation of the two images.
  • the average pixel deviation between the captured image and each of the plurality of road feature projection images is calculated separately.
  • the road feature projection image with the smallest average pixel deviation is taken as the matching road feature projection image.
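One plausible reading of the "average pixel deviation" for equally sized binary images is the mean absolute per-pixel difference; this interpretation and the function names are assumptions for illustration, not a definition taken from the application:

```python
import numpy as np

def average_pixel_deviation(img_a, img_b):
    """Mean absolute per-pixel difference between two same-sized binary images."""
    return np.mean(np.abs(img_a.astype(float) - img_b.astype(float)))

def best_match(captured_bin, projections):
    """Step S127: index of the road feature projection image with the smallest
    average pixel deviation from the captured binary image."""
    devs = [average_pixel_deviation(captured_bin, p) for p in projections]
    return int(np.argmin(devs))
```

The set of camera parameters that produced the winning projection image is then taken as the calibration result in step S128.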
  • step S127 includes: processing the captured image of the camera acquired in step S110 into an image of a first form, where the first form is the form of road feature projection image supported by the high-precision map; and obtaining, from the plurality of road feature projection images, the matching road feature projection image with the highest matching degree with the first-form image of the captured image.
  • the road feature projection images usually supported by high-precision maps are binary images.
  • the image of the first form mentioned in this embodiment is a binary image.
  • step S127 includes: acquiring a binary image of the captured image of the camera; and acquiring, from the plurality of road feature projection images, the matching road feature projection image with the highest matching degree with the binary image of the captured image.
  • the method for obtaining the binary image of the captured image includes two steps: first, a neural network (NN) inference model is used to perform pixel-level semantic segmentation of the captured image, so as to segment out the road features (lane lines, signs, and the like); second, the contours of the road features are extracted from the segmented image to generate the binary image.
  • the captured image and the binary image extracted from the captured image are shown as the left and right images in FIG. 5 , respectively.
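The two-step binarisation above can be sketched as follows. A fixed intensity threshold stands in for the NN inference model (a deliberate simplification for illustration), and the contour extraction keeps foreground pixels that touch the background:

```python
import numpy as np

def segment_road_features(image):
    """Stand-in for the NN inference model: a fixed intensity threshold plays
    the role of pixel-level semantic segmentation here."""
    return (image > 128).astype(np.uint8)

def extract_contours(mask):
    """Keep foreground pixels with at least one background 4-neighbour,
    yielding the binary contour image of the segmented road features."""
    m = mask.astype(bool)
    padded = np.pad(m, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return (m & ~interior).astype(np.uint8)
```

A production pipeline would replace the threshold with the segmentation network's per-class masks and could use a library contour tracer instead of the neighbour test.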
  • in step S126, the actual captured image of the camera can also be processed into an image of another form before matching.
  • S128 Acquire the external parameters of the camera according to the set of camera parameters corresponding to the matching road feature projection image.
  • if each set of camera parameters includes internal parameters, distortion parameters, a translation matrix, and a rotation matrix, then the translation matrix and rotation matrix of the camera, that is, the camera external parameters to be calibrated, can be obtained from the set of camera parameters corresponding to the matching road feature projection image.
  • alternatively, only the rotation matrix of the camera may be obtained from the set of camera parameters corresponding to the matching road feature projection image, which likewise yields the camera external parameters to be calibrated.
  • in this implementation, the external parameters of the camera are obtained by using the road feature projection function of the high-precision map rather than by measuring the three-dimensional coordinates of the calibration reference object relative to the camera, thereby preventing the calibration accuracy of the external parameters of the camera from being limited by the measurement accuracy.
  • in this way, high-precision calibration of the camera's external parameters can be achieved.
  • ordinarily, a high-precision map is used to generate road feature projection images according to already calibrated camera parameters, so as to guide the unmanned vehicle to drive safely.
  • in this application, the road feature projection function of the high-precision map is applied in reverse, which ingeniously improves the calibration accuracy of the external parameters of the camera.
  • the embodiments of the present application may be applicable to dynamic camera calibration, and may also be applicable to camera static calibration.
  • the camera in the embodiment of the present application is a vehicle-mounted camera, and the vehicle on which the camera is carried is in a moving state.
  • in summary, by using a high-precision map to calibrate the external parameters of the camera, the camera calibration solution provided by the present application can improve the calibration accuracy of the external parameters.
  • the calibration reference object can be any type of road feature and is not strictly limited to lane lines (in existing camera self-calibration methods, the calibration reference object is limited to lane lines).
  • the calibration reference object in the camera calibration solution provided by this application may be any one of the following road features: lane lines, signage, pole-like objects, road markings, and traffic lights.
  • the signage is, for example, a traffic sign or a pole-mounted sign
  • the pole-like object is, for example, a street light pole or the like.
  • the camera calibration solution provided in this application can be applied to both dynamic camera calibration and static camera calibration.
  • in the dynamic calibration scenario, the camera calibration solution provided by the present application is not limited to a specific road and does not require the vehicle to drive in the center. Therefore, the camera calibration solution provided in this application has good versatility.
  • the camera calibration solution provided in this application can be applied to the camera parameter calibration stage of the production line of autonomous driving vehicles; it is not necessarily limited to a fixed calibration workshop and can calibrate all cameras at the same time, saving calibration time.
  • the camera calibration solution provided in this application can also be applied to scenarios in which the external parameters change during use after the vehicle leaves the factory and real-time online correction or periodic calibration is required.
  • the camera calibration solution provided by the present application can greatly reduce the dependence on a calibration workshop and realize high-precision calibration of the external parameters of the vehicle-mounted camera anytime and anywhere (that is, online and in real time).
  • the camera calibration solution provided in this application can also be applied to the calibration field of other sensors (eg, lidar).
  • the location information of the calibration reference object can also be obtained by using a high-precision map.
  • FIG. 6 shows an apparatus 600 for calibrating external parameters of a camera provided by an embodiment of the present application.
  • the apparatus 600 includes an acquisition unit 610 and a processing unit 620 .
  • the acquiring unit 610 is configured to acquire a photographed image of the camera, where the photographed image is an image photographed by the camera with the calibration reference object as the photographing object.
  • the processing unit 620 is configured to acquire the external parameters of the camera according to the captured image and the high-precision map, and the high-precision map includes a calibration reference object.
  • the processing unit 620 is configured to acquire the external parameters of the camera through the following operations: acquiring the two-dimensional coordinates of the calibration reference object on the captured image; determining the position of the camera on the high-precision map according to the positioning information of the camera, and obtaining the position of the calibration reference object on the high-precision map based on the position of the camera on the high-precision map; obtaining the three-dimensional coordinates of the calibration reference object relative to the camera based on the position of the calibration reference object on the high-precision map; and calculating the external parameters of the camera according to the two-dimensional coordinates and the three-dimensional coordinates.
  • the processing unit 620 obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: obtaining the absolute position of the calibration reference object according to the position of the calibration reference object on the high-precision map; and calculating the three-dimensional coordinates of the calibration reference object relative to the camera according to the absolute position of the calibration reference object and the absolute position of the camera.
  • alternatively, the high-precision map has the function of generating the relative position of two points located on the map; the processing unit 620 obtains the three-dimensional coordinates of the calibration reference object relative to the camera through the following operations: generating the three-dimensional coordinates of the calibration reference object relative to the camera using the high-precision map, based on the position of the camera on the high-precision map and the position of the calibration reference object on the high-precision map.
  • the calibration reference object is a road feature
  • the processing unit 620 is configured to obtain the external parameters of the camera through the following operations: acquiring multiple sets of camera parameters, where each set of camera parameters includes internal and external parameters; generating multiple road feature projection images using the high-precision map according to the multiple sets of camera parameters and the positioning information of the camera; obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest matching degree with the captured image; and acquiring the external parameters of the camera according to the set of camera parameters corresponding to the matching road feature projection image.
  • the processing unit 620 is configured to obtain the multiple sets of camera parameters through the following operations: taking the initial value of the rotation matrix of the camera as a baseline, generating multiple sets of rotation matrix simulation values using a preset step size; and generating multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values.
  • alternatively, the processing unit 620 is configured to obtain the multiple sets of camera parameters through the following operations: taking the rotation matrix and the translation matrix of the camera as baselines respectively, and using corresponding step sizes, generating multiple sets of rotation matrix simulation values and multiple sets of translation matrix simulation values; and generating multiple sets of camera parameters according to the multiple sets of rotation matrix simulation values and the multiple sets of translation matrix simulation values.
  • the processing unit 620 is configured to obtain the matching road feature projection image through the following operations: obtaining a binary image of the captured image; and obtaining, from the multiple road feature projection images, the matching road feature projection image with the highest matching degree with the binary image of the captured image.
  • the camera is a vehicle-mounted camera, and the vehicle on which the camera is carried may be in a stationary state or in a moving state.
  • an embodiment of the present application further provides an apparatus 700 for calibrating external parameters of a camera.
  • the apparatus 700 includes a processor 710 coupled with a memory 720; the memory 720 is configured to store computer programs or instructions, and the processor 710 is configured to execute the computer programs or instructions stored in the memory 720, so that the method 100 in the above method embodiments is performed.
  • the apparatus 700 may further include a memory 720 .
  • the apparatus 700 may further include a data interface 730, and the data interface 730 is used for data transmission with the outside world.
  • Embodiments of the present application further provide a computer-readable medium, where the computer-readable medium stores program code for execution by a device, and the program code includes instructions for executing the methods of the foregoing embodiments.
  • Embodiments of the present application further provide a computer program product containing instructions, which, when run on a computer, causes the computer to execute the methods of the foregoing embodiments.
  • An embodiment of the present application further provides a chip, the chip includes a processor and a data interface, and the processor reads an instruction stored in a memory through the data interface, and executes the method of the above embodiment.
  • the chip may further include a memory, the memory stores instructions, the processor is configured to execute the instructions stored in the memory, and when the instructions are executed, the processor is configured to execute the methods in the foregoing embodiments.
  • the disclosed systems, devices and methods may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division manners.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present application relates to the field of artificial intelligence, and in particular to the field of autonomous driving, and provides a method and apparatus for calibrating extrinsic parameters of a camera. The method includes: obtaining a captured image of a camera, the captured image being an image captured by the camera with a calibration reference object as the photographed object; and obtaining extrinsic parameters of the camera according to the captured image and a high-precision map, the high-precision map including the calibration reference object. By obtaining the extrinsic parameters of the camera according to the camera's captured image of the calibration reference object and a high-precision map, the calibration accuracy of the camera's extrinsic parameters is improved. The present application can be applied to intelligent vehicles, connected vehicles, new energy vehicles, or autonomous vehicles.
PCT/CN2021/114890 2020-09-04 2021-08-27 Procédé et appareil d'étalonnage de paramètres extrinsèques d'une caméra WO2022048493A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21863558.9A EP4198901A4 (fr) 2020-09-04 2021-08-27 Procédé et appareil d'étalonnage de paramètres extrinsèques d'une caméra
US18/177,930 US20230206500A1 (en) 2020-09-04 2023-03-03 Method and apparatus for calibrating extrinsic parameter of a camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010919175.8 2020-09-04
CN202010919175.8A CN114140533A (zh) 2020-09-04 2020-09-04 摄像头外参标定的方法与装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/177,930 Continuation US20230206500A1 (en) 2020-09-04 2023-03-03 Method and apparatus for calibrating extrinsic parameter of a camera

Publications (1)

Publication Number Publication Date
WO2022048493A1 true WO2022048493A1 (fr) 2022-03-10

Family

ID=80438664

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114890 WO2022048493A1 (fr) 2020-09-04 2021-08-27 Procédé et appareil d'étalonnage de paramètres extrinsèques d'une caméra

Country Status (4)

Country Link
US (1) US20230206500A1 (fr)
EP (1) EP4198901A4 (fr)
CN (1) CN114140533A (fr)
WO (1) WO2022048493A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116047440A (zh) * 2023-03-29 2023-05-02 陕西欧卡电子智能科技有限公司 一种端到端的毫米波雷达与摄像头外参标定方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063490A (zh) * 2022-06-30 2022-09-16 阿波罗智能技术(北京)有限公司 车辆相机外参标定方法、装置、电子设备及存储介质
CN116958271A (zh) * 2023-06-06 2023-10-27 阿里巴巴(中国)有限公司 标定参数确定方法以及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101617943A (zh) * 2008-07-04 2010-01-06 株式会社东芝 X射线摄影装置、x射线摄影方法以及图像处理装置
CN109214980A (zh) * 2017-07-04 2019-01-15 百度在线网络技术(北京)有限公司 一种三维姿态估计方法、装置、设备和计算机存储介质
CN110148164A (zh) * 2019-05-29 2019-08-20 北京百度网讯科技有限公司 转换矩阵生成方法及装置、服务器及计算机可读介质
WO2019221349A1 (fr) * 2018-05-17 2019-11-21 에스케이텔레콤 주식회사 Dispositif et procédé d'étalonnage de caméra de véhicule
CN110728720A (zh) * 2019-10-21 2020-01-24 北京百度网讯科技有限公司 用于相机标定的方法、装置、设备和存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215083B (zh) * 2017-07-06 2021-08-31 华为技术有限公司 车载传感器的外部参数标定的方法和设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101617943A (zh) * 2008-07-04 2010-01-06 株式会社东芝 X射线摄影装置、x射线摄影方法以及图像处理装置
CN109214980A (zh) * 2017-07-04 2019-01-15 百度在线网络技术(北京)有限公司 一种三维姿态估计方法、装置、设备和计算机存储介质
WO2019221349A1 (fr) * 2018-05-17 2019-11-21 에스케이텔레콤 주식회사 Dispositif et procédé d'étalonnage de caméra de véhicule
CN110148164A (zh) * 2019-05-29 2019-08-20 北京百度网讯科技有限公司 转换矩阵生成方法及装置、服务器及计算机可读介质
CN110728720A (zh) * 2019-10-21 2020-01-24 北京百度网讯科技有限公司 用于相机标定的方法、装置、设备和存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4198901A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116047440A (zh) * 2023-03-29 2023-05-02 Shaanxi Ouka Electronic Intelligent Technology Co., Ltd. End-to-end extrinsic parameter calibration method for millimeter-wave radar and camera
CN116047440B (zh) * 2023-03-29 2023-06-09 Shaanxi Ouka Electronic Intelligent Technology Co., Ltd. End-to-end extrinsic parameter calibration method for millimeter-wave radar and camera

Also Published As

Publication number Publication date
EP4198901A1 (fr) 2023-06-21
EP4198901A4 (fr) 2024-02-21
US20230206500A1 (en) 2023-06-29
CN114140533A (zh) 2022-03-04

Similar Documents

Publication Publication Date Title
CN109461211B (zh) Semantic vector map construction method, apparatus, and electronic device based on visual point clouds
WO2022048493A1 (fr) Method and apparatus for calibrating extrinsic parameters of a camera
CN109993793B (zh) Visual localization method and apparatus
CN113657224B (zh) Method, apparatus, and device for determining object states in vehicle-road cooperation
CN110176032B (zh) Three-dimensional reconstruction method and apparatus
WO2018120040A1 (fr) Obstacle detection method and device
US11227395B2 (en) Method and apparatus for determining motion vector field, device, storage medium and vehicle
CN112288825B (zh) Camera calibration method and apparatus, electronic device, storage medium, and roadside device
CN108519102B (zh) Binocular visual odometry method based on secondary projection
CN112967344B (zh) Camera extrinsic parameter calibration method, device, storage medium, and program product
CN110969064B (zh) Monocular-vision-based image detection method, apparatus, and storage device
CN113989450A (zh) Image processing method and apparatus, electronic device, and medium
CN112232275B (zh) Obstacle detection method, system, device, and storage medium based on binocular recognition
CN112700486B (zh) Method and apparatus for estimating the depth of road lane lines in an image
CN110766761B (zh) Method, apparatus, device, and storage medium for camera calibration
US20240062415A1 (en) Terminal device localization method and related device therefor
CN110766760A (zh) Method, apparatus, device, and storage medium for camera calibration
CN115410167A (zh) Object detection and semantic segmentation method, apparatus, device, and storage medium
KR20230003803A (ko) Automatic calibration method through vector matching of the lidar coordinate system and the camera coordinate system
CN114413958A (zh) Monocular vision distance and speed measurement method for an unmanned logistics vehicle
CN111950428A (zh) Target obstacle recognition method, apparatus, and vehicle
CN110348351B (zh) Image semantic segmentation method, terminal, and readable storage medium
CN114648639B (zh) Target vehicle detection method, system, and apparatus
WO2023283929A1 (fr) Method and apparatus for calibrating external parameters of a binocular camera
CN116343165A (zh) 3D object detection system and method, terminal device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21863558

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021863558

Country of ref document: EP

Effective date: 20230315

NENP Non-entry into the national phase

Ref country code: DE