WO2019170066A1 - Method, apparatus, device and system for determining extrinsic parameters of a vehicle-mounted camera - Google Patents

Method, apparatus, device and system for determining extrinsic parameters of a vehicle-mounted camera (一种车载相机外参确定方法、装置、设备及系统)

Info

Publication number
WO2019170066A1
Authority
WO
WIPO (PCT)
Prior art keywords
marker
camera
coordinate system
positional relationship
vehicle
Prior art date
Application number
PCT/CN2019/076915
Other languages
English (en)
French (fr)
Inventor
杨硕
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 filed Critical 杭州海康威视数字技术股份有限公司
Publication of WO2019170066A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G06T2207/30208 Marker matrix
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • The present application relates to the field of assisted driving technology, and in particular to a method, apparatus, device, and system for determining the extrinsic parameters of an on-board camera.
  • An assisted driving system usually includes multiple on-board cameras mounted at different positions on the vehicle body to capture images in different directions. Using the parameters of these cameras, the images from the different directions are stitched into a wide-angle image that is displayed to the driver and can be used to assist driving.
  • The parameters of an on-board camera include its extrinsic parameters, i.e., the positional relationship between the on-board camera and the vehicle body.
  • A common solution for obtaining the extrinsic parameters of on-board cameras is as follows: the vehicle is parked at a designated position, a marker is placed at another designated position near the vehicle, and the positional relationship between the parking position and the marker position is determined in advance; each on-board camera on the vehicle is then aimed at the marker to capture an image, and the extrinsic parameters of that camera are calculated from the position of the marker in the captured image and the predetermined positional relationship.
  • In this solution, the parking position of the vehicle and the position of the marker are both fixed. If the vehicle's actual parking position deviates even slightly from the designated position, the accuracy of the obtained extrinsic parameters is low.
  • The purpose of the embodiments of the present application is to provide a method, apparatus, device, and system for determining the extrinsic parameters of an on-board camera, so as to improve the accuracy of the obtained extrinsic parameters.
  • To achieve the above purpose, an embodiment of the present application provides a method for determining the extrinsic parameters of on-board cameras, including:
  • acquiring a calibration image corresponding to each on-board camera, wherein the calibration image corresponding to each on-board camera is obtained from an image captured by that camera, and the calibration images corresponding to adjacent on-board cameras contain the same marker;
  • for each on-board camera, identifying the position of the marker in the calibration image corresponding to that camera, and determining the positional relationship between the camera and the identified marker according to that position;
  • converting the multiple on-board cameras into the same coordinate system, taking one marker as the reference, according to the determined positional relationship between each on-board camera and the marker;
  • determining the positional relationship between the multiple on-board cameras and the vehicle body in the same coordinate system.
  • To achieve the above purpose, an embodiment of the present application further provides an apparatus for determining the extrinsic parameters of on-board cameras, including:
  • an acquisition module configured to acquire a calibration image corresponding to each on-board camera, wherein the calibration image corresponding to each on-board camera is obtained from an image captured by that camera, and the calibration images corresponding to adjacent on-board cameras contain the same marker;
  • an identification module configured to identify, for each on-board camera, the position of the marker in the calibration image corresponding to that camera;
  • a first determining module configured to determine, according to the position, the positional relationship between the camera and the identified marker;
  • a conversion module configured to convert the multiple on-board cameras into the same coordinate system, taking one marker as the reference, according to the determined positional relationship between each on-board camera and the marker;
  • a second determining module configured to determine, in the same coordinate system, the positional relationship between the multiple on-board cameras and the vehicle body.
  • To achieve the above purpose, an embodiment of the present application further provides an electronic device, including a processor and a memory;
  • the memory is configured to store a computer program;
  • the processor is configured to implement any of the above methods for determining the extrinsic parameters of on-board cameras when executing the program stored in the memory.
  • An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the above methods for determining the extrinsic parameters of on-board cameras.
  • An embodiment of the present application further provides executable program code which, when run, performs any of the above methods for determining the extrinsic parameters of on-board cameras.
  • To achieve the above purpose, an embodiment of the present application further provides an assisted driving system, including a processing device and multiple on-board cameras;
  • each on-board camera is configured to send the images it captures to the processing device;
  • the processing device is configured to obtain, from the images captured by the multiple on-board cameras, a calibration image corresponding to each on-board camera, wherein the calibration images corresponding to adjacent on-board cameras contain the same marker; to identify, for each on-board camera, the position of the marker in the calibration image corresponding to that camera and determine, according to the position, the positional relationship between the camera and the identified marker; to convert the multiple on-board cameras into the same coordinate system, taking one marker as the reference, according to the determined positional relationship between each on-board camera and the marker; and to determine, in the same coordinate system, the positional relationship between the multiple on-board cameras and the vehicle body.
  • In the embodiments of the present application, the calibration images corresponding to adjacent on-board cameras contain the same marker. For each on-board camera, the position of the marker is identified in the calibration image corresponding to that camera, and the positional relationship between the camera and the identified marker is determined according to that position. Because the determined positional relationships include the relationships between the same marker and different on-board cameras, the multiple on-board cameras can be converted into the same coordinate system, taking one marker as the reference, and the positional relationship between the multiple on-board cameras and the vehicle body can be determined in that coordinate system.
  • FIG. 1 is a schematic flowchart of a method for determining the extrinsic parameters of on-board cameras according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of imaging provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of an application scenario provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a vehicle body position according to an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of an apparatus for determining the extrinsic parameters of on-board cameras according to an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of an assisted driving system according to an embodiment of the present application.
  • To solve the above technical problem, embodiments of the present application provide a method, apparatus, device, and system for determining the extrinsic parameters of on-board cameras.
  • The method can be applied to a processing device communicatively connected to multiple on-board cameras, or to any one of the on-board cameras that is communicatively connected to the other on-board cameras.
  • In the embodiments of the present application, multiple markers may be placed in the vicinity of the vehicle such that the images captured by adjacent on-board cameras contain the same marker. In this way, the multiple on-board cameras can be converted into the same coordinate system, taking one marker as the reference.
  • In that coordinate system, the positional relationship between the multiple on-board cameras and the vehicle body is determined. It can be seen that this solution only requires adjacent on-board cameras to capture the same marker; it does not require fixing the parking position of the vehicle or the positions of the markers, which improves the accuracy of the obtained extrinsic parameters.
  • FIG. 1 is a schematic flowchart of a method for determining the extrinsic parameters of on-board cameras according to an embodiment of the present application, including:
  • S101: Acquire a calibration image corresponding to each on-board camera.
  • The calibration image corresponding to each on-board camera is obtained from an image captured by that camera, and the calibration images corresponding to adjacent on-board cameras contain the same marker.
  • As one implementation, the original image captured by an on-board camera can be used directly as the calibration image; as another implementation, the original images captured by the multiple on-board cameras can be acquired and distortion-corrected to obtain the corresponding calibration images.
  • If the on-board camera is a fisheye camera, the following equations can be used to correct the distortion of the original image (i.e., the fisheye image) and obtain the calibration image:
  • a = X_c/Z_c;  b = Y_c/Z_c;  r² = a² + b²;  θ = arctan(r);
  • θ_d = θ(1 + k1·θ² + k2·θ⁴ + k3·θ⁶ + k4·θ⁸);
  • x′ = (θ_d/r)·a;  y′ = (θ_d/r)·b;
  • u = f_x(x′ + α·y′) + c_x;  v = f_y·y′ + c_y;
  • where X_c, Y_c, Z_c denote coordinates in the camera coordinate system;
  • X, Y, Z denote coordinates in the world coordinate system;
  • k1–k4 denote the distortion coefficients among the on-board camera's intrinsic parameters;
  • u, v denote the imaging coordinates in the image coordinate system;
  • a, b denote coordinates in the corrected image;
  • x′, y′ denote the distorted coordinates;
  • R1 denotes the rotation that converts coordinates in the world coordinate system to coordinates in the camera coordinate system;
  • T1 denotes the translation that converts coordinates in the world coordinate system to coordinates in the camera coordinate system.
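  • For reference, the fisheye model above matches the equidistant model implemented by OpenCV's cv2.fisheye module, so the distortion correction of S101 can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the intrinsic matrix K, the distortion vector D = (k1, k2, k3, k4), and the file name are placeholder values that would come from a prior intrinsic calibration of the on-board camera.

```python
import cv2
import numpy as np

# Placeholder intrinsics (f_x, f_y, c_x, c_y) and fisheye coefficients k1..k4.
K = np.array([[420.0,   0.0, 640.0],
              [  0.0, 420.0, 360.0],
              [  0.0,   0.0,   1.0]])
D = np.array([-0.05, 0.01, -0.002, 0.0003])

fisheye_img = cv2.imread("front_camera_raw.jpg")   # original (distorted) fisheye image
h, w = fisheye_img.shape[:2]

# Build undistortion maps for the equidistant fisheye model and remap the image;
# the remapped result serves as the calibration image of S101.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=0.0)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
calibration_img = cv2.remap(fisheye_img, map1, map2, interpolation=cv2.INTER_LINEAR)
```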
  • If the on-board camera is not a fisheye camera and conforms to the pinhole model, the following equations can be used to correct the distortion of the original image and obtain the calibration image:
  • x′ = x/z;  y′ = y/z;  r² = x′² + y′²;
  • where x″ and y″ are obtained from x′ and y′ by applying the radial and tangential distortion coefficients;
  • u = f_x·x″ + c_x;  v = f_y·y″ + c_y;
  • where x, y, z denote coordinates in the camera coordinate system;
  • X, Y, Z denote coordinates in the world coordinate system;
  • k1–k6 denote the distortion coefficients among the on-board camera's intrinsic parameters;
  • p1 and p2 denote the tangential distortion coefficients among the intrinsic parameters;
  • c_x and c_y are the principal point coordinates among the camera's intrinsic parameters, the principal point being the point where the camera's principal optical axis intersects the image plane;
  • u, v denote the imaging coordinates in the image coordinate system;
  • x″, y″ denote the distorted coordinates;
  • R1 denotes the rotation that converts coordinates in the world coordinate system to coordinates in the camera coordinate system;
  • T1 denotes the translation that converts coordinates in the world coordinate system to coordinates in the camera coordinate system.
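  • The exact expressions for x″ and y″ are not spelled out above; the coefficient set (radial k1–k6, tangential p1 and p2) matches OpenCV's standard pinhole distortion model, so, assuming that model, the correction step can be sketched as follows (K, dist, and the file name are placeholder values).

```python
import cv2
import numpy as np

# Placeholder pinhole intrinsics and distortion coefficients.
K = np.array([[900.0,   0.0, 640.0],
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])
# OpenCV ordering: (k1, k2, p1, p2, k3, k4, k5, k6) -- radial k1..k6, tangential p1, p2.
dist = np.array([-0.30, 0.10, 0.001, -0.0005, 0.02, -0.01, 0.005, -0.001])

raw_img = cv2.imread("left_camera_raw.jpg")
h, w = raw_img.shape[:2]

# Undistort the original image; the result is used as the calibration image of S101.
new_K, _roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=1)
calibration_img = cv2.undistort(raw_img, K, dist, None, new_K)
```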
  • S102: For each on-board camera, identify the position of the marker in the calibration image corresponding to that camera, and determine the positional relationship between the camera and the identified marker according to that position.
  • For example, the marker can be an object such as a calibration cloth or a checkerboard. If the marker is a checkerboard, it can be identified in the image by its alternating black and white squares, which yields the position of the marker in the image coordinate system.
  • Taking one on-board camera as an example, S102 may include: constructing a covariance matrix of the marker in the world coordinate system in which the marker is located; obtaining an initial rotation matrix R and an initial translation vector T of the marker from the covariance matrix; and iteratively optimizing the initial rotation matrix R and the initial translation vector T using the reprojection error between the coordinates of the marker in the calibration image and its coordinates in the world coordinate system in which the marker is located, to obtain the positional relationship between the on-board camera and the marker.
  • Specifically, after the covariance matrix of the marker is constructed in the world coordinate system in which the marker is located, the eigenvector corresponding to the smallest eigenvalue of the covariance matrix and the coordinate mean of the covariance matrix may be computed; the eigenvector and the coordinate mean are transformed to obtain the initial rotation matrix R of the marker, and the initial translation vector T of the marker is then calculated from the initial rotation matrix R and the coordinate mean. Using the reprojection error between the coordinates of the marker in the calibration image and its coordinates in the world coordinate system in which the marker is located, the initial rotation matrix R and the initial translation vector T are iteratively optimized to obtain the mapping relationship between the marker's coordinates in the calibration image and its coordinates in that world coordinate system.
  • After the initial R and T are obtained, a Jacobian matrix can be constructed from the partial derivative data of R and T, and R and T are then optimized over multiple iterations under the principle of minimizing the reprojection error between the marker's coordinates in the calibration image and its coordinates in the world coordinate system in which the marker is located, yielding the optimized R and T. The specific number of iterations can be set according to the actual situation. Once the optimized R and T are obtained, the mapping relationship between the marker's coordinates in the calibration image and its coordinates in the world coordinate system in which the marker is located is obtained.
  • From the intrinsic parameters of the on-board camera, the conversion relationship between the image coordinate system and the camera coordinate system can be obtained. Combining it with the above mapping relationship between the marker's coordinates in the calibration image and its coordinates in the world coordinate system in which the marker is located yields the conversion relationship between the camera coordinate system and the world coordinate system in which the marker is located, which is the positional relationship between the on-board camera and the marker.
  • As shown in FIG. 2, suppose a camera is mounted on the rear side of the vehicle body, the camera coordinate system is the XcYcZc coordinate system, and the world coordinate system in which the marker is located is the XYZ coordinate system. The camera is aimed at the marker to capture an image; after distortion correction of the captured image, a calibration image is obtained whose image coordinate system is the uv coordinate system, with point c being the center point of the calibration image.
  • From the intrinsic parameters of the on-board camera, the conversion relationship between the image coordinate system (uv) and the camera coordinate system (XcYcZc) can be obtained. Combining it with the mapping relationship obtained above between the image coordinate system (uv) and the world coordinate system (XYZ) in which the marker is located yields the conversion relationship between the camera coordinate system (XcYcZc) and the world coordinate system (XYZ) in which the marker is located, i.e., the positional relationship between the on-board camera and the marker.
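  • As a concrete reference for S102, the sketch below estimates the camera-to-marker pose for a checkerboard marker from a calibration image. It stands in for, rather than reproduces, the covariance-matrix initialization and Jacobian-based refinement described above: OpenCV's planar pose solver provides the initial R and T, and cv2.solvePnPRefineLM performs the iterative reprojection-error minimization. The board geometry (7x5 inner corners, 0.10 m squares) is an assumption, and calibration_img and new_K are reused from the undistortion sketch above.

```python
import cv2
import numpy as np

pattern_size = (7, 5)   # inner corners of the assumed checkerboard marker
square_size = 0.10      # assumed side length of one square, in metres

# 3D corner coordinates in the marker's own world coordinate system (the Z = 0 plane).
obj_pts = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
obj_pts *= square_size

gray = cv2.cvtColor(calibration_img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, pattern_size)
if not found:
    raise RuntimeError("marker not visible in this calibration image")
corners = cv2.cornerSubPix(
    gray, corners, (11, 11), (-1, -1),
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6))

# Initial planar pose, then iterative minimisation of the reprojection error.
dist0 = np.zeros(5)   # zero distortion: the calibration image is already corrected
ok, rvec, tvec = cv2.solvePnP(obj_pts, corners, new_K, dist0, flags=cv2.SOLVEPNP_IPPE)
rvec, tvec = cv2.solvePnPRefineLM(obj_pts, corners, new_K, dist0, rvec, tvec)

R, _ = cv2.Rodrigues(rvec)   # rotation: marker world coordinates -> camera coordinates
T = tvec                     # translation: marker world coordinates -> camera coordinates
```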
  • S103: According to the determined positional relationship between each on-board camera and the markers, convert the multiple on-board cameras into the same coordinate system, taking one marker as the reference.
  • Since the positional relationships determined in S102 include the relationships between the same marker and different on-board cameras, the multiple on-board cameras can be converted into the same coordinate system, taking one marker as the reference.
  • As one implementation, S101 may include: acquiring a first calibration image corresponding to a first camera, a second calibration image corresponding to a second camera, and a third calibration image corresponding to a third camera, wherein the first calibration image contains a first marker, the second calibration image contains the first marker and a second marker, and the third calibration image contains the second marker.
  • In this case, S102 determines the positional relationship between the first camera and the first marker, between the second camera and the first marker, between the second camera and the second marker, and between the third camera and the second marker.
  • S103 then includes: converting the world coordinate system in which the second marker is located into the world coordinate system in which the first marker is located, according to the positional relationship between the second camera and the first marker and the positional relationship between the second camera and the second marker, so as to obtain the coordinates of each on-board camera in the world coordinate system in which the first marker is located.
  • This implementation uses three cameras as an example to describe the specific process of converting the cameras into the same coordinate system. If the number of on-board cameras is greater than three, the process of converting them into the same coordinate system is similar and can follow this implementation. For example, with four on-board cameras, this implementation can first be used to convert three of them into the same coordinate system; the remaining camera is adjacent to at least one of those three cameras, and the calibration images corresponding to the adjacent cameras contain the same marker, so the remaining camera can also be converted into that coordinate system based on the positional relationships between the adjacent cameras and that shared marker. The cases of five or more on-board cameras are similar and are not enumerated one by one.
  • Suppose that, according to the positional relationship between the second camera and the first marker, the coordinates of the second camera in the world coordinate system in which the first marker is located are P1(x1, y1, z1), and that, according to the positional relationship between the second camera and the second marker, the coordinates of the second camera in the world coordinate system in which the second marker is located are P2(x2, y2, z2). The world coordinate system in which the second marker is located can then be converted into the world coordinate system in which the first marker is located using the relationship between P1 and P2.
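  • One way to realize this conversion in practice is to chain the full rigid transforms of the shared (second) camera: if the second camera's world-to-camera transform with respect to marker 1 and with respect to marker 2 are both known from S102, their composition re-expresses marker-2 world coordinates in marker-1 world coordinates. The sketch below uses 4x4 homogeneous matrices with placeholder poses; it is an illustrative reading of the step, not the exact formula of the original filing.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t, dtype=float).ravel()
    return T

def marker_to_marker(T_cam_from_m1, T_cam_from_m2):
    """Map marker-2 world coordinates into marker-1 world coordinates.

    Both arguments are world->camera transforms (R1, T1 style) of the SAME camera,
    one with respect to each marker: X_m1 = inv(T_cam_from_m1) @ T_cam_from_m2 @ X_m2.
    """
    return np.linalg.inv(T_cam_from_m1) @ T_cam_from_m2

# Placeholder S102 results for the second camera (identity rotations for brevity).
T_c2_from_m1 = make_T(np.eye(3), [ 1.0, 0.5, 2.0])   # with respect to marker 1
T_c2_from_m2 = make_T(np.eye(3), [-1.2, 0.5, 2.0])   # with respect to marker 2
T_m1_from_m2 = marker_to_marker(T_c2_from_m1, T_c2_from_m2)

# P1: the second camera's own position in marker 1's frame (the camera centre, -R^T t).
P1 = np.linalg.inv(T_c2_from_m1)[:3, 3]

# The third camera is known only relative to marker 2; map its centre into marker 1's frame.
cam3_centre_in_m2 = np.array([3.0, 0.0, 0.0, 1.0])   # placeholder homogeneous coordinates
cam3_centre_in_m1 = T_m1_from_m2 @ cam3_centre_in_m2
```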
  • For example, suppose an on-board camera is mounted on each of the front side, rear side, left side, and right side of the vehicle body. For ease of description, the camera on the front side is called the front camera, the camera on the rear side the rear camera, the camera on the left side the left camera, and the camera on the right side the right camera; the left-front marker is denoted marker 1, the right-front marker is denoted marker 2, the left-rear marker is denoted marker 3, and the right-rear marker is denoted marker 4.
  • As shown in FIG. 3, there are four markers around the vehicle body: the front camera can capture images of marker 1 and marker 2, the rear camera captures images of marker 3 and marker 4, the left camera can capture images of marker 1 and marker 3, and the right camera can capture images of marker 2 and marker 4.
  • A front calibration image corresponding to the front camera, a left calibration image corresponding to the left camera, a right calibration image corresponding to the right camera, and a rear calibration image corresponding to the rear camera are acquired, where the front calibration image contains marker 1 and marker 2, the rear calibration image contains marker 3 and marker 4, the left calibration image contains marker 1 and marker 3, and the right calibration image contains marker 2 and marker 4.
  • The positional relationships between the front camera and markers 1 and 2, between the rear camera and markers 3 and 4, between the left camera and markers 1 and 3, and between the right camera and markers 2 and 4 are determined.
  • According to the positional relationship between the front camera and marker 1, the coordinates of the front camera in the world coordinate system in which marker 1 is located are determined and denoted A; according to the positional relationship between the front camera and marker 2, the coordinates of the front camera in the world coordinate system in which marker 2 is located are determined and denoted B; according to the relationship between A and B, the world coordinate system in which marker 2 is located is converted into the world coordinate system in which marker 1 is located.
  • According to the positional relationship between the left camera and marker 1, the coordinates of the left camera in the world coordinate system in which marker 1 is located are determined and denoted C; according to the positional relationship between the left camera and marker 3, the coordinates of the left camera in the world coordinate system in which marker 3 is located are determined and denoted D; according to the relationship between C and D, the world coordinate system in which marker 3 is located is converted into the world coordinate system in which marker 1 is located.
  • According to the positional relationship between the right camera and marker 2, together with the conversion relationship obtained above between the world coordinate system in which marker 1 is located and the world coordinate system in which marker 2 is located, the coordinates of the right camera can be determined in the world coordinate system in which marker 1 is located.
  • According to the positional relationship between the rear camera and marker 3, together with the conversion relationship obtained above between the world coordinate system in which marker 1 is located and the world coordinate system in which marker 3 is located, the coordinates of the rear camera can be determined in the world coordinate system in which marker 1 is located.
  • In this way, all four on-board cameras are converted into the world coordinate system in which marker 1 is located.
  • For example, for an ordinary small car, one on-board camera can be mounted on each of the front, rear, left, and right sides of the vehicle body, i.e., four on-board cameras in total. For larger vehicles such as buses, trucks, and coaches, whose bodies are longer, multiple pairs of symmetrical on-board cameras can be mounted on the left and right sides of the vehicle body, or the cameras on the left and right sides can be distributed asymmetrically; the distribution of the on-board cameras is not limited here.
  • If the number of cameras is larger, it may in some cases be necessary to convert the world coordinate system in which marker 4 is located into the world coordinate system in which marker 1 is located. Either of the following methods may be used.
  • First method: according to the positional relationship between the rear camera and marker 3, the coordinates of the rear camera in the world coordinate system in which marker 3 is located are determined and denoted E; according to the positional relationship between the rear camera and marker 4, the coordinates of the rear camera in the world coordinate system in which marker 4 is located are determined and denoted F; according to the relationship between E and F, together with the conversion relationship obtained above between the world coordinate system in which marker 3 is located and the world coordinate system in which marker 1 is located, the world coordinate system in which marker 4 is located is converted into the world coordinate system in which marker 1 is located.
  • Second method: according to the positional relationship between the right camera and marker 2, the coordinates of the right camera in the world coordinate system in which marker 2 is located are determined and denoted G; according to the positional relationship between the right camera and marker 4, the coordinates of the right camera in the world coordinate system in which marker 4 is located are determined and denoted H; according to the relationship between G and H, together with the conversion relationship obtained above between the world coordinate system in which marker 2 is located and the world coordinate system in which marker 1 is located, the world coordinate system in which marker 4 is located is converted into the world coordinate system in which marker 1 is located.
  • The conversion relationship between the world coordinate system in which marker 1 is located and the world coordinate system in which marker 4 is located can then be used to determine the coordinates of additional cameras in the world coordinate system in which marker 1 is located.
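  • The first method above is simply a composition of two such marker-to-marker transforms. A short continuation of the previous sketch, reusing its make_T and marker_to_marker helpers (all poses are placeholders):

```python
import numpy as np

# Chain marker 4 -> marker 3 -> marker 1.
# The left camera sees markers 1 and 3; the rear camera sees markers 3 and 4.
T_left_from_m1 = make_T(np.eye(3), [ 0.5, -1.0, 1.5])
T_left_from_m3 = make_T(np.eye(3), [ 0.5,  3.0, 1.5])
T_rear_from_m3 = make_T(np.eye(3), [-1.0,  0.8, 1.8])
T_rear_from_m4 = make_T(np.eye(3), [ 2.0,  0.8, 1.8])

T_m1_from_m3 = marker_to_marker(T_left_from_m1, T_left_from_m3)
T_m3_from_m4 = marker_to_marker(T_rear_from_m3, T_rear_from_m4)
T_m1_from_m4 = T_m1_from_m3 @ T_m3_from_m4     # marker-4 world -> marker-1 world

# Any camera whose pose is known relative to marker 4 can now be placed in marker 1's frame.
right_cam_in_m4 = np.array([0.0, -2.0, 0.0, 1.0])   # placeholder homogeneous coordinates
right_cam_in_m1 = T_m1_from_m4 @ right_cam_in_m4
```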
  • S104: Determine the positional relationship between the multiple on-board cameras and the vehicle body in the same coordinate system.
  • For an ordinary small car, a pair of on-board cameras (a left camera and a right camera) can be mounted symmetrically on the left and right sides of the vehicle body at half the body length, and another on-board camera (the front camera) can be mounted on the front side of the vehicle body at half the body width. In this way, the coordinates of the left and right cameras are determined in the same coordinate system; the midpoint of the left and right camera coordinates is the center coordinate of the vehicle body, and the vehicle body rotation angle can be determined from the line connecting the body center coordinate and the front camera coordinate.
  • For example, the angle between this connecting line and the north-south direction may be used as the vehicle body rotation angle, or the angle between the connecting line and the east-west direction may be used as the vehicle body rotation angle, and so on; this is not limited here.
  • Alternatively, continuing the example of FIG. 3, one on-board camera is mounted on each of the front, rear, left, and right sides of the vehicle body, and the coordinates of each on-board camera are determined in the world coordinate system in which the left-front marker is located. In this case, S104 includes:
  • in the world coordinate system in which the left-front marker is located, calculating a fifth coordinate of the vehicle body center position and the vehicle body rotation angle from a first coordinate of the front camera, a second coordinate of the rear camera, a third coordinate of the left camera, and a fourth coordinate of the right camera;
  • obtaining the positional relationship between the front camera and the vehicle body from the first coordinate, the fifth coordinate, and the vehicle body rotation angle;
  • obtaining the positional relationship between the rear camera and the vehicle body from the second coordinate, the fifth coordinate, and the vehicle body rotation angle;
  • obtaining the positional relationship between the left camera and the vehicle body from the third coordinate, the fifth coordinate, and the vehicle body rotation angle;
  • obtaining the positional relationship between the right camera and the vehicle body from the fourth coordinate, the fifth coordinate, and the vehicle body rotation angle.
  • As shown in FIG. 3, the left and right cameras are mounted symmetrically, while the front and rear cameras are mounted asymmetrically. Draw the line connecting the left camera and the right camera and denote it L1, and draw the perpendicular bisector L2 of L1. Through the front camera, drop the perpendicular D1 to L1, and through the rear camera, drop the perpendicular D2 to L1. Compute the midpoint of D1 and D2, i.e., (D1 + D2)/2, and through the midpoint of D1 and D2 draw the perpendicular L3 to L2. The intersection of L2 and L3 is the coordinate of the vehicle body center position, i.e., the fifth coordinate.
  • The angle between L2 and the horizontal or vertical line, or the angle between L3 and the horizontal or vertical line, can be used as the vehicle body rotation angle.
  • As shown in FIG. 4, suppose the coordinate origin is the position of marker 1. In this coordinate system, the coordinates of point Ov, i.e., the coordinates of the vehicle body center position (the fifth coordinate), are determined, and the angle θ between the perpendicular L2 to the line connecting the front and rear cameras and the coordinate axis H_w is determined, i.e., the vehicle body rotation angle.
  • Once the coordinates of the vehicle body center position and the vehicle body rotation angle are obtained, the positional relationship between the vehicle body and marker 1 is obtained. Since the coordinates of each on-board camera in the world coordinate system in which marker 1 is located, i.e., the positional relationship between each on-board camera and marker 1, have already been obtained above, the positional relationship between each on-board camera and the vehicle body is thereby obtained.
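  • To make S104 concrete, the sketch below computes the vehicle body centre (the fifth coordinate) and the body rotation angle in the ground plane of marker 1's world coordinate system, following one reading of the construction above: the centre lies on the perpendicular bisector L2 of the left-right camera line, offset by the mean signed distance of the front and rear cameras from that line, and the rotation angle is the angle of L2 against a coordinate axis. The 2D camera positions are placeholders obtained by dropping the height component of the S103 coordinates.

```python
import numpy as np

def body_pose_from_cameras(front, rear, left, right):
    """Vehicle body centre and rotation angle in the marker-1 ground plane (2D sketch)."""
    front, rear, left, right = (np.asarray(p, dtype=float) for p in (front, rear, left, right))

    mid_lr = (left + right) / 2.0            # midpoint of L1, the left-right camera line
    u = right - left
    u = u / np.linalg.norm(u)                # direction of L1
    n = np.array([-u[1], u[0]])              # direction of L2, the perpendicular bisector of L1

    # Signed perpendicular offsets of the front and rear cameras from L1 (D1 and D2).
    d1 = float(np.dot(front - mid_lr, n))
    d2 = float(np.dot(rear - mid_lr, n))

    centre = mid_lr + 0.5 * (d1 + d2) * n    # intersection of L2 and L3: the fifth coordinate
    theta = np.arctan2(n[0], n[1])           # angle of L2 against the vertical axis (a convention)
    return centre, theta

# Placeholder camera positions (metres) expressed in marker 1's world coordinate system.
centre, theta = body_pose_from_cameras(front=(2.0, 3.5), rear=(2.1, -1.5),
                                       left=(0.2, 1.0), right=(3.9, 1.1))
print("body centre:", centre, "rotation angle (rad):", theta)
```

  • Once the centre and the rotation angle are known, each camera's positional relationship to the vehicle body follows, for example, by translating its S103 coordinates by the centre and rotating by the negative of the body rotation angle.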
  • By applying the embodiment shown in FIG. 1 of the present application, the calibration images corresponding to adjacent on-board cameras contain the same marker. For each on-board camera, the position of the marker is identified in the calibration image corresponding to that camera, and the positional relationship between the camera and the identified marker is determined according to that position. Because the determined positional relationships include the relationships between the same marker and different on-board cameras, the multiple on-board cameras can be converted into the same coordinate system, taking one marker as the reference, and the positional relationship between the multiple on-board cameras and the vehicle body is determined in that coordinate system. It can be seen that in this solution, on the one hand, only adjacent on-board cameras need to capture the same marker, and there is no need to fix the parking position of the vehicle or the positions of the markers, which improves the accuracy of the obtained extrinsic parameters; on the other hand, the positional relationship between the multiple on-board cameras and the vehicle body is determined in a single coordinate system, so the calibration efficiency is high.
  • Corresponding to the above method embodiment, an embodiment of the present application further provides an apparatus for determining the extrinsic parameters of on-board cameras, as shown in FIG. 5, including:
  • an acquisition module 501 configured to acquire a calibration image corresponding to each on-board camera, wherein the calibration image corresponding to each on-board camera is obtained from an image captured by that camera, and the calibration images corresponding to adjacent on-board cameras contain the same marker;
  • an identification module 502 configured to identify, for each on-board camera, the position of the marker in the calibration image corresponding to that camera;
  • a first determining module 503 configured to determine, according to the position, the positional relationship between the camera and the identified marker;
  • a conversion module 504 configured to convert the multiple on-board cameras into the same coordinate system, taking one marker as the reference, according to the determined positional relationship between each on-board camera and the marker;
  • a second determining module 505 configured to determine, in the same coordinate system, the positional relationship between the multiple on-board cameras and the vehicle body.
  • As one implementation, the acquisition module 501 may be specifically configured to: acquire the original images captured by the multiple on-board cameras, and perform distortion correction on the acquired original images to obtain the calibration images.
  • As one implementation, the first determining module 503 may include a construction submodule, an obtaining submodule, and an iteration submodule (not shown in the figure), wherein
  • the construction submodule is configured to construct a covariance matrix of the marker in the world coordinate system in which the marker is located;
  • the obtaining submodule is configured to obtain an initial rotation matrix and an initial translation vector of the marker from the covariance matrix;
  • the iteration submodule is configured to iteratively optimize the initial rotation matrix and the initial translation vector using the reprojection error between the coordinates of the marker in the calibration image and its coordinates in the world coordinate system in which the marker is located, to obtain the positional relationship between the on-board camera and the marker.
  • As one implementation, the obtaining submodule may be specifically configured to: compute the eigenvector corresponding to the smallest eigenvalue of the covariance matrix and the coordinate mean of the covariance matrix; transform the eigenvector and the coordinate mean to obtain the initial rotation matrix of the marker; and calculate the initial translation vector of the marker from the initial rotation matrix and the coordinate mean;
  • the iteration submodule may be specifically configured to: iteratively optimize the initial rotation matrix and the initial translation vector using the reprojection error between the coordinates of the marker in the calibration image and its coordinates in the world coordinate system in which the marker is located, to obtain the mapping relationship between the marker's coordinates in the calibration image and its coordinates in that world coordinate system; and obtain the positional relationship between the on-board camera and the marker from the mapping relationship and the intrinsic parameters of the on-board camera.
  • As one implementation, the acquisition module 501 may be specifically configured to: acquire a first calibration image corresponding to a first camera, a second calibration image corresponding to a second camera, and a third calibration image corresponding to a third camera, wherein the first calibration image contains a first marker, the second calibration image contains the first marker and a second marker, and the third calibration image contains the second marker;
  • the conversion module 504 may be specifically configured to: convert the world coordinate system in which the second marker is located into the world coordinate system in which the first marker is located, according to the positional relationship between the second camera and the first marker and the positional relationship between the second camera and the second marker, to obtain the coordinates of each on-board camera in the world coordinate system in which the first marker is located.
  • As one implementation, the acquisition module 501 may be specifically configured to: acquire a front calibration image corresponding to a front camera, a left calibration image corresponding to a left camera, a right calibration image corresponding to a right camera, and a rear calibration image corresponding to a rear camera, wherein the front calibration image contains a left-front marker and a right-front marker, the left calibration image contains the left-front marker and a left-rear marker, the right calibration image contains the right-front marker and a right-rear marker, and the rear calibration image contains the left-rear marker and the right-rear marker;
  • the conversion module 504 may be specifically configured to: determine the coordinates of the front camera in the world coordinate system in which the left-front marker is located according to the positional relationship between the front camera and the left-front marker; convert the world coordinate system in which the right-front marker is located into the world coordinate system in which the left-front marker is located according to the positional relationship between the front camera and the left-front marker and the positional relationship between the front camera and the right-front marker, and then determine the coordinates of the right camera in the world coordinate system in which the left-front marker is located in combination with the positional relationship between the right camera and the right-front marker; determine the coordinates of the left camera in the world coordinate system in which the left-front marker is located according to the positional relationship between the left camera and the left-front marker; and convert the world coordinate system in which the left-rear marker is located into the world coordinate system in which the left-front marker is located according to the positional relationship between the left camera and the left-front marker and the positional relationship between the left camera and the left-rear marker, and then determine the coordinates of the rear camera in the world coordinate system in which the left-front marker is located in combination with the positional relationship between the rear camera and the left-rear marker;
  • the second determining module 505 may be specifically configured to: in the world coordinate system in which the left-front marker is located, calculate a fifth coordinate of the vehicle body center position and the vehicle body rotation angle from a first coordinate of the front camera, a second coordinate of the rear camera, a third coordinate of the left camera, and a fourth coordinate of the right camera; and obtain the positional relationship between each of the front, rear, left, and right cameras and the vehicle body from the corresponding first, second, third, or fourth coordinate, the fifth coordinate, and the vehicle body rotation angle.
  • By applying the embodiment shown in FIG. 5 of the present application, the calibration images corresponding to adjacent on-board cameras contain the same marker. For each on-board camera, the position of the marker is identified in the calibration image corresponding to that camera, and the positional relationship between the camera and the identified marker is determined according to that position. Because the determined positional relationships include the relationships between the same marker and different on-board cameras, the multiple on-board cameras can be converted into the same coordinate system, taking one marker as the reference, and the positional relationship between the multiple on-board cameras and the vehicle body is determined in that coordinate system. It can be seen that this solution only requires adjacent on-board cameras to capture the same marker; it does not require fixing the parking position of the vehicle or the positions of the markers, which improves the accuracy of the obtained extrinsic parameters.
  • An embodiment of the present application further provides an electronic device, which may include a processor and a memory; the memory is configured to store a computer program, and the processor is configured to implement any of the above methods for determining the extrinsic parameters of on-board cameras when executing the program stored in the memory.
  • For example, as shown in FIG. 6, the electronic device may include a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 communicate with one another via the communication bus 604;
  • the memory 603 is configured to store a computer program;
  • the processor 601 is configured to implement any of the above methods for determining the extrinsic parameters of on-board cameras when executing the program stored in the memory 603.
  • The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
  • The communication interface is used for communication between the above electronic device and other devices.
  • The memory may include random access memory (RAM) and may also include non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
  • The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the above methods for determining the extrinsic parameters of on-board cameras.
  • An embodiment of the present application further provides executable program code which, when run, performs any of the above methods for determining the extrinsic parameters of on-board cameras.
  • An embodiment of the present application further provides an assisted driving system, as shown in FIG. 7, including a processing device and multiple on-board cameras: on-board camera 1, on-board camera 2, ..., on-board camera N, wherein
  • each on-board camera is configured to send the images it captures to the processing device;
  • the processing device is configured to obtain, from the images captured by the multiple on-board cameras, a calibration image corresponding to each on-board camera, wherein the calibration images corresponding to adjacent on-board cameras contain the same marker; to identify, for each on-board camera, the position of the marker in the calibration image corresponding to that camera and determine, according to the position, the positional relationship between the camera and the identified marker; to convert the multiple on-board cameras into the same coordinate system, taking one marker as the reference, according to the determined positional relationship between each on-board camera and the marker; and to determine, in the same coordinate system, the positional relationship between the multiple on-board cameras and the vehicle body.
  • The processing device can also be configured to perform any of the above methods for determining the extrinsic parameters of on-board cameras.

Abstract

Embodiments of the present application provide a method, apparatus, device, and system for determining the extrinsic parameters of on-board cameras. The method includes: the calibration images corresponding to adjacent on-board cameras contain the same marker; for each on-board camera, the position of the marker is identified in the calibration image corresponding to that camera, and the positional relationship between the camera and the identified marker is determined according to that position; because the determined positional relationships include the relationships between the same marker and different on-board cameras, the multiple on-board cameras can be converted into the same coordinate system, taking one marker as the reference, and the positional relationship between the multiple on-board cameras and the vehicle body is determined in that coordinate system. It can be seen that this solution only requires adjacent on-board cameras to capture the same marker; it does not require fixing the parking position of the vehicle or the positions of the markers, which improves the accuracy of the obtained extrinsic parameters.

Description

一种车载相机外参确定方法、装置、设备及系统
本申请要求于2018年3月7日提交中国专利局、申请号为201810185257.7、发明名称为“一种车载相机外参确定方法、装置、设备及系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及辅助驾驶技术领域,特别是涉及一种车载相机外参确定方法、装置、设备及系统。
背景技术
辅助驾驶系统中通常包含多个车载相机,该多个车载相机设置在车身的不同位置,以采集不同方向的图像;利用该多个车载相机的参数,将该不同方向的图像拼接成大视角的图像展示给驾驶者,可以起到辅助驾驶的作用。
车载相机的参数包括外参,外参即为车载相机与车身之间的位置关系。获取车载相机外参的方案一般包括:将车辆停放在指定位置,在车辆附近的另一指定位置处设置标记物,预先确定车辆停放位置与标记物所在位置之间的位置关系;该车辆中设置的每个车载相机都对准标记物进行图像采集,根据所采集图像中标记物的位置以及该预先确定的位置关系,计算车载相机的外参。
上述获取车载相机外参的方案中,车辆停放位置及标记物所在位置都是固定的,如果车辆停放位置与指定位置稍有偏离,则获取的外参准确度较低。
发明内容
本申请实施例的目的在于提供一种车载相机外参确定方法、装置、设备及系统,以提高获取外参的准确度。
为达到上述目的,本申请实施例提供一种车载相机外参确定方法,包括:
获取每个车载相机对应的标定图像;其中,每个车载相机对应的标定图像根据该车载相机采集的图像得到,相邻车载相机所对应的标定图像中包含同一标记物;
针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置,根据所述位置,确定该车载相机与所识别的标记物之间的位置关系;
根据所确定的每个车载相机与标记物之间的位置关系,以一个标记物为 基准,将所述多个车载相机转换至同一坐标系中;
在所述同一坐标系中,确定所述多个车载相机与车身的位置关系。
为达到上述目的,本申请实施例还提供一种车载相机外参确定装置,包括:
获取模块,用于获取每个车载相机对应的标定图像;其中,每个车载相机对应的标定图像根据该车载相机采集的图像得到,相邻车载相机所对应的标定图像中包含同一标记物;
识别模块,用于针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置;
第一确定模块,用于根据所述位置,确定该车载相机与所识别的标记物之间的位置关系;
转换模块,用于根据所确定的每个车载相机与标记物之间的位置关系,以一个标记物为基准,将所述多个车载相机转换至同一坐标系中;
第二确定模块,用于在所述同一坐标系中,确定所述多个车载相机与车身的位置关系。
为达到上述目的,本申请实施例还提供一种电子设备,包括处理器和存储器;
存储器,用于存放计算机程序;
处理器,用于执行存储器上所存放的程序时,实现上述任一种车载相机外参确定方法。
为达到上述目的,本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一种车载相机外参确定方法。
为达到上述目的,本申请实施例还提供一种可执行程序代码,所述可执行程序代码用于被运行以执行上述任一种车载相机外参确定方法。
为达到上述目的,本申请实施例还提供一种辅助驾驶系统,包括:处理设备及多个车载相机;
每个车载相机,用于将采集的图像发送至所述处理设备;
所述处理设备,用于根据所述多个车载相机采集的图像,得到所述多个车载相机对应的标定图像;其中,相邻车载相机所对应的标定图像中包含同一标记物;针对每个车载相机,在该车载相机对应的标定图像中识别标记物 的位置,根据所述位置,确定该车载相机与所识别的标记物之间的位置关系;根据所确定的每个车载相机与标记物之间的位置关系,以一个标记物为基准,将所述多个车载相机转换至同一坐标系中;在所述同一坐标系中,确定所述多个车载相机与车身的位置关系。
应用本申请所示实施例,相邻车载相机所对应的标定图像中包含同一标记物,针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置,根据该位置,确定该车载相机与所识别的标记物之间的位置关系,所确定的位置关系中包含同一标记物与不同车载相机之间的位置关系,因此,可以以一个标记物为基准,将多个车载相机转换至同一坐标系中,在该同一坐标系中,确定多个车载相机与车身的位置关系。可见,本方案中,只需要相邻车载相机对同一标记物进行采集,并不需要固定车辆停放位置及标记物位置,提高了获取外参的准确度。
附图说明
为了更清楚地说明本申请实施例和现有技术的技术方案,下面对实施例和现有技术中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的一种车载相机外参确定方法的流程示意图;
图2为本申请实施例提供的成像示意图;
图3为本申请实施例提供的一种应用场景示意图;
图4为本申请实施例提供的一种车身位置示意图;
图5为本申请实施例提供的一种车载相机外参确定装置的结构示意图;
图6为本申请实施例提供的一种电子设备的结构示意图;
图7为本申请实施例提供的一种辅助驾驶系统的结构示意图。
具体实施方式
为使本申请的目的、技术方案、及优点更加清楚明白,以下参照附图并举实施例,对本申请进一步详细说明。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
为了解决上述技术问题,本申请实施例提供了一种车载相机外参确定方 法、装置、设备及系统。该方法可以应用于与多个车载相机通信连接的处理设备,或者也可以应用于任意一个车载相机,该车载相机与其他车载相机通信连接。
本申请实施例中,可以在车辆附近设置多个标记物,相邻车载相机采集的图像中包括同一标记物,这样,可以以一个标记物为基准,将多个车载相机转换至同一坐标系中,在该同一坐标系中,确定多个车载相机与车身的位置关系。可见,本方案中,只需要相邻车载相机对同一标记物进行采集,并不需要固定车辆停放位置及标记物位置,提高了获取外参的准确度。
下面首先对本申请实施例提供的车载相机外参确定方法进行详细说明。下面内容中以执行主体为与多个车载相机通信连接的处理设备为例进行说明。
图1为本申请实施例提供的一种车载相机外参确定方法的流程示意图,包括:
S101:获取每个车载相机对应的标定图像。其中,每个车载相机对应的标定图像根据该车载相机采集的图像得到,相邻车载相机所对应的标定图像中包含同一标记物。
作为一种实施方式,可以直接获取车载相机采集到的原始图像作为标定图像;或者,作为另一种实施方式,可以获取多个车载相机采集的原始图像,对所获取的多张原始图像进行畸变校正,得到多张标定图像。
如果车载相机为鱼眼相机,则可以利用如下算式,对原始图像也就是鱼眼图像进行畸变校正,得到标定图像:
Figure PCTCN2019076915-appb-000001
a = X_c/Z_c; b = Y_c/Z_c; r² = a² + b²; θ = arctan(r);
θ_d = θ(1 + k1·θ² + k2·θ⁴ + k3·θ⁶ + k4·θ⁸); x′ = (θ_d/r)·a; y′ = (θ_d/r)·b;
u = f_x(x′ + α·y′) + c_x; v = f_y·y′ + c_y；
其中,X c、Y c、Z c表示相机坐标系中的坐标,X、Y、Z表示世界坐标系中的坐标,k1-k4表示车载相机的内参畸变系数,u,v表示图像坐标系中的成像坐标;a,b表示校正图像中的坐标;x′,y′表示畸变后的坐标,R1表示世界坐标系下坐标转换到相机坐标系下坐标的旋转量,T1表示世界坐标系下坐标转换到相机坐标系下坐标的平移量。
如果车载相机为非鱼眼相机,且符合针孔模型,则可以利用如下算式,对 原始图像进行畸变校正,得到标定图像:
Figure PCTCN2019076915-appb-000002
x′ = x/z; y′ = y/z; r² = x′² + y′²；
Figure PCTCN2019076915-appb-000003
Figure PCTCN2019076915-appb-000004
u = f_x·x″ + c_x; v = f_y·y″ + c_y；
其中,x、y、z表示相机坐标系中的坐标,X、Y、Z表示世界坐标系中的坐标,k1-k6表示车载相机的内参畸变系数,p 1、p 2表示内参切向畸变系数,c x和c y为相机内参中的主点坐标,主点也就是相机主光轴与像平面相交的点,u,v表示图像坐标系中的成像坐标;x″,y″表示畸变后的坐标。R1表示世界坐标系下坐标转换到相机坐标系下坐标的旋转量,T1表示世界坐标系下坐标转换到相机坐标系下坐标的平移量。
S102:针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置,根据该位置,确定该车载相机与所识别的标记物之间的位置关系。
举例来说,标记物可以为标定布、棋盘格等物体,如果标记物为棋盘格,则可以通过黑白相间的方格,识别出图像中的标记物,也就得到标记物在图像坐标系中的位置。
以一个车载相机为例来说,S102可以包括:在标记物所在的世界坐标系中,构建标记物的协方差矩阵;根据所述协方差矩阵,得到标记物的初始旋转矩阵R和初始平移向量T;利用标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的重投影误差,将所述初始旋转矩阵R及所述初始平移向量T进行迭代优化,得到该车载相机与标记物之间的位置关系。
具体来说,在标记物所在的世界坐标系中,构建标记物的协方差矩阵之后,可以计算所述协方差矩阵的最小特征值对应的特征向量、以及所述协方差矩阵的坐标均值;利用所述特征向量及所述坐标均值,变换得到标记物的初始旋转矩阵R;再根据所述初始旋转矩阵R及所述坐标均值,计算标记物的初始平移向量T;利用标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的重投影误差,将所述初始旋转矩阵R及所述初始平移向量T进行迭代优化,得到标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的映射关系;根据所述映射关系以及该车载相机的内参,得 到该车载相机与标记物之间的位置关系。
其中,得到初始旋转矩阵R及初始平移向量T后,可以根据R、T的偏导数据构造雅克比矩阵,再利用标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的重投影误差最小原理,对R、T进行多次迭代优化,得到优化后的R、T。具体的迭代次数可以根据实际情况进行设定。得到了优化后的R、T,也就得到了标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的映射关系。
根据车载相机的内参,可以得到图像坐标系与相机坐标系之间的转换关系,结合上述“标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的映射关系”,便得到了相机坐标系与标记物所在的世界坐标系之间的转换关系,也就是车载相机与标记物之间的位置关系。
如图2所示,假设车身后侧设置有一台相机,相机坐标系为XcYcZc坐标系,标记物所在的世界坐标系为XYZ坐标系;相机对准标记物进行图像采集,将采集的图像进行畸变校正后,得到标定图像,该标定图像的图像坐标系为uv坐标系,其中,c点为标定图像的中心点。
根据车载相机的内参,可以得到图像坐标系(uv坐标系)与相机坐标系(XcYcZc坐标系)之间的转换关系,结合上述得到的图像坐标系(uv坐标系)与标记物所在的世界坐标系(XYZ坐标系)的映射关系,便得到了相机坐标系(XcYcZc坐标系)与标记物所在的世界坐标系(XYZ坐标系)之间的转换关系,也就是车载相机与标记物之间的位置关系。
S103:根据所确定的每个车载相机与标记物之间的位置关系,以一个标记物为基准,将该多个车载相机转换至同一坐标系中。
由于S102中所确定的位置关系中包含同一标记物与不同车载相机之间的位置关系,因此,可以以一个标记物为基准,将多个车载相机转换至同一坐标系中。
作为一种实施方式,S101可以包括:获取第一相机对应的第一标定图像、第二相机对应的第二标定图像以及第三相机对应的第三标定图像;其中,所述第一标定图像中包含第一标记物,所述第二标定图像中包含所述第一标记物和第二标记物,所述第三标定图像中包含所述第二标记物;
这种情况下,S102中确定出第一相机与第一标记物的位置关系、第二相机与第一标记物的位置关系、第二相机与第二标记物的位置关系、第三相机 与第二标记物的位置关系;
S103包括:根据所述第二相机与所述第一标记物的位置关系以及所述第二相机与所述第二标记物的位置关系,将所述第二标记物所在的世界坐标系转换至所述第一标记物所在的世界坐标系,得到每个车载相机在所述第一标记物所在的世界坐标系中的坐标。
本实施方式中,以三个相机为例,介绍了将三个相机转换至同一坐标系的具体过程。如果车载相机的数量大于三,将多个车载相机转换至同一坐标系中的过程也类似,均可参考本实施方式。举例来说,假设存在四个车载相机,可以先利用本实施方式,将其中三个车载相机转换到同一坐标系中,剩余一个车载相机与该三个车载相机中至少一个车载相机相邻,相邻相机之间对应的图像中包含同一标记物,因此,可以基于相邻相机与该同一标记物的位置关系,将剩余一个车载相机也转换到该同一坐标系中。五个车载相机或者更多车载相机的情况类似,不再一一列举。
假设根据第二相机与第一标记物的位置关系,得到第二相机在第一标记物所在的世界坐标系中的坐标为P1(x1,y1,z1);根据第二相机与第二标记物的位置关系,得到第二相机在第二标记物所在的世界坐标系中的坐标为P2(x2,y2,z2),利用如下算式,将第二标记物所在的世界坐标系转换至第一标记物所在的世界坐标系:
Figure PCTCN2019076915-appb-000005
Figure PCTCN2019076915-appb-000006
举例来说,假设车身前侧、后侧、左侧、右侧分别设置一车载相机,为了方便描述,将前侧设置的相机称为前相机,将后侧设置的相机称为后相机,将左侧设置的相机称为左相机,将右侧设置的相机称为右相机;将左前侧标记物记为标记物1,将右前侧标记物记为标记物2,将左后侧标记物记为标记物3,将右后侧标记物记为标记物4。
如图3所示,车身周围存在四个标记物,前相机可以对标记物1和标记物2进行图像采集,后相机对标记物3和4进行图像采集,左相机可以对标记物1和标记物3进行图像采集,右相机可以对标记物2和标记物4进行图像采集。
获取前相机对应的前标定图像、左相机对应的左标定图像、右相机对应的右标定图像以及后相机对应的后标定图像;其中,前标定图像中包含标记 物1和标记物2,后标定图像中包含标记物3和标记物4,左标定图像中包含标记物1和标记物3,右标定图像中包含标记物2和标记物4。
确定出前相机与标记物1和标记物2的位置关系,后相机与标记物3和标记物4的位置关系,左相机与标记物1和标记物3的位置关系,右相机与标记物2和标记物4的位置关系。
根据前相机与标记物1的位置关系,确定出前相机在标记物1所在世界坐标系中的坐标,记为A;根据前相机与标记物2的位置关系,确定出前相机在标记物2所在世界坐标系中的坐标,记为B;根据A和B之间的关系,将标记物2所在的世界坐标系转换至标记物1所在的世界坐标系。
根据左相机与标记物1的位置关系,确定出左相机在标记物1所在世界坐标系中的坐标,记为C;根据左相机与标记物3的位置关系,确定出左相机在标记物3所在世界坐标系中的坐标,记为D;根据C和D之间的关系,将标记物3所在的世界坐标系转换至标记物1所在的世界坐标系。
根据右相机与标记物2的位置关系、以及上述得到的标记物1所在的世界坐标系与标记物2所在的世界坐标系之间的转换关系,便可以在标记物1所在的世界坐标系中,确定出右相机的坐标。
根据后相机与标记物3的位置关系、以及上述得到的标记物1所在的世界坐标系与标记物3所在的世界坐标系之间的转换关系,便可以在标记物1所在的世界坐标系中,确定出后相机的坐标。
这样,便将四个车载相机都转换至标记物1所在的世界坐标系。
举例来说,对于普通小型汽车来说,可以在车身的前后左右分别设置一个车载相机,也就是共设置了四个车载相机;而对于较大型的汽车,如公交车、卡车、大巴车等,由于车身较长,可以在车辆身左右两侧设置多对对称的车载相机,或者,左右两侧设置的车载相机也可以不对称分布,车载相机的分布情况具体不做限定。
如果相机数量较多,一些情况下,需要将标记物4所在的世界坐标系转换至标记物1所在的世界坐标系,可以采用如下任意一种方式:
第一种,根据后相机与标记物3的位置关系,确定出后相机在标记物3所在世界坐标系中的坐标,记为E;根据后相机与标记物4的位置关系,确定出后相机在标记物4所在世界坐标系中的坐标,记为F;根据E和F之间的关系、以及上述内容中得到的标记物3所在的世界坐标系与标记物1所在的世界坐标 系之间的转换关系,将标记物4所在的世界坐标系转换至标记物1所在的世界坐标系。
第二种,根据右相机与标记物2的位置关系,确定出右相机在标记物2所在世界坐标系中的坐标,记为G;根据右相机与标记物4的位置关系,确定出右相机在标记物4所在世界坐标系中的坐标,记为H;根据G和H之间的关系、以及上述内容中得到的标记物2所在的世界坐标系与标记物1所在的世界坐标系之间的转换关系,将标记物4所在的世界坐标系转换至标记物1所在的世界坐标系。
然后可以利用标记物1所在的世界坐标系与标记物4所在的世界坐标系之间的转换关系,可以将在标记物1所在的世界坐标系中,确定更多相机的坐标。
S104:在该同一坐标系中,确定该多个车载相机与车身的位置关系。
对于普通小型汽车来说,可以在车身左右对称设置一对车载相机(左相机和右相机),并将该对车载相机设置于车身长度一半处,另外,在车身前侧、且位于车身宽度一半处设置一个车载相机(前相机);这样,在同一坐标系中,确定左相机及右相机的坐标,左右相机坐标的中间点坐标即为车身中心坐标,根据车身中心坐标与前相机坐标的连线,可以确定车身旋转角。比如,可以将该连线与南北方向的夹角作为车身旋转角,也可以将连线与东西方向的夹角作为车身旋转角,等等,具体不做限定。
或者,延续图3中的例子,车身前侧、后侧、左侧、右侧分别设置一车载相机,且在左前侧标记物所在的世界坐标系中,分别确定出各个车载相机的坐标;这种情况下,S104包括:
在所述左前侧标记物所在的世界坐标系中,根据所述前相机的第一坐标、所述后相机的第二坐标、所述左相机的第三坐标以及所述右相机的第四坐标,计算车身中心位置的第五坐标、以及车身旋转角;
根据所述第一坐标与所述第五坐标及所述车身旋转角,得到所述前相机与车身的位置关系;
根据所述第二坐标与所述第五坐标及所述车身旋转角,得到所述后相机与车身的位置关系;
根据所述第三坐标与所述第五坐标及所述车身旋转角,得到所述左相机与车身的位置关系;
根据所述第四坐标与所述第五坐标及所述车身旋转角,得到所述右相机 与车身的位置关系。
如图3所示,左右相机对称设置,而前后相机非对称设置。作左相机与右相机的连线,记为L1,作L1的中垂线L2,过前相机作L1的垂线D1,过后相机作L1的垂线D2,计算D1与D2的中点,也就是计算(D1+D2)/2,过D1与D2的中点作L2的垂线L3,L2与L3的交点即为车身中心位置的坐标,也就是第五坐标。L2与水平线或者垂直线的夹角、或者L3与水平线或者垂直线的夹角均可作为车身旋转角。
如图4所示,假设坐标原点为标记物1所在位置,在该坐标系中确定出O v点的坐标,也就是计算车身中心位置的坐标,也就是上述第五坐标,在该坐标系中确定出前后相机连线的中垂线L2与坐标H w的夹角θ,也就是车身旋转角。得到了车身中心位置的坐标与车身旋转角,也就是得到了车身与标记物1之间的位置关系,而上面内容中已经得到了每个车载相机在标记物1所在的世界坐标系中的坐标,也就是得到了每个车载相机与标记物1之间的位置关系,因此,便得到了每个车载相机与车身的位置关系。
应用本申请图1所示实施例,相邻车载相机所对应的标定图像中包含同一标记物,针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置,根据该位置,确定该车载相机与所识别的标记物之间的位置关系,所确定的位置关系中包含同一标记物与不同车载相机之间的位置关系,因此,可以以一个标记物为基准,将多个车载相机转换至同一坐标系中,在该同一坐标系中,确定多个车载相机与车身的位置关系。可见,本方案中,一方面,只需要相邻车载相机对同一标记物进行采集,并不需要固定车辆停放位置及标记物位置,提高了获取外参的准确度;另一方面,在同一坐标系中,确定多个车载相机与车身的位置关系,标定效率较高。
与上述方法实施例相对应,本申请实施例还提供一种车载相机外参确定装置,如图5所示,包括:
获取模块501,用于获取每个车载相机对应的标定图像;其中,每个车载相机对应的标定图像根据该车载相机采集的图像得到,相邻车载相机所对应的标定图像中包含同一标记物;
识别模块502,用于针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置;
第一确定模块503,用于根据所述位置,确定该车载相机与所识别的标记 物之间的位置关系;
转换模块504,用于根据所确定的每个车载相机与标记物之间的位置关系,以一个标记物为基准,将所述多个车载相机转换至同一坐标系中;
第二确定模块505,用于在所述同一坐标系中,确定所述多个车载相机与车身的位置关系。
作为一种实施方式,获取模块501,具体可以用于:
获取多个车载相机采集的原始图像;
对所获取的多张原始图像进行畸变校正,得到多张标定图像。
作为一种实施方式,第一确定模块503,可以包括:构建子模块、获得子模块和迭代子模块(图中未示出),其中,
构建子模块,用于在标记物所在的世界坐标系中,构建标记物的协方差矩阵;
获得子模块,用于根据所述协方差矩阵,得到标记物的初始旋转矩阵和初始平移向量;
迭代子模块,用于利用标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的重投影误差,将所述初始旋转矩阵及所述初始平移向量进行迭代优化,得到该车载相机与标记物之间的位置关系。
作为一种实施方式,所述获得子模块,具体可以用于:
计算所述协方差矩阵的最小特征值对应的特征向量、以及所述协方差矩阵的坐标均值;利用所述特征向量及所述坐标均值,变换得到标记物的初始旋转矩阵;根据所述初始旋转矩阵及所述坐标均值,计算标记物的初始平移向量;
所述迭代子模块,具体可以用于:
利用标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的重投影误差,将所述初始旋转矩阵及所述初始平移向量进行迭代优化,得到标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的映射关系;根据所述映射关系以及该车载相机的内参,得到该车载相机与标记物之间的位置关系。
作为一种实施方式,获取模块501,具体可以用于:
获取第一相机对应的第一标定图像、第二相机对应的第二标定图像以及第三相机对应的第三标定图像;其中,所述第一标定图像中包含第一标记物, 所述第二标定图像中包含所述第一标记物和第二标记物,所述第三标定图像中包含所述第二标记物;
转换模块504,具体可以用于:
根据所述第二相机与所述第一标记物的位置关系以及所述第二相机与所述第二标记物的位置关系,将所述第二标记物所在的世界坐标系转换至所述第一标记物所在的世界坐标系,得到每个车载相机在所述第一标记物所在的世界坐标系中的坐标。
作为一种实施方式,获取模块501,具体可以用于:
获取前相机对应的前标定图像、左相机对应的左标定图像、右相机对应的右标定图像以及后相机对应的后标定图像;其中,所述前标定图像中包含左前侧标记物和右前侧标记物,所述左标定图像中包含所述左前侧标记物和左后侧标记物,所述右标定图像中包含所述右前侧标记物和右后侧标记物,所述后标定图像中包含所述左后侧标记物和所述右后侧标记物;
转换模块504,具体可以用于:
根据所述前相机与所述左前侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述前相机的坐标;
根据所述前相机与所述左前侧标记物的位置关系以及所述前相机与所述右前侧标记物的位置关系,将所述右前侧标记物所在的世界坐标系转换至所述左前侧标记物所在的世界坐标系,再结合所述右相机与所述右前侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述右相机的坐标;
根据所述左相机与所述左前侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述左相机的坐标;
根据所述左相机与所述左前侧标记物的位置关系以及所述左相机与所述左后侧标记物的位置关系,将所述左后侧标记物所在的世界坐标系转换至所述左前侧标记物所在的世界坐标系,再结合所述后相机与所述左后侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述后相机的坐标。
作为一种实施方式,第二确定模块505,具体可以用于:
在所述左前侧标记物所在的世界坐标系中,根据所述前相机的第一坐标、所述后相机的第二坐标、所述左相机的第三坐标以及所述右相机的第四坐标, 计算车身中心位置的第五坐标、以及车身旋转角;
根据所述第一坐标与所述第五坐标及所述车身旋转角,得到所述前相机与车身的位置关系;
根据所述第二坐标与所述第五坐标及所述车身旋转角,得到所述后相机与车身的位置关系;
根据所述第三坐标与所述第五坐标及所述车身旋转角,得到所述左相机与车身的位置关系;
根据所述第四坐标与所述第五坐标及所述车身旋转角,得到所述右相机与车身的位置关系。
应用本申请图5所示实施例,相邻车载相机所对应的标定图像中包含同一标记物,针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置,根据该位置,确定该车载相机与所识别的标记物之间的位置关系,所确定的位置关系中包含同一标记物与不同车载相机之间的位置关系,因此,可以以一个标记物为基准,将多个车载相机转换至同一坐标系中,在该同一坐标系中,确定多个车载相机与车身的位置关系。可见,本方案中,只需要相邻车载相机对同一标记物进行采集,并不需要固定车辆停放位置及标记物位置,提高了获取外参的准确度。
本申请实施例还提供了一种电子设备,该电子设备可以包括:处理器和存储器;存储器,用于存放计算机程序;处理器,用于执行存储器上所存放的程序时,实现上述任一种车载相机外参确定方法。
举例来说,该电子设备可以如图6所示,包括处理器601、通信接口602、存储器603和通信总线604,其中,处理器601,通信接口602,存储器603通过通信总线604完成相互间的通信,
存储器603,用于存放计算机程序;
处理器601,用于执行存储器603上所存放的程序时,实现上述任一种车载相机外参确定方法。
上述电子设备提到的通信总线可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。该通信总线可以分为地址总线、数据总线、控制总线等。为便于表示,图中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
通信接口用于上述电子设备与其他设备之间的通信。
存储器可以包括随机存取存储器(Random Access Memory,RAM),也可以包括非易失性存储器(Non-Volatile Memory,NVM),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。
上述的处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processing,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一种车载相机外参确定方法。
本申请实施例还提供一种可执行程序代码,所述可执行程序代码用于被运行以执行上述任一种车载相机外参确定方法。
本申请实施例还提供一种辅助驾驶系统,如图7所示,包括:处理设备及多个车载相机:车载相机1、车载相机2……车载相机N;其中,
每个车载相机,用于将采集的图像发送至所述处理设备;
所述处理设备,用于根据所述多个车载相机采集的图像,得到所述多个车载相机对应的标定图像;其中,相邻车载相机所对应的标定图像中包含同一标记物;针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置,根据所述位置,确定该车载相机与所识别的标记物之间的位置关系;根据所确定的每个车载相机与标记物之间的位置关系,以一个标记物为基准,将所述多个车载相机转换至同一坐标系中;在所述同一坐标系中,确定所述多个车载相机与车身的位置关系。
该处理设备还可以用于执行上述任一种车载相机外参确定方法。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列 出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于图5所示的车载相机外参确定装置实施例、图6所示的电子设备实施例、图7所示的辅助驾驶系统实施例、上述计算机可读存储介质实施例、以及上述可执行程序代码实施例而言,由于其基本相似于图1-4所示的车载相机外参确定方法实施例,所以描述的比较简单,相关之处参见图1-4所示的车载相机外参确定方法实施例的部分说明即可。
以上所述仅为本申请的较佳实施例而已,并不用以限制本申请,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请保护的范围之内。

Claims (18)

  1. 一种车载相机外参确定方法,其特征在于,包括:
    获取每个车载相机对应的标定图像;其中,每个车载相机对应的标定图像根据该车载相机采集的图像得到,相邻车载相机所对应的标定图像中包含同一标记物;
    针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置,根据所述位置,确定该车载相机与所识别的标记物之间的位置关系;
    根据所确定的每个车载相机与标记物之间的位置关系,以一个标记物为基准,将所述多个车载相机转换至同一坐标系中;
    在所述同一坐标系中,确定所述多个车载相机与车身的位置关系。
  2. 根据权利要求1所述的方法,其特征在于,所述获取每个车载相机对应的标定图像,包括:
    获取多个车载相机采集的原始图像;
    对所获取的多张原始图像进行畸变校正,得到多张标定图像。
  3. 根据权利要求1所述的方法,其特征在于,所述根据所述位置,确定该车载相机与所识别的标记物之间的位置关系,包括:
    在标记物所在的世界坐标系中,构建标记物的协方差矩阵;
    根据所述协方差矩阵,得到标记物的初始旋转矩阵和初始平移向量;
    利用标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的重投影误差,将所述初始旋转矩阵及所述初始平移向量进行迭代优化,得到该车载相机与标记物之间的位置关系。
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述协方差矩阵,得到标记物的初始旋转矩阵和初始平移向量,包括:
    计算所述协方差矩阵的最小特征值对应的特征向量、以及所述协方差矩阵的坐标均值;
    利用所述特征向量及所述坐标均值,变换得到标记物的初始旋转矩阵;
    根据所述初始旋转矩阵及所述坐标均值,计算标记物的初始平移向量;
    所述利用标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的重投影误差,将所述初始旋转矩阵及所述初始平移向量进行迭代优化,得到该车载相机与标记物之间的位置关系,包括:
    利用标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标 之间的重投影误差,将所述初始旋转矩阵及所述初始平移向量进行迭代优化,得到标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的映射关系;
    根据所述映射关系以及该车载相机的内参,得到该车载相机与标记物之间的位置关系。
  5. 根据权利要求1所述的方法,其特征在于,所述获取每个车载相机对应的标定图像,包括:
    获取第一相机对应的第一标定图像、第二相机对应的第二标定图像以及第三相机对应的第三标定图像;其中,所述第一标定图像中包含第一标记物,所述第二标定图像中包含所述第一标记物和第二标记物,所述第三标定图像中包含所述第二标记物;
    所述根据所确定的每个车载相机与标记物之间的位置关系,以一个标记物为基准,将所述多个车载相机转换至同一坐标系中,包括:
    根据所述第二相机与所述第一标记物的位置关系以及所述第二相机与所述第二标记物的位置关系,将所述第二标记物所在的世界坐标系转换至所述第一标记物所在的世界坐标系,得到每个车载相机在所述第一标记物所在的世界坐标系中的坐标。
  6. 根据权利要求1所述的方法,其特征在于,所述获取每个车载相机对应的标定图像,包括:
    获取前相机对应的前标定图像、左相机对应的左标定图像、右相机对应的右标定图像以及后相机对应的后标定图像;其中,所述前标定图像中包含左前侧标记物和右前侧标记物,所述左标定图像中包含所述左前侧标记物和左后侧标记物,所述右标定图像中包含所述右前侧标记物和右后侧标记物,所述后标定图像中包含所述左后侧标记物和所述右后侧标记物;
    所述根据所确定的每个车载相机与标记物之间的位置关系,以一个标记物为基准,将所述多个车载相机转换至同一坐标系中,包括:
    根据所述前相机与所述左前侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述前相机的坐标;
    根据所述前相机与所述左前侧标记物的位置关系以及所述前相机与所述右前侧标记物的位置关系,将所述右前侧标记物所在的世界坐标系转换至所述左前侧标记物所在的世界坐标系,再结合所述右相机与所述右前侧标记物 的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述右相机的坐标;
    根据所述左相机与所述左前侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述左相机的坐标;
    根据所述左相机与所述左前侧标记物的位置关系以及所述左相机与所述左后侧标记物的位置关系,将所述左后侧标记物所在的世界坐标系转换至所述左前侧标记物所在的世界坐标系,再结合所述后相机与所述左后侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述后相机的坐标。
  7. 根据权利要求6所述的方法,其特征在于,在所述同一坐标系中,确定所述多个车载相机与车身的位置关系,包括:
    在所述左前侧标记物所在的世界坐标系中,根据所述前相机的第一坐标、所述后相机的第二坐标、所述左相机的第三坐标以及所述右相机的第四坐标,计算车身中心位置的第五坐标、以及车身旋转角;
    根据所述第一坐标与所述第五坐标及所述车身旋转角,得到所述前相机与车身的位置关系;
    根据所述第二坐标与所述第五坐标及所述车身旋转角,得到所述后相机与车身的位置关系;
    根据所述第三坐标与所述第五坐标及所述车身旋转角,得到所述左相机与车身的位置关系;
    根据所述第四坐标与所述第五坐标及所述车身旋转角,得到所述右相机与车身的位置关系。
  8. 一种车载相机外参确定装置,其特征在于,包括:
    获取模块,用于获取每个车载相机对应的标定图像;其中,每个车载相机对应的标定图像根据该车载相机采集的图像得到,相邻车载相机所对应的标定图像中包含同一标记物;
    识别模块,用于针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置;
    第一确定模块,用于根据所述位置,确定该车载相机与所识别的标记物之间的位置关系;
    转换模块,用于根据所确定的每个车载相机与标记物之间的位置关系, 以一个标记物为基准,将所述多个车载相机转换至同一坐标系中;
    第二确定模块,用于在所述同一坐标系中,确定所述多个车载相机与车身的位置关系。
  9. 根据权利要求8所述的装置,其特征在于,所述获取模块,具体用于:
    获取多个车载相机采集的原始图像;
    对所获取的多张原始图像进行畸变校正,得到多张标定图像。
  10. 根据权利要求8所述的装置,其特征在于,所述第一确定模块,包括:
    构建子模块,用于在标记物所在的世界坐标系中,构建标记物的协方差矩阵;
    获得子模块,用于根据所述协方差矩阵,得到标记物的初始旋转矩阵和初始平移向量;
    迭代子模块,用于利用标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的重投影误差,将所述初始旋转矩阵及所述初始平移向量进行迭代优化,得到该车载相机与标记物之间的位置关系。
  11. 根据权利要求10所述的装置,其特征在于,所述获得子模块,具体用于:
    计算所述协方差矩阵的最小特征值对应的特征向量、以及所述协方差矩阵的坐标均值;利用所述特征向量及所述坐标均值,变换得到标记物的初始旋转矩阵;根据所述初始旋转矩阵及所述坐标均值,计算标记物的初始平移向量;
    所述迭代子模块,具体用于:
    利用标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的重投影误差,将所述初始旋转矩阵及所述初始平移向量进行迭代优化,得到标记物在标定图像中的坐标与在标记物所在的世界坐标系中的坐标之间的映射关系;根据所述映射关系以及该车载相机的内参,得到该车载相机与标记物之间的位置关系。
  12. 根据权利要求8所述的装置,其特征在于,所述获取模块,具体用于:
    获取第一相机对应的第一标定图像、第二相机对应的第二标定图像以及第三相机对应的第三标定图像;其中,所述第一标定图像中包含第一标记物,所述第二标定图像中包含所述第一标记物和第二标记物,所述第三标定图像中包含所述第二标记物;
    所述转换模块,具体用于:
    根据所述第二相机与所述第一标记物的位置关系以及所述第二相机与所述第二标记物的位置关系,将所述第二标记物所在的世界坐标系转换至所述第一标记物所在的世界坐标系,得到每个车载相机在所述第一标记物所在的世界坐标系中的坐标。
  13. 根据权利要求8所述的装置,其特征在于,所述获取模块,具体用于:
    获取前相机对应的前标定图像、左相机对应的左标定图像、右相机对应的右标定图像以及后相机对应的后标定图像;其中,所述前标定图像中包含左前侧标记物和右前侧标记物,所述左标定图像中包含所述左前侧标记物和左后侧标记物,所述右标定图像中包含所述右前侧标记物和右后侧标记物,所述后标定图像中包含所述左后侧标记物和所述右后侧标记物;
    所述转换模块,具体用于:
    根据所述前相机与所述左前侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述前相机的坐标;
    根据所述前相机与所述左前侧标记物的位置关系以及所述前相机与所述右前侧标记物的位置关系,将所述右前侧标记物所在的世界坐标系转换至所述左前侧标记物所在的世界坐标系,再结合所述右相机与所述右前侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述右相机的坐标;
    根据所述左相机与所述左前侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述左相机的坐标;
    根据所述左相机与所述左前侧标记物的位置关系以及所述左相机与所述左后侧标记物的位置关系,将所述左后侧标记物所在的世界坐标系转换至所述左前侧标记物所在的世界坐标系,再结合所述后相机与所述左后侧标记物的位置关系,在所述左前侧标记物所在的世界坐标系中,确定所述后相机的坐标。
  14. 根据权利要求13所述的装置,其特征在于,所述第二确定模块,具体用于:
    在所述左前侧标记物所在的世界坐标系中,根据所述前相机的第一坐标、所述后相机的第二坐标、所述左相机的第三坐标以及所述右相机的第四坐标,计算车身中心位置的第五坐标、以及车身旋转角;
    根据所述第一坐标与所述第五坐标及所述车身旋转角,得到所述前相机与车身的位置关系;
    根据所述第二坐标与所述第五坐标及所述车身旋转角,得到所述后相机与车身的位置关系;
    根据所述第三坐标与所述第五坐标及所述车身旋转角,得到所述左相机与车身的位置关系;
    根据所述第四坐标与所述第五坐标及所述车身旋转角,得到所述右相机与车身的位置关系。
  15. 一种电子设备,其特征在于,包括处理器和存储器;
    存储器,用于存放计算机程序;
    处理器,用于执行存储器上所存放的程序时,实现权利要求1-7任一所述的方法步骤。
  16. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1-7任一所述的方法步骤。
  17. 一种辅助驾驶系统,其特征在于,包括:处理设备及多个车载相机;
    每个车载相机,用于将采集的图像发送至所述处理设备;
    所述处理设备,用于根据所述多个车载相机采集的图像,得到所述多个车载相机对应的标定图像;其中,相邻车载相机所对应的标定图像中包含同一标记物;针对每个车载相机,在该车载相机对应的标定图像中识别标记物的位置,根据所述位置,确定该车载相机与所识别的标记物之间的位置关系;根据所确定的每个车载相机与标记物之间的位置关系,以一个标记物为基准,将所述多个车载相机转换至同一坐标系中;在所述同一坐标系中,确定所述多个车载相机与车身的位置关系。
  18. 一种可执行程序代码,其特征在于,所述可执行程序代码用于被运行以执行权利要求1-7任一所述的方法步骤。
PCT/CN2019/076915 2018-03-07 2019-03-05 一种车载相机外参确定方法、装置、设备及系统 WO2019170066A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810185257.7 2018-03-07
CN201810185257.7A CN110246184B (zh) 2018-03-07 2018-03-07 一种车载相机外参确定方法、装置、设备及系统

Publications (1)

Publication Number Publication Date
WO2019170066A1 true WO2019170066A1 (zh) 2019-09-12

Family

ID=67846857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/076915 WO2019170066A1 (zh) 2018-03-07 2019-03-05 一种车载相机外参确定方法、装置、设备及系统

Country Status (2)

Country Link
CN (1) CN110246184B (zh)
WO (1) WO2019170066A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815719A (zh) * 2020-07-20 2020-10-23 北京百度网讯科技有限公司 图像采集设备的外参数标定方法、装置、设备及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113327292A (zh) * 2021-06-11 2021-08-31 杭州鸿泉物联网技术股份有限公司 车载环视设备标定方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8212878B2 (en) * 2006-06-29 2012-07-03 Hitachi, Ltd. Calibration apparatus of on-vehicle camera, program, and car navigation system
CN103593836A (zh) * 2012-08-14 2014-02-19 无锡维森智能传感技术有限公司 一种摄像头参数计算方法及相机确定车体姿态的方法
CN103985118A (zh) * 2014-04-28 2014-08-13 无锡观智视觉科技有限公司 一种车载环视系统摄像头参数标定方法
CN104833372A (zh) * 2015-04-13 2015-08-12 武汉海达数云技术有限公司 一种车载移动测量系统高清全景相机外参数标定方法

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8860818B1 (en) * 2013-07-31 2014-10-14 Apple Inc. Method for dynamically calibrating rotation offset in a camera system
EP2858035B1 (en) * 2013-10-01 2020-04-22 Application Solutions (Electronics and Vision) Limited System, vehicle and method for online calibration of a camera on a vehicle
CN104050650B (zh) * 2014-06-19 2017-02-15 湖北汽车工业学院 基于坐标变换的完整成像的图像拼接方法
CN105631853B (zh) * 2015-11-06 2018-01-30 湖北工业大学 车载双目相机标定及参数验证方法
CN105654422A (zh) * 2015-12-23 2016-06-08 北京观著信息技术有限公司 点云配准方法和系统
CN106570907B (zh) * 2016-11-22 2020-07-10 海信集团有限公司 一种相机标定方法及装置
CN106846410B (zh) * 2016-12-20 2020-06-19 北京鑫洋泉电子科技有限公司 基于三维的行车环境成像方法及装置
CN107133988B (zh) * 2017-06-06 2020-06-02 科大讯飞股份有限公司 车载全景环视系统中摄像头的标定方法及标定系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8212878B2 (en) * 2006-06-29 2012-07-03 Hitachi, Ltd. Calibration apparatus of on-vehicle camera, program, and car navigation system
CN103593836A (zh) * 2012-08-14 2014-02-19 无锡维森智能传感技术有限公司 一种摄像头参数计算方法及相机确定车体姿态的方法
CN103985118A (zh) * 2014-04-28 2014-08-13 无锡观智视觉科技有限公司 一种车载环视系统摄像头参数标定方法
CN104833372A (zh) * 2015-04-13 2015-08-12 武汉海达数云技术有限公司 一种车载移动测量系统高清全景相机外参数标定方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815719A (zh) * 2020-07-20 2020-10-23 北京百度网讯科技有限公司 图像采集设备的外参数标定方法、装置、设备及存储介质
CN111815719B (zh) * 2020-07-20 2023-12-22 阿波罗智能技术(北京)有限公司 图像采集设备的外参数标定方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN110246184A (zh) 2019-09-17
CN110246184B (zh) 2021-06-11

Similar Documents

Publication Publication Date Title
WO2019184885A1 (zh) 一种相机外参标定方法、装置及电子设备
EP3633539A2 (en) Method for position detection, device, and storage medium
US9536306B2 (en) Vehicle vision system
JP4803449B2 (ja) 車載カメラの校正装置、校正方法、並びにこの校正方法を用いた車両の生産方法
US20140085409A1 (en) Wide fov camera image calibration and de-warping
CN109741241B (zh) 鱼眼图像的处理方法、装置、设备和存储介质
US20230215187A1 (en) Target detection method based on monocular image
CN109544629A (zh) 摄像头位姿确定方法和装置以及电子设备
US10803621B2 (en) Method and device for building camera imaging model, and automated driving system for vehicle
JP2020113268A (ja) 牽引ヒッチの位置を計算する方法
WO2019170066A1 (zh) 一种车载相机外参确定方法、装置、设备及系统
CN111489288B (zh) 一种图像的拼接方法和装置
CN113988112B (zh) 车道线的检测方法、装置、设备及存储介质
CN112348902A (zh) 路端相机的安装偏差角标定方法、装置及系统
CN113409396B (zh) 一种adas单目相机的标定方法
TWI778368B (zh) 車載鏡頭的自動校正方法以及車載鏡頭裝置
CN110956585B (zh) 全景图像拼接方法、装置以及计算机可读存储介质
CN112330576A (zh) 车载鱼眼摄像头畸变矫正方法、装置、设备及存储介质
CN111368927A (zh) 一种标注结果处理方法、装置、设备及存储介质
CN116630401A (zh) 鱼眼相机测距方法及终端
KR101431378B1 (ko) 전방위 영상의 생성 방법 및 장치
CN116152347A (zh) 一种车载摄像头安装姿态角标定方法及系统
WO2023168747A1 (zh) 基于域控制器平台的自动泊车的停车位标注方法及装置
CN113610927B (zh) 一种avm摄像头参数标定方法、装置及电子设备
CN115601449A (zh) 标定方法、环视图像生成方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19764296

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19764296

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.03.2021)
