WO2021184218A1 - Relative pose calibration method and related apparatus - Google Patents

Relative pose calibration method and related apparatus

Info

Publication number
WO2021184218A1
WO2021184218A1 (PCT/CN2020/079780)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
feature point
relative pose
pose
matrices
Prior art date
Application number
PCT/CN2020/079780
Other languages
French (fr)
Chinese (zh)
Inventor
刘会
陈亦伦
陈同庆
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN202080004815.0A (CN112639883B)
Priority to PCT/CN2020/079780
Publication of WO2021184218A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 3/60 Rotation of a whole image or part thereof
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • This application relates to the field of autonomous driving, and more particularly to a method and related apparatuses for calibrating the relative pose between cameras installed on autonomous vehicles.
  • Self-driving cars need strong perception of the 360° environment around the vehicle body, and they need stable and reliable perception results across different scenes, different lighting conditions, and obstacles at different distances.
  • A single camera cannot achieve full 360° coverage due to factors such as camera parameters and installation location. Therefore, self-driving vehicles are generally equipped with multiple cameras to improve visual perception. Different cameras generally have different camera parameters (such as focal length, resolution, and dynamic range) and are installed at different positions on the vehicle body to obtain a more comprehensive perception result.
  • Self-driving vehicles are equipped with at least one camera in each of the front, side, and rear directions to cover the 360° environment around the vehicle body. The cameras have different fields of view, and the fields of view of different cameras partially overlap.
  • The relative pose between the cameras on a self-driving car changes as the vehicle is used. The calibration parameters of each camera (i.e., the camera's extrinsic parameters) then need to be corrected, and the process of correcting the camera's extrinsic parameters is called recalibration. With the increasing number of vehicles with automatic driving functions, the demand for recalibration of camera calibration parameters has also greatly increased.
  • The currently adopted solution for online recalibration of camera extrinsic parameters is as follows: the autonomous vehicle obtains prior position information characterizing the relative positions between known urban construction landmarks distributed near the road, and compares this prior information with sensor observations in real time.
  • the embodiment of the present application provides a relative pose calibration method, which can accurately determine the relative pose between cameras and has high reliability.
  • an embodiment of the present application provides a method for calibrating a relative pose.
  • The method includes: acquiring images through a first camera and a second camera, where the field of view of the first camera and the field of view of the second camera overlap, and the first camera and the second camera are installed at different positions on the automatic driving device; performing feature point matching on the image collected by the first camera and the image collected by the second camera to obtain a first matching feature point set, where the first matching feature point set includes H groups of feature point pairs, each feature point pair includes two matching feature points (one extracted from the image collected by the first camera, the other extracted from the image collected by the second camera), and H is an integer not less than 8; determining, according to the first matching feature point set, a first relative pose between the first camera and the second camera; and updating a second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than a pose change threshold, where the second relative pose is the currently stored relative pose between the first camera and the second camera.
  • The first matching feature point set is obtained by performing feature point matching between the feature points extracted from the image collected by the first camera and the feature points extracted from the image collected by the second camera; a sketch of one way to build such a set follows below.
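The following is a minimal sketch, in Python with OpenCV, of how such a matching feature point set might be built from one synchronized image pair. The detector choice (ORB), the Lowe ratio test, and all parameter values are illustrative assumptions, not the method prescribed by this application:

```python
import cv2
import numpy as np

def match_feature_points(img1, img2, min_pairs=8):
    """Return matched pixel coordinates from two overlapping camera views."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)   # first-camera image
    kp2, des2 = orb.detectAndCompute(img2, None)   # second-camera image

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des1, des2, k=2)

    # Keep only clearly-best matches (ratio test), an assumed filtering step.
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < min_pairs:   # H must be an integer not less than 8
        raise ValueError("not enough matching feature point pairs")

    x1 = np.float32([kp1[m.queryIdx].pt for m in good])  # points in image 1
    x2 = np.float32([kp2[m.trainIdx].pt for m in good])  # points in image 2
    return x1, x2
```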
  • The automatic driving device may be a self-driving car (also called a driverless car), a drone, or another device that needs to calibrate the relative pose between cameras.
  • the first relative pose between the first camera and the second camera is determined according to the first matching feature point set, which does not depend on a specific venue or landmark, and does not have a priori assumptions about the surrounding environment.
  • the relative pose calibration method provided in the embodiments of the present application does not require a priori assumptions and is more adaptable to the environment.
  • The relative pose calibration method provided by the embodiments of the present application can achieve a repetition accuracy of less than 0.1°, effectively reduces the number of recalibrations of the relative pose, and has high reliability.
  • Determining the first relative pose between the first camera and the second camera according to the first matching feature point set includes: determining the essential matrix between the first camera and the second camera; calculating the 5-degree-of-freedom (5-DOF) relative pose between the first camera and the second camera according to the singular value decomposition result of the essential matrix; using the ratio between a first distance and a second distance as a scale factor, where the first distance and the second distance are results of measuring the same distance with a non-visual sensor and a visual sensor on the automatic driving device, and the visual sensor includes the first camera and/or the second camera; and combining the 5-degree-of-freedom relative pose and the scale factor to obtain the first relative pose.
  • In this way, the 5-DOF relative pose between the two cameras is obtained by decomposing the essential matrix, and the 5-DOF relative pose and the scale factor are then combined to obtain the 6-DOF relative pose. No prior assumptions about the surrounding environment are needed, no 3D information of surrounding scenes needs to be collected, and the workload is small.
  • Calculating the 5-degree-of-freedom relative pose between the first camera and the second camera according to the singular value decomposition result of the essential matrix includes: performing singular value decomposition on the essential matrix to obtain the singular value decomposition result; obtaining at least two candidate relative poses between the first camera and the second camera according to the singular value decomposition result; calculating the three-dimensional coordinate positions of the feature points in the first matching feature point set using each of the at least two candidate relative poses; and taking, as the 5-degree-of-freedom relative pose, the candidate relative pose for which the three-dimensional coordinate positions of the feature points in the first matching feature point set are all located in front of the first camera and the second camera (see the sketch below).
  • the 5-degree-of-freedom relative pose between the first camera and the second camera can be determined accurately and quickly.
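The sketch below, a hedged illustration rather than this application's exact procedure, walks through these steps with NumPy and OpenCV: estimate the essential matrix from normalized points, decompose it by SVD into the four candidate poses, pick the candidate that places triangulated points in front of both cameras, and scale the unit translation with the scale factor. The helper names and the RANSAC-based estimator are assumptions:

```python
import cv2
import numpy as np

def normalize(pts, K):
    """Pixel coordinates -> normalized image coordinates via intrinsics K."""
    h = np.column_stack([pts, np.ones(len(pts))])
    n = np.linalg.inv(K) @ h.T
    return (n[:2] / n[2]).T

def count_in_front(R, t, n1, n2):
    """Cheirality check: triangulated points in front of both cameras."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    X = cv2.triangulatePoints(P1, P2, n1.T, n2.T)
    X = X[:3] / X[3]                                # homogeneous -> Euclidean
    return int(np.sum((X[2] > 0) & ((R @ X + t.reshape(3, 1))[2] > 0)))

def pose_from_essential(x1, x2, K1, K2, scale_factor):
    n1, n2 = normalize(x1, K1), normalize(x2, K2)
    E, _ = cv2.findEssentialMat(n1, n2, np.eye(3), method=cv2.RANSAC)
    U, _, Vt = np.linalg.svd(E[:3])
    if np.linalg.det(U) < 0: U = -U                 # enforce proper rotations
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    candidates = [(U @ W @ Vt, U[:, 2]), (U @ W @ Vt, -U[:, 2]),
                  (U @ W.T @ Vt, U[:, 2]), (U @ W.T @ Vt, -U[:, 2])]
    R, t = max(candidates, key=lambda rt: count_in_front(rt[0], rt[1], n1, n2))
    return R, scale_factor * t   # 5-DOF pose plus scale gives the 6-DOF pose
```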
  • Determining the first relative pose between the first camera and the second camera according to the first matching feature point set includes: iteratively solving a target equation to obtain the first relative pose. In the target equation, the parameters included in the first relative pose are unknowns, while the feature point pairs in the first matching feature point set, the internal parameters of the first camera, and the internal parameters of the second camera are known numbers.
  • the first relative pose can be calculated quickly and accurately, without the need to calculate the essential matrix.
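One way to realize such an iterative solution, sketched here as an assumption since the target equation itself is not reproduced in this text, is nonlinear least squares over the epipolar constraint with the 5 pose parameters (rotation vector plus unit translation direction) as unknowns:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def unit_translation(theta, phi):
    """Unit-norm translation direction; the scale is not observable here."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def epipolar_residuals(params, h1, h2):
    """5 unknowns: rotation vector (3) and translation direction (2)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = unit_translation(params[3], params[4])
    tx = np.array([[0, -t[2], t[1]],
                   [t[2], 0, -t[0]],
                   [-t[1], t[0], 0]])              # skew matrix [t]x
    E = tx @ R
    return np.einsum('ij,jk,ik->i', h2, E, h1)     # x2^T E x1 for each pair

def solve_relative_pose(n1, n2):
    """n1, n2: normalized image coordinates of the matched feature points."""
    h1 = np.column_stack([n1, np.ones(len(n1))])
    h2 = np.column_stack([n2, np.ones(len(n2))])
    sol = least_squares(epipolar_residuals,
                        x0=np.array([0.0, 0.0, 0.0, 1.0, 0.0]),
                        args=(h1, h2))
    R = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    return R, unit_translation(sol.x[3], sol.x[4])
```

This avoids estimating the essential matrix from the point correspondences as a separate step, at the cost of needing a reasonable initialization.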
  • The automatic driving device is equipped with M cameras, the M cameras include the first camera, the second camera, and a third camera, the field of view of the third camera overlaps the field of view of the first camera and the field of view of the second camera, and M is an integer greater than 2.
  • The method further includes: obtaining M first rotation matrices, M second rotation matrices, M first translation matrices, and M second translation matrices. The M first rotation matrices are rotation matrices between the first camera and the second camera, at least two of which are different; the M second rotation matrices are rotation matrices between the second camera and the third camera, at least two of which are different; the M first translation matrices are translation matrices between the first camera and the second camera, at least two of which are different; and the M second translation matrices are translation matrices between the second camera and the third camera, at least two of which are different. The M first rotation matrices correspond one-to-one with the M first translation matrices, and the M second rotation matrices correspond one-to-one with the M second translation matrices. A first equation system is solved to obtain a third rotation matrix; in each first equation, the first rotation matrix and the second rotation matrix are known numbers and the third rotation matrix is an unknown number, and the third rotation matrix is the rotation matrix between the first camera and the third camera. A second equation system is solved to obtain a third translation matrix; in each second equation, the first rotation matrix, the second rotation matrix, the first translation matrix, and the second translation matrix are known numbers and the third translation matrix is an unknown number, and the third translation matrix is the translation matrix between the first camera and the third camera. The pose composed of the third rotation matrix and the third translation matrix is used as the relative pose between the first camera and the third camera.
  • In this way, the relative pose matrices between the multiple cameras form a closed-loop equation system, which enables pose calibration between cameras without a common field of view and allows overall optimization, so that a more accurate relative pose between the cameras is obtained; a sketch follows below.
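As a hedged sketch of this closed-loop idea (the text above does not fix the exact solver or composition convention), assume poses compose as X2 = R12·X1 + t12, so that R13 = R23·R12 and t13 = R23·t12 + t23 for each of the M samples. Averaging the M composed estimates and projecting the mean rotation back onto SO(3) is one simple way to solve the two overdetermined equation systems:

```python
import numpy as np

def project_to_so3(M):
    """Nearest rotation matrix to M (projection via SVD)."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:                   # guard against reflections
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

def closed_loop_pose(R12s, t12s, R23s, t23s):
    """R12s, R23s: M rotation matrices; t12s, t23s: M translation vectors."""
    R13_samples = [R23 @ R12 for R12, R23 in zip(R12s, R23s)]
    t13_samples = [R23 @ t12 + t23
                   for t12, R23, t23 in zip(t12s, R23s, t23s)]
    R13 = project_to_so3(np.mean(R13_samples, axis=0))   # rotation average
    t13 = np.mean(t13_samples, axis=0)                   # translation average
    return R13, t13
```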
  • The first matching feature point set includes feature points extracted from at least two frames of images collected by the first camera and feature points extracted from at least two frames of images collected by the second camera. Determining the first relative pose between the first camera and the second camera according to the first matching feature point set includes: determining the relative pose between the first camera and the second camera according to the feature point pairs in the first matching feature point set to obtain a first intermediate pose; substituting each feature point pair in the first matching feature point set into a first formula to obtain the residual corresponding to each feature point pair, where the first formula contains the first intermediate pose; eliminating the interference feature point pairs in the first matching feature point set to obtain a second matching feature point set, where an interference feature point pair is a feature point pair in the first matching feature point set whose residual is greater than a residual threshold; determining the relative pose between the first camera and the second camera according to the feature point pairs in the second matching feature point set to obtain a second intermediate pose; and, continuing in this way, taking a target intermediate pose as the first relative pose between the first camera and the second camera. The target intermediate pose is the relative pose between the first camera and the second camera determined according to the feature point pairs in a target matching feature point set, where the number of feature point pairs in the target matching feature point set is less than a number threshold, or the ratio of the number of feature point pairs in the target matching feature point set to the number of feature point pairs in the first matching feature point set is less than a ratio threshold.
  • the feature points included in the first matching feature point set are feature points that match in the first image collected by the first camera and the second image collected by the second camera.
  • When the number of matching feature point pairs in the first image and the second image is not less than 8, an estimate of the relative pose of the two cameras can be obtained.
  • When the number of matching feature points between a single frame of image collected by the first camera and a single frame of image collected by the second camera is small, the estimation is unstable; the influence of such errors can be reduced by multi-frame accumulation.
  • The automatic driving device can extract the matching feature points in the two images of the i-th frame, denoted x′_i and x_i respectively, and add them to feature point sets X_1 and X_2. After F frames, the feature point sets X_1 and X_2 store the matching feature points of the two images over the F frames (that is, the first matching feature point set), and X_1 and X_2 are used to estimate the relative pose of the two cameras. Because the number of feature points in X_1 and X_2 is far larger than the number of feature points in a single frame, iterative optimization can be used: first, all points in X_1 and X_2 are used for pose estimation to obtain the first intermediate pose; then the first intermediate pose is substituted into formula (14) to calculate the residuals, the feature points whose residuals are greater than the residual threshold are removed, and the remaining feature points are used to re-estimate the pose to obtain the second intermediate pose.
  • the optimal solution under a set of F frame data can be obtained.
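A compact sketch of this accumulate-then-prune loop is shown below. The pose estimator and residual function are pluggable; formula (14) is not reproduced in this text, so an epipolar-style residual (for example, `epipolar_residuals` from the earlier sketch) is an assumption, as are the threshold values:

```python
import numpy as np

def refine_pose(n1_all, n2_all, estimate_pose, residual_fn,
                res_thresh=1e-3, min_pairs=50):
    """n1_all, n2_all: matches accumulated over F frames (the sets X_1, X_2)."""
    n1, n2 = n1_all, n2_all
    pose = estimate_pose(n1, n2)                  # first intermediate pose
    while True:
        res = np.abs(residual_fn(pose, n1, n2))   # residual per feature pair
        keep = res <= res_thresh                  # drop interference pairs
        if keep.sum() < min_pairs or keep.all():
            return pose                           # target intermediate pose
        n1, n2 = n1[keep], n2[keep]
        pose = estimate_pose(n1, n2)              # next intermediate pose
```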
  • The two images in one frame refer to an image captured by the first camera and an image captured by the second camera whose acquisition times differ by less than a time threshold, such as 0.5 ms or 1 ms.
  • the multi-frame accumulation method can be used to avoid the jitter problem caused by the calculation of a single frame and stabilize the calibration result.
  • The method further includes: outputting reminder information if the difference between the first relative pose and the second relative pose is greater than the pose change threshold; the reminder information is used to prompt that the relative pose between the first camera and the second camera is abnormal.
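The update-or-remind decision needs a pose-difference metric, which this text does not define; the sketch below assumes a rotation geodesic angle plus a translation distance, with illustrative thresholds:

```python
import numpy as np

def pose_difference(R_new, t_new, R_old, t_old):
    """Rotation angle (degrees) and translation distance between two poses."""
    dR = R_new @ R_old.T
    cos_angle = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)), np.linalg.norm(t_new - t_old)

def try_update(stored, new, max_angle_deg=1.0, max_trans_m=0.05):
    """Update the stored pose if the change is small; otherwise remind."""
    angle, dist = pose_difference(new[0], new[1], stored[0], stored[1])
    if angle <= max_angle_deg and dist <= max_trans_m:
        return new, None                  # difference within threshold: update
    return stored, "relative pose between the cameras is abnormal"
```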
  • an embodiment of the present application provides another relative pose calibration method.
  • The method includes: a server receives recalibration reference information from an automatic driving device, where the recalibration reference information is used by the server to determine the relative pose between a first camera and a second camera of the automatic driving device, the field of view of the first camera overlaps the field of view of the second camera, and the first camera and the second camera are installed at different positions on the automatic driving device; the server obtains a first matching feature point set according to the recalibration reference information, where the first matching feature point set includes H groups of feature point pairs, each feature point pair includes two matching feature points (one extracted from the image collected by the first camera, the other extracted from the image collected by the second camera), and H is an integer not less than 8; the server determines the first relative pose between the first camera and the second camera according to the first matching feature point set; and the server sends the first relative pose to the automatic driving device.
  • The recalibration reference information includes images synchronously collected by the first camera and the second camera, or the first matching feature point set.
  • The server may store the internal parameters of the first camera and the internal parameters of the second camera, or receive them from the automatic driving device, and determine the first relative pose between the first camera and the second camera according to the first matching feature point set and these internal parameters.
  • In this way, the automatic driving device does not need to calibrate the relative pose between the cameras by itself; it only needs to send the data required to calibrate the relative pose between the cameras to the server, and the server recalibrates the relative pose between the cameras on the automatic driving device, which is efficient.
  • Before determining the first relative pose between the first camera and the second camera according to the first matching feature point set, the method further includes: the server receiving a pose calibration request from the automatic driving device, where the pose calibration request is used to request recalibration of the relative pose between the first camera and the second camera of the automatic driving device, and the pose calibration request carries the internal parameters of the first camera and the internal parameters of the second camera. Determining the first relative pose between the first camera and the second camera according to the first matching feature point set then includes: determining the first relative pose between the first camera and the second camera according to the first matching feature point set, the internal parameters of the first camera, and the internal parameters of the second camera.
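A hypothetical sketch of the device-server exchange described above is given below; every field, type, and name is invented for illustration, since this text does not specify a wire format:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Matrix3 = List[List[float]]   # 3x3 matrix as nested lists
PointPair = Tuple[Tuple[float, float], Tuple[float, float]]

@dataclass
class PoseCalibrationRequest:
    """Carries the internal parameters of both cameras."""
    device_id: str
    intrinsics_cam1: Matrix3
    intrinsics_cam2: Matrix3

@dataclass
class RecalibrationReference:
    """Either synchronously collected images or the matched point set."""
    images: Optional[Tuple[bytes, bytes]] = None
    matched_points: Optional[List[PointPair]] = None   # H >= 8 pairs

@dataclass
class RelativePoseResponse:
    """First relative pose returned by the server."""
    rotation: Matrix3
    translation: List[float]   # 3-vector, metric scale
```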
  • the embodiments of the present application provide another relative pose calibration method.
  • The method includes: the automatic driving device sends recalibration reference information to the server, where the recalibration reference information is used by the server to determine the relative pose between the first camera and the second camera of the automatic driving device; the automatic driving device receives the first relative pose from the server; and, when the difference between the first relative pose and the second relative pose is not greater than the pose change threshold, the second relative pose is updated to the first relative pose, where the second relative pose is the relative pose between the first camera and the second camera currently stored by the automatic driving device.
  • The recalibration reference information includes a first matching feature point set, the first matching feature point set includes H groups of feature point pairs, each group of feature point pairs includes two matching feature points, one feature point is extracted from an image collected by the first camera, the other feature point is extracted from an image collected by the second camera, and H is an integer not less than 8.
  • the recalibration reference information includes images acquired by the first camera and the second camera synchronously.
  • the automatic driving device does not need to calibrate the relative pose between the cameras by itself, and only needs to send the data required to calibrate the relative pose between the cameras to the server, which requires less workload.
  • Before the automatic driving device sends the recalibration reference information to the server, the method further includes: the automatic driving device sends a pose calibration request to the server, where the pose calibration request is used to request recalibration of the relative pose between the first camera and the second camera of the automatic driving device, and the pose calibration request carries the internal parameters of the first camera and the internal parameters of the second camera.
  • An embodiment of the present application provides an automatic driving device, including: an image acquisition unit, configured to acquire images through a first camera and a second camera respectively, where the field of view of the first camera and the field of view of the second camera overlap, and the first camera and the second camera are installed at different positions on the automatic driving device; an image feature point extraction unit, configured to perform feature point matching on the image collected by the first camera and the image collected by the second camera to obtain a first matching feature point set, where the first matching feature point set includes H groups of feature point pairs, each feature point pair includes two matching feature points (one extracted from the image collected by the first camera, the other extracted from the image collected by the second camera), and H is an integer not less than 8; a pose calculation unit, configured to determine a first relative pose between the first camera and the second camera according to the first matching feature point set; and a calibration parameter update unit, configured to update a second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than the pose change threshold, where the second relative pose is the relative pose between the first camera and the second camera currently stored by the automatic driving device.
  • The pose calculation unit is specifically configured to: determine the essential matrix between the first camera and the second camera; calculate the 5-degree-of-freedom relative pose between the first camera and the second camera according to the singular value decomposition result of the essential matrix; use the ratio between a first distance and a second distance as a scale factor, where the first distance and the second distance are results obtained by measuring the same distance with a non-visual sensor and a visual sensor on the automatic driving device, and the visual sensor includes the first camera and/or the second camera; and combine the 5-degree-of-freedom relative pose and the scale factor to obtain the first relative pose.
  • The pose calculation unit is specifically configured to: perform singular value decomposition on the essential matrix to obtain the singular value decomposition result; obtain at least two candidate relative poses between the first camera and the second camera according to the singular value decomposition result; calculate the three-dimensional coordinate positions of the feature points in the first matching feature point set using each of the at least two candidate relative poses; and use, as the 5-degree-of-freedom relative pose, the candidate relative pose for which the three-dimensional coordinate positions of the feature points in the first matching feature point set are all located in front of the first camera and the second camera.
  • The pose calculation unit is specifically configured to iteratively solve a target equation to obtain the first relative pose; in the target equation, the parameters included in the first relative pose are unknowns, while the feature point pairs in the first matching feature point set, the internal parameters of the first camera, and the internal parameters of the second camera are known numbers.
  • The automatic driving device is equipped with M cameras, the M cameras include the first camera, the second camera, and a third camera, the field of view of the third camera overlaps the field of view of the first camera and the field of view of the second camera, and M is an integer greater than 2.
  • The device further includes a closed-loop optimization unit, configured to: obtain M first rotation matrices, M second rotation matrices, M first translation matrices, and M second translation matrices, where the M first rotation matrices are rotation matrices between the first camera and the second camera and at least two of them are different, the M second rotation matrices are rotation matrices between the second camera and the third camera and at least two of them are different, the M first translation matrices are translation matrices between the first camera and the second camera and at least two of them are different, the M second translation matrices are translation matrices between the second camera and the third camera and at least two of them are different, the M first rotation matrices correspond one-to-one with the M first translation matrices, the M second rotation matrices correspond one-to-one with the M second translation matrices, and M is an integer greater than 1; solve a first equation system to obtain a third rotation matrix, where the first equation system includes M first equations that correspond one-to-one with the M first rotation matrices and with the M second rotation matrices, in each first equation the first rotation matrix and the second rotation matrix are known numbers and the third rotation matrix is an unknown number, and the third rotation matrix is the rotation matrix between the first camera and the third camera; and solve a second equation system to obtain a third translation matrix, where the second equation system includes M second equations, in each second equation the first rotation matrix, the second rotation matrix, the first translation matrix, and the second translation matrix are known numbers and the third translation matrix is an unknown number, and the third translation matrix is the translation matrix between the first camera and the third camera.
  • The first matching feature point set includes feature points extracted by the first camera from at least two frames of images and feature points extracted by the second camera from at least two frames of images. The pose calculation unit is specifically configured to: determine the relative pose between the first camera and the second camera according to the feature point pairs in the first matching feature point set to obtain a first intermediate pose; substitute each feature point pair in the first matching feature point set into a first formula to obtain the residual corresponding to each feature point pair, where the first formula contains the first intermediate pose; eliminate the interference feature point pairs in the first matching feature point set to obtain a second matching feature point set, where an interference feature point pair is a feature point pair in the first matching feature point set whose residual is greater than the residual threshold; determine the relative pose between the first camera and the second camera according to the feature point pairs in the second matching feature point set to obtain a second intermediate pose; and take a target intermediate pose as the first relative pose between the first camera and the second camera, where the target intermediate pose is the relative pose between the first camera and the second camera determined according to the feature point pairs in a target matching feature point set, and the number of feature point pairs in the target matching feature point set is less than a number threshold or the ratio of the number of feature point pairs in the target matching feature point set to the number of feature point pairs in the first matching feature point set is less than a ratio threshold.
  • The device further includes: a reminding unit, configured to output reminder information when the difference between the first relative pose and the second relative pose is greater than the pose change threshold; the reminder information is used to prompt that the relative pose between the first camera and the second camera is abnormal.
  • An embodiment of the present application provides a server, including: an obtaining unit, configured to obtain a first matching feature point set, where the first matching feature point set includes H groups of feature point pairs, each group of feature point pairs includes two matching feature points, one feature point is extracted from the image collected by a first camera, the other feature point is extracted from the image collected by a second camera, the field of view of the first camera overlaps the field of view of the second camera, the first camera and the second camera are installed at different positions on the automatic driving device, and H is an integer not less than 8; a pose calculation unit, configured to determine a first relative pose between the first camera and the second camera according to the first matching feature point set; and a sending unit, configured to send the first relative pose to the automatic driving device.
  • The server further includes: a receiving unit, configured to receive a pose calibration request from the automatic driving device, where the pose calibration request is used to request recalibration of the relative pose between the first camera and the second camera of the automatic driving device, and the pose calibration request carries the internal parameters of the first camera and the internal parameters of the second camera. The pose calculation unit is specifically configured to determine the first relative pose between the first camera and the second camera according to the first matching feature point set, the internal parameters of the first camera, and the internal parameters of the second camera.
  • An embodiment of the present application provides an automatic driving device, including: a sending unit, configured to send recalibration reference information to a server, where the recalibration reference information is used by the server to determine the relative pose between the first camera and the second camera of the automatic driving device, the field of view of the first camera overlaps the field of view of the second camera, and the first camera and the second camera are installed at different positions on the automatic driving device; a receiving unit, configured to receive the first relative pose from the server; and a calibration parameter update unit, configured to update the second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than the pose change threshold, where the second relative pose is the relative pose between the first camera and the second camera currently stored by the automatic driving device.
  • The sending unit is further configured to send a pose calibration request to the server, where the pose calibration request is used to request recalibration of the relative pose between the first camera and the second camera of the automatic driving device, and the pose calibration request carries the internal parameters of the first camera and the internal parameters of the second camera.
  • An embodiment of the present application provides a car. The car includes a memory and a processor; the memory is used to store code, and the processor is used to execute the program stored in the memory. When the program is executed, the processor is configured to perform the methods of the foregoing first aspect or third aspect and their optional implementations.
  • An embodiment of the present application provides a server that includes a memory and a processor; the memory is used to store code, and the processor is used to execute the program stored in the memory. When the program is executed, the processor is configured to perform the method of the foregoing second aspect and its optional implementations.
  • An embodiment of the present application provides a computer-readable storage medium that stores a computer program. The computer program includes program instructions that, when executed by a processor, cause the processor to perform the methods of the foregoing first to third aspects and their optional implementations.
  • an embodiment of the present application provides a chip that includes a processor and a data interface.
  • The processor reads instructions stored in a memory through the data interface, and performs the methods of the foregoing first to third aspects and any optional implementation thereof.
  • an embodiment of the present application provides a computer program product.
  • The computer program product includes program instructions that, when executed by a processor, cause the processor to perform the methods of the foregoing first to third aspects and any optional implementation thereof.
  • FIG. 1 is a functional block diagram of an automatic driving device provided by an embodiment of this application;
  • FIG. 2 is a schematic structural diagram of an automatic driving system provided by an embodiment of this application;
  • FIG. 3 is a flowchart of a relative pose calibration method provided by an embodiment of this application;
  • FIG. 4 is a schematic diagram of a common field of view between cameras provided by an embodiment of this application;
  • FIG. 5 is a schematic structural diagram of an automatic driving device provided by an embodiment of this application;
  • FIG. 6 is a schematic structural diagram of another automatic driving device provided by an embodiment of this application;
  • FIG. 7 is a schematic structural diagram of another automatic driving device provided by an embodiment of this application;
  • FIG. 8 is a flowchart of another relative pose calibration method provided by an embodiment of this application;
  • FIG. 9 is a flowchart of another relative pose calibration method provided by an embodiment of this application;
  • FIG. 10 is a schematic structural diagram of a server provided by an embodiment of this application.
  • The relative poses between cameras on an autonomous vehicle change as the vehicle is used, and the calibration parameters of each camera (that is, the extrinsic parameters of the camera) then need to be corrected. With the increasing number of vehicles with automatic driving functions, the demand for recalibration of camera calibration parameters has greatly increased. Autonomous vehicles that cannot recalibrate the extrinsic parameters of each camera in a timely manner pose great safety hazards.
  • The relative pose calibration method provided in the embodiments of the present application can be applied to autonomous driving scenarios. The following briefly introduces two driving scenarios.
  • Driving scenario 1: The automatic driving device determines the relative pose between two cameras with a common field of view according to the matching feature points in images synchronously collected by the two cameras. If the difference between the newly determined relative pose between the two cameras and the relative pose between the two cameras currently calibrated by the automatic driving device is not greater than the pose change threshold, the currently calibrated relative pose is updated.
  • Driving scenario 2: The automatic driving device sends to the server a pose calibration request and the information needed to determine the relative pose between at least two cameras on the automatic driving device. The pose calibration request is used to request recalibration of that relative pose.
  • Fig. 1 is a functional block diagram of an automatic driving device provided by an embodiment of the present application.
  • the automatic driving device 100 is configured in a fully or partially automatic driving mode.
  • While in the automatic driving mode, the automatic driving device 100 can control itself: it can determine the current state of the automatic driving device 100 and its surrounding environment through human operation, determine the possible behavior of at least one other vehicle in the surrounding environment, determine the confidence level corresponding to the possibility that the other vehicle performs the possible behavior, and control the automatic driving device 100 based on the determined information.
  • the automatic driving device 100 can be set to operate without human interaction.
  • the automatic driving apparatus 100 may include various subsystems, such as a traveling system 102, a sensor system 104, a control system 106, one or more peripheral devices 108 and a power supply 110, a computer system 112, and a user interface 116.
  • the automatic driving device 100 may include more or fewer sub-systems, and each sub-system may include multiple elements.
  • each subsystem and element of the automatic driving device 100 may be interconnected by wire or wirelessly.
  • the traveling system 102 may include components that provide power movement for the autonomous driving device 100.
  • The traveling system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
  • the engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
  • the engine 118 converts the energy source 119 into mechanical energy.
  • Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 may also provide energy for other systems of the automatic driving device 100.
  • the transmission device 120 can transmit mechanical power from the engine 118 to the wheels 121.
  • the transmission device 120 may include a gearbox, a differential, and a drive shaft.
  • the transmission device 120 may also include other devices, such as a clutch.
  • the drive shaft may include one or more shafts that can be coupled to one or more wheels 121.
  • the sensor system 104 may include several sensors that sense information about the environment around the automatic driving device 100.
  • the sensor system 104 may include a positioning system 122 (the positioning system may be a global positioning system (GPS) system, a Beidou system or other positioning systems), an inertial measurement unit (IMU) 124, and a radar 126, a laser rangefinder 128, and a camera 130.
  • The sensor system 104 may also include sensors that monitor the internal systems of the automatic driving device 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and recognition are key functions for the safe operation of the automatic driving device 100.
  • the positioning system 122 may be used to estimate the geographic location of the automatic driving device 100.
  • the IMU 124 is used to sense the position and orientation changes of the automatic driving device 100 based on inertial acceleration.
  • the IMU 124 may be a combination of an accelerometer and a gyroscope.
  • the radar 126 may use radio signals to sense objects in the surrounding environment of the automatic driving device 100.
  • the laser rangefinder 128 can use laser light to sense objects in the environment where the automatic driving device 100 is located.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
  • the camera 130 may be used to capture multiple images of the surrounding environment of the automatic driving device 100.
  • the camera 130 may be a still camera or a video camera.
  • the camera 130 may capture multiple images of the surrounding environment of the automatic driving device 100 in real time or periodically.
  • the camera 130 includes at least two cameras with overlapping fields of view, that is, at least two cameras have a common field of view.
  • the control system 106 controls the operation of the automatic driving device 100 and its components.
  • the control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
  • the steering system 132 is operable to adjust the forward direction of the automatic driving device 100.
  • it may be a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 118 and thereby control the speed of the automatic driving device 100.
  • the braking unit 136 is used to control the automatic driving device 100 to decelerate.
  • the braking unit 136 may use friction to slow down the wheels 121.
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current.
  • the braking unit 136 may also take other forms to slow down the rotation speed of the wheels 121 to control the speed of the automatic driving device 100.
  • the computer vision system 140 may be operated to process and analyze the images captured by the camera 130 in order to recognize objects and/or features in the surrounding environment of the autonomous driving device 100.
  • the aforementioned objects and/or features may include traffic signals, road boundaries, and obstacles.
  • the computer vision system 140 may use object recognition algorithms, automatic driving methods, Structure from Motion (SFM) algorithms, video tracking, and other computer vision technologies.
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and so on.
  • the computer vision system 140 may use the point cloud obtained by the lidar and the image of the surrounding environment obtained by the camera to locate the position of the obstacle.
  • the route control system 142 is used to determine the driving route of the automatic driving device 100.
  • the route control system 142 may combine data from the sensor 138, the GPS 122, and one or more predetermined maps to determine the driving route for the automatic driving device 100.
  • the obstacle avoidance system 144 is used to identify, evaluate and avoid or otherwise cross over potential obstacles in the environment of the automatic driving device 100.
  • The control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • the automatic driving device 100 interacts with external sensors, other vehicles, other computer systems, or users through peripheral devices 108.
  • the peripheral device 108 may include a wireless communication system 146, an onboard computer 148, a microphone 150, and/or a speaker 152.
  • the peripheral device 108 provides a means for the user of the autonomous driving apparatus 100 to interact with the user interface 116.
  • the onboard computer 148 may provide information to the user of the automatic driving device 100.
  • the user interface 116 can also operate the on-board computer 148 to receive user input.
  • the on-board computer 148 can be operated through a touch screen.
  • the peripheral device 108 may provide a means for the autonomous driving device 100 to communicate with other devices located in the vehicle.
  • the microphone 150 may receive audio (eg, voice commands or other audio input) from the user of the autonomous driving device 100.
  • the speaker 152 may output audio to the user of the automatic driving device 100.
  • the wireless communication system 146 may wirelessly communicate with one or more devices directly or via a communication network.
  • the wireless communication system 146 may use 3G cellular communication, or 4G cellular communication, such as LTE, or 5G cellular communication.
  • the wireless communication system 146 may use WiFi to communicate with a wireless local area network (WLAN).
  • The wireless communication system 146 may communicate directly with a device using an infrared link, Bluetooth, ZigBee, or other wireless protocols, such as various vehicle communication systems.
  • The wireless communication system 146 may include one or more dedicated short-range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
  • the power supply 110 may provide power to various components of the automatic driving device 100.
  • the power source 110 may be a rechargeable lithium ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the automatic driving device 100.
  • the power source 110 and the energy source 119 may be implemented together, such as in some all-electric vehicles.
  • the computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer readable medium such as a data storage device 114.
  • the computer system 112 may also be multiple computing devices that control individual components or subsystems of the automatic driving apparatus 100 in a distributed manner.
  • the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an ASIC or other hardware-based processor.
  • Although FIG. 1 functionally illustrates the processor, memory, and other elements of the computer system 112 in the same block, those of ordinary skill in the art should understand that the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be stored within the same physical housing.
  • the memory may be a hard disk drive or other storage medium located in a housing other than the computer system 112. Therefore, a reference to a processor or computer will be understood to include a reference to a collection of processors or computers or memories that may or may not operate in parallel.
  • some components such as the steering component and the deceleration component may each have its own processor, and the above-mentioned processors only perform calculations related to component-specific functions.
  • the processor may be located far from the automatic driving device and wirelessly communicate with the automatic driving device. In other aspects, some operations in the process described herein are performed on a processor arranged in the automatic driving device while others are performed by a remote processor, including taking the necessary steps to perform a single manipulation.
  • the data storage device 114 may include instructions 115 (eg, program logic), which may be executed by the processor 113 to perform various functions of the automatic driving device 100, including those described above.
  • the data storage device 114 may also contain additional instructions, including sending data to, receiving data from, interacting with, and/or performing data on one or more of the propulsion system 102, the sensor system 104, the control system 106, and the peripheral device 108. Control instructions.
  • the data storage device 114 may also store data, such as road maps, route information, the location, direction, speed, and other information of the vehicle. This information may be used by the automatic driving device 100 and the computer system 112 during the operation of the automatic driving device 100 in autonomous, semi-autonomous, and/or manual modes.
  • the user interface 116 is used to provide information to or receive information from the user of the automatic driving device 100.
  • the user interface 116 may include one or more input/output devices in the set of peripheral devices 108, such as a wireless communication system 146, a car computer 148, a microphone 150, and a speaker 152.
  • the computer system 112 may control the functions of the automatic driving device 100 based on inputs received from various subsystems (for example, the traveling system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering unit 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control of many aspects of the autonomous driving device 100 and its subsystems.
  • one or more of the aforementioned components may be installed or associated with the automatic driving device 100 separately.
  • the data storage device 114 may exist partially or completely separately from the automatic driving device 100.
  • the above-mentioned components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as a limitation to the embodiments of the present application.
  • a self-driving car traveling on a road can recognize objects in its surrounding environment to determine the adjustment to the current speed.
  • the aforementioned objects may be other vehicles, traffic control equipment, or other types of objects.
  • Each recognized object can be considered independently, and its characteristics, such as its current speed, acceleration, and distance from the vehicle, can be used to determine the speed to which the self-driving car is to be adjusted.
  • The automatic driving device 100 or a computing device associated with the automatic driving device 100 may predict the behavior of the identified object based on the characteristics of the identified object and the state of the surrounding environment (for example, traffic, rain, ice on the road, etc.).
  • The behaviors of the recognized objects may depend on each other, so all the recognized objects can also be considered together to predict the behavior of a single recognized object.
  • the automatic driving device 100 can adjust its speed based on the predicted behavior of the aforementioned recognized object.
  • an autonomous vehicle can determine what stable state the vehicle will need to adjust to (for example, accelerate, decelerate, or stop) based on the predicted behavior of the object.
  • other factors may also be considered to determine the speed of the automatic driving device 100, such as the lateral position of the automatic driving device 100 on the traveling road, the curvature of the road, the proximity of static and dynamic objects, and so on.
  • The computing device can also provide instructions to modify the steering angle of the automatic driving device 100, so that the self-driving car follows a given trajectory and/or maintains a safe lateral and longitudinal distance from objects near the self-driving car (for example, a car in an adjacent lane on the road).
  • the above-mentioned automatic driving device 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a tram, a golf cart, a train, a trolley, etc.
  • This is not specifically limited in the embodiments of the present application.
  • the automatic driving device 100 can collect images in real time or periodically through the camera 130. Two or more cameras included in the camera 130 can collect images synchronously.
  • Two cameras collecting images synchronously means that the difference between the times at which the two cameras collect their images is less than a time threshold, for example, 10 ms.
  • The automatic driving device 100 sends the images collected by the camera 130 and the information needed to determine the relative pose between at least two cameras included in the camera 130 to the server, receives the relative pose from the server, and updates the relative pose between the at least two cameras included in the camera 130.
  • the automatic driving device 100 determines the relative pose between at least two cameras included in the camera 130 according to the images collected by the camera 130.
  • Fig. 1 shows a functional block diagram of an automatic driving device 100, and an automatic driving system 101 is introduced below.
  • Fig. 2 is a schematic structural diagram of an automatic driving system provided by an embodiment of the application. Fig. 1 and Fig. 2 describe the automatic driving device 100 from different perspectives.
  • the computer system 101 includes a processor 103, and the processor 103 is coupled to a system bus 105.
  • the processor 103 may be one or more processors, where each processor may include one or more processor cores.
  • A display adapter (video adapter) 107 can drive a display 109, and the display 109 is coupled to the system bus 105.
  • the system bus 105 is coupled with an input/output (I/O) bus 113 through a bus bridge 111.
  • the I/O interface 115 is coupled to the I/O bus.
  • The I/O interface 115 communicates with a variety of I/O devices, such as an input device 117 (such as a keyboard, a mouse, a touch screen, etc.), a media tray 121 and a multimedia interface, a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture static and dynamic digital video images), and an external USB interface 125.
  • the interface connected to the I/O interface 115 may be a USB interface.
  • The processor 103 may be any conventional processor, including a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, or a combination of the foregoing.
  • Alternatively, the processor may be a dedicated device such as an application-specific integrated circuit (ASIC).
  • the processor 103 may be a neural network processor (Neural-network Processing Unit, NPU) or a combination of a neural network processor and the foregoing traditional processors.
  • the processor 103 is mounted with a neural network processor.
  • the computer system 101 can communicate with the software deployment server 149 through the network interface 129.
  • the network interface 129 is a hardware network interface, such as a network card.
  • the network 127 may be an external network, such as the Internet, or an internal network, such as an Ethernet or a virtual private network.
  • the network 127 may also be a wireless network, such as a WiFi network, a cellular network, and so on.
• the hard disk drive interface is coupled to the system bus 105, and the hard disk drive interface is connected to the hard disk drive.
  • the system memory 135 is coupled to the system bus 105.
  • the data running in the system memory 135 may include the operating system 137 and application programs 143 of the computer system 101.
  • the operating system includes a shell (Shell) 139 and a kernel (kernel) 141.
  • the shell 139 is an interface between the user and the kernel of the operating system.
  • the shell 139 is the outermost layer of the operating system.
  • the shell 139 manages the interaction between the user and the operating system: waiting for the user's input, interpreting the user's input to the operating system, and processing the output results of various operating systems.
• the kernel 141 is composed of those parts of the operating system that manage memory, files, peripherals, and system resources, and it interacts directly with the hardware.
• the operating system kernel usually runs processes and provides inter-process communication, CPU time slice management, interrupt handling, memory management, I/O management, and so on.
• the application program 143 includes programs related to automatic driving, such as programs that manage the interaction between the automatic driving device and obstacles on the road, programs that control the driving route or speed of the automatic driving device, and programs that control the interaction between the automatic driving device 100 and other automatic driving devices on the road.
• the application program 143 also exists on the system of a software deployment server (deploying server) 149. In one embodiment, when the application program 143 needs to be executed, the computer system 101 may download it from the software deployment server 149.
  • the sensor 153 is associated with the computer system 101.
  • the sensor 153 is used to detect the environment around the computer system 101.
• the sensor 153 can detect animals, cars, obstacles, crosswalks, and the like; further, the sensor can also detect the environment around such objects, for example: other animals around them, weather conditions, the brightness of the surrounding environment, and so on.
• the sensor may be a camera, a lidar, an infrared sensor, a chemical detector, a microphone, etc.
  • the sensor 153 senses information at preset intervals when activated and provides the sensed information to the computer system 101 in real time or near real time.
• the sensor may include a lidar, which can provide acquired point clouds to the computer system 101 in real time or near real time; each acquired point cloud in the series corresponds to a timestamp.
  • the camera provides the acquired images to the computer system 101 in real time or near real time, and each frame of image corresponds to a time stamp. It should be understood that the computer system 101 can obtain an image sequence from a camera.
  • the computer system 101 may be located far away from the automatic driving device, and may perform wireless communication with the automatic driving device.
  • the transceiver 123 can send automatic driving tasks, sensor data collected by the sensor 153, and other data to the computer system 101; and can also receive control instructions sent by the computer system 101.
  • the automatic driving device can execute the control instructions from the computer system 101 received by the transceiver, and perform corresponding driving operations.
• some of the processes described in this article are executed on a processor installed in an autonomous vehicle, while others are executed by a remote processor; this includes taking the actions required to perform a single maneuver.
• the autopilot device updates the relative pose between the deployed cameras in time during the automatic driving process.
• how the autopilot device updates the relative pose between the deployed cameras is detailed below.
• the relative pose calibration method provided by the embodiments of the present application can be applied not only to automatic driving devices, but also to vehicles on which no automatic driving system is deployed. The following describes the relative pose calibration method provided by the embodiments of the present application.
  • Fig. 3 is a flowchart of a relative pose calibration method provided by an embodiment of the application. As shown in Figure 3, the method includes:
• Step 301: the automatic driving device collects images through a first camera and a second camera.
  • the self-driving device can be a self-driving car; it can also be a drone, or other devices that need to calibrate the relative pose between cameras.
• Step 302: the automatic driving device performs feature point matching on the image collected by the first camera and the image collected by the second camera to obtain a first matching feature point set.
• the first matching feature point set includes H groups of feature point pairs; each group of feature point pairs includes two matching feature points, where one feature point is extracted from an image collected by the first camera, the other feature point is extracted from an image collected by the second camera, and H is an integer not less than 8.
• that is, the automatic driving device matches the feature points extracted from the image collected by the first camera against the feature points extracted from the image collected by the second camera to obtain the above-mentioned first matching feature point set.
  • the first camera collects image 1 at the first time point
  • the second camera collects image 2 at the first time point
• the automatic driving device extracts the feature points in image 1 to obtain image feature point set 1, and extracts the feature points in image 2 to obtain image feature point set 2; then, the feature points in image feature point set 1 and image feature point set 2 are matched to obtain a matching feature point set (corresponding to the first matching feature point set).
• one way for the automatic driving device to implement step 302 is as follows: the automatic driving device uses the first camera and the second camera to synchronously acquire images to obtain a first image sequence and a second image sequence, where the images in the first image sequence correspond one-to-one to the images in the second image sequence; feature point matching is performed on the one-to-one corresponding images in the first image sequence and the second image sequence to obtain the above-mentioned first matching feature point set.
• for example, if image 1 to image 5 in the first image sequence correspond to image 6 to image 10 in the second image sequence, the automatic driving device can match the feature points extracted from image 1 against those extracted from image 6, and so on.
• image 1 and image 6 are images acquired synchronously by the first camera and the second camera (that is, they correspond to the same frame), image 2 and image 7 are images acquired synchronously by the first camera and the second camera (the same frame), and so on.
  • the automatic driving device can perform feature point matching on multiple frames of images synchronously collected by the first camera and the second camera, and store the obtained feature point pairs in the first matching feature point set.
  • One frame of image may include one image collected by the first camera and one image collected by the second camera.
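To make steps 301 and 302 concrete, the following is a minimal sketch of synchronized-pair feature matching, assuming OpenCV is available; the detector choice (ORB), the match limit, and the function name are illustrative assumptions, since the patent does not prescribe a specific feature extractor.

```python
import cv2

def match_feature_points(img1, img2, max_matches=500):
    """Match feature points between one synchronized image pair (one frame)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)  # image from the first camera
    kp2, des2 = orb.detectAndCompute(img2, None)  # image from the second camera
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # Each pair holds (pixel in first-camera image, pixel in second-camera image).
    pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:max_matches]]
    return pairs  # the H feature point pairs; the method requires H >= 8
```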
• Step 303: the automatic driving device determines a first relative pose between the first camera and the second camera according to the first matching feature point set.
• the first matching feature point set includes H groups of feature point pairs; each group of feature point pairs includes two matching feature points, where one feature point is extracted from an image collected by the first camera and the other is extracted from an image collected by the second camera. The field of view of the first camera overlaps the field of view of the second camera, the first camera and the second camera are installed at different positions of the automatic driving device, and H is an integer not less than 8.
• Step 304: the automatic driving device updates a second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than a pose change threshold.
  • the second relative pose is the relative pose between the first camera and the second camera currently stored by the automatic driving device.
  • the pose change threshold may be the maximum deviation threshold allowed by the normal operation of the multi-camera system, for example, the rotation angle deviation between the first relative pose and the second relative pose does not exceed 0.5 degrees.
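A hedged sketch of the step 304 check follows. The patent only gives the rotation-angle criterion (e.g. 0.5 degrees); the translation check and all threshold values here are assumptions for illustration.

```python
import numpy as np

def should_update(R_new, t_new, R_cur, t_cur,
                  angle_thresh_deg=0.5, trans_thresh=0.05):
    """Return True if the new calibration is close enough to be adopted."""
    # Rotation deviation: angle of the residual rotation R_new @ R_cur^T.
    cos_a = np.clip((np.trace(R_new @ R_cur.T) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_a))
    trans_dev = np.linalg.norm(t_new - t_cur)  # assumed to be in meters
    return angle_deg <= angle_thresh_deg and trans_dev <= trans_thresh
```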
  • the method flow in FIG. 3 describes the manner of determining the relative pose between the first camera and the second camera with a common field of view. It should be understood that the automatic driving device may use a method similar to that in FIG. 3 to determine the relative pose between any two cameras with a common field of view. In the multi-camera vision system of the automatic driving device (corresponding to the camera 130 in Figure 1), two cameras at different installation positions generally have a common field of view.
• two cameras with a common field of view can form a pair (by default, the left camera comes first and the right camera second).
  • FIG. 4 is a schematic diagram of a public field of view between cameras provided by an embodiment of the application.
  • cameras with a common field of view include: camera 3 and camera 1, camera 3 and camera 2, and camera 1 and camera 2.
  • Figure 4 only shows the common field of view between camera 1 and camera 2.
  • the cameras in the multi-camera system can be grouped in the above manner to obtain multiple pairs of cameras, that is, two cameras with a common field of view are grouped into one group; then, the method flow in FIG. 3 is used to obtain The relative pose between each pair of cameras.
  • the relative pose calibration method provided by the embodiments of the present application can achieve a repetition accuracy of less than 0.1°, effectively reduces the number of times of recalibrating the relative pose, and has high reliability.
  • FIG. 3 does not detail how to determine the first relative pose between the first camera and the second camera according to the first matching feature point set. Some optional implementations of step 303 are described below.
• in a first manner (method one), the automatic driving device determines the first relative pose between the first camera and the second camera as follows: determine the essential matrix between the first camera and the second camera; calculate the 5-degree-of-freedom (DOF) relative pose between the first camera and the second camera based on the singular value decomposition result of the essential matrix; take the ratio between a first distance and a second distance as the scale factor, where the first distance and the second distance are the results obtained by measuring the same distance with a non-visual sensor (such as a lidar) and a visual sensor (such as the camera 130) on the automatic driving device, and the visual sensor includes the first camera and/or the second camera; and combine the 5-degree-of-freedom relative pose and the scale factor to obtain the first relative pose.
  • the automatic driving device may adopt the following steps to determine the essential matrix between the first camera and the second camera:
• F represents the fundamental matrix between the first camera and the second camera, and its elements are written as the vector f = [f11 f12 f13 f21 f22 f23 f31 f32 f33]^T; each feature point pair contributes one linear constraint on f, so with H ≥ 8 pairs f can be solved by singular value decomposition.
• the F obtained this way generally does not satisfy the singularity constraint; performing the singular value decomposition F = UDV^T, where D = diag(r, s, t) is a diagonal matrix (all elements except the diagonal elements are 0), and setting the smallest singular value to zero gives F′ = U diag(r, s, 0) V^T.
• F′ is an optimal estimation of the fundamental matrix F that satisfies the singularity constraint.
• K is the internal parameter matrix of the first camera, K′ is the internal parameter matrix of the second camera, and both K and K′ are known quantities; according to the above formula (6), which relates the essential matrix to F′ through the internal parameter matrices (in the standard form, E = K^T F′ K′ under this naming), the essential matrix E can be obtained.
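The following sketch illustrates the eight-point estimation with the singularity constraint, as reconstructed above; the relation E = K1^T F K2 mirrors the patent's naming of K and K′, and Hartley point normalization is omitted for brevity (it would be advisable in practice).

```python
import numpy as np

def essential_from_pairs(pts1, pts2, K1, K2):
    """pts1: Nx2 pixels from the first camera (x'); pts2: Nx2 from the second (x)."""
    assert len(pts1) >= 8, "the method requires H >= 8 feature point pairs"
    x1 = np.hstack([np.asarray(pts1), np.ones((len(pts1), 1))])
    x2 = np.hstack([np.asarray(pts2), np.ones((len(pts2), 1))])
    # Each pair gives one row of A f = 0 from the constraint x'^T F x = 0.
    A = np.stack([np.kron(p1, p2) for p1, p2 in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)                 # least-squares solution for f
    U, S, Vt = np.linalg.svd(F)              # enforce the singularity constraint:
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt  # F' = U diag(r, s, 0) V^T
    return K1.T @ F @ K2                     # E = K^T F' K' under the patent's naming
```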
• the automatic driving device calculates the 5-degree-of-freedom (5-DOF) relative pose between the first camera and the second camera as follows:
  • Using the triangulation method to measure the distance of each feature point in the first matching feature point set may be: using a triangulation formula to determine a three-dimensional space coordinate according to each group of feature point pairs in the first matching feature point set.
• the three-dimensional space coordinate calculated from a group of feature point pairs is the space coordinate corresponding to the two feature points included in that group.
• triangulation was first proposed by Gauss and used in surveying. Simply put: observe the same three-dimensional point P(x, y, z) from different positions; knowing its two-dimensional projection points X1(x1, y1) and X2(x2, y2) at those positions, the depth information of the three-dimensional point, that is, its three-dimensional space coordinates, is recovered using the triangle relationship.
  • Triangulation is mainly to calculate the three-dimensional coordinates of the feature points in the camera coordinate system through the matched feature points (ie, pixel points).
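A minimal linear (DLT) triangulation sketch of the relation described above; P1 and P2 are assumed to be the 3×4 projection matrices of the two observation positions.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear triangulation of one matched pair (x1, y1), (x2, y2)."""
    A = np.stack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # solve A X = 0 in the least-squares sense
    X = Vt[-1]
    return X[:3] / X[3]               # the three-dimensional space coordinates
```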
• the translation part (the last column) of P′ is determined only up to scale: it is a normalized unit vector, so the translation part of P′ differs from the true value by a scale factor.
• among the candidate decompositions, the one for which the triangulated feature points lie in front of both cameras determines the 5-degree-of-freedom pose between the two cameras. In this way, the 5-DOF relative pose between the first camera and the second camera can be determined accurately and quickly.
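The candidate selection can be sketched as follows, reusing the triangulate() function above; the SVD-based four-candidate enumeration is the standard decomposition of the essential matrix, and the input points are assumed to be in normalized camera coordinates.

```python
import numpy as np

def recover_pose_5dof(E, pairs_norm):
    """pairs_norm: matched points in normalized camera coordinates; returns (R, t_unit)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0: U = -U          # keep proper rotations
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    candidates = [(U @ W @ Vt, U[:, 2]), (U @ W @ Vt, -U[:, 2]),
                  (U @ W.T @ Vt, U[:, 2]), (U @ W.T @ Vt, -U[:, 2])]

    def n_in_front(R, t):
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
        P2 = np.hstack([R, t.reshape(3, 1)])           # second camera
        count = 0
        for p1, p2 in pairs_norm:
            X = triangulate(P1, P2, p1, p2)               # sketch above
            count += (X[2] > 0) and ((R @ X + t)[2] > 0)  # cheirality check
        return count

    return max(candidates, key=lambda Rt: n_in_front(*Rt))
```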
  • the relative pose between the two cameras has 6 degrees of freedom.
• the 5-DOF relative pose between the two cameras can be obtained by decomposing the essential matrix, namely E = [t̂]×R, which yields the rotation R and a unit translation vector t̂. The relative rotation angle between the two cameras can be accurately obtained from R, but for the relative position only a normalized result can be obtained, that is, one that differs from the true value by a scale factor. Determining the scale factor depends on the output of other sensors.
• suppose a non-visual sensor obtains the measurement distance s, and the same distance obtained through the multi-camera 3D reconstruction technology is s′; the scale factor is then α = s / s′, and the 6-DOF relative pose between the two cameras is (R, α·t̂).
• a feasible solution for determining the scale factor is as follows: use lidar or millimeter wave radar to measure the distance s from a target to the automatic driving device, and at the same time use the cameras with binocular (or multi-eye) vision measurement to obtain the distance s′ from the same target to the automatic driving device, so as to obtain the scale factor.
• another feasible solution for determining the scale factor is as follows: use an inertial measurement unit (IMU) or wheel speed meter to measure the moving distance s of the automatic driving device, and use binocular (or multi-eye) visual measurement to obtain the difference s′ between the first distance from a certain stationary target to the automatic driving device at a first moment and the second distance from the stationary target to the automatic driving device at a second moment; this also yields a scale factor.
• the first moment is the starting time of the moving distance s of the automatic driving device, and the second moment is the ending time of the moving distance s of the automatic driving device.
• in method one, the 5-DOF relative pose between the two cameras is obtained by decomposing the essential matrix, and the 5-DOF relative pose and the scale factor are then combined to obtain the 6-DOF relative pose; no prior assumptions about the surrounding environment are needed, no 3D information of the surrounding scene needs to be collected, and the workload is small.
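A small sketch of this final combination in method one; the function name and the homogeneous-matrix output format are assumptions.

```python
import numpy as np

def six_dof_pose(R, t_unit, s, s_prime):
    """Combine the 5-DOF pose (R, unit t) with the scale factor alpha = s / s'."""
    alpha = s / s_prime            # first distance (non-visual) over second (visual)
    Rt = np.eye(4)
    Rt[:3, :3] = R
    Rt[:3, 3] = alpha * t_unit     # metric translation between the two cameras
    return Rt                      # full 6-DOF relative pose, homogeneous form
```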
• in a second manner (method two), the automatic driving device determines the first relative pose between the first camera and the second camera according to the first matching feature point set as follows: iteratively solve a target equation to obtain the first relative pose; in the target equation, the parameters included in the first relative pose are unknowns, while the feature point pairs in the first matching feature point set, the internal parameters of the first camera, and the internal parameters of the second camera are known quantities.
  • the feature point extracted from the image collected by the first camera is x′
  • the feature point extracted from the image collected by the second camera is x.
• x = [x y 1]^T and x′ = [x′ y′ 1]^T are the homogeneous pixel coordinates.
• for the essential matrix E between the first camera and the second camera, the following equation holds (reconstructed in the standard form): (K^{-1} x′)^T E (K′^{-1} x) = 0.
  • K is the internal parameter matrix of the first camera
  • K′ is the internal parameter matrix of the second camera
  • K and K′ are both known quantities
• there are a total of 6 parameters in the target equation (that is, the 6 parameters that constitute the relative pose); denote the objective function as J and its gradient as ∇J.
• the gradient descent method is used to iteratively solve the target equation with the following update (formula (15), reconstructed): θ ← θ − λ∇J, where θ collects the 6 pose parameters.
• λ is the step length parameter, which can be set according to experience. Substituting x′ and x into formula (15) optimizes and updates the pose estimate. This step does not depend on the number of x′ and x points: even if the number of feature points in the first matching feature point set is small (such as in special scenes such as tunnels), the optimization can still proceed until a stable result is obtained.
  • the first relative pose can be calculated quickly and accurately, without the need to calculate the essential matrix.
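A sketch of method two under stated assumptions: the pose is parameterized by a Rodrigues rotation vector plus a translation (6 parameters), the gradient is taken by finite differences rather than analytically, and the translation is renormalized each step to handle its scale ambiguity; none of these implementation choices are prescribed by the patent.

```python
import numpy as np
import cv2

def epipolar_cost(theta, x1n, x2n):
    """theta = (rx, ry, rz, tx, ty, tz); x1n, x2n: Nx3 normalized homogeneous points."""
    R, _ = cv2.Rodrigues(theta[:3])
    tx, ty, tz = theta[3:]
    t_cross = np.array([[0, -tz, ty], [tz, 0, -tx], [-ty, tx, 0]])
    E = t_cross @ R                                   # E = [t]_x R
    return np.sum(np.einsum('ni,ij,nj->n', x1n, E, x2n) ** 2)

def solve_pose(theta0, x1n, x2n, lr=1e-4, iters=2000, eps=1e-6):
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(iters):
        grad = np.zeros(6)
        for k in range(6):                            # finite-difference gradient of J
            d = np.zeros(6); d[k] = eps
            grad[k] = (epipolar_cost(theta + d, x1n, x2n)
                       - epipolar_cost(theta - d, x1n, x2n)) / (2 * eps)
        theta -= lr * grad                            # theta <- theta - lambda * grad J
        theta[3:] /= np.linalg.norm(theta[3:]) + 1e-12  # fix translation scale ambiguity
    return theta
```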
• after the automatic driving device obtains the relative pose between each pair of cameras with a common field of view (for example, by executing the method flow in FIG. 3), it can use the closed loop composed of the relative poses between the pairs of cameras with a common field of view to construct a system of equations for overall optimization.
  • the following uses camera 1 (ie, the first camera), camera 2 (ie, the second camera), and camera 3 (ie, the third camera) as examples to introduce how to optimize the obtained relative poses between the two cameras.
• Rt_ij represents the conversion matrix from the camera i coordinate system to the camera j coordinate system, and R_ij represents the rotation matrix between camera i and camera j.
• the pose relationship among camera 1, camera 2, and camera 3 satisfies the closed-loop constraint (reconstructed from the definitions above): Rt_31 = Rt_21 · Rt_23^{-1}.
• from this relation, R_31 can be obtained, and then R_31 can be used to find the corresponding translation. In some embodiments, Rt_31 can be regarded as an unknown quantity, Rt_23 is the relative pose obtained by using matching feature points in a group of images (corresponding to one frame) synchronously captured by camera 2 and camera 3, and Rt_21 is the relative pose obtained by using matching feature points in a group of images synchronously captured by camera 2 and camera 1.
• splitting the closed-loop constraint into its rotation part and translation part gives formula (17) and formula (18), from which the following equations can be obtained:
• R_21 is the first rotation matrix, and R_23 is the second rotation matrix.
• the automatic driving device may determine one Rt_21 using two images (corresponding to one frame) synchronously collected by camera 1 and camera 2. It should be understood that the automatic driving device can determine M Rt_21 by using M groups of images (corresponding to M frames) collected synchronously; in the same way, it can determine M Rt_23.
• treating Rt_31 as a known quantity, substitute the results obtained in the previous step into formula (17) and formula (18); using a similar method to construct equations and treating Rt_21 as the quantity to be optimized, the optimal solution of Rt_21 when Rt_31 is known can be obtained.
• the numbers assigned to the cameras are arbitrary, so the order of the cameras can be changed and the above steps repeated to obtain different solutions of the poses between multiple sets of cameras; the set of solutions with the smallest error is selected as the output.
  • Different closed-loop equations can be obtained respectively to obtain different solutions, and the optimal one can be selected.
  • the error is the residual error of the equation.
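The closed-loop residual used to rank the different solutions can be sketched as follows, using the chain relation Rt_31 = Rt_21 · Rt_23^{-1} reconstructed above; the 4×4 homogeneous-matrix representation is an assumption.

```python
import numpy as np

def loop_residual_deg(Rt21, Rt23, Rt31):
    """Rotation residual (degrees) of the camera 1-2-3 closed loop."""
    Rt31_pred = Rt21 @ np.linalg.inv(Rt23)       # predicted from the loop constraint
    R_err = Rt31_pred[:3, :3] @ Rt31[:3, :3].T   # residual rotation
    cos_a = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_a))

# The ordering (among the different closed-loop equation sets) whose residual
# is smallest would be selected as the output, as described above.
```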
• in other words, after the automatic driving device obtains the relative poses between pairs of cameras, it can optimize them in the above way so as to obtain more accurate relative poses.
  • the foregoing embodiment describes the manner of determining the relative pose between the first camera and the second camera by using two images synchronously collected by the first camera and the second camera.
• in most scenes, a relatively accurate relative pose can be obtained in this way;
• in some scenes, however, an accurate relative pose cannot be obtained, that is, the pose estimation is unstable.
  • the following introduces a way to determine the relative pose between two cameras through the accumulation of multiple frames. In this way, a more accurate relative pose can be obtained.
  • a way to determine the relative pose between two cameras through multi-frame accumulation is as follows:
• the two images of the i-th frame include one image collected by the first camera in the i-th frame and one image collected by the second camera in the i-th frame; extracting the feature points in the two images of the i-th frame may be: extracting the feature points in the image collected by the first camera in the i-th frame to obtain x′_i, and extracting the feature points in the image collected by the second camera in the i-th frame to obtain x_i, where i is an integer greater than 0.
  • the two images in one frame include one image collected by the first camera and one image collected by the second camera, and the time interval between the two images in this frame is less than a time threshold, such as 0.5 ms, 1 ms, and so on.
• the feature point set X_1 and the feature point set X_2 store the matched feature points in the two images of each of the N frames.
• an implementation of step 3) may be: perform feature point matching on the feature points in the feature point set X_1 and the feature points in the feature point set X_2 to obtain multiple groups of feature point pairs, and use these feature point pairs to determine the relative pose between the two cameras; the implementation of determining the relative pose between the two cameras using these feature point pairs may be similar to the implementation of step 303.
• an iterative optimization method can be used to estimate the relative pose of the two cameras: first perform pose estimation with all points in the feature point sets X_1 and X_2 to obtain a first intermediate pose; then substitute the first intermediate pose into formula (14) to calculate the residuals, and remove the feature points whose residuals are greater than a residual threshold; then perform pose estimation again with the remaining feature points to obtain a second intermediate pose; the above process is repeated until the number of feature points eliminated is less than a number threshold (such as 5% of the total).
  • the unit of formula (14) is pixel, and the residual threshold can be 0.8, 1, 1.2, etc., which is not limited in this application. Finally, a more accurate relative pose under multiple frames of images can be obtained.
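A sketch of this multi-frame iterative scheme; estimate_pose() and residuals() are hypothetical stand-ins for the step 303 solver and the formula (14) residual, which the patent does not spell out here.

```python
import numpy as np

def multi_frame_pose(X1, X2, estimate_pose, residuals,
                     res_thresh=1.0, stop_frac=0.05):
    """X1, X2: matched points accumulated over N frames (one array per camera)."""
    total = len(X1)
    while True:
        pose = estimate_pose(X1, X2)        # e.g. the solver of step 303
        r = residuals(pose, X1, X2)         # per-pair residual, in pixels
        keep = r <= res_thresh
        removed = np.count_nonzero(~keep)
        X1, X2 = X1[keep], X2[keep]
        if removed < stop_frac * total or len(X1) < 8:
            return pose                     # fewer than e.g. 5% removed: stop
```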
  • the implementation of the pose estimation here can be the same as that of step 303.
  • the central limit theorem shows that the mean of a large number of mutually independent random variables converges to a normal distribution.
• assume that the R and t̂ calculated between different frames are independent and identically distributed.
• by the central limit theorem, the normalized mean of n such independent estimates converges to the standard normal distribution N(0,1), so the variance of the mean X̄ is inversely proportional to n.
• n is the size of the time window (corresponding to the n frames of images collected). As the time window increases, the variance of X̄ keeps decreasing, and in practical applications it stabilizes at a certain value. Therefore, under long-term observation, the estimates of R and t̂ stabilize as n increases; an appropriate n is chosen such that the variance of R and t̂ satisfies the demand.
  • Using the method of multi-frame accumulation to determine the relative pose between the two cameras can avoid the jitter problem caused by single-frame calculation and stabilize the calibration result.
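A small simulation of this variance argument, assuming i.i.d. Gaussian per-frame noise (an assumption; the patent only invokes the central limit theorem):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1                                  # per-frame angle noise in degrees (assumed)
for n in (1, 10, 100, 1000):                 # candidate time-window sizes (frames)
    means = rng.normal(0.0, sigma, size=(10000, n)).mean(axis=1)
    print(f"n={n:5d}  std of windowed mean = {means.std():.4f}")
# The printed std is approximately sigma / sqrt(n): enlarging the window
# stabilizes the averaged R and t, matching the argument above.
```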
  • FIG. 5 is a schematic structural diagram of an automatic driving device provided by an embodiment of the application.
  • the automatic driving device includes: an image acquisition unit (corresponding to the first camera and the second camera) 501, an image feature point extraction unit 502, a pose calculation unit 503, a closed loop optimization unit 504, and a multi-frame accumulation optimization unit 505, a scale calculation unit 506, and a calibration parameter update unit 507.
  • the closed-loop optimization unit 504 and the multi-frame accumulation optimization unit 505 are optional, but not necessary.
• in some embodiments, the functions of the image feature point extraction unit 502, the pose calculation unit 503, the closed-loop optimization unit 504, the multi-frame accumulation optimization unit 505, the scale calculation unit 506, and the calibration parameter update unit 507 are all implemented by the processor 112 (corresponding to the on-board processor). In some embodiments, the functions of the pose calculation unit 503, the closed-loop optimization unit 504, the multi-frame accumulation optimization unit 505, the scale calculation unit 506, and the calibration parameter update unit 507 are implemented by the processor 112, while the function of the image feature point extraction unit 502 is implemented by a graphics processor.
• the image feature point extraction unit 502, the pose calculation unit 503, the closed-loop optimization unit 504, the multi-frame accumulation optimization unit 505, the scale calculation unit 506, and the calibration parameter update unit 507 are the units used in the automatic driving device to determine the relative pose between two cameras, that is, the online calibration part (the part used to realize relative pose recalibration). The functions of each unit are introduced below.
  • the image acquisition unit 501 is used to acquire images collected by multiple cameras. Exemplarily, the image acquisition unit is used to acquire images synchronously acquired by each camera.
  • the multiple cameras correspond to the camera 130 in FIG. 1.
  • the image feature point extraction unit 502 is configured to extract feature points in images collected by multiple cameras, and match the extracted feature points to obtain a first matching feature point set.
• exemplarily, the image feature point extraction unit is configured to extract feature points in a first image collected by the first camera to obtain a first feature point set, and to extract feature points in a second image collected by the second camera to obtain a second feature point set; the feature points in the first feature point set and the feature points in the second feature point set are matched to obtain multiple groups of feature point pairs (corresponding to the first matching feature point set).
  • the image feature point extraction unit is used to extract feature points in multiple sets of images synchronously collected by the first camera and the second camera, and perform feature point matching to obtain multiple sets of feature point pairs (corresponding to the first Matching feature point set), each group of images (corresponding to a frame) includes an image collected by the first camera and an image collected by the second camera.
• the pose calculation unit 503 is configured to determine the fundamental matrix between the two cameras according to the first matching feature point set (ie, the matched feature points), and to decompose the corresponding essential matrix to obtain the 5-DOF relative pose. It should be understood that the pose calculation unit may perform pose calculation for all cameras with a common field of view in the multi-camera system, that is, determine the relative poses between all pairs of cameras with a common field of view. In the case that the automatic driving device does not include the closed-loop optimization unit 504 and the multi-frame accumulation optimization unit 505, the pose calculation unit 503 is also used to combine the scale factor and the 5-DOF relative pose to obtain a complete 6-DOF relative pose.
• exemplarily, the pose calculation unit 503 is specifically configured to determine the essential matrix between the first camera and the second camera, and to calculate the 5-degree-of-freedom (DOF) relative pose between the first camera and the second camera according to the singular value decomposition result of the essential matrix.
• exemplarily, the pose calculation unit 503 is specifically configured to perform singular value decomposition on the essential matrix to obtain the singular value decomposition result, and to obtain, according to that result, at least two candidate relative poses between the first camera and the second camera. Each of the at least two relative poses is used to calculate the three-dimensional coordinate positions of the feature points in the first matching feature point set; the relative pose among them for which the three-dimensional coordinate positions of the feature points in the first matching feature point set are located in front of both the first camera and the second camera is taken as the 5-degree-of-freedom relative pose.
• the scale calculation unit 506 is configured to use the ratio between the first distance and the second distance as the scale factor; the first distance and the second distance are respectively the results of measuring the same distance with a non-visual sensor and a visual sensor on the automatic driving device, and the visual sensor includes the first camera and/or the second camera.
• exemplarily, the scale calculation unit 506 is specifically configured to use lidar or millimeter wave radar to measure the distance s from a certain target to the automatic driving device, while using the cameras with binocular (or multi-eye) vision measurement to obtain the distance s′ from the same target to the automatic driving device, so as to obtain the scale factor. It should be understood that combining the scale factor and the 5-DOF relative pose yields a complete 6-DOF relative pose.
• the closed-loop optimization unit 504 is used to construct a system of equations based on the closed loops composed of the relative poses between the cameras in the multi-camera system, so as to perform overall optimization.
• exemplarily, the closed-loop optimization unit 504 optimizes the relative poses obtained by executing the method flow in FIG. 3 by successively solving the closed-loop equation systems.
• in some embodiments, the closed-loop optimization unit 504 is also used to combine the scale factor and the 5-DOF relative pose to obtain a complete 6-DOF relative pose.
• the multi-frame accumulation optimization unit 505 is configured to accumulate, filter, and optimize the calculation results of multiple frames to obtain stable calibration parameters (corresponding to the relative pose).
• exemplarily, the multi-frame accumulation optimization unit 505 is used to implement the above manner of determining the relative pose between two cameras through multi-frame accumulation.
• in some embodiments, the multi-frame accumulation optimization unit 505 may combine the scale factor and the 5-DOF relative pose to obtain a complete 6-DOF relative pose.
• the calibration parameter update unit 507 is used to update the relative pose currently used by the automatic driving device to the calculated relative pose when the difference between the calculated relative pose and the currently used relative pose is not greater than the pose change threshold.
• exemplarily, the calibration parameter update unit 507 is used to compare the calculated relative pose (corresponding to the calibration parameters) with the relative pose currently used by the automatic driving device, and, if the difference is too large, to dynamically update the currently used relative pose (or issue a reminder at the same time).
  • the automatic driving device in FIG. 5 can use method one to determine the relative pose between two cameras.
• the following introduces the structure of an automatic driving device that can adopt method two to determine the relative pose between two cameras.
  • Fig. 6 is a schematic structural diagram of another automatic driving device provided by an embodiment of the application. As shown in Figure 6, the automatic driving device includes:
  • the image acquisition unit 601 is used to acquire images through the first camera and the second camera respectively; the field of view of the first camera and the field of view of the second camera overlap, and the first camera and the second camera are installed in the automatic driving device different positions;
• the image feature point extraction unit 602 is configured to perform feature point matching on the image collected by the first camera and the image collected by the second camera to obtain a first matching feature point set;
• the first matching feature point set includes H groups of feature point pairs; each group of feature point pairs includes two matching feature points, one of which is a feature point extracted from the image collected by the first camera and the other a feature point extracted from the image collected by the second camera;
• H is an integer not less than 8;
  • the pose calculation unit 603 is configured to determine the first relative pose between the first camera and the second camera according to the first matching feature point set;
• the first matching feature point set includes H groups of feature point pairs; each group includes two matching feature points, one extracted from the image collected by the first camera and the other from the image collected by the second camera;
• the field of view of the first camera overlaps the field of view of the second camera, the first camera and the second camera are installed at different positions of the automatic driving device, and H is an integer not less than 8;
  • the calibration parameter update unit 604 (corresponding to the calibration parameter update unit 507) is configured to: when the difference between the first relative pose and the second relative pose is not greater than the pose change threshold, the second relative The pose is updated to the first relative pose; the second relative pose is the relative pose between the first camera and the second camera currently stored by the automatic driving device.
  • the functions of the pose calculation unit 603 and the calibration parameter update unit 604 are both implemented by the processor 112 (corresponding to the on-board processor).
  • the automatic driving device in FIG. 6 further includes: a closed-loop optimization unit 605 and a multi-frame accumulation optimization unit 606.
  • the closed-loop optimization unit 605 and the multi-frame accumulation optimization unit 606 are optional, but not necessary.
  • the function of the image acquisition unit 601 can be the same as that of the image acquisition unit 501
  • the function of the image feature point extraction unit 602 can be the same as the function of the image feature point extraction unit 502
  • the function of the closed loop optimization unit 605 can be the same as that of the closed loop optimization unit 505.
  • the function of the multi-frame accumulation optimization unit 606 may be the same as the function of the multi-frame accumulation optimization unit 506.
• the pose calculation unit 603 is specifically configured to iteratively solve the target equation to obtain the first relative pose; in the target equation, the parameters included in the first relative pose are unknowns, while the feature point pairs in the first matching feature point set, the internal parameters of the first camera, and the internal parameters of the second camera are known quantities. It should be understood that the pose calculation unit 603 may adopt method two to determine the relative pose between two cameras with a common field of view.
  • each unit in the above automatic driving device is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
  • the above units can be separately set up processing elements, or they can be integrated into a certain chip of the automatic driving device for implementation.
• they can also be stored in a storage element of the controller in the form of program code, and a certain processing element of the processor calls and executes the functions of each of the above units.
  • each unit can be integrated together or implemented independently.
  • the processing element here can be an integrated circuit chip with signal processing capabilities.
  • each step of the above method or each of the above units can be completed by an integrated logic circuit of hardware in the processor element or instructions in the form of software.
  • the processing element may be a general-purpose processor, such as a central processing unit (English: central processing unit, CPU for short), or one or more integrated circuits configured to implement the above methods, for example: one or more specific integrated circuits Circuit (English: application-specific integrated circuit, abbreviation: ASIC), or, one or more microprocessors (English: digital signal processor, abbreviation: DSP), or, one or more field programmable gate arrays (English: field-programmable gate array, referred to as FPGA), etc.
  • FIG. 7 is a schematic structural diagram of another automatic driving device provided by an embodiment of the application.
  • the automatic driving device includes: a multi-camera vision system 701, an image processor 702, an image feature extractor 703, an on-board processor 704, a memory 705, a calibration parameter update device 706, and a parameter abnormality reminder 707.
• the multi-camera vision system 701 is used to synchronously collect images through multiple cameras.
  • the multi-camera vision system 701 corresponds to the camera 130 in FIG. 1.
  • the multi-camera vision system consists of multiple cameras, which can collect image data synchronously.
  • the image processor 702 is used to perform preprocessing such as scaling on the image data collected by the multi-camera vision system 701.
  • the image feature extractor 703 is configured to perform feature point extraction on the image preprocessed by the image processor 702.
  • the preprocessed image can enter the special image feature extractor 703 for feature point extraction.
  • the image feature extractor 703 may be a graphics processor with general computing capabilities or a specially designed ISP or other hardware.
  • the on-board processor 704 is configured to determine the relative pose between the two cameras according to the feature points extracted by the image feature extractor 703.
  • the on-board processor (corresponding to the processor 113 in FIG. 1) may be a hardware platform with general computing capabilities.
  • the on-board processor includes a pose estimation part and an optimization processing part.
  • the pose estimation part (corresponding to the pose calculation unit 601 in FIG. 6 or the pose calculation unit 503 and the scale calculation unit 504 in FIG. 5) is used to determine the relative pose between the two cameras, that is, the relative pose estimation .
  • the optimization processing part is used to implement closed-loop constrained optimization and/or multi-frame accumulation optimization.
  • the on-board processor includes a pose estimation part and does not include an optimization processing part.
• the on-board processor is also used to read the calibration parameters (that is, the currently stored relative pose) from the memory 705 (corresponding to the memory 114 in FIG. 1).
• the calibration parameter update device 706 (corresponding to the calibration parameter update unit 507 in FIG. 5 and the calibration parameter update unit 604 in FIG. 6) can update the calibration parameters in the memory as required.
  • the parameter abnormality reminding device 707 can remind the driver of the abnormality of the calibration parameters of the camera by sound or image.
• exemplarily, when the on-board processor determines that the newly calculated relative pose between the first camera and the second camera differs greatly from the relative pose between the first camera and the second camera currently used by the automatic driving device, it reminds the driver that the relative pose between the first camera and the second camera is abnormal.
• take the on-board application scenario as an example: if the calibration parameters calculated by the on-board processor deviate too far from the parameters calibrated when the car leaves the factory, for example, if the angle deviation exceeds 0.3 degrees, it can be considered that a sensor is loosely installed, which may degrade the performance of the automatic driving system; the owner then needs to be reminded to go to a 4S shop for recalibration (for example, through a pop-up warning window on the central control screen). If the owner has not recalibrated for a long time and the angle deviation is not large, for example, greater than 0.3 degrees but less than 0.5 degrees, the automatic driving system can, with the owner's permission, operate with the online calibration parameters instead of the factory calibration parameters.
  • the automatic driving device in FIG. 7 can use method one to determine the relative pose between two cameras, or method two to determine the relative pose between two cameras.
  • FIG. 8 is a flowchart of another relative pose calibration method provided by an embodiment of the application.
  • the method in Figure 8 is a further refinement and improvement of the method in Figure 3. As shown in Figure 8, the method includes:
  • the automatic driving device determines a pair of cameras with a common field of view.
• exemplarily, the first camera and the second camera have a common field of view; the first image and the second image are images (corresponding to one frame) collected synchronously by the first camera and the second camera.
• feature points are extracted from the first image to obtain a third feature point set, and feature points are extracted from the second image to obtain a fourth feature point set.
• multiple groups of feature point pairs corresponding to each pair of cameras can be obtained by using the images synchronously collected by that pair of cameras.
  • step 803 and step 804 may be the same as the manner of determining the 5DOF relative pose between a pair of cameras with a common field of view in the first manner, and will not be described in detail here.
  • step 802 to step 804 may be replaced with the method of multi-frame accumulation in the foregoing embodiment to determine the relative pose between the two cameras. It should be understood that each time the automatic driving device executes steps 801 to 804, a 5DOF relative pose between a pair of cameras with a common field of view can be determined.
  • equation (19) and equation (20) are a set of closed-loop equations of a multi-camera system constructed.
  • Steps 805 and 806 are to construct an equation set using a closed loop composed of the relative poses between the pairs of cameras with a common field of view to optimize the calculated 5DOF relative poses between the two cameras.
  • the foregoing embodiment describes the method of constructing a system of equations using a closed loop composed of the relative poses between pairs of cameras with a common field of view to perform overall optimization.
• if yes, go to step 809; if not, go to step 810.
  • Updating the currently used calibration parameters may be updating the currently used calibration parameters to the calculated relative pose (ie, new calibration parameters).
• in the method of FIG. 8, the common field of view between the two cameras is used to construct the pose constraint equation between the cameras, such as formula (4);
• the 5-DOF relative pose of the two cameras is obtained by decomposing the essential matrix, making full use of the common field of view information between the cameras.
• by using feature point matching technology to obtain the 5-DOF pose information and optimizing it through the closed-loop equations, higher recalibration accuracy is achieved.
• in some embodiments, the automatic driving device may send the information needed to determine the relative pose between two cameras to a server, and the server determines the relative pose between the cameras on the automatic driving device according to the information sent by the automatic driving device.
• the following introduces a solution in which the server determines the relative pose between two cameras on the automatic driving device.
  • FIG. 9 is a flowchart of another relative pose calibration method provided by an embodiment of the application. As shown in Figure 9, the method includes:
  • the automatic driving device sends a pose calibration request to the server.
  • the pose calibration request is used to request recalibration of the relative pose between the first camera and the second camera of the automatic driving device, and the pose calibration request carries the internal parameters of the first camera and the The internal parameters of the second camera.
  • the pose calibration request is used to request recalibration of the relative poses between the cameras on the automatic driving device, and the pose calibration request carries internal parameters of the cameras on the automatic driving device.
  • the server sends confirmation information for the pose calibration request to the automatic driving device.
  • the server determines to accept the pose calibration request from the automatic driving device, it sends confirmation information for the pose calibration request to the automatic driving device.
  • the server determines that the autonomous driving device is an authorized device according to the pose calibration request, it accepts the pose calibration request from the automatic driving device, and sends confirmation information for the pose calibration request to the automatic driving device.
  • the authorization device refers to an automatic driving device to which the server needs to provide a pose calibration service (ie, recalibrate the relative pose).
  • the automatic driving device sends recalibration reference information to the server.
• the recalibration reference information is used by the server to determine the relative pose between the first camera and the second camera of the automatic driving device; the field of view of the first camera overlaps the field of view of the second camera, and the first camera and the second camera are installed at different positions of the automatic driving device.
• the recalibration reference information includes images acquired synchronously by the first camera and the second camera, or multiple pairs of feature points (corresponding to the first matching feature point set) obtained by the automatic driving device by matching feature points in the images synchronously acquired by the first camera and the second camera.
  • the above-mentioned recalibration reference information is used by the server to determine the relative poses between the cameras on the above-mentioned automatic driving device, and the above-mentioned cameras are installed in different positions of the automatic driving device.
• exemplarily, the recalibration reference information includes images synchronously collected by each camera, or multiple pairs of feature points obtained by the automatic driving device by matching feature points in the images synchronously collected by each pair of cameras.
  • the server obtains a first matching feature point set according to the recalibration reference information.
  • the aforementioned first matching feature point set includes H groups of feature point pairs, and each group of feature point pairs includes two matching feature points, one of which is a feature point extracted from an image collected by the first camera, The other feature point is the feature point extracted from the image collected by the second camera, and the above H is an integer not less than 8.
  • the aforementioned recalibration reference information includes a first matching feature point set.
• in the case that the recalibration reference information includes images synchronously collected by the first camera and the second camera, the server may perform feature point matching on those images to obtain the first matching feature point set.
  • the server determines the first relative pose between the first camera and the second camera according to the first matching feature point set.
  • step 905 may be the same as the implementation of step 303.
• that is, the server can determine the first relative pose between the first camera and the second camera using method one or method two, or it can determine the first relative pose between the first camera and the second camera in other ways; this is not limited in the embodiments of this application.
  • the server sends the first relative pose to the automatic driving device.
  • the automatic driving device updates the second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than the pose change threshold.
  • the second relative pose is the relative pose between the first camera and the second camera currently stored by the automatic driving device.
• in this solution, the server determines the relative pose between the two cameras on the automatic driving device according to the information sent by the automatic driving device. That is to say, the manners in which the automatic driving device determines the relative pose between two cameras in the foregoing embodiments can also be implemented by the server, for example: using the closed loops composed of the relative poses between pairs of cameras with a common field of view to construct a system of equations and optimize the calculated 5-DOF relative poses, and using the multi-frame accumulation method to determine the relative pose between two cameras.
  • the automatic driving device does not need to calibrate the relative pose between the cameras by itself, and only needs to send the data required to calibrate the relative pose between the cameras to the server, which requires less workload.
  • Fig. 10 is a schematic structural diagram of a server provided by an embodiment of the application.
• the server includes a memory 1001, a processor 1002, a communication interface 1003, and a bus 1004; the memory 1001, the processor 1002, and the communication interface 1003 realize communication connections with each other through the bus 1004.
  • the communication interface 1003 is used for data interaction with the automatic driving device.
• the processor 1002 reads the code stored in the memory to perform the following operations: receive recalibration reference information from the automatic driving device, where the recalibration reference information is used by the server to determine the relative pose between the first camera and the second camera of the automatic driving device, the field of view of the first camera overlaps the field of view of the second camera, and the first camera and the second camera are installed at different positions of the automatic driving device; obtain a first matching feature point set according to the recalibration reference information, where the first matching feature point set includes H groups of feature point pairs, each group of feature point pairs includes two matching feature points, one of which is a feature point extracted from an image collected by the first camera and the other a feature point extracted from an image collected by the second camera, and H is an integer not less than 8; determine the first relative pose between the first camera and the second camera according to the first matching feature point set; and send the first relative pose to the automatic driving device.
  • the embodiments of the present application also provide a computer-readable storage medium.
  • the above-mentioned computer-readable storage medium stores instructions, which when run on a computer, cause the computer to execute the relative pose calibration method provided in the foregoing embodiments.
• exemplarily, when run on a computer, the above instructions can implement: determining the first relative pose between the first camera and the second camera according to the first matching feature point set, where the first matching feature point set includes H groups of feature point pairs, each group includes two matching feature points, one extracted from the image collected by the first camera and the other from the image collected by the second camera, the field of view of the first camera overlaps the field of view of the second camera, the first camera and the second camera are installed at different positions of the automatic driving device, and H is an integer not less than 8; and updating the second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than the pose change threshold, where the second relative pose is the relative pose between the first camera and the second camera currently stored by the automatic driving device.
• exemplarily, when run on a computer, the above instructions can implement: receiving recalibration reference information from the automatic driving device, where the recalibration reference information is used by the server to determine the relative pose between the first camera and the second camera of the automatic driving device, the field of view of the first camera overlaps the field of view of the second camera, and the first camera and the second camera are installed at different positions of the automatic driving device; obtaining a first matching feature point set according to the recalibration reference information, where the first matching feature point set includes H groups of feature point pairs, each group includes two matching feature points, one extracted from an image collected by the first camera and the other from an image collected by the second camera, and H is an integer not less than 8; determining the first relative pose between the first camera and the second camera according to the first matching feature point set; and sending the first relative pose to the automatic driving device.
  • the embodiments of the present application provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the relative pose calibration method provided in the foregoing embodiments.

Abstract

A relative pose calibration method and a related apparatus applied to cameras on a self-driving car. The relative pose calibration method comprises: acquiring images by means of a first camera and a second camera; performing feature point matching on the image acquired by the first camera and the image acquired by the second camera, so as to obtain a first matching feature point set; determining a first relative pose between the first camera and the second camera according to the first matching feature point set; and updating a second relative pose to the first relative pose in cases where the difference between the first relative pose and the second relative pose is not greater than a pose change threshold, the second relative pose being a relative pose between the first camera and the second camera currently stored in the self-driving apparatus. The relative pose calibration method provided in the present application has high calibration accuracy and high reliability.

Description

A relative pose calibration method and related apparatus
Technical Field
This application relates to the field of autonomous driving, and more particularly to a method and related apparatus for calibrating the relative pose between cameras on an autonomous vehicle.
Background
Self-driving cars need a high perception capability over the full 360° environment around the vehicle body, and need stable and reliable perception results for different scenes, different lighting conditions, and obstacles at different distances. A single camera cannot achieve full 360° coverage due to factors such as camera parameters and installation position. Therefore, autonomous vehicles are generally equipped with multiple cameras to improve visual perception. Different cameras generally have different camera parameters (such as focal length, resolution, and dynamic range) and are installed at different positions on the vehicle body to obtain a more comprehensive perception result. For example, an autonomous vehicle is equipped with at least one camera in each of the forward, lateral, and rear directions to cover the 360° environment around the vehicle body. At the same time, different cameras have different fields of view, and adjacent cameras have partially overlapping fields of view.
The relative pose between the cameras on a self-driving car changes as service time increases, and the calibration parameters of each camera (that is, the camera extrinsic parameters) then need to be corrected; this correction process is called recalibration. As the number of vehicles with autonomous driving functions grows, the demand for recalibrating camera parameters also increases greatly, and an autonomous vehicle that cannot recalibrate the extrinsic parameters of its cameras in time poses a serious safety hazard. One currently used scheme for online recalibration of camera extrinsic parameters is as follows: the autonomous vehicle obtains prior position information characterizing the relative positions of known urban-construction landmarks distributed near the road, compares it with the landmark positions perceived by the sensors in real time, and then performs online calibration. However, this scheme has several drawbacks: it requires prior position information of the surrounding environment and easily fails when the prior is not satisfied, and a large amount of data must be collected to obtain the prior position information, which is labor-intensive. Therefore, a new multi-camera calibration scheme is needed.
Summary
The embodiments of the present application provide a relative pose calibration method that can accurately determine the relative pose between cameras with high reliability.
In a first aspect, an embodiment of the present application provides a relative pose calibration method. The method includes: capturing images with a first camera and a second camera, where the field of view of the first camera overlaps the field of view of the second camera and the two cameras are installed at different positions of an automatic driving device; performing feature point matching on the image captured by the first camera and the image captured by the second camera to obtain a first matching feature point set, where the first matching feature point set includes H groups of feature point pairs, each group includes two matched feature points, one extracted from the image captured by the first camera and the other extracted from the image captured by the second camera, and H is an integer not less than 8; determining a first relative pose between the first camera and the second camera according to the first matching feature point set; and, when the difference between the first relative pose and a second relative pose is not greater than a pose change threshold, updating the second relative pose to the first relative pose, the second relative pose being the relative pose between the first camera and the second camera currently stored by the automatic driving device.
Because the field of view of the first camera overlaps the field of view of the second camera, at least some feature points in the image captured by the first camera match feature points in the image captured by the second camera. The first matching feature point set is obtained by matching the feature points extracted from the image captured by the first camera against the feature points extracted from the image captured by the second camera. The automatic driving device may be a self-driving car (also called a driverless car), a drone, or any other device that needs to calibrate the relative pose between cameras. In the embodiment of the present application, the first relative pose between the first camera and the second camera is determined from the first matching feature point set, without depending on a specific site or landmark and without prior assumptions about the surrounding environment. Compared with online calibration schemes commonly used in the industry, the relative pose calibration method provided by the embodiments of the present application requires no prior assumptions and adapts better to the environment. In addition, the method can achieve a repeatability better than 0.1°, effectively reduces the number of recalibrations of the relative pose, and has high reliability.
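A minimal sketch of this flow, assuming OpenCV ORB features and brute-force Hamming matching; pose_difference is a caller-supplied, hypothetical metric for comparing two relative poses, not something specified by the embodiment:

    import cv2
    import numpy as np

    def build_matching_feature_point_set(img1, img2, min_pairs=8):
        """Match feature points between synchronously captured images of the two cameras."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        # Lowe's ratio test keeps only distinctive matches.
        good = [m[0] for m in matcher.knnMatch(des1, des2, k=2)
                if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
        if len(good) < min_pairs:   # H must be an integer not less than 8
            return None
        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
        return pts1, pts2

    def maybe_update_calibration(first_pose, second_pose, threshold, pose_difference):
        """Update the stored (second) relative pose only when the change is small."""
        if pose_difference(first_pose, second_pose) <= threshold:
            return first_pose    # small drift: accept the recalibrated pose
        return second_pose       # large jump: keep the stored pose (abnormal case)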
In an optional implementation, determining the first relative pose between the first camera and the second camera according to the first matching feature point set includes: determining the essential matrix between the first camera and the second camera; computing a 5-degree-of-freedom (DOF) relative pose between the first camera and the second camera from the singular value decomposition of the essential matrix; taking the ratio of a first distance to a second distance as a scale factor, where the first distance and the second distance are results of a non-visual sensor and a visual sensor on the automatic driving device measuring the same distance, and the visual sensor includes the first camera and/or the second camera; and combining the 5-DOF relative pose and the scale factor to obtain the first relative pose.
In this implementation, the 5-DOF relative pose between the two cameras is obtained by decomposing the essential matrix, and the 6-DOF relative pose is then obtained from the 5-DOF relative pose and the scale factor. No prior assumption about the surrounding environment is needed, no 3D information of the surrounding scene needs to be collected, and the workload is small.
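Because the essential-matrix decomposition fixes the translation only up to scale, the metric scale is recovered from the distance ratio above. A minimal sketch, assuming dist_nonvisual (e.g., a lidar range) and dist_visual (the same distance triangulated by the cameras) really measure the same physical distance:

    import numpy as np

    def apply_scale(R, t_unit, dist_nonvisual, dist_visual):
        """Merge the 5-DOF pose (R, unit translation) with the scale factor."""
        s = dist_nonvisual / dist_visual          # ratio of the two sensor readings
        t = s * (np.asarray(t_unit) / np.linalg.norm(t_unit))
        return R, t                               # full 6-DOF relative pose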
In an optional implementation, computing the 5-DOF relative pose between the first camera and the second camera from the singular value decomposition of the essential matrix includes: performing singular value decomposition on the essential matrix to obtain the decomposition result; obtaining at least two candidate relative poses between the first camera and the second camera from the decomposition result; computing the three-dimensional coordinate positions of the feature points in the first matching feature point set under each candidate relative pose; and taking, among the candidate relative poses, the one under which the three-dimensional coordinate positions of all feature points in the first matching feature point set lie in front of both the first camera and the second camera as the 5-DOF relative pose.
In this implementation, the 5-DOF relative pose between the first camera and the second camera can be determined accurately and quickly.
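A minimal sketch of this selection step, assuming the matched points have already been normalized by the intrinsics (x_norm = K⁻¹x) and using OpenCV only for triangulation; the four candidate decompositions and the in-front (cheirality) count are the standard treatment of an essential matrix:

    import numpy as np
    import cv2

    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])

    def pick_pose_from_essential(E, pts1_norm, pts2_norm):
        """Return the (R, t) candidate that places the triangulated points
        in front of both cameras (the cheirality check)."""
        U, _, Vt = np.linalg.svd(E)
        if np.linalg.det(U) < 0:
            U = -U
        if np.linalg.det(Vt) < 0:
            Vt = -Vt
        candidates = [(U @ W @ Vt,   U[:, 2]), (U @ W @ Vt,  -U[:, 2]),
                      (U @ W.T @ Vt, U[:, 2]), (U @ W.T @ Vt, -U[:, 2])]
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
        best = (-1, None, None)
        for R, t in candidates:
            P2 = np.hstack([R, t.reshape(3, 1)])
            X = cv2.triangulatePoints(P1, P2, pts1_norm.T, pts2_norm.T)
            X = X[:3] / X[3]                            # homogeneous -> Euclidean
            depth2 = (R @ X + t.reshape(3, 1))[2]       # depth in the second camera
            in_front = int(np.sum((X[2] > 0) & (depth2 > 0)))
            if in_front > best[0]:
                best = (in_front, R, t)
        return best[1], best[2]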
In an optional implementation, determining the first relative pose between the first camera and the second camera according to the first matching feature point set includes: iteratively solving a target equation to obtain the first relative pose. In the target equation, the parameters of the first relative pose are unknowns, while the feature point pairs in the first matching feature point set, the intrinsic parameters of the first camera, and the intrinsic parameters of the second camera are knowns.
In this implementation, the first relative pose is obtained by iteratively solving the target equation, so it can be computed quickly and accurately without computing the essential matrix.
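The embodiment does not spell out the target equation at this point; the sketch below assumes it is the epipolar constraint x2ᵀ K2⁻ᵀ [t]ₓ R K1⁻¹ x1 = 0 built from the matched pixel pairs and both intrinsic matrices, solved by nonlinear least squares over six pose parameters (translation up to scale):

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def skew(t):
        return np.array([[0.0,  -t[2],  t[1]],
                         [t[2],  0.0,  -t[0]],
                         [-t[1], t[0],  0.0]])

    def solve_target_equation(pts1, pts2, K1, K2):
        # Normalized homogeneous coordinates: each row is K^-1 @ [u, v, 1].
        x1 = np.hstack([pts1, np.ones((len(pts1), 1))]) @ np.linalg.inv(K1).T
        x2 = np.hstack([pts2, np.ones((len(pts2), 1))]) @ np.linalg.inv(K2).T

        def residuals(p):
            R = Rotation.from_rotvec(p[:3]).as_matrix()
            t = p[3:] / np.linalg.norm(p[3:])     # translation direction only
            E = skew(t) @ R
            # One epipolar residual x2^T E x1 per matched pair.
            return np.einsum('ij,jk,ik->i', x2, E, x1)

        p0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])  # crude initial guess
        sol = least_squares(residuals, p0)
        R = Rotation.from_rotvec(sol.x[:3]).as_matrix()
        return R, sol.x[3:] / np.linalg.norm(sol.x[3:])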
In an optional implementation, the automatic driving device is equipped with M cameras, the M cameras include the first camera, the second camera, and a third camera, the field of view of the third camera overlaps both the field of view of the first camera and the field of view of the second camera, and M is an integer greater than 2. The method further includes: obtaining M first rotation matrices, M second rotation matrices, M first translation matrices, and M second translation matrices, where the M first rotation matrices are rotation matrices between the first camera and the second camera and at least two of them differ, the M second rotation matrices are rotation matrices between the second camera and the third camera and at least two of them differ, the M first translation matrices are translation matrices between the first camera and the second camera and at least two of them differ, the M second translation matrices are translation matrices between the second camera and the third camera and at least two of them differ, the M first rotation matrices correspond one-to-one to the M first translation matrices, and the M second rotation matrices correspond one-to-one to the M second translation matrices; solving a first equation set to obtain a third rotation matrix, where the first equation set includes M first equations that correspond one-to-one to the M first rotation matrices and one-to-one to the M second rotation matrices, and in each first equation the first rotation matrix and the second rotation matrix are knowns and the third rotation matrix is the unknown, the third rotation matrix being the rotation matrix between the first camera and the third camera; solving a second equation set to obtain a third translation matrix, where the second equation set includes M second equations that correspond one-to-one to the M first rotation matrices and one-to-one to the M second rotation matrices, and in each second equation the first rotation matrix, the second rotation matrix, the first translation matrix, and the second translation matrix are knowns and the third translation matrix is the unknown, the third translation matrix being the translation matrix between the first camera and the third camera; and taking the pose consisting of the third rotation matrix and the third translation matrix as the relative pose between the first camera and the third camera.
In this implementation, the pairwise relative pose matrices of the multiple cameras form closed-loop equations, and these closed-loop equations are used to calibrate the pose between cameras without a common field of view and to perform an overall optimization, thereby obtaining more accurate relative camera poses.
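A minimal sketch of fusing the M closed-loop samples, assuming the convention that (R1_i, t1_i) maps camera-1 coordinates into camera-2 coordinates and (R2_i, t2_i) maps camera-2 into camera-3, so each sample yields R3 = R2_i R1_i and t3 = R2_i t1_i + t2_i; the averaging-plus-SVD projection below is one simple way to solve the two equation sets, not necessarily the embodiment's solver:

    import numpy as np

    def solve_closed_loop(R1s, t1s, R2s, t2s):
        # First equation set: every sample gives R3 = R2_i @ R1_i; average the M
        # estimates and project the mean back onto SO(3) with an SVD.
        R_mean = sum(R2 @ R1 for R1, R2 in zip(R1s, R2s)) / len(R1s)
        U, _, Vt = np.linalg.svd(R_mean)
        R3 = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
        # Second equation set: every sample gives t3 = R2_i @ t1_i + t2_i; the
        # least-squares solution over the M samples is simply their mean.
        t3 = sum(R2 @ t1 + t2 for R2, t1, t2 in zip(R2s, t1s, t2s)) / len(t1s)
        return R3, t3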
In an optional implementation, the first matching feature point set includes feature points extracted from at least two frames of images of the first camera and feature points extracted from at least two frames of images of the second camera. Determining the first relative pose between the first camera and the second camera according to the first matching feature point set includes: determining the relative pose between the first camera and the second camera from the feature point pairs in the first matching feature point set to obtain a first intermediate pose; substituting each feature point pair in the first matching feature point set into a first formula, which contains the first intermediate pose, to obtain the residual corresponding to each feature point pair; removing the interference feature point pairs, that is, the pairs whose residual is greater than a residual threshold, from the first matching feature point set to obtain a second matching feature point set; determining the relative pose between the first camera and the second camera from the feature point pairs in the second matching feature point set to obtain a second intermediate pose; and taking a target intermediate pose as the first relative pose between the first camera and the second camera, where the target intermediate pose is the relative pose between the first camera and the second camera determined from the feature point pairs in a target matching feature point set, and the number of feature point pairs in the target matching feature point set is smaller than a number threshold, or the ratio of the number of feature point pairs in the target matching feature point set to the number of feature point pairs in the first matching feature point set is smaller than a ratio threshold.
Optionally, the feature points in the first matching feature point set are matched feature points from a first image captured by the first camera and a second image captured by the second camera. When the number of matched feature point pairs in the first image and the second image is not less than 8, one estimate of the relative pose of the two cameras can be obtained. When the number of matched feature points in a single frame from the first camera and a single frame from the second camera is small, this estimate is unstable, and the influence of errors can be reduced by accumulating over multiple frames. In some embodiments, the automatic driving device extracts the feature points in the two images of the i-th frame, denoted x′_i and x_i respectively, and adds them to the feature point sets X1 and X2. After F frames, the feature point sets X1 and X2 store the matched feature points of the two images over the F frames (that is, the first matching feature point set), and X1 and X2 are used to estimate the relative pose of the two cameras. Since the number of feature points in X1 and X2 is far larger than the number in a single frame, an iterative optimization can be used: first, all points in X1 and X2 are used for pose estimation to obtain a first intermediate pose; then the first intermediate pose is substituted into formula (14) to compute the residuals, the feature points whose residuals are greater than the residual threshold are removed, and the remaining feature points are used to re-estimate the pose to obtain a second intermediate pose. This process is repeated until the number of feature points to be removed is smaller than a number threshold (for example, 5% of the total). At this point, an optimal solution under the set of F frames of data is obtained. It should be understood that the two images of one frame refer to one image captured by the first camera and one image captured by the second camera, where the interval between their capture times is smaller than a time threshold, for example 0.5 ms or 1 ms. According to the central limit theorem, assuming that over a long observation the R and t computed between different frames are independent and identically distributed, R and t will become stable as F increases; it suffices to choose F such that the variances of R and t meet the requirement.
In this implementation, the multi-frame accumulation method avoids the jitter caused by single-frame computation and makes the calibration result stable.
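A minimal sketch of this accumulate-and-prune loop; estimate_pose and residuals_fn are caller-supplied (for example, the solvers sketched earlier), and the 5% stopping ratio mirrors the example above:

    import numpy as np

    def robust_pose_from_frames(X1, X2, estimate_pose, residuals_fn,
                                residual_threshold, stop_ratio=0.05):
        """Accumulate matched points over F frames, then alternately estimate the
        pose and drop high-residual (interference) pairs until few remain."""
        X1, X2 = np.asarray(X1), np.asarray(X2)
        while True:
            pose = estimate_pose(X1, X2)            # first / second intermediate pose
            r = np.abs(residuals_fn(pose, X1, X2))  # e.g. the formula (14) residual
            bad = r > residual_threshold
            if bad.sum() < stop_ratio * len(X1):    # fewer than ~5% would be removed
                return pose
            X1, X2 = X1[~bad], X2[~bad]             # drop interference pairs, re-estimate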
In an optional implementation, the method further includes: outputting reminder information when the difference between the first relative pose and the second relative pose is greater than the pose change threshold, the reminder information being used to indicate that the relative pose between the first camera and the second camera is abnormal.
In this implementation, passengers can be reminded in time that the calibration parameters of the cameras are abnormal.
In a second aspect, an embodiment of the present application provides another relative pose calibration method. The method includes: a server receives recalibration reference information from an automatic driving device, the recalibration reference information being used by the server to determine the relative pose between a first camera and a second camera of the automatic driving device, where the field of view of the first camera overlaps the field of view of the second camera and the two cameras are installed at different positions of the automatic driving device; the server obtains a first matching feature point set according to the recalibration reference information, the first matching feature point set including H groups of feature point pairs, each group including two matched feature points, one extracted from an image captured by the first camera and the other extracted from an image captured by the second camera, where H is an integer not less than 8; the server determines a first relative pose between the first camera and the second camera according to the first matching feature point set; and the server sends the first relative pose to the automatic driving device.
Optionally, the recalibration reference information includes images synchronously captured by the first camera and the second camera, or the first matching feature point set. The server may store the intrinsic parameters of the first camera and of the second camera, or receive them from the automatic driving device, and determine the first relative pose between the first camera and the second camera according to the first matching feature point set.
In the embodiment of the present application, the automatic driving device does not need to calibrate the relative pose between the cameras itself; it only needs to send the data required for the calibration to the server, and the server recalibrates the relative pose between the cameras on the automatic driving device, which is efficient.
In an optional implementation, before determining the first relative pose between the first camera and the second camera according to the first matching feature point set, the method further includes: the server receives a pose calibration request from the automatic driving device, the pose calibration request being used to request recalibration of the relative pose between the first camera and the second camera of the automatic driving device and carrying the intrinsic parameters of the first camera and of the second camera. Determining the first relative pose then includes: determining the first relative pose between the first camera and the second camera according to the first matching feature point set, the intrinsic parameters of the first camera, and the intrinsic parameters of the second camera.
In a third aspect, an embodiment of the present application provides another relative pose calibration method. The method includes: an automatic driving device sends recalibration reference information to a server, the recalibration reference information being used by the server to determine the relative pose between a first camera and a second camera of the automatic driving device, where the field of view of the first camera overlaps the field of view of the second camera and the two cameras are installed at different positions of the automatic driving device; the automatic driving device receives the first relative pose from the server; and, when the difference between the first relative pose and a second relative pose is not greater than a pose change threshold, the automatic driving device updates the second relative pose to the first relative pose, the second relative pose being the relative pose between the first camera and the second camera currently stored by the automatic driving device. Optionally, the recalibration reference information includes a first matching feature point set, the first matching feature point set including H groups of feature point pairs, each group including two matched feature points, one extracted from an image captured by the first camera and the other extracted from an image captured by the second camera, where H is an integer not less than 8. Optionally, the recalibration reference information includes images synchronously captured by the first camera and the second camera.
In the embodiment of the present application, the automatic driving device does not need to calibrate the relative pose between the cameras itself; it only needs to send the data required for the calibration to the server, so the workload is small.
In an optional implementation, before the automatic driving device sends the recalibration reference information to the server, the method further includes: the automatic driving device sends a pose calibration request to the server, the pose calibration request being used to request recalibration of the relative pose between the first camera and the second camera of the automatic driving device and carrying the intrinsic parameters of the first camera and of the second camera.
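A minimal device-side sketch of this exchange; the endpoint paths, JSON field names, and the pose_difference helper are illustrative assumptions, not part of the embodiment:

    import requests  # transport is illustrative; any RPC mechanism would do

    def recalibrate_via_server(server_url, intrinsics1, intrinsics2,
                               reference_info, stored_pose, threshold, pose_difference):
        # Optional step: the pose calibration request carries both cameras' intrinsics.
        requests.post(f"{server_url}/pose-calibration-request",
                      json={"K1": intrinsics1, "K2": intrinsics2})
        # Send the recalibration reference information (matched points or raw images).
        resp = requests.post(f"{server_url}/recalibration-reference",
                             json={"reference": reference_info})
        first_pose = resp.json()["first_relative_pose"]
        # Same pose-change check as on-device: update only when the change is small.
        if pose_difference(first_pose, stored_pose) <= threshold:
            return first_pose
        return stored_pose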
In a fourth aspect, an embodiment of the present application provides an automatic driving device, including: an image acquisition unit configured to capture images with a first camera and a second camera, where the field of view of the first camera overlaps the field of view of the second camera and the two cameras are installed at different positions of the automatic driving device; an image feature point extraction unit configured to perform feature point matching on the image captured by the first camera and the image captured by the second camera to obtain a first matching feature point set, where the first matching feature point set includes H groups of feature point pairs, each group includes two matched feature points, one extracted from the image captured by the first camera and the other extracted from the image captured by the second camera, and H is an integer not less than 8; a pose calculation unit configured to determine a first relative pose between the first camera and the second camera according to the first matching feature point set; and a calibration parameter update unit configured to update a second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than a pose change threshold, the second relative pose being the relative pose between the first camera and the second camera currently stored by the automatic driving device.
In an optional implementation, the pose calculation unit is specifically configured to determine the essential matrix between the first camera and the second camera, and to compute a 5-DOF relative pose between the first camera and the second camera from the singular value decomposition of the essential matrix. The device further includes a scale calculation unit configured to take the ratio of a first distance to a second distance as a scale factor, where the first distance and the second distance are results of a non-visual sensor and a visual sensor on the automatic driving device measuring the same distance, and the visual sensor includes the first camera and/or the second camera. The pose calculation unit is further configured to combine the 5-DOF relative pose and the scale factor to obtain the first relative pose.
In an optional implementation, the pose calculation unit is specifically configured to: perform singular value decomposition on the essential matrix to obtain the decomposition result; obtain at least two candidate relative poses between the first camera and the second camera from the decomposition result; compute the three-dimensional coordinate positions of the feature points in the first matching feature point set under each candidate relative pose; and take, among the candidate relative poses, the one under which the three-dimensional coordinate positions of all feature points in the first matching feature point set lie in front of both the first camera and the second camera as the 5-DOF relative pose.
In an optional implementation, the pose calculation unit is specifically configured to iteratively solve a target equation to obtain the first relative pose. In the target equation, the parameters of the first relative pose are unknowns, while the feature point pairs in the first matching feature point set, the intrinsic parameters of the first camera, and the intrinsic parameters of the second camera are knowns.
In an optional implementation, the automatic driving device is equipped with M cameras, the M cameras include the first camera, the second camera, and a third camera, the field of view of the third camera overlaps both the field of view of the first camera and the field of view of the second camera, and M is an integer greater than 2. The device further includes a closed-loop optimization unit configured to: obtain M first rotation matrices, M second rotation matrices, M first translation matrices, and M second translation matrices, where the M first rotation matrices are rotation matrices between the first camera and the second camera and at least two of them differ, the M second rotation matrices are rotation matrices between the second camera and the third camera and at least two of them differ, the M first translation matrices are translation matrices between the first camera and the second camera and at least two of them differ, the M second translation matrices are translation matrices between the second camera and the third camera and at least two of them differ, the M first rotation matrices correspond one-to-one to the M first translation matrices, and the M second rotation matrices correspond one-to-one to the M second translation matrices; solve a first equation set to obtain a third rotation matrix, where the first equation set includes M first equations corresponding one-to-one to the M first rotation matrices and one-to-one to the M second rotation matrices, and in each first equation the first rotation matrix and the second rotation matrix are knowns and the third rotation matrix is the unknown, the third rotation matrix being the rotation matrix between the first camera and the third camera; solve a second equation set to obtain a third translation matrix, where the second equation set includes M second equations corresponding one-to-one to the M first rotation matrices and one-to-one to the M second rotation matrices, and in each second equation the first rotation matrix, the second rotation matrix, the first translation matrix, and the second translation matrix are knowns and the third translation matrix is the unknown, the third translation matrix being the translation matrix between the first camera and the third camera; and take the pose consisting of the third rotation matrix and the third translation matrix as the relative pose between the first camera and the third camera.
In an optional implementation, the first matching feature point set includes feature points extracted from at least two frames of images of the first camera and feature points extracted from at least two frames of images of the second camera. The pose calculation unit is specifically configured to: determine the relative pose between the first camera and the second camera from the feature point pairs in the first matching feature point set to obtain a first intermediate pose; substitute each feature point pair in the first matching feature point set into a first formula, which contains the first intermediate pose, to obtain the residual corresponding to each feature point pair; remove the interference feature point pairs, that is, the pairs whose residual is greater than a residual threshold, from the first matching feature point set to obtain a second matching feature point set; determine the relative pose between the first camera and the second camera from the feature point pairs in the second matching feature point set to obtain a second intermediate pose; and take a target intermediate pose as the first relative pose between the first camera and the second camera, where the target intermediate pose is the relative pose between the first camera and the second camera determined from the feature point pairs in a target matching feature point set, and the number of feature point pairs in the target matching feature point set is smaller than a number threshold, or the ratio of the number of feature point pairs in the target matching feature point set to the number of feature point pairs in the first matching feature point set is smaller than a ratio threshold.
In an optional implementation, the device further includes a reminder unit configured to output reminder information when the difference between the first relative pose and the second relative pose is greater than the pose change threshold, the reminder information being used to indicate that the relative pose between the first camera and the second camera is abnormal.
In a fifth aspect, an embodiment of the present application provides a server, including: an obtaining unit configured to obtain a first matching feature point set, the first matching feature point set including H groups of feature point pairs, each group including two matched feature points, one extracted from an image captured by a first camera and the other extracted from an image captured by a second camera, where the field of view of the first camera overlaps the field of view of the second camera, the two cameras are installed at different positions of an automatic driving device, and H is an integer not less than 8; a pose calculation unit configured to determine a first relative pose between the first camera and the second camera according to the first matching feature point set; and a sending unit configured to send the first relative pose to the automatic driving device.
In an optional implementation, the server further includes a receiving unit configured to receive a pose calibration request from the automatic driving device, the pose calibration request being used to request recalibration of the relative pose between the first camera and the second camera of the automatic driving device and carrying the intrinsic parameters of the first camera and of the second camera. The pose calculation unit is specifically configured to determine the first relative pose between the first camera and the second camera according to the first matching feature point set, the intrinsic parameters of the first camera, and the intrinsic parameters of the second camera.
In a sixth aspect, an embodiment of the present application provides an automatic driving device, including: a sending unit configured to send recalibration reference information to a server, the recalibration reference information being used by the server to determine the relative pose between a first camera and a second camera of the automatic driving device, where the field of view of the first camera overlaps the field of view of the second camera and the two cameras are installed at different positions of the automatic driving device; a receiving unit configured to receive the first relative pose from the server; and a calibration parameter update unit configured to update a second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than a pose change threshold, the second relative pose being the relative pose between the first camera and the second camera currently stored by the automatic driving device.
In an optional implementation, the sending unit is further configured to send a pose calibration request to the server, the pose calibration request being used to request recalibration of the relative pose between the first camera and the second camera of the automatic driving device and carrying the intrinsic parameters of the first camera and of the second camera.
In a seventh aspect, an embodiment of the present application provides a car that includes a memory and a processor, where the memory is configured to store a program and the processor is configured to execute the program stored in the memory; when the program is executed, the processor performs the method of the first aspect or the third aspect and their optional implementations.
In an eighth aspect, an embodiment of the present application provides a server that includes a memory and a processor, where the memory is configured to store a program and the processor is configured to execute the program stored in the memory; when the program is executed, the processor performs the method of the second aspect and its optional implementations.
In a ninth aspect, an embodiment of the present application provides a computer-readable storage medium that stores a computer program, the computer program including program instructions that, when executed by a processor, cause the processor to perform the methods of the first to third aspects and their optional implementations.
In a tenth aspect, an embodiment of the present application provides a chip that includes a processor and a data interface, where the processor reads instructions stored in a memory through the data interface and executes the methods of the first to third aspects and any of their optional implementations.
In an eleventh aspect, an embodiment of the present application provides a computer program product, the computer program product including program instructions that, when executed by a processor, cause the processor to perform the methods of the first to third aspects and any of their optional implementations.
Description of the Drawings
Fig. 1 is a functional block diagram of an automatic driving device provided by an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an automatic driving system provided by an embodiment of the present application;
Fig. 3 is a flowchart of a relative pose calibration method provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of the common field of view between cameras provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an automatic driving device provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of another automatic driving device provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of another automatic driving device provided by an embodiment of the present application;
Fig. 8 is a flowchart of another relative pose calibration method provided by an embodiment of the present application;
Fig. 9 is a flowchart of another relative pose calibration method provided by an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a server provided by an embodiment of the present application.
Detailed Description
The terms "first", "second", and "third" in the specification, claims, and drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. Furthermore, the terms "include" and "have" and any variants of them are intended to cover non-exclusive inclusion, for example, the inclusion of a series of steps or units. A method, system, product, or device need not be limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to the process, method, product, or device. "And/or" is used to indicate one or both of the two objects it connects; for example, "A and/or B" means A, B, or A+B.
As described in the background, the relative pose between the cameras on a self-driving car changes as service time increases, and the calibration parameters of each camera (that is, the camera extrinsic parameters) then need to be corrected. As the number of vehicles with autonomous driving functions grows, the demand for recalibrating camera parameters also increases greatly, and an autonomous vehicle that cannot recalibrate the extrinsic parameters of its cameras in time poses a serious safety hazard. The relative pose calibration method provided in the embodiments of the present application can be applied to autonomous driving scenarios. The driving scenarios are briefly introduced below.
Driving scenario 1: the automatic driving device determines the relative pose between two cameras with a common field of view according to matched feature points in images synchronously captured by the two cameras; when the difference between the newly determined relative pose and the relative pose between the two cameras currently calibrated by the automatic driving device is not greater than a pose change threshold, the currently calibrated relative pose is updated.
Driving scenario 2: the automatic driving device sends the server a pose calibration request and the information required to determine the relative pose between at least two cameras on the automatic driving device, the pose calibration request being used to request recalibration of the relative pose between the at least two cameras; the automatic driving device then updates its currently stored relative pose between the at least two cameras according to the relative pose received from the server.
Fig. 1 is a functional block diagram of an automatic driving device provided by an embodiment of the present application. In one embodiment, the automatic driving device 100 is configured in a fully or partially automatic driving mode. For example, the automatic driving device 100 can control itself while in the automatic driving mode, and can determine, through human operation, the current state of the automatic driving device 100 and its surrounding environment, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the possibility of the other vehicle performing that behavior, and control the automatic driving device 100 based on the determined information. When the automatic driving device 100 is in the automatic driving mode, it can be set to operate without human interaction.
The automatic driving device 100 may include various subsystems, such as a traveling system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, a power supply 110, a computer system 112, and a user interface 116. Optionally, the automatic driving device 100 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, the subsystems and elements of the automatic driving device 100 may be interconnected by wire or wirelessly.
The traveling system 102 may include components that provide powered motion for the automatic driving device 100. In one embodiment, the traveling system 102 may include an engine 118, an energy source 119, a transmission device 120, and wheels/tires 121. The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 118 converts the energy source 119 into mechanical energy.
Examples of the energy source 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other sources of electric power. The energy source 119 may also provide energy for other systems of the automatic driving device 100.
The transmission 120 can transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft. In an embodiment, the transmission 120 may also include other devices, such as a clutch. The drive shaft may include one or more axles that can be coupled to one or more wheels 121.
The sensor system 104 may include several sensors that sense information about the environment around the automatic driving device 100. For example, the sensor system 104 may include a positioning system 122 (which may be a global positioning system (GPS), a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may also include sensors that monitor the internal systems of the automatic driving device 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, speed, etc.). Such detection and recognition are key functions for the safe operation of the autonomous automatic driving device 100.
The positioning system 122 may be used to estimate the geographic location of the automatic driving device 100. The IMU 124 is used to sense position and orientation changes of the automatic driving device 100 based on inertial acceleration. In one embodiment, the IMU 124 may be a combination of an accelerometer and a gyroscope.
The radar 126 may use radio signals to sense objects in the surrounding environment of the automatic driving device 100.
The laser rangefinder 128 may use laser light to sense objects in the environment in which the automatic driving device 100 is located. In some embodiments, the laser rangefinder 128 may include one or more laser sources, a laser scanner, and one or more detectors, as well as other system components.
The camera 130 may be used to capture multiple images of the surrounding environment of the automatic driving device 100. The camera 130 may be a still camera or a video camera. The camera 130 may capture multiple images of the surrounding environment of the automatic driving device 100 in real time or periodically. The camera 130 includes at least two cameras whose fields of view overlap, that is, at least two cameras that have a common field of view.
The control system 106 controls the operation of the automatic driving device 100 and its components. The control system 106 may include various elements, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
The steering system 132 is operable to adjust the heading of the automatic driving device 100. For example, in one embodiment it may be a steering wheel system.
The throttle 134 is used to control the operating speed of the engine 118 and thereby control the speed of the automatic driving device 100.
The braking unit 136 is used to control the automatic driving device 100 to decelerate. The braking unit 136 may use friction to slow the wheels 121. In other embodiments, the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current. The braking unit 136 may also take other forms to slow the rotation speed of the wheels 121 and thereby control the speed of the automatic driving device 100.
The computer vision system 140 is operable to process and analyze the images captured by the camera 130 in order to recognize objects and/or features in the surrounding environment of the automatic driving device 100. The objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, automatic driving methods, structure from motion (SFM) algorithms, video tracking, and other computer vision technologies. In some embodiments, the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and so on. The computer vision system 140 may use the point cloud obtained by a lidar and the images of the surrounding environment obtained by the camera to locate the position of an obstacle.
The route control system 142 is used to determine the driving route of the automatic driving device 100. In some embodiments, the route control system 142 may combine data from the sensor 138, the GPS 122, and one or more predetermined maps to determine the driving route for the automatic driving device 100.
The obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of the automatic driving device 100.
Of course, in one example, the control system 106 may additionally or alternatively include components other than those shown and described. Alternatively, some of the components shown above may be removed.
The automatic driving device 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral devices 108. The peripheral devices 108 may include a wireless communication system 146, an onboard computer 148, a microphone 150, and/or a speaker 152.
In some embodiments, the peripheral devices 108 provide a means for a user of the automatic driving device 100 to interact with the user interface 116. For example, the onboard computer 148 may provide information to the user of the automatic driving device 100. The user interface 116 may also operate the onboard computer 148 to receive user input. The onboard computer 148 can be operated through a touchscreen. In other cases, the peripheral devices 108 may provide a means for the automatic driving device 100 to communicate with other devices located in the vehicle. For example, the microphone 150 may receive audio (e.g., voice commands or other audio input) from the user of the automatic driving device 100. Similarly, the speaker 152 may output audio to the user of the automatic driving device 100.
The wireless communication system 146 may communicate wirelessly with one or more devices directly or via a communication network. For example, the wireless communication system 146 may use 3G cellular communication, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system 146 may use WiFi to communicate with a wireless local area network (WLAN). In some embodiments, the wireless communication system 146 may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. Other wireless protocols, such as various vehicle communication systems, may also be used; for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
The power supply 110 may provide power to various components of the automatic driving device 100. In one embodiment, the power supply 110 may be a rechargeable lithium-ion or lead-acid battery. One or more battery packs of such batteries may be configured as a power supply to provide power to various components of the automatic driving device 100. In some embodiments, the power supply 110 and the energy source 119 may be implemented together, as in some all-electric vehicles.
Some or all of the functions of the automatic driving device 100 are controlled by the computer system 112. The computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as a data storage device 114. The computer system 112 may also be multiple computing devices that control individual components or subsystems of the automatic driving device 100 in a distributed manner.
The processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an ASIC or another hardware-based processor. Although FIG. 1 functionally illustrates the processor, the memory, and other elements of the computer system 112 in the same block, those of ordinary skill in the art should understand that the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard disk drive or another storage medium located in a housing different from that of the computer system 112. Therefore, a reference to a processor or computer will be understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only calculations related to the component-specific function.
In the various aspects described herein, the processor may be located remotely from the automatic driving device and communicate wirelessly with it. In other aspects, some of the operations in the processes described herein are executed on a processor arranged within the automatic driving device while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the data storage device 114 may contain instructions 115 (e.g., program logic) that may be executed by the processor 113 to perform various functions of the automatic driving device 100, including those described above. The data storage device 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripheral devices 108.
In addition to the instructions 115, the data storage device 114 may also store data, such as road maps, route information, and the position, orientation, speed, and other information of the vehicle. This information may be used by the automatic driving device 100 and the computer system 112 during operation of the automatic driving device 100 in autonomous, semi-autonomous, and/or manual modes.
The user interface 116 is used to provide information to or receive information from a user of the automatic driving device 100. Optionally, the user interface 116 may include one or more input/output devices within the set of peripheral devices 108, such as the wireless communication system 146, the onboard computer 148, the microphone 150, and the speaker 152.
The computer system 112 may control the functions of the automatic driving device 100 based on inputs received from the various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may use input from the control system 106 to control the steering unit 132 so as to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the automatic driving device 100 and its subsystems.
Optionally, one or more of the above components may be installed separately from or associated with the automatic driving device 100. For example, the data storage device 114 may exist partially or completely separate from the automatic driving device 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example. In practical applications, components in the above modules may be added or removed according to actual needs, and FIG. 1 should not be construed as a limitation on the embodiments of the present application.
A self-driving car traveling on a road, such as the automatic driving device 100 above, can recognize objects in its surrounding environment to determine an adjustment to its current speed. The objects may be other vehicles, traffic control equipment, or other types of objects. In some examples, each recognized object may be considered independently, and the respective characteristics of each object, such as its current speed, acceleration, and distance from the vehicle, may be used to determine the speed to which the self-driving car is to adjust.
Optionally, the automatic driving device 100, or a computing device associated with it (such as the computer system 112, the computer vision system 140, or the data storage device 114 in FIG. 1), may predict the behavior of a recognized object based on the characteristics of the object and the state of the surrounding environment (for example, traffic, rain, ice on the road, etc.). Optionally, the recognized objects depend on each other's behavior, so all recognized objects may also be considered together to predict the behavior of a single recognized object. The automatic driving device 100 can adjust its speed based on the predicted behavior of the recognized object. In other words, the self-driving car can determine, based on the predicted behavior of an object, what stable state the vehicle will need to adjust to (for example, accelerate, decelerate, or stop). In this process, other factors may also be considered to determine the speed of the automatic driving device 100, such as the lateral position of the automatic driving device 100 on the road it is traveling, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the self-driving car, the computing device may also provide instructions to modify the steering angle of the automatic driving device 100 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near the self-driving car (for example, cars in adjacent lanes on the road).
The automatic driving device 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, a trolley, or the like; the embodiments of the present invention impose no particular limitation.
The automatic driving device 100 may collect images in real time or periodically through the camera 130. Two or more cameras included in the camera 130 may collect images synchronously. Two cameras collecting images synchronously means that the difference between the moments at which the two cameras capture their images is less than a time threshold, for example, 10 ms. Optionally, the automatic driving device 100 sends the images collected by the camera 130 and the information needed to determine the relative pose between at least two cameras included in the camera 130 to the server, receives the relative pose from the server, and updates the relative pose between the at least two cameras included in the camera 130. Optionally, the automatic driving device 100 determines the relative pose between at least two cameras included in the camera 130 according to the images collected by the camera 130.
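As an illustration of the synchronization criterion above, the following is a minimal sketch; the function name and interface are illustrative assumptions rather than part of the described method, and the 10 ms figure is the example threshold from the text.

```python
# Minimal sketch of the synchronization test described above.
SYNC_THRESHOLD_MS = 10.0  # example threshold from the text: 10 ms

def is_synchronized(ts_cam1_ms: float, ts_cam2_ms: float,
                    threshold_ms: float = SYNC_THRESHOLD_MS) -> bool:
    """Treat two captures as one frame if their acquisition times
    differ by less than the threshold."""
    return abs(ts_cam1_ms - ts_cam2_ms) < threshold_ms
```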
FIG. 1 has introduced the functional block diagram of the automatic driving device 100; an automatic driving system 101 is introduced below. FIG. 2 is a schematic structural diagram of an automatic driving system provided by an embodiment of the application. FIG. 1 and FIG. 2 describe the automatic driving device 100 from different perspectives. As shown in FIG. 2, the computer system 101 includes a processor 103 coupled to a system bus 105. The processor 103 may be one or more processors, each of which may include one or more processor cores. A display adapter (video adapter) 107 may drive a display 109, which is coupled to the system bus 105. The system bus 105 is coupled to an input/output (I/O) bus 113 through a bus bridge 111. An I/O interface 115 is coupled to the I/O bus. The I/O interface 115 communicates with a variety of I/O devices, such as an input device 117 (e.g., a keyboard, a mouse, a touchscreen), a media tray 121 and multimedia interface, a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture still and motion digital video images), and an external USB interface 125. Optionally, the interface connected to the I/O interface 115 may be a USB interface.
The processor 103 may be any conventional processor, including a reduced instruction set computing ("RISC") processor, a complex instruction set computing ("CISC") processor, or a combination thereof. Optionally, the processor may be a dedicated device such as an application-specific integrated circuit ("ASIC"). Optionally, the processor 103 may be a neural-network processing unit (NPU) or a combination of a neural network processor and the conventional processors described above. Optionally, a neural network processor is mounted on the processor 103.
The computer system 101 can communicate with a software deployment server 149 through a network interface 129. The network interface 129 is a hardware network interface, such as a network card. The network 127 may be an external network, such as the Internet, or an internal network, such as an Ethernet or a virtual private network. Optionally, the network 127 may also be a wireless network, such as a WiFi network or a cellular network.
A hard disk drive interface is coupled to the system bus 105. The hard disk drive interface is connected to a hard disk drive. A system memory 135 is coupled to the system bus 105. The data running in the system memory 135 may include the operating system 137 and application programs 143 of the computer system 101.
The operating system includes a shell 139 and a kernel 141. The shell 139 is an interface between the user and the kernel of the operating system. The shell is the outermost layer of the operating system. The shell manages the interaction between the user and the operating system: it waits for the user's input, interprets the user's input to the operating system, and handles the output results of a wide variety of operating system operations.
The kernel 141 consists of those parts of the operating system that manage memory, files, peripherals, and system resources. Interacting directly with the hardware, the operating system kernel usually runs processes, provides inter-process communication, and provides CPU time slice management, interrupts, memory management, I/O management, and so on.
The application programs 143 include programs related to automatic driving, such as programs that manage the interaction between the automatic driving device and obstacles on the road, programs that control the driving route or speed of the automatic driving device, and programs that control the interaction between the automatic driving device 100 and other automatic driving devices on the road. The application programs 143 also exist on the system of a software deployment server 149. In one embodiment, when an application program 143 needs to be executed, the computer system 101 may download it from the software deployment server 149.
A sensor 153 is associated with the computer system 101. The sensor 153 is used to detect the environment around the computer system 101. For example, the sensor 153 can detect animals, cars, obstacles, crosswalks, and the like; further, the sensor can also detect the environment around such objects, for example the environment around an animal, such as other animals appearing around it, the weather conditions, and the brightness of the surroundings. Optionally, if the computer system 101 is located on an automatic driving device, the sensor may be a camera, a lidar, an infrared sensor, a chemical detector, a microphone, and so on. When activated, the sensor 153 senses information at preset intervals and provides the sensed information to the computer system 101 in real time or near real time. Optionally, the sensor may include a lidar, which may provide acquired point clouds to the computer system 101 in real time or near real time, that is, a series of acquired point clouds is provided to the computer system 101, with each acquired point cloud corresponding to a timestamp. Optionally, the camera provides acquired images to the computer system 101 in real time or near real time, with each frame of image corresponding to a timestamp. It should be understood that the computer system 101 can obtain an image sequence from the camera.
Optionally, in the various embodiments described herein, the computer system 101 may be located remotely from the automatic driving device and may communicate wirelessly with the automatic driving device. The transceiver 123 may send automatic driving tasks, sensor data collected by the sensor 153, and other data to the computer system 101, and may also receive control instructions sent by the computer system 101. The automatic driving device may execute the control instructions from the computer system 101 received via the transceiver and perform the corresponding driving operations. In other respects, some of the processes described herein are executed on a processor provided within the autonomous vehicle, while others are executed by a remote processor, including taking the actions required to perform a single maneuver.
The automatic driving device updates the relative poses between its deployed cameras in a timely manner during automatic driving. How the automatic driving device updates the relative poses between its deployed cameras will be detailed later. In addition, the relative pose calibration method provided by the embodiments of the present application can be applied not only to automatic driving devices but also to vehicles on which no automatic driving system is deployed. The relative pose calibration method provided by the embodiments of the present application is described below.
FIG. 3 is a flowchart of a relative pose calibration method provided by an embodiment of the application. As shown in FIG. 3, the method includes:
301. The automatic driving device collects images through a first camera and a second camera.
The field of view of the first camera and the field of view of the second camera overlap, and the first camera and the second camera are installed at different positions on the automatic driving device. The automatic driving device may be a self-driving car, a drone, or another device that needs to calibrate the relative pose between cameras.
302. The automatic driving device performs feature point matching on the images collected by the first camera and the images collected by the second camera to obtain a first matching feature point set.
The first matching feature point set includes H groups of feature point pairs, where each feature point pair includes two matched feature points, one extracted from an image collected by the first camera and the other extracted from an image collected by the second camera, and H is an integer not less than 8. Performing feature point matching on the images collected by the first camera and the second camera to obtain the first matching feature point set may consist of performing feature point matching on two frames of images synchronously collected by the first camera and the second camera. For example, the first camera collects image 1 at a first time point and the second camera collects image 2 at the first time point; the automatic driving device extracts the feature points in image 1 to obtain image feature point set 1 and extracts the feature points in image 2 to obtain image feature point set 2; then, the feature points in image feature point set 1 and image feature point set 2 are matched to obtain a matching feature point set (corresponding to the first matching feature point set).
In some embodiments, one way for the automatic driving device to implement step 302 is as follows: the automatic driving device uses the first camera and the second camera to synchronously collect images to obtain a first image sequence and a second image sequence, where the multiple images in the first image sequence correspond one-to-one to the images in the second image sequence; feature point matching is performed on the one-to-one corresponding images in the first image sequence and the second image sequence to obtain the first matching feature point set. For example, images 1 to 5 in the first image sequence correspond one-to-one to images 6 to 10 in the second image sequence. The automatic driving device may perform feature matching between the feature points extracted from image 1 and those extracted from image 6 and add the resulting feature point pairs to the first matching feature point set; perform feature matching between the feature points extracted from image 2 and those extracted from image 7 and add the resulting feature point pairs to the first matching feature point set; and so on. It should be understood that image 1 and image 6 are images collected synchronously by the first camera and the second camera (i.e., they correspond to the same frame), image 2 and image 7 are images collected synchronously by the first camera and the second camera (i.e., they correspond to the same frame), and so on. In practical applications, the automatic driving device may perform feature point matching on multiple frames of images synchronously collected by the first camera and the second camera and store the obtained feature point pairs in the first matching feature point set. One frame may include one image collected by the first camera and one image collected by the second camera.
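The patent does not prescribe a particular feature detector or matcher; the following minimal sketch of step 302 uses ORB features and cross-checked brute-force matching from OpenCV, with illustrative function and parameter names.

```python
import cv2

def match_feature_points(img1, img2, max_matches=200):
    """Sketch of step 302: extract and match feature points between an
    image from the first camera (img1) and the synchronously captured
    image from the second camera (img2)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Cross-checked brute-force matching on binary descriptors keeps
    # only mutually consistent pairs.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # Each returned pair couples a pixel x' from the first camera with
    # its match x from the second camera.
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
            for m in matches[:max_matches]]
```

Accumulating such pairs over several synchronously captured frames, as described above, fills the first matching feature point set.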
303. The automatic driving device determines a first relative pose between the first camera and the second camera according to the first matching feature point set.
The first matching feature point set includes H groups of feature point pairs, where each feature point pair includes two matched feature points, one extracted from an image collected by the first camera and the other extracted from an image collected by the second camera. The field of view of the first camera and the field of view of the second camera overlap, the first camera and the second camera are installed at different positions on the automatic driving device, and H is an integer not less than 8.
304. When the difference between the first relative pose and a second relative pose is not greater than a pose change threshold, the automatic driving device updates the second relative pose to the first relative pose.
The second relative pose is the relative pose between the first camera and the second camera currently stored by the automatic driving device. The pose change threshold may be the maximum deviation that still allows the multi-camera system to work normally, for example, a rotation angle deviation between the first relative pose and the second relative pose of no more than 0.5 degrees. The method flow in FIG. 3 describes the manner of determining the relative pose between a first camera and a second camera that have a common field of view. It should be understood that the automatic driving device may use a method flow similar to that in FIG. 3 to determine the relative pose between any two cameras with a common field of view. In the multi-camera vision system of the automatic driving device (corresponding to the camera 130 in FIG. 1), two cameras at different installation positions generally have a common field of view. According to the distribution of the common fields of view between the cameras, two cameras with a common field of view can form a pair (by default, the left camera first and the right camera second). FIG. 4 is a schematic diagram of common fields of view between cameras provided by an embodiment of the application. As shown in FIG. 4, the cameras with common fields of view include: camera 3 and camera 1, camera 3 and camera 2, and camera 1 and camera 2. FIG. 4 shows only the common field of view between camera 1 and camera 2. In some embodiments, the cameras in the multi-camera system may all be grouped in the above manner to obtain multiple camera pairs, that is, each two cameras with a common field of view form one group; then, the method flow in FIG. 3 is used to obtain the relative pose between each pair of cameras.
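A sketch of the update decision in step 304, assuming the pose difference is measured as the geodesic angle between the stored and the newly estimated rotation matrices; the 0.5-degree value is the example threshold given above, and the helper names are illustrative.

```python
import numpy as np

def rotation_angle_deg(R1: np.ndarray, R2: np.ndarray) -> float:
    """Geodesic angle (degrees) between two 3x3 rotation matrices."""
    cos_theta = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def maybe_update_pose(stored_R, stored_t, new_R, new_t,
                      angle_thresh_deg=0.5):
    """Step 304 sketch: adopt the newly calibrated relative pose only if
    it deviates from the stored one by no more than the threshold."""
    if rotation_angle_deg(stored_R, new_R) <= angle_thresh_deg:
        return new_R, new_t      # update the stored relative pose
    return stored_R, stored_t    # deviation too large: keep the old pose
```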
The relative pose calibration method provided by the embodiments of the present application can achieve a repeatability of better than 0.1°, effectively reduces the number of relative pose recalibrations, and offers high reliability.
FIG. 3 does not detail how to determine the first relative pose between the first camera and the second camera according to the first matching feature point set. Some optional implementations of step 303 are described below.
Method 1
According to the first matching feature point set, the automatic driving device determines the first relative pose between the first camera and the second camera as follows: determine the essential matrix between the first camera and the second camera; calculate the 5-degree-of-freedom (degree of freedom, DOF) relative pose between the first camera and the second camera from the singular value decomposition result of the essential matrix; take the ratio between a first distance and a second distance as a scale factor, where the first distance and the second distance are the results obtained when a non-visual sensor (for example, a lidar) and a visual sensor (for example, the camera 130) on the automatic driving device measure the same distance, the visual sensor including the first camera and/or the second camera; and merge the 5-DOF relative pose and the scale factor to obtain the first relative pose.
Exemplarily, the automatic driving device may determine the essential matrix between the first camera and the second camera using the following steps:
1) Obtain, from the first matching feature point set, a feature point x′ extracted from an image collected by the first camera and the matching feature point x extracted from an image collected by the second camera, for example x = [x y 1]ᵀ, x′ = [x′ y′ 1]ᵀ.
From the properties of the fundamental matrix between two cameras, the following formula holds:
x′ᵀFx = 0  (1);
where F denotes the fundamental matrix between the first camera and the second camera:
$F = \begin{bmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{bmatrix}$
Expanding equation (1) gives:
$x'x f_{11} + x'y f_{12} + x' f_{13} + y'x f_{21} + y'y f_{22} + y' f_{23} + x f_{31} + y f_{32} + f_{33} = 0$  (2);
Writing the above equation in matrix form gives:
$(x'x,\ x'y,\ x',\ y'x,\ y'y,\ y',\ x,\ y,\ 1)\,\mathbf{f} = 0$  (3);
where $\mathbf{f} = [f_{11}\ f_{12}\ f_{13}\ f_{21}\ f_{22}\ f_{23}\ f_{31}\ f_{32}\ f_{33}]^T$.
For g groups of feature point pairs, the following matrix equation is obtained:
$\begin{bmatrix} x'_1x_1 & x'_1y_1 & x'_1 & y'_1x_1 & y'_1y_1 & y'_1 & x_1 & y_1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x'_gx_g & x'_gy_g & x'_g & y'_gx_g & y'_gy_g & y'_g & x_g & y_g & 1 \end{bmatrix} \mathbf{f} = 0$  (4);
2) Solving the above matrix equation (4) yields the linear solution of the fundamental matrix F.
3) Perform singular value decomposition of the fundamental matrix F using the following formula:
F = UDVᵀ  (5);
where D is a diagonal matrix (all entries off the diagonal are 0) whose diagonal entries, denoted r, s, and t, satisfy D = diag(r, s, t) with r ≥ s ≥ t. Setting t = 0 gives F′ = U diag(r, s, 0) Vᵀ; F′ is an optimal estimate of the fundamental matrix F that satisfies the singularity constraint.
4) Calculate the essential matrix E using the conversion relationship between the fundamental matrix and the essential matrix:
E = K′ᵀFK  (6);
where K is the intrinsic parameter matrix of the first camera and K′ is the intrinsic parameter matrix of the second camera, both known quantities. The essential matrix E is obtained from equation (6).
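Steps 1) to 4) amount to a classical linear (eight-point) estimation of F followed by the conversion of equation (6). Below is a minimal numpy sketch under the document's conventions (x′ from the first camera, x from the second camera, K and K′ their intrinsic matrices); in practice the pixel coordinates would usually also be normalized (Hartley normalization) before solving, which the text does not discuss.

```python
import numpy as np

def estimate_essential_matrix(pts1, pts2, K1, K2):
    """Linear estimate of F from H >= 8 point pairs, rank-2 enforcement,
    then E = K2^T F K1 as in eq. (6). pts1 holds (x', y') pixels from
    the first camera, pts2 holds (x, y) pixels from the second camera."""
    xp, yp = pts1[:, 0], pts1[:, 1]
    x, y = pts2[:, 0], pts2[:, 1]
    # One row per point pair, matching the coefficient matrix of eq. (4).
    A = np.column_stack([xp * x, xp * y, xp,
                         yp * x, yp * y, yp,
                         x, y, np.ones_like(x)])
    # The least-squares solution f is the right singular vector
    # associated with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Singularity constraint of step 3): zero the smallest singular value.
    U, D, Vt = np.linalg.svd(F)
    F = U @ np.diag([D[0], D[1], 0.0]) @ Vt
    # Eq. (6): convert the fundamental matrix into the essential matrix.
    return K2.T @ F @ K1
```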
Optionally, the automatic driving device calculates the 5-DOF relative pose between the first camera and the second camera from the singular value decomposition result of the essential matrix using the following steps:
(1) Perform singular value decomposition of the essential matrix using the following formula to obtain the singular value decomposition result:
$E = U\,\mathrm{diag}(1, 1, 0)\,V^T$  (7);
(2) Obtain, from the singular value decomposition result, at least two candidate relative poses between the first camera and the second camera.
Assume the pose of the first camera is P = Rt[I 0], where Rt is the transformation matrix of the first camera relative to the vehicle-body coordinate system and I is the 3×3 identity matrix. The four possible forms of the pose P′ of the second camera are:
$P'_1 = [UWV^T \mid +u_3]$  (8);
$P'_2 = [UWV^T \mid -u_3]$  (9);
$P'_3 = [UW^TV^T \mid +u_3]$  (10);
$P'_4 = [UW^TV^T \mid -u_3]$  (11);
where $W = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ and $u_3$ is the third column of U.
(3) Use triangulation to range each feature point in the first matching feature point set, and take as the pose of the second camera the candidate P′ for which the three-dimensional coordinate positions of all feature points lie in front of both the first camera and the second camera.
Using triangulation to range each feature point in the first matching feature point set may be: using the triangulation formula to determine one three-dimensional space coordinate from each feature point pair in the first matching feature point set. The three-dimensional space coordinate computed from one feature point pair is the spatial coordinate corresponding to the two feature points included in that pair. Triangulation was first proposed by Gauss and applied in surveying. Simply put: the same three-dimensional point P(x, y, z) is observed from different positions; given the two-dimensional projections X1(x1, y1) and X2(x2, y2) of the point observed at those positions, the triangle relationship is used to recover the depth information of the three-dimensional point, that is, its three-dimensional space coordinates. Triangulation computes the three-dimensional coordinates of feature points in the camera coordinate system from the matched feature points (i.e., pixel points). Once the expression form of P′ is determined, since the translation part of P′ (its last column) is determined by ±u₃, and u₃ is a normalized unit vector, the translation part of P′ differs from the true value by a scale factor. At this point, the 5-DOF pose between the two cameras is determined. In this way, the 5-DOF relative pose between the first camera and the second camera can be determined accurately and quickly.
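A sketch of steps (1) to (3): the four candidate decompositions and the triangulation-based selection, following the standard essential-matrix decomposition with the W matrix and u₃ as above. The selection loop counts points in front of both cameras, a common robust variant of the all-points criterion stated in step (3); inputs are assumed to be normalized image coordinates (pixel coordinates premultiplied by K⁻¹), and all names are illustrative.

```python
import numpy as np

W = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])

def candidate_poses(E):
    """The four candidate [R | t] decompositions of eqs. (8)-(11)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U @ W @ Vt) < 0:   # keep proper rotations (det = +1)
        Vt = -Vt
    u3 = U[:, 2]                         # unit-norm translation direction
    return [(U @ W @ Vt, u3), (U @ W @ Vt, -u3),
            (U @ W.T @ Vt, u3), (U @ W.T @ Vt, -u3)]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one pair of normalized points."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def select_pose(E, pts1_norm, pts2_norm):
    """Keep the candidate that places the points in front of both
    cameras (ideally all of them, as stated in step (3))."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    best, best_count = None, -1
    for R, t in candidate_poses(E):
        P2 = np.hstack([R, t.reshape(3, 1)])
        count = 0
        for x1, x2 in zip(pts1_norm, pts2_norm):
            X = triangulate(P1, P2, x1, x2)
            if X[2] > 0 and (R @ X + t)[2] > 0:  # positive depth in both
                count += 1
        if count > best_count:
            best, best_count = (R, t), count
    return best
```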
The relative pose between two cameras has 6 degrees of freedom in total. Decomposing the essential matrix yields the 5-DOF relative pose between the two cameras, i.e. $[R \mid \hat{t}]$ with $\|\hat{t}\| = 1$. The relative rotation angle between the two cameras is obtained exactly through R, while the relative position is obtained only as a normalized result, i.e. it differs from the true value by a scale factor. Determining the scale factor relies on the output of other sensors. Optionally, let s be a measured distance obtained by some non-visual sensor; using the obtained relative camera pose, the same measured distance s′ can be obtained by multi-camera three-dimensional reconstruction. The scale factor is then λ = s/s′, and the 6-DOF relative pose between the two cameras is $[R \mid \lambda\hat{t}]$.
One feasible solution for determining the scale factor is as follows: use a lidar or millimeter-wave radar to measure the distance s between some target and the automatic driving device, and at the same time use the cameras with binocular (or multi-view) vision measurement to obtain the distance s′ from the same target to the automatic driving device, thereby obtaining the scale factor.
Another feasible solution for determining the scale factor is as follows: use an inertial measurement unit (IMU) or a wheel speed meter to measure the moving distance s of the automatic driving device, and at the same time use binocular (or multi-view) vision measurement to obtain the difference s′ between a first distance, between a stationary target and the automatic driving device at a first moment, and a second distance, between the stationary target and the automatic driving device at a second moment; the scale factor is obtained in the same way. The first moment is the starting moment of the movement over distance s, and the second moment is the ending moment of the movement over distance s.
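Both schemes reduce to the same arithmetic: the ratio of the distance reported by the non-visual sensor to the distance reconstructed visually with the normalized translation. A minimal sketch with illustrative names:

```python
import numpy as np

def scale_factor(s_nonvisual: float, s_visual: float) -> float:
    """Lambda = s / s': ratio of the distance measured by a non-visual
    sensor (lidar, radar, IMU, or wheel speed meter) to the same
    distance obtained by multi-camera visual reconstruction."""
    return s_nonvisual / s_visual

def to_6dof(R: np.ndarray, t_unit: np.ndarray, lam: float):
    """Merge the 5-DOF pose [R | t_unit] and the scale factor into the
    6-DOF relative pose [R | lam * t_unit]."""
    return R, lam * t_unit
```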
In this implementation, the 5-DOF relative pose between the two cameras is obtained by decomposing the essential matrix, and the 6-DOF relative pose is then obtained from the 5-DOF relative pose and the scale factor. No prior assumptions about the surrounding environment are required, no 3D information of the surrounding scene needs to be collected, and the workload is small.
Method 2
According to the first matching feature point set, the automatic driving device determines the first relative pose between the first camera and the second camera as follows: iteratively solve a target equation to obtain the first relative pose. In the target equation, the parameters included in the first relative pose are the unknowns, while the feature point pairs in the first matching feature point set, the intrinsic parameters of the first camera, and the intrinsic parameters of the second camera are known quantities.
Assume that in the first matching feature point set, the feature point extracted from an image collected by the first camera is x′ and the feature point extracted from an image collected by the second camera is x, for example x = [x y 1]ᵀ, x′ = [x′ y′ 1]ᵀ.
The essential matrix E between the first camera and the second camera satisfies the following equation:
$K'^T F K = [t]_\times R$  (12);
Both sides of the equal sign are expressions of E, where K is the intrinsic parameter matrix of the first camera, K′ is the intrinsic parameter matrix of the second camera,
$[t]_\times = \begin{bmatrix} 0 & -t_3 & t_2 \\ t_3 & 0 & -t_1 \\ -t_2 & t_1 & 0 \end{bmatrix}$,
R denotes the rotation matrix between the first camera and the second camera, and t denotes the translation vector between the first camera and the second camera, t = [t₁ t₂ t₃]ᵀ. The fundamental matrix F between the first camera and the second camera satisfies the following equation:
x′ᵀFx = 0  (13);
Using formula (12) to eliminate F in formula (13) gives the following target equation:
$x'^T K'^{-T} [t]_\times R K^{-1} x = 0$  (14);
where K is the intrinsic parameter matrix of the first camera and K′ is the intrinsic parameter matrix of the second camera, both known quantities. The target equation contains 6 parameters in total (i.e., the 6 parameters that constitute the relative pose), denoted J, with gradient ∇J.
Optionally, the gradient descent method iteratively solves the target equation with the following update to obtain the first relative pose:
$J \leftarrow J - \alpha \nabla J$  (15);
where α is the step-size parameter, which may be set empirically. Substituting x′ and x into formula (15), J is optimized and updated. This step does not depend on the number of x′ and x points; even if the first matching feature point set contains very few feature points (for example, in special scenes such as tunnels), J can still be optimized until a stable result is obtained.
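A minimal sketch of Method 2, parameterizing J as three Euler angles plus a translation (one possible 6-parameter form; the text leaves the parameterization and the gradient open, so a numerical gradient is used here). The translation is renormalized after each step, since the epipolar residual of eq. (14) is homogeneous in t and would otherwise admit the trivial solution t = 0; the step size α is set arbitrarily, consistent with the empirical setting mentioned above.

```python
import numpy as np

def skew(t):
    return np.array([[0., -t[2], t[1]],
                     [t[2], 0., -t[0]],
                     [-t[1], t[0], 0.]])

def rot_from_euler(rx, ry, rz):
    """Rotation from three Euler angles (an assumed parameterization)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def cost(J, pts1_h, pts2_h, K1, K2):
    """Sum of squared epipolar residuals of eq. (14); pts1_h/pts2_h are
    homogeneous pixel coordinates x' and x."""
    R, t = rot_from_euler(*J[:3]), J[3:]
    M = np.linalg.inv(K2).T @ skew(t) @ R @ np.linalg.inv(K1)
    return float(sum((xp @ M @ x) ** 2 for xp, x in zip(pts1_h, pts2_h)))

def solve_pose(pts1_h, pts2_h, K1, K2, alpha=1e-8, iters=2000, eps=1e-7):
    """Gradient descent of eq. (15): J <- J - alpha * grad J."""
    J = np.zeros(6)
    J[3] = 1.0  # arbitrary nonzero initial translation direction
    for _ in range(iters):
        c0, grad = cost(J, pts1_h, pts2_h, K1, K2), np.zeros(6)
        for k in range(6):          # forward-difference gradient
            Jp = J.copy()
            Jp[k] += eps
            grad[k] = (cost(Jp, pts1_h, pts2_h, K1, K2) - c0) / eps
        J -= alpha * grad
        J[3:] /= np.linalg.norm(J[3:])  # avoid the trivial solution t = 0
    return J
```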
In this implementation, the first relative pose is obtained by iteratively solving the target equation, so the first relative pose can be computed quickly and accurately without computing the essential matrix.
In some embodiments, after the automatic driving device has obtained the relative pose between two cameras with a common field of view (for example, by executing the method flow in FIG. 3), it may construct a system of equations from the closed loop formed by the pairwise relative poses of the camera pairs with common fields of view and perform overall optimization. Camera 1 (i.e., the first camera), camera 2 (i.e., the second camera), and camera 3 (i.e., the third camera) are taken as an example below to introduce how to optimize the obtained pairwise relative poses between cameras.
Suppose the pose between camera i and camera j is Rt_ij = [R_ij | t_ij], where Rt_ij denotes the transformation matrix from the coordinate system of camera i to that of camera j, R_ij denotes the rotation matrix between camera i and camera j, and t_ij denotes the translation matrix between camera i and camera j. Exemplarily, the pose relationship among camera 1, camera 2, and camera 3 is:
Rt₃₁Rt₂₃ = Rt₂₁  (16);
The three Rt form a closed loop, and better results can be obtained through overall optimization. All three transformation matrices can be computed separately through the common-field-of-view relationship (for example, using the method flow in FIG. 3); therefore, formula (16) is a closed-loop equation. Better results are obtained by optimizing the above equation. For a system composed of more cameras, as long as each pair of adjacent cameras has a common field of view and the pairs form a closed loop, the above approach can be used for optimization. For cameras a, b, c, …, z in which every two adjacent cameras share a common field of view, the closed-loop equation is Rt_ab Rt_bc Rt_cd ⋯ Rt_yz Rt_za = I. The following describes how to use the closed-loop equations formed by the transformation matrices between cameras to optimize the relative poses between the cameras.
Taking camera 1 (i.e., the first camera), camera 2 (i.e., the second camera), and camera 3 (i.e., the third camera) as an example, the closed-loop equation formed by these three cameras is formula (16). Expanding formula (16) yields the following two equations:
R₃₁R₂₃ = R₂₁  (17);
R₃₁t₂₃ + t₃₁ = t₂₁  (18);
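Equations (17) and (18) are simply the block expansion of (16) when each Rt is written as a 4×4 homogeneous transform; the following small sketch (illustrative helpers) makes this explicit and can be used to check a calibrated loop numerically.

```python
import numpy as np

def to_homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack [R | t] into a 4x4 homogeneous transform Rt."""
    Rt = np.eye(4)
    Rt[:3, :3], Rt[:3, 3] = R, t
    return Rt

def loop_closes(Rt31, Rt23, Rt21, tol=1e-6) -> bool:
    """Check eq. (16): Rt31 @ Rt23 should equal Rt21. Multiplying the
    blocks out gives exactly eqs. (17) and (18):
    R31 @ R23 = R21 and R31 @ t23 + t31 = t21."""
    return np.allclose(Rt31 @ Rt23, Rt21, atol=tol)
```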
It should be understood that if R₂₃ and R₂₁ are known, R₃₁ can be obtained; t₃₁ can then be obtained using t₂₁, t₂₃, and R₃₁. In some embodiments, Rt₃₁ may be treated as the unknown quantity, Rt₂₃ is the relative pose obtained from matched feature points in a group of images (corresponding to one frame) captured synchronously by camera 2 and camera 3, and Rt₂₁ is the relative pose obtained from matched feature points in a group of images captured synchronously by camera 2 and camera 1. From formula (17) and formula (18), the following equations can be obtained:
$\hat{R}_{31} = \arg\min_{R_{31}} \left\| R_{31}R_{23} - R_{21} \right\|^2$  (19);
$\hat{t}_{31} = \arg\min_{t_{31}} \left\| R_{31}t_{23} + t_{31} - t_{21} \right\|^2$  (20);
Exemplarily, R_21 is the first rotation matrix, R_23 is the second rotation matrix, t_21 is the first translation matrix, and t_23 is the second translation matrix. Assuming that the autonomous driving device obtains M instances of Rt_23 and M instances of Rt_21 from different groups of images, the following systems of equations can be obtained from formula (19) and formula (20):
min over R_31 of Σ_{m=1}^{M} ‖ R_31 · R_23^(m) − R_21^(m) ‖²    (21)
min over t_31 of Σ_{m=1}^{M} ‖ R_31 · t_23^(m) + t_31 − t_21^(m) ‖²    (22)
From the system of equations (21), R_21 and R_23 are known quantities and the objective depends only on R_31, so the optimal solution of R_31 can be obtained by minimizing the summed residuals with Newton's method or gradient descent. Here, the system of equations (21) is the first system of equations, and R_31 is the third rotation matrix. From the system of equations (22), t_21 and t_23 are known quantities; once R_31 is known, the objective depends only on t_31, and solving the minimization yields the optimal solution of t_31. Here, the system of equations (22) is the second system of equations, and t_31 is the third translation matrix. In some embodiments, the autonomous driving device may determine one Rt_21 from the two images (corresponding to one frame) captured synchronously by camera 1 and camera 2. It should be understood that the autonomous driving device can determine M instances of Rt_21 from M groups of synchronously captured images (corresponding to M frames); in the same way, it can determine M instances of Rt_23.
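The passage above names Newton's method or gradient descent; as a hedged alternative sketch (an assumption, not the patent's stated procedure), both systems also admit closed-form least-squares solutions when the rotation residual is taken in the Frobenius norm, with R_31 given by an orthogonal Procrustes problem and t_31 by averaging:

```python
import numpy as np

def solve_R31(R23_list, R21_list):
    """Closed-form least-squares solution of system (21),
    min over R_31 of sum_m ||R_31 @ R23_m - R21_m||_F^2,
    via the orthogonal Procrustes (Kabsch) construction."""
    M = sum(R23 @ R21.T for R23, R21 in zip(R23_list, R21_list))
    U, _, Vt = np.linalg.svd(M)
    R31 = Vt.T @ U.T
    if np.linalg.det(R31) < 0:        # enforce a proper rotation (det = +1)
        Vt[-1, :] *= -1
        R31 = Vt.T @ U.T
    return R31

def solve_t31(R31, t23_list, t21_list):
    """Closed-form least-squares solution of system (22),
    min over t_31 of sum_m ||R_31 @ t23_m + t_31 - t21_m||^2:
    the mean of the per-frame residual translations."""
    return np.mean([t21 - R31 @ t23
                    for t23, t21 in zip(t23_list, t21_list)], axis=0)
```

Here R23_list, R21_list, t23_list, and t21_list hold the M per-frame estimates; the closed form avoids iteration, while the iterative solvers named above apply equally.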
Optionally, Rt_31 may then be regarded as a known quantity: substituting the result obtained in the previous step into formulas (17) and (18), constructing systems of equations in a similar way, and treating Rt_21 as the quantity to be optimized yields the optimal solution of Rt_21 given Rt_31.
Optionally, regarding both Rt_31 and Rt_21 as known quantities, constructing systems of equations in a similar way and treating Rt_23 as the quantity to be optimized yields the optimal solution of Rt_23. At this point, one round of closed-loop optimization is complete. It should be understood that the main principle of this method is as follows: first, the rotation matrix between one pair of cameras with a common field of view is treated as the unknown, the rotation and translation matrices between the other pairs of cameras with a common field of view are treated as known, and the unknown is solved through the system of equations (21); then the translation matrix between that pair of cameras is treated as the unknown, while the rotation and translation matrices between the other pairs, as well as the rotation matrix between that pair, are treated as known, and the unknown is solved through the system of equations (22).
It should be understood that the camera numbering is arbitrary, so the order of the cameras can be permuted and the above steps repeated to obtain multiple different solutions for the poses between the cameras, from which the solution with the smallest error is selected as the output. Taking four cameras as an example, denoted A, B, C, and D, and assuming every two of them have a common field of view, the closed-loop equation can take many forms, such as Rt_AD · Rt_DC · Rt_CB · Rt_BA = I or Rt_AD · Rt_DB · Rt_BC · Rt_CA = I. Solving the different closed-loop equations gives different solutions, and the best one is selected. The error is the magnitude of the residual of the equation.
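A minimal sketch of this selection step, assuming pair_Rt is a dictionary mapping an ordered camera pair (a, b) to its estimated 4×4 transform Rt_ab (the data layout is an illustrative assumption):

```python
import itertools
import numpy as np

def best_loop_order(pair_Rt, cameras):
    """Evaluate every cyclic ordering of the cameras and return the ordering
    whose composed loop deviates least (in Frobenius norm) from the identity."""
    best_res, best_order = float("inf"), None
    first = cameras[0]
    for rest in itertools.permutations(cameras[1:]):
        order = (first,) + rest
        acc = np.eye(4)
        for a, b in zip(order, order[1:] + (first,)):   # close the loop
            acc = acc @ pair_Rt[(a, b)]
        res = float(np.linalg.norm(acc - np.eye(4)))
        if res < best_res:
            best_res, best_order = res, order
    return best_order, best_res
```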
In practical applications, after obtaining the relative poses between multiple pairs of cameras, the autonomous driving device can optimize them in a similar way, so as to obtain relative poses with higher accuracy.
The foregoing embodiments describe determining the relative pose between the first camera and the second camera from two images captured synchronously by the two cameras. When the number of matched feature point pairs in the two synchronously captured images is not less than 8, a fairly accurate relative pose can be obtained. However, when the number of matched feature point pairs is less than 8, an accurate relative pose cannot be obtained, i.e., the pose estimation is unstable. The following introduces a way of determining the relative pose between two cameras by accumulating multiple frames, through which a more accurate relative pose can be obtained. One such multi-frame accumulation procedure is as follows:
1) Extract the feature points in the two images of the i-th frame, denoted x′_i and x_i respectively, and add them to the feature point set X_1 and the feature point set X_2.

The two images of the i-th frame include one image captured by the first camera in the i-th frame and one image captured by the second camera in the i-th frame. Extracting the feature points in the two images of the i-th frame may be: extracting the feature points in the image captured by the first camera in the i-th frame to obtain x′_i, and extracting the feature points in the image captured by the second camera in the i-th frame to obtain x_i, where i is an integer greater than 0. It should be understood that the two images in one frame include one image captured by the first camera and one image captured by the second camera, and the interval between the capture times of the two images in this frame is less than a time threshold, for example, 0.5 ms or 1 ms.
2) Extract the feature points in the two images of the j-th frame, denoted x′_j and x_j respectively, and add them to the feature point set X_1 and the feature point set X_2.

Here j is an integer greater than 1 and not equal to i. It should be understood that after the feature points in the two images of N frames have been extracted in this way, the feature point set X_1 and the feature point set X_2 store the matched feature points from the two images of the N frames.
3) Use the feature point set X_1 and the feature point set X_2 to estimate the relative pose of the two cameras.

Optionally, one implementation of step 3) may be: matching the feature points in the feature point set X_1 against the feature points in the feature point set X_2 to obtain multiple groups of feature point pairs, and using these feature point pairs to determine the relative pose between the two cameras. The way of determining the relative pose from these feature point pairs may be similar to the implementation of step 303.
Since the number of feature points in the feature point sets X_1 and X_2 is far greater than that of a single frame, the relative pose of the two cameras can be estimated by iterative optimization: first, perform pose estimation with all the points in X_1 and X_2 to obtain a first intermediate pose; then substitute the first intermediate pose into formula (14) to compute the residuals, and remove the feature points whose residual is greater than a residual threshold; perform pose estimation again with the remaining feature points to obtain a second intermediate pose; and repeat this process until the number of feature points to be removed is smaller than a quantity threshold (for example, 5% of the total). The unit of formula (14) is pixels, and the residual threshold may be 0.8, 1, 1.2, etc., which is not limited in this application. In this way, a fairly accurate relative pose over multiple frames of images is finally obtained. The pose estimation here may be implemented in the same way as step 303.
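A minimal sketch of this iterative loop, assuming placeholder callables estimate_pose (the per-frame estimation of step 303) and reproj_residual (the per-point residual of formula (14), in pixels); both names are assumptions, since the patent gives no code:

```python
import numpy as np

def iterative_pose(x1, x2, estimate_pose, reproj_residual,
                   residual_thresh=1.0, stop_frac=0.05):
    """Alternate pose estimation and outlier rejection until fewer than
    stop_frac of the surviving points would be removed.
    x1, x2: (N, 2) arrays of matched feature points from the two cameras."""
    keep = np.ones(len(x1), dtype=bool)
    while True:
        pose = estimate_pose(x1[keep], x2[keep])          # intermediate pose
        res = reproj_residual(pose, x1[keep], x2[keep])   # formula (14), pixels
        bad = res > residual_thresh
        if bad.sum() < stop_frac * keep.sum():            # few outliers left: done
            return pose
        idx = np.flatnonzero(keep)
        keep[idx[bad]] = False                            # drop high-residual points
```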
The central limit theorem shows that the mean of a large number of mutually independent random variables converges to a normal distribution. Suppose that, over a long period of observation, the R and t computed from different frames are independent and identically distributed. Denote the 5DOF relative pose solved from the i-th frame as X_i, and denote its mean and variance by μ and σ². Writing X̄ = (1/n) · Σ_{i=1}^{n} X_i, the central limit theorem gives:

√n · (X̄ − μ) / σ → N(0, 1)

where N(0, 1) denotes the standard normal distribution. Writing X = X̄, the variance of X is σ²/n, i.e., inversely proportional to n, where n is the size of the time window (corresponding to n collected frames). As the time window grows, the variance of X keeps decreasing and, in practical applications, stabilizes at a certain value. Therefore, under the assumption that the R and t computed from different frames over a long observation are independent and identically distributed, the averaged R and t tend to become stable as n increases; it suffices to choose an appropriate n such that the variances of R and t meet the requirement.
Determining the relative pose between the two cameras by multi-frame accumulation avoids the jitter caused by single-frame computation, so that the calibration result tends to be stable.
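A rough sketch of this windowed stabilization, assuming each per-frame 5DOF estimate has been packed into a flat parameter vector; note that averaging rotation parameters componentwise is only an approximation (a faithful implementation would average rotations on SO(3), e.g., via quaternions):

```python
import numpy as np

def windowed_pose_stats(pose_samples, n):
    """Mean and per-component variance over the last n per-frame 5DOF pose
    vectors; under the i.i.d. assumption the variance shrinks roughly as 1/n,
    so n can be grown until the variance is acceptable."""
    window = np.asarray(pose_samples[-n:])    # shape (n, 5)
    return window.mean(axis=0), window.var(axis=0)
```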
The following describes, with reference to the structure of an autonomous driving device, how the recalibration of the relative poses between pairs of cameras is implemented. FIG. 5 is a schematic structural diagram of an autonomous driving device provided by an embodiment of this application. As shown in FIG. 5, the autonomous driving device includes: an image acquisition unit (corresponding to the first camera and the second camera) 501, an image feature point extraction unit 502, a pose calculation unit 503, a scale calculation unit 504, a closed-loop optimization unit 505, a multi-frame accumulation optimization unit 506, and a calibration parameter update unit 507. The closed-loop optimization unit 505 and the multi-frame accumulation optimization unit 506 are optional rather than necessary. In some embodiments, the functions of the image feature point extraction unit 502, the pose calculation unit 503, the scale calculation unit 504, the closed-loop optimization unit 505, the multi-frame accumulation optimization unit 506, and the calibration parameter update unit 507 are all implemented by the processor 112 (corresponding to the on-board processor). In some embodiments, the functions of the pose calculation unit 503, the scale calculation unit 504, the closed-loop optimization unit 505, the multi-frame accumulation optimization unit 506, and the calibration parameter update unit 507 are implemented by the processor 112 (corresponding to the on-board processor), while the function of the image feature point extraction unit 502 is implemented by a graphics processor. Units 502 to 507 are the units in the autonomous driving device used to determine the relative poses between pairs of cameras, i.e., the online calibration part (the part used to implement relative pose recalibration). The functions of each unit are introduced below.
The image acquisition unit 501 is configured to obtain the images captured by multiple cameras. Exemplarily, the image acquisition unit is configured to obtain the images captured synchronously by the cameras. The multiple cameras correspond to the cameras 130 in FIG. 1.
The image feature point extraction unit 502 is configured to extract feature points from the images captured by the multiple cameras and match the extracted feature points to obtain a first matching feature point set. For example, the image feature point extraction unit is configured to extract the feature points in a first image captured by the first camera to obtain a first feature point set, extract the feature points in a second image captured by the second camera to obtain a second feature point set, and match the feature points in the first feature point set against those in the second feature point set to obtain multiple groups of feature point pairs (corresponding to the first matching feature point set). For another example, the image feature point extraction unit is configured to extract feature points from multiple groups of images captured synchronously by the first camera and the second camera and perform feature point matching to obtain multiple groups of feature point pairs (corresponding to the first matching feature point set), where each group of images (corresponding to one frame) includes one image captured by the first camera and one image captured by the second camera.
The pose calculation unit 503 is configured to determine the fundamental matrix between each pair of cameras according to the first matching feature point set (i.e., the matched feature points) and decompose it to obtain the 5DOF relative pose. It should be understood that the pose calculation unit may perform pose calculation for all cameras with a common field of view in the multi-camera system, i.e., determine the relative poses between every two cameras with a common field of view. When the autonomous driving device includes neither the closed-loop optimization unit 505 nor the multi-frame accumulation optimization unit 506, the pose calculation unit 503 is further configured to combine the scale factor and the 5DOF relative pose to obtain the complete 6DOF relative pose. Exemplarily, the pose calculation unit 503 is specifically configured to determine the essential matrix between the first camera and the second camera, and to compute the 5-degree-of-freedom (DOF) relative pose between the first camera and the second camera according to the singular value decomposition result of the essential matrix. Optionally, the pose calculation unit 503 is specifically configured to: perform singular value decomposition on the essential matrix to obtain the singular value decomposition result; obtain at least two candidate relative poses between the first camera and the second camera from the singular value decomposition result; compute the three-dimensional coordinate positions of the feature points in the first matching feature point set with each of the at least two candidate relative poses; and take, among the at least two candidate relative poses, the one for which the three-dimensional coordinate positions of all the feature points in the first matching feature point set lie in front of both the first camera and the second camera as the 5-degree-of-freedom relative pose.
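A hedged sketch of this cheirality check, using OpenCV rather than the patent's own implementation (the shared intrinsic matrix K is an assumption; with distinct intrinsics per camera the matched points would first be normalized):

```python
import cv2
import numpy as np

def pose_from_essential(E, pts1, pts2, K):
    """Decompose an essential matrix E into the 5DOF relative pose (R, t with
    ||t|| = 1); cv2.recoverPose triangulates the matched points and keeps the
    one of the four SVD candidates that places them in front of both cameras."""
    pts1 = np.asarray(pts1, dtype=np.float64)   # (N, 2) matched pixel coordinates
    pts2 = np.asarray(pts2, dtype=np.float64)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t   # scale of t is unobservable, hence the separate scale factor
```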
The scale calculation unit 504 is configured to take the ratio between a first distance and a second distance as the scale factor, where the first distance and the second distance are the results of a non-visual sensor and a visual sensor on the autonomous driving device measuring the same distance, respectively, and the visual sensor includes the first camera and/or the second camera. Exemplarily, the scale calculation unit 504 is specifically configured to measure the distance s from a certain target to the autonomous driving device with a lidar or millimeter-wave radar while measuring the distance s′ from the same target to the autonomous driving device with the cameras using binocular (or multi-view) vision, thereby obtaining the scale factor. It should be understood that combining the scale factor with the 5DOF relative pose yields the complete 6DOF relative pose.
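A one-function sketch of this merge step, under the assumption that the 5DOF pose carries a unit-norm translation direction:

```python
import numpy as np

def merge_scale(R, t_unit, s_radar, s_visual):
    """Combine the 5DOF pose (rotation R, unit-norm translation direction
    t_unit) with the scale factor s_radar / s_visual to recover the metric
    translation; the rotation is unaffected by scale."""
    return R, (s_radar / s_visual) * np.asarray(t_unit)
```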
The closed-loop optimization unit 505 is configured to construct systems of equations from the closed loops formed by the relative poses of pairs of cameras in the multi-camera system and perform an overall optimization. Exemplarily, the closed-loop optimization unit 505 optimizes the relative poses obtained by executing the method flow in FIG. 3 by solving the system of equations (21) and then the system of equations (22). When the autonomous driving device includes the closed-loop optimization unit 505 but not the multi-frame accumulation optimization unit 506, the closed-loop optimization unit 505 is further configured to combine the scale factor and the 5DOF relative pose to obtain the complete 6DOF relative pose.
The multi-frame accumulation optimization unit 506 is configured to accumulate, filter, and optimize the calculation results of multiple frames to obtain stable calibration parameters (corresponding to the relative poses). Exemplarily, the multi-frame accumulation optimization unit 506 implements the manner of determining the relative pose between two cameras through multi-frame accumulation described above. When the autonomous driving device includes the multi-frame accumulation optimization unit 506, this unit may combine the scale factor and the 5DOF relative pose to obtain the complete 6DOF relative pose.
The calibration parameter update unit 507 is configured to update the relative pose currently used by the autonomous driving device to the calculated relative pose when the difference between the calculated relative pose and the currently used relative pose is not greater than the pose change threshold. In other words, the calibration parameter update unit 507 is configured to compare the calculated relative pose (corresponding to the calibration parameters) with the relative pose currently used by the autonomous driving device and, depending on the magnitude of the difference, to dynamically update the currently used relative pose (or issue a reminder at the same time).
The autonomous driving device in FIG. 5 may use manner one to determine the relative poses between pairs of cameras. The following introduces the structure of an autonomous driving device that may use manner two to determine the relative poses between pairs of cameras. FIG. 6 is a schematic structural diagram of another autonomous driving device provided by an embodiment of this application. As shown in FIG. 6, the autonomous driving device includes:
an image acquisition unit 601, configured to capture images through a first camera and a second camera respectively, where the field of view of the first camera and the field of view of the second camera overlap, and the first camera and the second camera are installed at different positions of the autonomous driving device;

an image feature point extraction unit 602, configured to perform feature point matching on the image captured by the first camera and the image captured by the second camera to obtain a first matching feature point set, where the first matching feature point set includes H groups of feature point pairs, each group of feature point pairs includes two matched feature points, one of which is a feature point extracted from the image captured by the first camera and the other a feature point extracted from the image captured by the second camera, and H is an integer not less than 8;

a pose calculation unit 603, configured to determine a first relative pose between the first camera and the second camera according to the first matching feature point set, where the first matching feature point set includes H groups of feature point pairs, each group of feature point pairs includes two matched feature points, one of which is extracted from the image captured by the first camera and the other from the image captured by the second camera, the field of view of the first camera and the field of view of the second camera overlap, the first camera and the second camera are installed at different positions of the autonomous driving device, and H is an integer not less than 8; and

a calibration parameter update unit 604 (corresponding to the calibration parameter update unit 507), configured to update a second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than the pose change threshold, where the second relative pose is the relative pose between the first camera and the second camera currently stored by the autonomous driving device.
In some embodiments, the functions of the pose calculation unit 603 and the calibration parameter update unit 604 are both implemented by the processor 112 (corresponding to the on-board processor).
Optionally, the autonomous driving device in FIG. 6 further includes a closed-loop optimization unit 605 and a multi-frame accumulation optimization unit 606, both of which are optional rather than necessary. The function of the image acquisition unit 601 may be the same as that of the image acquisition unit 501, the function of the image feature point extraction unit 602 may be the same as that of the image feature point extraction unit 502, the function of the closed-loop optimization unit 605 may be the same as that of the closed-loop optimization unit 505, and the function of the multi-frame accumulation optimization unit 606 may be the same as that of the multi-frame accumulation optimization unit 506.
Exemplarily, the pose calculation unit 603 is specifically configured to iteratively solve a target equation to obtain the first relative pose; in the target equation, the parameters included in the first relative pose are unknowns, while the feature point pairs in the first matching feature point set, the intrinsic parameters of the first camera, and the intrinsic parameters of the second camera are known. It should be understood that the pose calculation unit 603 may use manner two to determine the relative pose between any two cameras with a common field of view.
It should be understood that the above division of the units in the autonomous driving device is merely a division of logical functions; in actual implementation, the units may be fully or partially integrated into one physical entity, or may be physically separate. For example, each of the above units may be a separately established processing element, or may be integrated into a chip of the autonomous driving device; alternatively, the units may be stored in a storage element of the controller in the form of program code, and a processing element of the processor calls and executes their functions. Furthermore, the units may be integrated together or implemented independently. The processing element here may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the above method or the above units may be completed by integrated logic circuits of hardware in the processor element or by instructions in the form of software. The processing element may be a general-purpose processor, such as a central processing unit (CPU), or may be one or more integrated circuits configured to implement the above methods, for example one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA).
FIG. 7 is a schematic structural diagram of another autonomous driving device provided by an embodiment of this application. As shown in FIG. 7, the autonomous driving device includes: a multi-camera vision system 701, an image processor 702, an image feature extractor 703, an on-board processor 704, a memory 705, a calibration parameter update apparatus 706, and a parameter abnormality reminder apparatus 707.
The multi-camera vision system 701 is configured to capture images synchronously through multiple cameras. It corresponds to the cameras 130 in FIG. 1 and is composed of multiple cameras, each of which can capture image data synchronously.
The image processor 702 is configured to perform preprocessing, such as scaling, on the image data captured by the multi-camera vision system 701.

The image feature extractor 703 is configured to extract feature points from the images preprocessed by the image processor 702.
To increase speed, the preprocessed images may enter the dedicated image feature extractor 703 for feature point extraction. The image feature extractor 703 may be hardware such as a graphics processor with general-purpose computing capability or a specially designed ISP.
The on-board processor 704 is configured to determine the relative poses between pairs of cameras according to the feature points extracted by the image feature extractor 703.
The on-board processor (corresponding to the processor 113 in FIG. 1) may be a hardware platform with general-purpose computing capability. Optionally, the on-board processor includes a pose estimation part and an optimization processing part. The pose estimation part (corresponding to the pose calculation unit 603 in FIG. 6, or to the pose calculation unit 503 and the scale calculation unit 504 in FIG. 5) is used to determine the relative poses between pairs of cameras, i.e., relative pose estimation. The optimization processing part is used to implement closed-loop constrained optimization and/or multi-frame accumulation optimization. Optionally, the on-board processor includes the pose estimation part but not the optimization processing part. The on-board processor is further configured to read the calibration parameters (i.e., the currently stored relative poses) from the memory 705 (corresponding to the memory 114 in FIG. 1) and to compare the read calibration parameters with the calculated calibration parameters (i.e., the calculated relative poses). The on-board processor then sends the comparison results to the calibration parameter update apparatus 706 and the parameter abnormality reminder apparatus 707 respectively. The calibration parameter update apparatus 706 (corresponding to the calibration parameter update unit 507 in FIG. 5 and the calibration parameter update unit 604 in FIG. 6) may update the calibration parameters in the memory as needed. The parameter abnormality reminder apparatus 707 may remind the driver of abnormal camera calibration parameters by sound or image. For example, when the on-board processor determines that the computed relative pose between the first camera and the second camera differs greatly from the relative pose currently used by the autonomous driving device, it reminds the driver that the relative pose between the first camera and the second camera is abnormal. Taking an in-vehicle application scenario as an example, if the gap between the calibration parameters calculated by the on-board processor and the parameters calibrated when the vehicle left the factory is too large, for example an angular deviation exceeding 0.3 degrees, the sensor can be considered loosely installed, which may degrade the performance of the autonomous driving system, and the owner needs to be reminded to have the vehicle recalibrated at a 4S dealership (for example, the central control screen pops up a warning window). If the owner has not recalibrated for a long time and the angular deviation is not large, for example greater than 0.3 degrees but less than 0.5 degrees, the autonomous driving system may, with the owner's permission, run with the online calibration parameters instead of the factory calibration parameters.
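A minimal decision sketch of these thresholds (the function name and returned labels are illustrative assumptions; the patent only specifies the 0.3-degree and 0.5-degree bounds):

```python
def calibration_action(angle_dev_deg, warn_deg=0.3, online_ok_deg=0.5):
    """Map the angular deviation between online and factory calibration to an
    action: below warn_deg nothing happens; above it the driver is warned, and
    while the deviation is still below online_ok_deg the online parameters may
    (with the owner's consent) replace the factory ones."""
    if angle_dev_deg <= warn_deg:
        return "keep_factory_parameters"
    if angle_dev_deg < online_ok_deg:
        return "warn_driver_and_offer_online_parameters"
    return "warn_driver_recalibration_required"
```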
The autonomous driving device in FIG. 7 may use either manner one or manner two to determine the relative poses between pairs of cameras.
FIG. 8 is a flowchart of another relative pose calibration method provided by an embodiment of this application. The method in FIG. 8 further refines and improves the method in FIG. 3. As shown in FIG. 8, the method includes:
801. The autonomous driving device determines a pair of cameras with a common field of view.

802. Extract feature points from images captured synchronously by the pair of cameras, and perform feature point matching.

For example, the first camera and the second camera have a common field of view, and the first image and the second image are images captured synchronously by them (corresponding to one frame). The feature points in the first image are extracted to obtain a third feature point set, and the feature points in the second image are extracted to obtain a fourth feature point set; the feature points in the third feature point set are then matched against those in the fourth feature point set to obtain multiple groups of feature point pairs. It should be understood that, for each pair of cameras, the multiple groups of feature point pairs corresponding to that pair can be obtained from its synchronously captured images.
803. Calculate the essential matrix between the pair of cameras from the multiple groups of feature point pairs corresponding to the pair.

804. Determine the 5DOF relative pose between the pair of cameras from the essential matrix between them.

Steps 803 and 804 may be implemented in the same way as determining the 5DOF relative pose between a pair of cameras with a common field of view in manner one, which is not detailed again here. Optionally, steps 802 to 804 may be replaced by the multi-frame accumulation manner of determining the relative pose between two cameras described in the foregoing embodiments. It should be understood that each time the autonomous driving device performs steps 801 to 804, it determines the 5DOF relative pose between one pair of cameras with a common field of view.
805. Construct the closed-loop equations of the multi-camera system.

Exemplarily, equation (19) and equation (20) form one group of closed-loop equations of the multi-camera system.

806. Solve for the solution satisfying the closed-loop equations of the multi-camera system.

Steps 805 and 806 constitute the manner of constructing systems of equations from the closed loops formed by the relative poses between pairs of cameras with a common field of view, so as to optimize the calculated 5DOF relative poses between the pairs of cameras. The foregoing embodiments describe this overall optimization.
807. Calculate the scale factor, and combine the 5DOF relative pose with the scale factor to obtain the 6DOF relative pose.

808. Determine whether to update the currently used calibration parameters (i.e., the relative pose).

If yes, perform step 809; if no, perform step 810.

809. Update the currently used calibration parameters.

Updating the currently used calibration parameters may be updating them to the calculated relative pose (i.e., the new calibration parameters).

810. Output information for reminding the driver of an abnormality in the camera calibration parameters.

811. End this procedure.
In this embodiment of the application, the common fields of view between pairs of cameras are used to construct pose constraint equations between the cameras, such as formula (4); the 5DOF relative pose of two cameras is obtained by decomposing the essential matrix, making full use of the common-field-of-view information between the cameras. In addition, the 5DOF pose information is obtained by feature point matching and optimized through the closed-loop equations, achieving high recalibration accuracy.
The foregoing embodiments describe the manner in which the autonomous driving device determines the relative poses between pairs of cameras. In some embodiments, the autonomous driving device may send to a server the information needed to determine the relative poses between pairs of cameras, and the server determines the relative poses between the pairs of cameras on the autonomous driving device according to the information it sends. The following introduces the solution in which the server determines the relative poses between pairs of cameras on the autonomous driving device.
FIG. 9 is a flowchart of another relative pose calibration method provided by an embodiment of this application. As shown in FIG. 9, the method includes:

901. The autonomous driving device sends a pose calibration request to the server.
Optionally, the pose calibration request is used to request recalibration of the relative pose between the first camera and the second camera of the autonomous driving device, and carries the intrinsic parameters of the first camera and the intrinsic parameters of the second camera. Optionally, the pose calibration request is used to request recalibration of the relative poses between the cameras on the autonomous driving device, and carries the intrinsic parameters of those cameras.
902. The server sends confirmation information for the pose calibration request to the autonomous driving device.

When the server determines to accept the pose calibration request from the autonomous driving device, it sends confirmation information for the request to the autonomous driving device. Optionally, when the server determines from the pose calibration request that the autonomous driving device is an authorized device, it accepts the request and sends the confirmation information. An authorized device is an autonomous driving device to which the server is required to provide the pose calibration service (i.e., relative pose recalibration).
903. The autonomous driving device sends recalibration reference information to the server.

Optionally, the recalibration reference information is used by the server to determine the relative pose between the first camera and the second camera of the autonomous driving device, where the field of view of the first camera and the field of view of the second camera overlap and the two cameras are installed at different positions of the autonomous driving device. Exemplarily, the recalibration reference information includes the images captured synchronously by the first camera and the second camera, or the multiple pairs of feature points (corresponding to the first matching feature point set) obtained by the autonomous driving device by performing feature point matching on those images. Optionally, the recalibration reference information is used by the server to determine the relative poses between all the cameras on the autonomous driving device, where the cameras are installed at different positions of the device. Exemplarily, in this case the recalibration reference information includes the images captured synchronously by the cameras, or the multiple pairs of feature points obtained by the autonomous driving device by matching the images captured synchronously by each pair of cameras.
904. The server obtains a first matching feature point set according to the recalibration reference information.

Optionally, the first matching feature point set includes H groups of feature point pairs, each group including two matched feature points, one extracted from an image captured by the first camera and the other from an image captured by the second camera, where H is an integer not less than 8. Optionally, the recalibration reference information includes the first matching feature point set itself. Optionally, the recalibration reference information includes the images captured synchronously by the first camera and the second camera, and the server performs feature point matching on these images to obtain the first matching feature point set.
905. The server determines the first relative pose between the first camera and the second camera according to the first matching feature point set.

Step 905 may be implemented in the same way as step 303. The server may use manner one or manner two to determine the first relative pose between the first camera and the second camera, or may use other manners, which is not limited in this embodiment of the application.
906. The server sends the first relative pose to the autonomous driving device.

907. The autonomous driving device updates the second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than the pose change threshold.

The second relative pose is the relative pose between the first camera and the second camera currently stored by the autonomous driving device. It should be understood that the server can determine the relative poses between the pairs of cameras on the autonomous driving device according to the information the device sends. In other words, every manner in which the autonomous driving device determines the pairwise relative poses in the foregoing embodiments can also be implemented by the server, for example constructing systems of equations from the closed loops formed by the relative poses of pairs of cameras with a common field of view to optimize the calculated 5DOF relative poses, and determining the relative pose between two cameras by multi-frame accumulation.
In this embodiment of the application, the autonomous driving device does not need to calibrate the relative poses between the cameras itself; it only needs to send the data required for the calibration to the server, which involves little workload.
FIG. 10 is a schematic structural diagram of a server provided by an embodiment of this application. As shown in FIG. 10, the server includes a memory 1001, a processor 1002, a communication interface 1003, and a bus 1004; the memory 1001, the processor 1002, and the communication interface 1003 communicate with one another through the bus 1004. The communication interface 1003 is used for data interaction with the autonomous driving device.
The processor 1002 reads the code stored in the memory to perform the following operations: receiving recalibration reference information from an autonomous driving device, where the recalibration reference information is used by the server to determine the relative pose between a first camera and a second camera of the autonomous driving device, the field of view of the first camera and the field of view of the second camera overlap, and the two cameras are installed at different positions of the device; obtaining a first matching feature point set according to the recalibration reference information, where the first matching feature point set includes H groups of feature point pairs, each group including two matched feature points, one extracted from an image captured by the first camera and the other from an image captured by the second camera, and H is an integer not less than 8; determining a first relative pose between the first camera and the second camera according to the first matching feature point set; and sending the first relative pose to the autonomous driving device.
An embodiment of this application further provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to execute the relative pose calibration methods provided in the foregoing embodiments.
Optionally, when run on a computer, the instructions implement: determining a first relative pose between the first camera and the second camera according to a first matching feature point set, where the first matching feature point set includes H groups of feature point pairs, each group including two matched feature points, one extracted from an image captured by the first camera and the other from an image captured by the second camera, the field of view of the first camera and the field of view of the second camera overlap, the first camera and the second camera are installed at different positions of the autonomous driving device, and H is an integer not less than 8; and updating a second relative pose to the first relative pose when the difference between the first relative pose and the second relative pose is not greater than a pose change threshold, where the second relative pose is the relative pose between the first camera and the second camera currently stored by the autonomous driving device.
Optionally, when run on a computer, the instructions implement: receiving recalibration reference information from an autonomous driving device, where the recalibration reference information is used by the server to determine the relative pose between a first camera and a second camera of the device, the fields of view of the two cameras overlap, and the cameras are installed at different positions of the device; obtaining a first matching feature point set according to the recalibration reference information, where the first matching feature point set includes H groups of feature point pairs, each group including two matched feature points, one extracted from an image captured by the first camera and the other from an image captured by the second camera, and H is an integer not less than 8; determining a first relative pose between the first camera and the second camera according to the first matching feature point set; and sending the first relative pose to the autonomous driving device.
An embodiment of this application provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the relative pose calibration methods provided in the foregoing embodiments.
The above are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and such modifications or replacements shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (16)

  1. A relative pose calibration method, characterized by comprising:

    capturing images through a first camera and a second camera, wherein the field of view of the first camera and the field of view of the second camera overlap, and the first camera and the second camera are installed at different positions of an autonomous driving device;

    performing feature point matching on the image captured by the first camera and the image captured by the second camera to obtain a first matching feature point set, wherein the first matching feature point set comprises H groups of feature point pairs, each group of feature point pairs comprises two matched feature points, one of which is a feature point extracted from the image captured by the first camera and the other is a feature point extracted from the image captured by the second camera, and H is an integer not less than 8;

    determining a first relative pose between the first camera and the second camera according to the first matching feature point set; and

    updating a second relative pose to the first relative pose when a difference between the first relative pose and the second relative pose is not greater than a pose change threshold, wherein the second relative pose is the relative pose between the first camera and the second camera currently stored by the autonomous driving device.
2. The method according to claim 1, characterized in that determining the first relative pose between the first camera and the second camera according to the first matching feature point set comprises:
    determining an essential matrix between the first camera and the second camera;
    calculating a 5-degree-of-freedom (DOF) relative pose between the first camera and the second camera according to a singular value decomposition result of the essential matrix;
    taking the ratio between a first distance and a second distance as a scale factor, wherein the first distance and the second distance are the results obtained when a non-visual sensor and a visual sensor on the autonomous driving apparatus respectively measure the same distance, the visual sensor comprising the first camera and/or the second camera; and
    combining the 5-DOF relative pose and the scale factor to obtain the first relative pose.
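As a worked illustration of the scale step in claim 2, the sketch below attaches metric scale to the up-to-scale 5-DOF pose using the ratio of the two distance measurements; the argument names are hypothetical, since the claim only fixes the ratio itself.

```python
import numpy as np

def apply_scale(R, t_unit, d_nonvisual, d_visual):
    """Attach metric scale to an up-to-scale 5-DOF pose (R, t_unit).

    d_nonvisual and d_visual are hypothetical inputs: the same physical
    distance as measured by a non-visual sensor (e.g. lidar) and as
    triangulated by the camera pair, respectively."""
    scale = d_nonvisual / d_visual   # first distance / second distance
    return R, scale * t_unit         # 6-DOF pose: rotation + metric translation
```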
3. The method according to claim 2, characterized in that calculating the 5-DOF relative pose between the first camera and the second camera according to the singular value decomposition result of the essential matrix comprises:
    performing singular value decomposition on the essential matrix to obtain the singular value decomposition result;
    obtaining at least two relative poses between the first camera and the second camera according to the singular value decomposition result;
    calculating the three-dimensional coordinate positions of the feature points in the first matching feature point set using each of the at least two relative poses; and
    taking, as the 5-DOF relative pose, the relative pose among the at least two relative poses for which the three-dimensional coordinate positions of all feature points in the first matching feature point set lie in front of both the first camera and the second camera.
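A minimal sketch of the decomposition and front-of-camera (cheirality) test in claim 3, assuming normalized image coordinates and OpenCV triangulation; the four-candidate enumeration is the standard SVD decomposition of an essential matrix.

```python
import numpy as np
import cv2

def pose_from_essential(E, pts1_n, pts2_n):
    """pts1_n, pts2_n: (N, 2) float arrays of normalized coordinates.
    Decomposes E by SVD into the candidate poses and keeps the one that
    places every triangulated point in front of both cameras."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0: U = -U        # enforce proper rotations
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    t = U[:, 2].reshape(3, 1)
    candidates = [(U @ W @ Vt, t), (U @ W @ Vt, -t),
                  (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at origin
    for R, tc in candidates:
        P2 = np.hstack([R, tc])
        X = cv2.triangulatePoints(P1, P2, pts1_n.T, pts2_n.T)
        X = X[:3] / X[3]                            # dehomogenize
        in_front_1 = np.all(X[2] > 0)               # depth in camera 1
        in_front_2 = np.all((R @ X + tc)[2] > 0)    # depth in camera 2
        if in_front_1 and in_front_2:
            return R, tc
    raise ValueError("no candidate passed the cheirality check")
```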
4. The method according to claim 1, characterized in that determining the first relative pose between the first camera and the second camera according to the first matching feature point set comprises:
    iteratively solving a target equation to obtain the first relative pose, wherein in the target equation the parameters included in the first relative pose are unknowns, and the feature point pairs in the first matching feature point set, the intrinsic parameters of the first camera, and the intrinsic parameters of the second camera are knowns.
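One plausible reading of the iterative solution in claim 4 is a nonlinear least-squares fit of the epipolar constraint, sketched below with SciPy; the residual form and the pose parameterization are assumptions, as the claim does not fix the exact target equation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_pose_iteratively(pts1, pts2, K1, K2, x0):
    """Iteratively solve a target equation for the relative pose.

    Unknowns: the 6 pose parameters x = (rotation vector, translation).
    Knowns:   the matched pixel coordinates pts1/pts2 of shape (N, 2) and
              the intrinsic matrices K1, K2.  x0 is an initial guess,
              e.g. taken from the essential-matrix solution."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        tx = np.array([[0.0, -t[2], t[1]],
                       [t[2], 0.0, -t[0]],
                       [-t[1], t[0], 0.0]])
        # Fundamental matrix built from the pose and both intrinsics.
        F = np.linalg.inv(K2).T @ tx @ R @ np.linalg.inv(K1)
        p1 = np.hstack([pts1, np.ones((len(pts1), 1))])
        p2 = np.hstack([pts2, np.ones((len(pts2), 1))])
        return np.einsum('ij,ij->i', p2 @ F, p1)  # x2^T F x1 per pair
    sol = least_squares(residuals, x0)
    # The epipolar residual fixes t only up to scale (5 DOF), matching the
    # up-to-scale pose discussed above.
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```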
5. The method according to any one of claims 1 to 4, characterized in that the autonomous driving apparatus is equipped with M cameras, the M cameras comprise the first camera, the second camera, and a third camera, the field of view of the third camera overlaps both the field of view of the first camera and the field of view of the second camera, and M is an integer greater than 2; the method further comprises:
    obtaining M first rotation matrices, M second rotation matrices, M first translation matrices, and M second translation matrices, wherein the M first rotation matrices are rotation matrices between the first camera and the second camera, at least two of which differ; the M second rotation matrices are rotation matrices between the second camera and the third camera, at least two of which differ; the M first translation matrices are translation matrices between the first camera and the second camera, at least two of which differ; the M second translation matrices are translation matrices between the second camera and the third camera, at least two of which differ; the M first rotation matrices correspond one-to-one to the M first translation matrices; and the M second rotation matrices correspond one-to-one to the M second translation matrices;
    solving a first equation system to obtain a third rotation matrix, wherein the first equation system comprises M first equations corresponding one-to-one to the M first rotation matrices and one-to-one to the M second rotation matrices; in each first equation, the first rotation matrix and the second rotation matrix are knowns and the third rotation matrix is an unknown; and the third rotation matrix is the rotation matrix between the first camera and the third camera;
    solving a second equation system to obtain a third translation matrix, wherein the second equation system comprises M second equations corresponding one-to-one to the M first rotation matrices and one-to-one to the M second rotation matrices; in each second equation, the first rotation matrix, the second rotation matrix, the first translation matrix, and the second translation matrix are knowns and the third translation matrix is an unknown; and the third translation matrix is the translation matrix between the first camera and the third camera; and
    taking the pose comprising the third rotation matrix and the third translation matrix as the relative pose between the first camera and the third camera.
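A minimal sketch of the closed-loop solution in claim 5, assuming the equations take the composition form R3 = R2·R1 and t3 = R2·t1 + t2 for each of the M samples and are solved in a least-squares sense (a chordal mean for the rotations, an average for the translations); the claim does not fix the solver, so this is one illustrative choice.

```python
import numpy as np

def closed_loop_pose(R1s, R2s, t1s, t2s):
    """Estimate the camera-1 -> camera-3 pose from M samples of the
    1->2 pose (R1s[i], t1s[i]) and the 2->3 pose (R2s[i], t2s[i]).

    Each sample yields one first equation R3 = R2_i @ R1_i and one
    second equation t3 = R2_i @ t1_i + t2_i."""
    # First equation set: average the M rotation products, then project
    # the chordal mean back onto SO(3) via SVD.
    R_mean = np.mean([R2 @ R1 for R1, R2 in zip(R1s, R2s)], axis=0)
    U, _, Vt = np.linalg.svd(R_mean)
    R3 = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    # Second equation set: each sample gives a direct estimate of t3.
    t3 = np.mean([R2 @ t1 + t2 for R2, t1, t2 in zip(R2s, t1s, t2s)], axis=0)
    return R3, t3
```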
6. The method according to claim 1, characterized in that the first matching feature point set comprises feature points extracted from at least two frames of images collected by the first camera and feature points extracted from at least two frames of images collected by the second camera; and determining the first relative pose between the first camera and the second camera according to the first matching feature point set comprises:
    determining the relative pose between the first camera and the second camera according to the feature point pairs in the first matching feature point set to obtain a first intermediate pose;
    substituting each feature point pair in the first matching feature point set into a first formula to obtain the residual corresponding to each feature point pair, wherein the first formula involves the first intermediate pose;
    removing interference feature point pairs from the first matching feature point set to obtain a second matching feature point set, wherein an interference feature point pair is a feature point pair in the first matching feature point set whose corresponding residual is greater than a residual threshold;
    determining the relative pose between the first camera and the second camera according to the feature point pairs in the second matching feature point set to obtain a second intermediate pose; and
    taking a target intermediate pose as the first relative pose between the first camera and the second camera, wherein the target intermediate pose is the relative pose between the first camera and the second camera determined according to the feature point pairs in a target matching feature point set, and the number of feature point pairs in the target matching feature point set is less than a count threshold, or the ratio of the number of feature point pairs in the target matching feature point set to the number of feature point pairs in the first matching feature point set is less than a ratio threshold.
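A minimal sketch of the residual-driven rejection loop in claim 6; here estimate_pose and residual_fn are hypothetical stand-ins for the pose solver and the "first formula", whose exact forms the claim leaves open.

```python
import numpy as np

def refine_by_residuals(pairs, estimate_pose, residual_fn,
                        residual_threshold, count_threshold, ratio_threshold):
    """Alternate pose estimation and outlier rejection until the surviving
    feature point set is small enough, then return that pose."""
    n0 = len(pairs)
    current = pairs
    while True:
        pose = estimate_pose(current)              # intermediate pose
        if (len(current) < count_threshold
                or len(current) / n0 < ratio_threshold):
            return pose                            # target pose reached
        res = np.array([residual_fn(pose, p) for p in current])
        keep = res <= residual_threshold           # drop interference pairs
        if keep.all():                             # nothing left to remove
            return pose
        current = [p for p, k in zip(current, keep) if k]
```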
7. The method according to any one of claims 1 to 6, characterized in that the method further comprises:
    in a case where the difference between the first relative pose and the second relative pose is greater than the pose change threshold, outputting reminder information, wherein the reminder information is used to indicate that the relative pose between the first camera and the second camera is abnormal.
8. An autonomous driving apparatus, characterized in that it comprises:
    an image acquisition unit, configured to acquire images through a first camera and a second camera respectively, wherein the field of view of the first camera overlaps the field of view of the second camera, and the first camera and the second camera are mounted at different positions on the autonomous driving apparatus;
    an image feature point extraction unit, configured to perform feature point matching on the image collected by the first camera and the image collected by the second camera to obtain a first matching feature point set, wherein the first matching feature point set comprises H feature point pairs, each feature point pair comprises two matched feature points, one being a feature point extracted from the image collected by the first camera and the other a feature point extracted from the image collected by the second camera, and H is an integer not less than 8;
    a pose calculation unit, configured to determine a first relative pose between the first camera and the second camera according to the first matching feature point set; and
    a calibration parameter update unit, configured to, in a case where the difference between the first relative pose and a second relative pose is not greater than a pose change threshold, update the second relative pose to the first relative pose, wherein the second relative pose is the relative pose between the first camera and the second camera currently stored by the autonomous driving apparatus.
9. The apparatus according to claim 8, characterized in that:
    the pose calculation unit is specifically configured to determine an essential matrix between the first camera and the second camera, and calculate a 5-degree-of-freedom (DOF) relative pose between the first camera and the second camera according to a singular value decomposition result of the essential matrix; the apparatus further comprises:
    a scale calculation unit, configured to take the ratio between a first distance and a second distance as a scale factor, wherein the first distance and the second distance are the results obtained when a non-visual sensor and a visual sensor on the autonomous driving apparatus respectively measure the same distance, the visual sensor comprising the first camera and/or the second camera; and
    the pose calculation unit is further configured to combine the 5-DOF relative pose and the scale factor to obtain the first relative pose.
10. The apparatus according to claim 9, characterized in that:
    the pose calculation unit is specifically configured to: perform singular value decomposition on the essential matrix to obtain the singular value decomposition result; obtain at least two relative poses between the first camera and the second camera according to the singular value decomposition result; calculate the three-dimensional coordinate positions of the feature points in the first matching feature point set using each of the at least two relative poses; and take, as the 5-DOF relative pose, the relative pose among the at least two relative poses for which the three-dimensional coordinate positions of all feature points in the first matching feature point set lie in front of both the first camera and the second camera.
11. The apparatus according to claim 8, characterized in that:
    the pose calculation unit is specifically configured to iteratively solve a target equation to obtain the first relative pose, wherein in the target equation the parameters included in the first relative pose are unknowns, and the feature point pairs in the first matching feature point set, the intrinsic parameters of the first camera, and the intrinsic parameters of the second camera are knowns.
12. The apparatus according to any one of claims 8 to 11, characterized in that the autonomous driving apparatus is equipped with M cameras, the M cameras comprise the first camera, the second camera, and a third camera, the field of view of the third camera overlaps both the field of view of the first camera and the field of view of the second camera, and M is an integer greater than 2; the apparatus further comprises:
    a closed-loop optimization unit, configured to obtain M first rotation matrices, M second rotation matrices, M first translation matrices, and M second translation matrices, wherein the M first rotation matrices are rotation matrices between the first camera and the second camera, at least two of which differ; the M second rotation matrices are rotation matrices between the second camera and the third camera, at least two of which differ; the M first translation matrices are translation matrices between the first camera and the second camera, at least two of which differ; the M second translation matrices are translation matrices between the second camera and the third camera, at least two of which differ; the M first rotation matrices correspond one-to-one to the M first translation matrices; and the M second rotation matrices correspond one-to-one to the M second translation matrices;
    solve a first equation system to obtain a third rotation matrix, wherein the first equation system comprises M first equations corresponding one-to-one to the M first rotation matrices and one-to-one to the M second rotation matrices; in each first equation, the first rotation matrix and the second rotation matrix are knowns and the third rotation matrix is an unknown; and the third rotation matrix is the rotation matrix between the first camera and the third camera;
    solve a second equation system to obtain a third translation matrix, wherein the second equation system comprises M second equations corresponding one-to-one to the M first rotation matrices and one-to-one to the M second rotation matrices; in each second equation, the first rotation matrix, the second rotation matrix, the first translation matrix, and the second translation matrix are knowns and the third translation matrix is an unknown; and the third translation matrix is the translation matrix between the first camera and the third camera; and
    take the pose comprising the third rotation matrix and the third translation matrix as the relative pose between the first camera and the third camera.
13. The apparatus according to claim 8, characterized in that the first matching feature point set comprises feature points extracted from at least two frames of images collected by the first camera and feature points extracted from at least two frames of images collected by the second camera; and
    the pose calculation unit is specifically configured to: determine the relative pose between the first camera and the second camera according to the feature point pairs in the first matching feature point set to obtain a first intermediate pose;
    substitute each feature point pair in the first matching feature point set into a first formula to obtain the residual corresponding to each feature point pair, wherein the first formula involves the first intermediate pose;
    remove interference feature point pairs from the first matching feature point set to obtain a second matching feature point set, wherein an interference feature point pair is a feature point pair in the first matching feature point set whose corresponding residual is greater than a residual threshold;
    determine the relative pose between the first camera and the second camera according to the feature point pairs in the second matching feature point set to obtain a second intermediate pose; and
    take a target intermediate pose as the first relative pose between the first camera and the second camera, wherein the target intermediate pose is the relative pose between the first camera and the second camera determined according to the feature point pairs in a target matching feature point set, and the number of feature point pairs in the target matching feature point set is less than a count threshold, or the ratio of the number of feature point pairs in the target matching feature point set to the number of feature point pairs in the first matching feature point set is less than a ratio threshold.
14. The apparatus according to any one of claims 8 to 13, characterized in that the apparatus further comprises:
    a reminder unit, configured to output reminder information in a case where the difference between the first relative pose and the second relative pose is greater than the pose change threshold, wherein the reminder information is used to indicate that the relative pose between the first camera and the second camera is abnormal.
15. An automobile, characterized in that it comprises:
    a memory, configured to store a program; and
    a processor, configured to execute the program stored in the memory, wherein when the program is executed, the processor is configured to perform the method according to any one of claims 1 to 7.
16. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program, the computer program comprises program instructions, and the program instructions, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 7.
PCT/CN2020/079780 2020-03-17 2020-03-17 Relative pose calibration method and related apparatus WO2021184218A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080004815.0A CN112639883B (en) 2020-03-17 2020-03-17 Relative attitude calibration method and related device
PCT/CN2020/079780 WO2021184218A1 (en) 2020-03-17 2020-03-17 Relative pose calibration method and related apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/079780 WO2021184218A1 (en) 2020-03-17 2020-03-17 Relative pose calibration method and related apparatus

Publications (1)

Publication Number Publication Date
WO2021184218A1 (en)

Family

ID=75291186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079780 WO2021184218A1 (en) 2020-03-17 2020-03-17 Relative pose calibration method and related apparatus

Country Status (2)

Country Link
CN (1) CN112639883B (en)
WO (1) WO2021184218A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114675657A (en) * 2022-05-25 2022-06-28 天津卡雷尔机器人技术有限公司 Nest returning charging method based on infrared camera fuzzy control algorithm
CN114882115A (en) * 2022-06-10 2022-08-09 国汽智控(北京)科技有限公司 Vehicle pose prediction method and device, electronic equipment and storage medium
CN115375890A * 2022-10-25 2022-11-22 苏州千里雪智能科技有限公司 5G-based four-lens stereo vision camera adjustment system
CN116468804A (en) * 2023-04-21 2023-07-21 湖南佑湘网联智能科技有限公司 Laser radar and camera external parameter calibration precision evaluation method and device
WO2024066816A1 (en) * 2022-09-29 2024-04-04 腾讯科技(深圳)有限公司 Method and apparatus for calibrating cameras and inertial measurement unit, and computer device

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223077A (en) * 2021-05-21 2021-08-06 广州高新兴机器人有限公司 Method and device for automatic initial positioning based on vision-assisted laser
CN113504385B (en) * 2021-06-30 2023-07-14 安徽爱观视觉科技有限公司 Speed measuring method and device for plural cameras
CN113204661B (en) * 2021-07-06 2021-09-21 禾多科技(北京)有限公司 Real-time road condition updating method, electronic equipment and computer readable medium
CN113739819B (en) * 2021-08-05 2024-04-16 上海高仙自动化科技发展有限公司 Verification method, verification device, electronic equipment, storage medium and chip
CN113639782A (en) * 2021-08-13 2021-11-12 北京地平线信息技术有限公司 External parameter calibration method and device for vehicle-mounted sensor, equipment and medium
CN114283447B (en) * 2021-12-13 2024-03-26 北京元客方舟科技有限公司 Motion capturing system and method
CN116952190A (en) * 2022-04-14 2023-10-27 华为技术有限公司 Multi-vision distance measuring method and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018125706A (en) * 2017-02-01 2018-08-09 トヨタ自動車株式会社 Imaging apparatus
CN108648240A (en) * 2018-05-11 2018-10-12 东南大学 Based on a non-overlapping visual field camera posture scaling method for cloud characteristics map registration
CN109658457A (en) * 2018-11-02 2019-04-19 浙江大学 A kind of scaling method of laser and any relative pose relationship of camera
EP3537103A1 (en) * 2018-03-07 2019-09-11 Ricoh Company, Ltd. Calibration reference point acquisition system and calibration reference point acquisition method
CN110580720A (en) * 2019-08-29 2019-12-17 天津大学 camera pose estimation method based on panorama
CN110672094A (en) * 2019-10-09 2020-01-10 北京航空航天大学 Distributed POS multi-node multi-parameter instant synchronous calibration method
CN110851770A (en) * 2019-08-30 2020-02-28 中国第一汽车股份有限公司 Vehicle-mounted camera pose correction device and method, control equipment and correction system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4820221B2 (en) * 2006-06-29 2011-11-24 日立オートモティブシステムズ株式会社 Car camera calibration device and program
CN101419055B (en) * 2008-10-30 2010-08-25 北京航空航天大学 Space target position and pose measuring device and method based on vision
CN102506757B (en) * 2011-10-10 2014-04-23 南京航空航天大学 Self-positioning method of binocular stereo measuring system in multiple-visual angle measurement
CN103824278B (en) * 2013-12-10 2016-09-21 清华大学 The scaling method of CCTV camera and system
KR102209008B1 (en) * 2014-02-17 2021-01-28 삼성전자주식회사 Apparatus for estimating camera pose and method for estimating camera pose
CN105953796A (en) * 2016-05-23 2016-09-21 北京暴风魔镜科技有限公司 Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
JP6648639B2 (en) * 2016-05-27 2020-02-14 富士通株式会社 Biological information processing apparatus, biological information processing method, and biological information processing program
CN110660098B (en) * 2018-06-28 2022-08-12 北京京东叁佰陆拾度电子商务有限公司 Positioning method and device based on monocular vision
WO2020024182A1 (en) * 2018-08-01 2020-02-06 深圳市大疆创新科技有限公司 Parameter processing method and apparatus, camera device and aircraft
CN109141442B (en) * 2018-09-07 2022-05-17 高子庆 Navigation method based on UWB positioning and image feature matching and mobile terminal
CN110375732A (en) * 2019-07-22 2019-10-25 中国人民解放军国防科技大学 Monocular camera pose measurement method based on inertial measurement unit and point line characteristics

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018125706A (en) * 2017-02-01 2018-08-09 トヨタ自動車株式会社 Imaging apparatus
EP3537103A1 (en) * 2018-03-07 2019-09-11 Ricoh Company, Ltd. Calibration reference point acquisition system and calibration reference point acquisition method
CN108648240A (en) * 2018-05-11 2018-10-12 东南大学 Based on a non-overlapping visual field camera posture scaling method for cloud characteristics map registration
CN109658457A (en) * 2018-11-02 2019-04-19 浙江大学 A kind of scaling method of laser and any relative pose relationship of camera
CN110580720A (en) * 2019-08-29 2019-12-17 天津大学 camera pose estimation method based on panorama
CN110851770A (en) * 2019-08-30 2020-02-28 中国第一汽车股份有限公司 Vehicle-mounted camera pose correction device and method, control equipment and correction system
CN110672094A (en) * 2019-10-09 2020-01-10 北京航空航天大学 Distributed POS multi-node multi-parameter instant synchronous calibration method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114675657A (en) * 2022-05-25 2022-06-28 天津卡雷尔机器人技术有限公司 Nest returning charging method based on infrared camera fuzzy control algorithm
CN114675657B (en) * 2022-05-25 2022-09-23 天津卡雷尔机器人技术有限公司 Infrared camera fuzzy control algorithm based homing charging method
CN114882115A (en) * 2022-06-10 2022-08-09 国汽智控(北京)科技有限公司 Vehicle pose prediction method and device, electronic equipment and storage medium
CN114882115B (en) * 2022-06-10 2023-08-25 国汽智控(北京)科技有限公司 Vehicle pose prediction method and device, electronic equipment and storage medium
WO2024066816A1 (en) * 2022-09-29 2024-04-04 腾讯科技(深圳)有限公司 Method and apparatus for calibrating cameras and inertial measurement unit, and computer device
CN115375890A (en) * 2022-10-25 2022-11-22 苏州千里雪智能科技有限公司 Based on four mesh stereovision cameras governing system of 5G
CN116468804A (en) * 2023-04-21 2023-07-21 湖南佑湘网联智能科技有限公司 Laser radar and camera external parameter calibration precision evaluation method and device
CN116468804B (en) * 2023-04-21 2024-04-02 湖南佑湘网联智能科技有限公司 Laser radar and camera external parameter calibration precision evaluation method and device

Also Published As

Publication number Publication date
CN112639883A (en) 2021-04-09
CN112639883B (en) 2021-11-19

Similar Documents

Publication Publication Date Title
WO2021184218A1 (en) Relative pose calibration method and related apparatus
EP3858697A1 (en) Obstacle avoidance method and device
US11915492B2 (en) Traffic light recognition method and apparatus
WO2021026705A1 (en) Matching relationship determination method, re-projection error calculation method and related apparatus
WO2019127479A1 (en) Systems and methods for path determination
WO2021217420A1 (en) Lane tracking method and apparatus
CN112512887B (en) Driving decision selection method and device
US20220019845A1 (en) Positioning Method and Apparatus
CN113498529B (en) Target tracking method and device
US20240017719A1 (en) Mapping method and apparatus, vehicle, readable storage medium, and chip
CN112810603B (en) Positioning method and related product
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
WO2021163846A1 (en) Target tracking method and target tracking apparatus
CN115265561A (en) Vehicle positioning method, device, vehicle and medium
WO2022089577A1 (en) Pose determination method and related device thereof
US20230410535A1 (en) Method and apparatus for generating lane line, vehicle, storage medium and chip
WO2022022284A1 (en) Target object sensing method and apparatus
CN115205848A (en) Target detection method, target detection device, vehicle, storage medium and chip
WO2022061725A1 (en) Traffic element observation method and apparatus
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
WO2022041820A1 (en) Method and apparatus for planning lane-changing trajectory
CN117671402A (en) Recognition model training method and device and mobile intelligent equipment
CN114822216A (en) Method and device for generating parking space map, vehicle, storage medium and chip

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20925534

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20925534

Country of ref document: EP

Kind code of ref document: A1