WO2023060964A1 - Calibration method and apparatus, and associated device, storage medium and computer program product - Google Patents

Calibration method and apparatus, and associated device, storage medium and computer program product

Info

Publication number
WO2023060964A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
original image
pose
parameter
image
Application number
PCT/CN2022/105803
Other languages
English (en)
Chinese (zh)
Inventor
张壮
姜翰青
冯友计
邵文坚
刘浩敏
章国锋
鲍虎军
Original Assignee
上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Publication of WO2023060964A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C 25/005 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass: initial alignment, calibration or starting-up of inertial devices
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/01 Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/13 Receivers
    • G01S 19/23 Testing, monitoring, correcting or calibrating of receiver elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Definitions

  • The present disclosure relates to, but is not limited to, the technical field of computer vision, and in particular relates to a calibration method and related devices, equipment, storage media and computer program products.
  • Sensors such as those used in global navigation satellite systems and inertial navigation systems, together with cameras, have broad application prospects in many fields such as industrial measurement, robotics, and automatic driving.
  • the relative pose calibration between the sensor and the camera is a key factor restricting the accuracy of data fusion.
  • Lidar is usually introduced in engineering as an intermediary for calibrating the relative pose between the sensor and the camera, while traditional hand-eye calibration relies on full movement of the sensor in six degrees of freedom (Degree of Freedom, DoF), which is costly and makes the process complicated.
  • Embodiments of the present disclosure provide a calibration method and related devices, equipment, storage media and computer program products.
  • The first aspect of the embodiments of the present disclosure provides a calibration method, including: acquiring an original image captured by a camera at a preset timing and acquiring a first pose parameter measured by a sensor at the preset timing; obtaining a second pose parameter of the original image based on a first feature point in the original image; based on the first pose parameter and the second pose parameter, obtaining a second feature point corresponding to the first feature point in a first space coordinate system of the sensor, and using a reference transformation parameter and the first pose parameter, projecting the second feature point to a third feature point in the original image; and adjusting the reference transformation parameter based on the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image, to obtain a pose transformation parameter.
  • In this way, the original image captured by the camera at the preset timing and the first pose parameter measured by the sensor at the preset timing are obtained, and the second pose parameter of the original image is obtained based on the first feature point in the original image. On this basis, the second feature point corresponding to the first feature point in the first space coordinate system of the sensor is obtained based on the first pose parameter and the second pose parameter, and the third feature point projected from the second feature point into the original image is obtained using the reference transformation parameter and the first pose parameter, so that the reference transformation parameter can be adjusted based on the position difference between the first feature point corresponding to the second feature point and the third feature point, to obtain the pose transformation parameter. The sensor and the camera can therefore be calibrated directly, without an intermediary such as lidar, and the equation-solving problem constructed by traditional hand-eye calibration is converted into an optimization problem over the position difference; the calibration process thus reduces the reliance on full motion in all degrees of freedom, thereby reducing calibration cost and complexity.
  • In some embodiments, obtaining the second pose parameter of the original image includes: performing pose estimation based on the first feature point in the original image to obtain the second pose parameter of the original image. Obtaining the second feature point corresponding to the first feature point in the first space coordinate system of the sensor based on the first pose parameter and the second pose parameter includes: obtaining, based on the second pose parameter, a fourth feature point of the first feature point in a second space coordinate system of the camera; aligning the first pose parameter and the second pose parameter to obtain coordinate transformation parameters between the first space coordinate system and the second space coordinate system; and converting the fourth feature point by using the coordinate transformation parameters to obtain the second feature point.
  • In this way, the second pose parameter of the original image is obtained, and based on the second pose parameter, the fourth feature point of the first feature point in the second space coordinate system of the camera is obtained. The first pose parameter and the second pose parameter are then aligned to obtain the coordinate transformation parameters between the first space coordinate system and the second space coordinate system, so that the fourth feature point can be converted with the coordinate transformation parameters to obtain the second feature point. That is, the pose transformation parameters between the sensor and the camera can be optimized and adjusted according to the consistency between the first pose parameter of the sensor and the pose estimation of the original image, which is beneficial to improving the calibration accuracy.
  • In some embodiments, performing pose estimation based on the first feature point in the original image to obtain the second pose parameter of the original image includes: in a case where the first feature point in the original image, the shooting position of the original image, and the shooting angle of the original image satisfy preset conditions, using the original image as the current target image, where the shooting position is obtained by using the first pose parameter; and performing pose estimation by using the first feature point in the target image to obtain the second pose parameter of the target image. Obtaining, based on the second pose parameter, the fourth feature point of the first feature point in the second space coordinate system of the camera includes: triangulating the first feature point by using the second pose parameter to obtain the fourth feature point of the first feature point in the second space coordinate system of the camera.
  • In this way, the original image is used as the current target image, the shooting position is obtained based on the first pose parameter, the first feature point in the target image is used to perform pose estimation to obtain the second pose parameter of the target image, and the second pose parameter is used to triangulate the first feature point to obtain the fourth feature point of the first feature point in the second space coordinate system of the camera. The target image can therefore be selected in combination with the first pose parameters of the sensor during the three-dimensional reconstruction process, which reduces the possibility of introducing redundant data in the reconstruction and is beneficial to improving the calibration accuracy.
  • In some embodiments, at least one of the following is also performed: obtaining the total number of first feature points in the original image that meet a matching condition, obtaining the first shooting position difference between the original image and the previous target images, and obtaining the disparity between the original image and the previous target images; where the matching condition is matching an existing fourth feature point in the second space coordinate system, the existing fourth feature point being obtained by using the first feature point in a previous target image, and the preset conditions include at least one of the following: the total number is greater than a first threshold, the first shooting position difference is greater than a second threshold, and the disparity is greater than a third threshold.
  • In this way, at least one of the total number of matching first feature points, the first shooting position difference, and the disparity is obtained and used to select the target image, which helps reduce the possibility of introducing redundant data in the reconstruction process and improves the calibration accuracy.
  • In some embodiments, before obtaining the first shooting position difference between the original image and the previous target images, the method includes: respectively acquiring the second shooting position differences between adjacent previous target images, and counting the average value of the second shooting position differences to obtain the second threshold.
  • In this way, the second threshold is obtained based on the previous target images, so that in the incremental three-dimensional reconstruction process the second threshold can be continuously updated as new target images are introduced, which helps improve the calibration accuracy.
  • In some embodiments, obtaining the first shooting position difference between the original image and the previous target images includes: obtaining the first shooting position of the original image, obtaining the second shooting position of the previous target image closest to the original image, and using the difference between the first shooting position and the second shooting position as the first shooting position difference.
  • Using the difference between the first shooting position and the second shooting position as the first shooting position difference is beneficial to improving the accuracy of the first shooting position difference.
  • In some embodiments, adjusting the reference transformation parameter to obtain the pose transformation parameter includes: constructing, based on the position difference, an objective function with the reference transformation parameter as the optimization object, and using the objective function to obtain the pose transformation parameter.
  • In some embodiments, constructing the objective function with the reference transformation parameter as the optimization object includes: obtaining an initial transformation parameter between the camera and the sensor, and obtaining the parameter difference between the reference transformation parameter and the initial transformation parameter; obtaining the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image; and constructing the objective function based on the parameter difference and the position difference.
  • In this way, the initial transformation parameter between the camera and the sensor is obtained, the parameter difference between the reference transformation parameter and the initial transformation parameter is obtained, and the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image is obtained, so that the objective function is constructed jointly from the parameter difference and the position difference. Even if the sensor does not move sufficiently during the calibration process, this ensures that the solved pose transformation parameters stay as close to the true value as possible, which helps improve the calibration accuracy.
  • In some embodiments, the second feature point is obtained based on the first feature points in at least one target image, the target image being selected from several original images at preset timings. Obtaining the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image includes: respectively using each target image as the current image; obtaining first position information of the first feature point corresponding to the second feature point in the current image, and obtaining second position information of the third feature point projected from the second feature point into the current image, where the second position information is obtained by projecting the second feature point using the internal parameters of the camera, the reference transformation parameter, and the first pose parameter measured at the preset timing corresponding to the current image; obtaining the pixel distance between the first feature point and the third feature point based on the first position information and the second position information; and obtaining the position difference based on the pixel distances of the at least one target image.
  • In this way, the second feature point is obtained based on the first feature points in at least one target image selected from several original images at preset timings. Each target image is used in turn as the current image, the first position information of the first feature point corresponding to the second feature point in the current image is obtained, and the second position information of the third feature point projected from the second feature point into the current image is obtained by projecting the second feature point with the internal parameters of the camera, the reference transformation parameter, and the first pose parameter measured at the preset timing corresponding to the current image. The pixel distance between the first feature point and the third feature point is then obtained from the first position information and the second position information, and the position difference is obtained from the pixel distances of the at least one target image. The position difference can therefore be obtained by counting, over the target images, the pixel distances between the first feature points corresponding to the second feature points obtained by three-dimensional reconstruction and the third feature points projected into the target images, which helps improve the calibration accuracy.
  • In some embodiments, the first position information includes the first pixel coordinates of the first feature point, and the second position information includes the second pixel coordinates of the third feature point. Obtaining the pixel distance between the first feature point and the third feature point includes: if a first feature point corresponding to the second feature point exists in the current image, using the first pixel coordinates and the second pixel coordinates to obtain the pixel distance between the first feature point and the third feature point.
  • In this way, with the first position information including the first pixel coordinates of the first feature point and the second position information including the second pixel coordinates of the third feature point, the pixel distance between the first feature point and the third feature point is obtained by using the first pixel coordinates and the second pixel coordinates.
  • In some embodiments, obtaining the pixel distance between the first feature point and the third feature point also includes: in a case where no first feature point corresponding to the second feature point exists in the current image, setting the pixel distance to a preset value.
  • setting the pixel distance to a preset value can help improve the accuracy of the pixel distance.
  • the camera and the sensor are arranged on the preset carrier, and the original image and the first pose parameters are acquired during the non-linear motion of the preset carrier.
  • In this way, the camera and the sensor are set on the preset carrier, and the original image and the first pose parameters are obtained during the non-linear motion of the preset carrier, which reduces matrix degradation in the calibration process and helps improve the calibration accuracy.
  • The second aspect of the embodiments of the present disclosure provides a calibration device, including an information acquisition part, a parameter acquisition part, a point cloud acquisition part, and a parameter adjustment part. The information acquisition part is configured to acquire the original image captured by the camera at a preset timing and the first pose parameter measured by the sensor at the preset timing; the parameter acquisition part is configured to obtain the second pose parameter of the original image based on the first feature point in the original image; the point cloud acquisition part is configured to obtain, based on the first pose parameter and the second pose parameter, the second feature point corresponding to the first feature point in the first space coordinate system of the sensor, and to obtain, using the reference transformation parameter and the first pose parameter, the third feature point projected from the second feature point into the original image; and the parameter adjustment part is configured to adjust the reference transformation parameter based on the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image, to obtain the pose transformation parameter.
  • the third aspect of the embodiments of the present disclosure provides an electronic device, including a memory and a processor coupled to each other, and the processor is configured to execute program instructions stored in the memory, so as to implement the calibration method in the first aspect above.
  • a fourth aspect of the embodiments of the present disclosure provides a computer-readable storage medium, on which program instructions are stored, and when the program instructions are executed by a processor, the calibration method in the above-mentioned first aspect is implemented.
  • The fifth aspect of the embodiments of the present disclosure provides a computer program product, the computer program product including a computer program or instructions; when the computer program or instructions run on an electronic device, the electronic device implements the calibration method in the above first aspect.
  • In this way, the original image captured by the camera at the preset timing and the first pose parameter measured by the sensor at the preset timing are obtained, and the second pose parameter of the original image is obtained based on the first feature point in the original image; the second feature point corresponding to the first feature point in the first space coordinate system of the sensor is obtained, and the third feature point projected from the second feature point into the original image is obtained using the reference transformation parameters and the first pose parameters; and the reference transformation parameters are adjusted based on the position difference between the first feature point corresponding to the second feature point and the third feature point, to obtain the pose transformation parameters. The sensor and the camera can therefore be directly calibrated without an intermediary such as lidar, and the equation-solving problem constructed by traditional hand-eye calibration is converted into an optimization problem over the position difference, which reduces the reliance on full motion in all degrees of freedom and thereby reduces calibration cost and complexity.
  • FIG. 1A shows a schematic diagram of the architecture of an execution system of a calibration method provided by an embodiment of the present disclosure;
  • FIG. 1B is a schematic flowchart of a calibration method provided by an embodiment of the present disclosure;
  • FIG. 2 is a schematic flowchart of constructing an objective function based on the position difference;
  • FIG. 3 is a schematic flowchart of a calibration method provided by an embodiment of the present disclosure;
  • FIG. 4 is a schematic frame diagram of a calibration device provided by an embodiment of the present disclosure;
  • FIG. 5 is a schematic frame diagram of an embodiment of an electronic device provided by an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of a computer-readable storage medium provided by an embodiment of the present disclosure.
  • "System" and "network" are often used interchangeably herein.
  • The term "and/or" herein is an association relationship describing associated objects, indicating that three relationships can exist; for example, A and/or B can mean: A exists alone, A and B exist at the same time, or B exists alone, i.e., three conditions in total.
  • The character "/" herein generally indicates that the contextual objects are in an "or" relationship.
  • "A plurality" herein means two or more.
  • FIG. 1A shows a schematic architecture diagram of a system for executing a calibration method provided by an embodiment of the present disclosure.
  • a camera 100 and a sensor 300 are respectively connected to a computer processing device 400 through a network 200 .
  • the network 200 may be a wide area network or a local area network, or a combination of both.
  • the computer processing device 400, the camera 100 and the sensor 300 may be physically separate or integrated.
  • The camera 100 can send or store the original image captured at a preset timing to the computer processing device 400 through the network 200, and at the same time the sensor 300 can also send or store the first pose parameters measured at the preset timing through the network 200.
  • The computer processing device 400 obtains the second pose parameter of the original image based on the first feature point in the original image; based on the first pose parameter and the second pose parameter, obtains the second feature point corresponding to the first feature point in the first space coordinate system of the sensor; using the reference transformation parameter and the first pose parameter, obtains the third feature point projected from the second feature point into the original image; and, based on the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image, adjusts the reference transformation parameters to obtain the pose transformation parameters.
  • In this way, the sensor and the camera can be directly calibrated without an intermediary such as lidar, and the equation-solving problem constructed by traditional hand-eye calibration is converted into an optimization problem over the position difference, reducing the dependence on full motion in all degrees of freedom during the calibration process, so the calibration cost and complexity can be reduced.
  • Fig. 1B is a schematic flowchart of a calibration method provided by an embodiment of the present disclosure, the method is executed by an electronic device, please refer to Fig. 1B, the method includes the following steps:
  • Step S11 Obtain the original image captured by the camera at the preset timing and acquire the first pose parameters measured by the sensor at the preset timing.
  • The application scope of the sensor may include, but is not limited to, the Global Navigation Satellite System (GNSS), the Inertial Navigation System (INS), etc., which is not limited here.
  • the sensor can be applied to the global satellite navigation system and the inertial navigation system at the same time, which is not limited here.
  • the first pose parameter is the acquired pose parameter of the global navigation satellite system or the inertial navigation system.
  • The camera and the sensor can be set on a preset carrier, and the original image and the first pose parameters are acquired during the non-linear motion of the preset carrier; that is, in the calibration process it is only necessary to ensure that the preset carrier moves in a non-linear manner, without requiring full movement in six degrees of freedom, which is beneficial to reducing the complexity of calibration.
  • the preset carrier can be set according to the actual application situation.
  • For example, in the field of automatic driving, the preset carrier can be the car body; that is, the camera and the sensor can be set on the car body, such as the sensor on the roof and the camera near the front windshield, which is not limited here. Alternatively, in the field of robotics, the sensor can be set inside the robot and the camera on top of the robot, which is not limited here.
  • Other scenarios can be deduced similarly.
  • The above non-linear motion may include, but is not limited to, translation, rotation, etc., which is not limited here.
  • the preset carrier can perform curved movements during the calibration process, and so on.
  • the camera captures N original images per second, and the sensor measures the first pose parameters correspondingly at the N preset timings when the N original images are captured.
  • For example, the original image O1 captured by the camera at preset timing t01 and the first pose parameter measured by the sensor at t01 can be obtained; similarly, the original image O2 captured by the camera at preset timing t02 and the first pose parameter measured by the sensor at t02 can be acquired, and so on.
  • Step S12 Obtain a second pose parameter of the original image based on the first feature point in the original image.
  • In some embodiments, the first feature point in the original image can be extracted by means of Scale-Invariant Feature Transform (SIFT), Oriented FAST and Rotated BRIEF (ORB), etc., which is not limited here.
  • The first feature point may include relatively prominent points in the original image, for example, including but not limited to contour points, bright spots in darker areas, dark spots in brighter areas, etc., which is not limited here.
  • the first feature representation of the first feature points may also be obtained.
  • the first feature representation can be represented by a 0-1 vector, and reference can be made to related technical descriptions such as SIFT and ORB.
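  • As an illustration, a minimal sketch of this feature extraction step using OpenCV is shown below; the image file name is a hypothetical placeholder, and SIFT and ORB (both mentioned above) are interchangeable here.

```python
import cv2

# Extract first feature points and their feature representations (descriptors)
# from an original image; "original_image.png" is a placeholder file name.
image = cv2.imread("original_image.png", cv2.IMREAD_GRAYSCALE)
detector = cv2.SIFT_create()  # or cv2.ORB_create() for binary descriptors
keypoints, descriptors = detector.detectAndCompute(image, None)
# keypoints hold the pixel locations of the first feature points;
# descriptors are their first feature representations.
```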
  • 3D reconstruction can be performed based on the first feature points in the original image, and the 3D reconstruction includes processes such as feature matching, so that the second pose parameters of the original image can be obtained through feature matching.
  • the process of 3D reconstruction can refer to reconstruction methods such as Structure From Motion (SFM).
  • In some embodiments, in a case where the first feature point in the original image, the shooting position of the original image, and the shooting angle of the original image meet the preset conditions, the original image is used as the current target image, where the shooting position is obtained by using the first pose parameters.
  • The first feature points in the target image can be used for pose estimation to obtain the second pose parameters of the target image.
  • For details of the preset conditions, refer to the relevant description in the following disclosed embodiments. In this way, only the original images that meet the preset conditions are selected as target images for subsequent three-dimensional reconstruction, thereby reducing the probability of introducing redundant data and improving the calibration accuracy.
  • When the original image is the first frame image captured by the camera, the original image may be directly used as the target image; after that, each frame of original image can be judged against the above preset conditions to determine whether to use it as a target image.
  • The shooting position can be measured by the global navigation satellite system, and on this basis the translation parameter t of the sensor can be obtained.
  • the inertial navigation system can also be used to measure the acceleration on the x, y and z axes, and on this basis, the rotation parameter R of the sensor can be obtained.
  • Step S13 Based on the first pose parameter and the second pose parameter, obtain the second feature point corresponding to the first feature point in the first space coordinate system of the sensor, and use the reference transformation parameter and the first pose parameter to obtain the third feature point projected from the second feature point into the original image.
  • In some embodiments, pose estimation is performed based on the first feature points in the original image to obtain the second pose parameters of the original image, and based on the second pose parameters, the fourth feature point of the first feature point in the second space coordinate system of the camera can be obtained; for details, refer to the relevant description in the following disclosed embodiments.
  • On this basis, the first pose parameter and the second pose parameter can be aligned to obtain the coordinate transformation parameters between the first space coordinate system of the sensor and the second space coordinate system of the camera, so that the coordinate transformation parameters can be used to convert the fourth feature point and obtain the second feature point corresponding to the first feature point in the first space coordinate system of the sensor.
  • For example, target image 01, target image 02, ..., target image M can be sequentially screened from the original images, and for each target image the second pose parameter and the first pose parameter measured by the sensor at the corresponding preset timing can be obtained; that is, the second pose parameters visually observed by the camera and the first pose parameters measured by the sensor can be obtained at different trajectory points during the movement.
  • The fourth feature point can be converted by using the coordinate transformation parameters to obtain the second feature point corresponding to the first feature point in the first space coordinate system of the sensor, so that the reconstructed fourth feature points can be aligned to the first space coordinate system where the sensor is located, improving the accuracy of the subsequent calculation of the pose transformation parameters.
  • Specifically, the coordinate transformation parameters from the second space coordinate system of the camera to the first space coordinate system of the sensor can be obtained; on this basis, the coordinate transformation parameters can be multiplied by the coordinate information of the fourth feature point in the second space coordinate system of the camera to obtain the second feature point corresponding to the first feature point in the first space coordinate system of the sensor.
  • For ease of description, the jth second feature point in the first space coordinate system can be denoted as X_j, and the first feature point corresponding to the second feature point X_j in the ith target image can be denoted as x_j.
  • the second feature point may be projected by using the internal parameters of the camera, the reference transformation parameter and the first pose parameter, so as to obtain the third feature point projected from the second feature point to the original image.
  • the internal parameters of the camera are parameters related to the characteristics of the camera itself, such as but not limited to the focal length and pixel size of the camera.
  • the internal parameters of the camera can be denoted as K.
  • The internal parameter K of the camera can be expressed as:

K = \begin{bmatrix} f_x & s & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{bmatrix}

  • where f_x and f_y represent the focal length in the horizontal direction and the focal length in the vertical direction, respectively, x_0 and y_0 represent the principal point coordinates, and s represents the coordinate axis tilt parameter.
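  • For concreteness, the intrinsic matrix can be assembled as below; the numeric values are illustrative placeholders, not calibrated values.

```python
import numpy as np

fx, fy = 1200.0, 1195.0   # horizontal / vertical focal lengths (pixels)
x0, y0 = 640.0, 360.0     # principal point coordinates
s = 0.0                   # coordinate axis tilt (skew), often ~0
K = np.array([[fx, s,  x0],
              [0., fy, y0],
              [0., 0., 1.]])
```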
  • In some embodiments, the reference transformation parameter may include a rotation transformation parameter T_R and a translation transformation parameter T_t, and the first pose parameter may include a rotation parameter R and a translation parameter t; R_i represents the rotation parameter in the first pose parameter of the ith target image, t_i represents the translation parameter in the first pose parameter of the ith target image, and x'_j represents the third feature point projected from the second feature point X_j into the ith target image.
  • the reference transformation parameters also represent the calibration relationship between the sensor and the camera.
  • the reference transformation parameters can be set in advance, and the reference transformation parameters can be optimized and adjusted in subsequent steps, and the optimized and adjusted reference transformation parameters can be used as pose transformation parameters.
  • For the optimization and adjustment process, please refer to the subsequent description.
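  • A minimal sketch of this projection step is given below. The patent's exact composition of the formula is not reproduced here; the sketch assumes that (R_i, t_i) maps the first space coordinate system into the sensor body frame and that the reference transformation (T_R, T_t) maps the sensor frame into the camera frame.

```python
import numpy as np

def project_point(K, T_R, T_t, R_i, t_i, X_j):
    """Project second feature point X_j to third feature point x'_j in image i."""
    X_sensor = R_i @ X_j + t_i       # first space coordinate system -> sensor frame (assumption)
    X_camera = T_R @ X_sensor + T_t  # sensor frame -> camera frame via reference transform
    uvw = K @ X_camera               # pinhole projection with internal parameters K
    return uvw[:2] / uvw[2]          # pixel coordinates of the third feature point
```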
  • Step S14 Based on the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point to the original image, adjust the reference conversion parameters to obtain pose conversion parameters.
  • It can be understood that the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image should be as small as possible, so by minimizing the position difference to adjust the reference transformation parameters, the optimal solution of the pose transformation parameters can be obtained.
  • the reference conversion parameter can be set as an unknown quantity, so that the third feature point can be represented by an unknown quantity "reference conversion parameter".
  • On this basis, the position difference can be minimized, the optimal solution of the reference transformation parameters can be obtained by solving, and the optimal solution can be used as the final pose transformation parameters.
  • Alternatively, the reference transformation parameter can first be set to an initial value, and the initial value used to obtain the position difference; the initial value can then be increased (or decreased) based on the position difference, and the position difference recalculated with the adjusted reference transformation parameters. Finally, when the position difference no longer decreases (or the position difference is smaller than a preset threshold), the latest reference transformation parameters can be used as the final pose transformation parameters.
  • an objective function with the reference transformation parameters as the optimization object can be constructed, and the objective function can be solved, and the solution result can be used as the final pose transformation parameter.
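  • A hedged sketch of such an optimization is shown below, parameterizing the rotation as a rotation vector and using SciPy's least-squares solver; both are illustrative choices, not mandated by the disclosure. Here `observations[i][j]` holds the first feature point x_j in image i, or None when no match exists.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, K, sensor_poses, X, observations):
    # params packs the reference transformation: rotation vector + translation.
    T_R = Rotation.from_rotvec(params[:3]).as_matrix()
    T_t = params[3:6]
    res = []
    for (R_i, t_i), obs_i in zip(sensor_poses, observations):
        for X_j, x_j in zip(X, obs_i):
            if x_j is None:          # no matching first feature point in image i
                continue
            uvw = K @ (T_R @ (R_i @ X_j + t_i) + T_t)
            res.append(uvw[:2] / uvw[2] - x_j)  # position difference term
    return np.concatenate(res)

# result = least_squares(reprojection_residuals, np.zeros(6),
#                        args=(K, sensor_poses, X, observations))
```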
  • In this way, the original image captured by the camera at the preset timing and the first pose parameter measured by the sensor at the preset timing are obtained, the second pose parameter of the original image is obtained based on the first feature point in the original image, the second feature point corresponding to the first feature point in the first space coordinate system of the sensor is obtained, and the third feature point projected from the second feature point into the original image is obtained using the reference transformation parameters and the first pose parameters, so that the reference transformation parameters can be adjusted based on the position difference between the first feature point corresponding to the second feature point and the third feature point, to obtain the pose transformation parameters. The sensor and the camera can therefore be directly calibrated without an intermediary such as lidar, and the equation-solving problem constructed by traditional hand-eye calibration is converted into an optimization problem over the position difference, which reduces the reliance on full motion in all degrees of freedom and thereby reduces calibration cost and complexity.
  • FIG. 2 is a schematic flowchart of an embodiment of step S14 in FIG. 1B, and the method may include the following steps:
  • Step S21 Obtain the initial conversion parameters between the camera and the sensor, and obtain the parameter difference between the reference conversion parameters and the initial conversion parameters.
  • the initial transformation parameters may include initial rotation transformation parameters and initial translation transformation parameters.
  • The initial rotation transformation parameter may be denoted as T_R0, and the initial translation transformation parameter may be denoted as T_t0.
  • In some embodiments, the camera and the sensor are arranged on the preset carrier at preset relative positions; on this basis, the initial transformation parameters between the two can be obtained based on the preset relative positions.
  • On this basis, the initial rotation transformation parameter T_R0 and the initial translation transformation parameter T_t0 can each be expressed according to the preset relative positions between the camera and the sensor, and other initial transformation parameters can be deduced in the same way.
  • Step S22 Obtain the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image.
  • In some embodiments, the second feature point may be obtained based on the first feature points in at least one target image, the target image being selected from several original images at preset timings. On this basis, each target image can be used in turn as the current image, so as to obtain the first position information of the first feature point corresponding to the second feature point in the current image and the second position information of the third feature point projected from the second feature point into the current image, where the second position information is obtained by projecting the second feature point using the internal parameters of the camera, the reference transformation parameters, and the first pose parameters measured at the preset timing corresponding to the current image.
  • The pixel distance between the first feature point and the third feature point can be obtained based on the first position information and the second position information, and the position difference can be obtained based on the pixel distances of the at least one target image.
  • In some embodiments, the first position information includes the first pixel coordinates of the first feature point, and the second position information includes the second pixel coordinates of the third feature point. In a case where a first feature point corresponding to the second feature point exists in the current image, the pixel distance between the first feature point and the third feature point can be obtained by using the first pixel coordinates and the second pixel coordinates; in a case where no first feature point corresponding to the second feature point exists in the current image, the pixel distance can be set to a preset value.
  • the preset value can be set according to actual application needs, for example, the preset value can be set to 0, which is not limited here.
  • the pixel distance is obtained by distinguishing whether there is a first feature point corresponding to the second feature point in the current image, which can help improve the accuracy of the pixel distance.
  • For ease of description, the number of the at least one target image can be recorded as n, the number of second feature points as m, and an indicator as v_ij: when the current image i has a first feature point x_j corresponding to the second feature point X_j, v_ij can take 1, and the pixel distance is calculated from the first pixel coordinates of the first feature point and the second pixel coordinates of the third feature point; when the current image i does not have a first feature point x_j corresponding to the second feature point X_j, v_ij can take a preset value (e.g., 0). In some embodiments, by counting over the at least one target image, the position difference can be obtained.
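  • A small sketch of accumulating this position difference is shown below, with v_ij as the indicator described above.

```python
import numpy as np

def position_difference(pixel_distances, v):
    # pixel_distances, v: n x m arrays over n target images and m second
    # feature points; entries of v are 1 for matched points and a preset
    # value (e.g. 0) otherwise.
    return float(np.sum(v * pixel_distances))
```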
  • Step S23 Construct an objective function based on the parameter difference and the position difference.
  • The position difference can be expressed as:

E_{pos} = \sum_{i=1}^{n} \sum_{j=1}^{m} v_{ij} \, \lVert x_j - x'_j \rVert
  • In this way, the initial transformation parameters between the camera and the sensor are obtained, the parameter difference between the reference transformation parameters and the initial transformation parameters is obtained, and the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image is obtained, so that the objective function is constructed from both the parameter difference and the position difference. Even if the sensor does not move sufficiently during the calibration process, this ensures that the solved pose transformation parameters are as close to the true value as possible, which helps improve the calibration accuracy.
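  • A hedged sketch of the combined objective is given below; the weight `lam` on the parameter difference is an illustrative choice, not specified by the disclosure.

```python
import numpy as np

def objective(pos_diff, T_R, T_t, T_R0, T_t0, lam=1.0):
    # Position difference plus a parameter-difference term that keeps the
    # reference transformation parameters close to the initial ones.
    param_diff = (np.linalg.norm(T_R - T_R0) ** 2
                  + np.linalg.norm(T_t - T_t0) ** 2)
    return pos_diff + lam * param_diff
```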
  • FIG. 3 is a schematic flowchart of a calibration method provided by an embodiment of the present disclosure. The method may include the following steps:
  • Step S31 Obtain the original image taken by the camera at the preset timing and acquire the first pose parameters measured by the sensor at the preset timing.
  • Step S32 Perform pose estimation based on the first feature point in the original image to obtain the second pose parameter of the original image, and based on the second pose parameter, obtain the fourth feature point of the first feature point in the second space coordinate system of the camera.
  • In some embodiments, in a case where the first feature point in the original image, the shooting position of the original image, and the shooting angle of the original image meet the preset conditions, the original image is used as the current target image, and the first feature point of the target image is used to perform pose estimation to obtain the second pose parameter of the target image, so that the first feature points can be triangulated using the second pose parameter to obtain the fourth feature points of the first feature points in the second space coordinate system of the camera.
  • When the original image is the first frame image captured by the camera, the original image can be directly used as the target image, and the second pose parameter of this target image can be set to a preset pose parameter.
  • In some embodiments, at least one of the following may be performed in advance: obtaining the total number of first feature points satisfying the matching condition in the original image, obtaining the first shooting position difference between the original image and the previous target images, and obtaining the disparity between the original image and the previous target images. On this basis, the matching condition can be set to matching an existing fourth feature point in the second space coordinate system of the camera, where the existing fourth feature point can be obtained by using the first feature points in the previous target images, and the preset conditions can be set to include at least one of the following: the total number is greater than the first threshold, the first shooting position difference is greater than the second threshold, and the disparity is greater than the third threshold.
  • the target image can be selected according to the total number, the first shooting position difference and the parallax, which can help reduce the possibility of introducing redundant data in the reconstruction process and help improve the calibration accuracy.
  • For example, the above steps of obtaining the total number, obtaining the first shooting position difference, and obtaining the disparity can all be performed, and the above preset conditions can include: the total number is greater than the first threshold, the first shooting position difference is greater than the second threshold, and the disparity is greater than the third threshold, so as to combine the total number, the first shooting position difference, and the disparity to select the target image.
  • In some embodiments, the first feature points in the previous target images can be used to perform pose estimation to obtain the second pose parameters of the previous target images, so that the first feature points can be triangulated using the previous target images; through this processing, the fourth feature points of the first feature points in the previous target images in the second space coordinate system can be obtained, so "incremental" three-dimensional reconstruction can be realized, as sketched below.
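  • A minimal sketch of the triangulation step with OpenCV is shown below; each pose is assumed to be an (R, t) pair such that K @ [R | t] forms the projection matrix of the corresponding target image.

```python
import cv2
import numpy as np

def triangulate(K, pose_a, pose_b, pts_a, pts_b):
    # pts_a, pts_b: N x 2 arrays of matched first feature points in two target images.
    P_a = K @ np.hstack([pose_a[0], pose_a[1].reshape(3, 1)])
    P_b = K @ np.hstack([pose_b[0], pose_b[1].reshape(3, 1)])
    X_h = cv2.triangulatePoints(P_a, P_b, pts_a.T, pts_b.T)  # 4 x N homogeneous
    return (X_h[:3] / X_h[3]).T  # N x 3 fourth feature points in the camera frame
```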
  • In some embodiments, the feature representation of the first feature point can also be obtained. On this basis, if the feature similarity between the feature representation of the first feature point and the feature representation of the first feature point corresponding to an existing fourth feature point in a previous target image is greater than a preset similarity threshold, the first feature point can be considered to match the existing fourth feature point; otherwise, the first feature point can be considered not to match the existing fourth feature point.
  • the process of obtaining feature representation can refer to related technologies of SIFT and ORB.
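  • A sketch of this matching condition is shown below; cosine similarity over float descriptors (e.g. SIFT) is an illustrative choice, and binary ORB descriptors would instead be compared by Hamming distance.

```python
import numpy as np

def matches_existing_point(desc_new, desc_map, similarity_threshold=0.9):
    # Compare the feature representation of a new first feature point with the
    # representation attached to an existing fourth feature point.
    sim = float(desc_new @ desc_map
                / (np.linalg.norm(desc_new) * np.linalg.norm(desc_map)))
    return sim > similarity_threshold
```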
  • If the total number of first feature points satisfying the matching condition in the original image is greater than the first threshold, it can be considered that there is high common observation between the original image and the map composed of the existing fourth feature points, and the original image is used as a candidate image.
  • In some embodiments, the first shooting position of the original image can be obtained based on the first pose parameter measured at the preset timing of the original image, the previous target image closest in timing to the original image can be obtained, and the second shooting position of that target image can be obtained based on the first pose parameter measured at its preset timing; on this basis, the difference between the first shooting position and the second shooting position can be calculated as the first shooting position difference.
  • For ease of description, the first shooting position of the (k+1)th original image can be recorded as C_{k+1}, the preset timing of the previous target image closest to the original image can be recorded as f(k+1), and the second shooting position of the target image whose preset timing is f(k+1) can be recorded as C_{f(k+1)}; the difference between the first shooting position and the second shooting position can then be recorded as ||C_{k+1} - C_{f(k+1)}||.
  • the second shooting position differences between two adjacent previous target images may be obtained respectively, and the average value of the second shooting position differences may be calculated to obtain the second threshold.
  • For ease of description, the shooting position of the kth previous target image can be recorded as C_k, the preset timing of the target image closest to target image k (for example, the target image closest to and before target image k, or the target image closest to and after target image k) can be recorded as f(k), and the shooting position of the target image with preset timing f(k) as C_{f(k)}; the second shooting position difference can then be recorded as ||C_k - C_{f(k)}||. The average value can be expressed as:

\bar{d} = \frac{1}{M} \sum_{k=1}^{M} \lVert C_k - C_{f(k)} \rVert

  • where M is the number of second shooting position differences, and the second threshold can be expressed as the product of the average value and a preset coefficient μ.
  • The preset coefficient μ can be set according to actual application conditions, for example, set to be less than 1, which is not limited here. Therefore, if the candidate image satisfies that the first shooting position difference is greater than the second threshold, it can be considered that the candidate image deviates from the previous target images by a sufficiently large shooting distance, and the candidate image can be kept.
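  • A small sketch of computing the second threshold is shown below; taking each previous target image's nearest-timing neighbour to be the preceding one, and mu = 0.5, are illustrative assumptions.

```python
import numpy as np

def second_threshold(target_positions, mu=0.5):
    # target_positions: list of shooting positions C_k of the previous target images.
    diffs = [np.linalg.norm(c_k - c_prev)
             for c_prev, c_k in zip(target_positions[:-1], target_positions[1:])]
    return mu * float(np.mean(diffs))
```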
  • In some embodiments, the disparity can be obtained by using the difference between the imaging positions of a certain point in space projected onto different images; for the detailed process, refer to the relevant description of triangulation. Therefore, if the disparity between the candidate image and the previous target images is greater than the third threshold, the candidate image can be used as the target image, which helps reduce the probability of subsequent triangulation errors caused by insufficient parallax between images, thereby improving reconstruction accuracy.
  • the feature representation of the first feature point can be used to construct a scene graph to associate with the target image.
  • For example, target image 1 and target image i can be associated, so that the relative pose parameters between target image 1 and target image i can be obtained by using the first feature points in target image 1 and the first feature points in target image i, and the second pose parameter of target image i can be obtained based on the second pose parameter of target image 1 and the above relative pose parameters.
  • Other target images can be deduced by analogy.
  • Step S33 Align the first pose parameters and the second pose parameters to obtain coordinate conversion parameters between the first space coordinate system and the second space coordinate system.
  • In some embodiments, the first pose parameters and the second pose parameters may be aligned based on ICP (Iterative Closest Point) or other methods to obtain the coordinate transformation parameters between the first space coordinate system and the second space coordinate system.
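  • A hedged sketch of this alignment is given below, using a closed-form rigid (Kabsch-style) fit over the trajectory positions in place of a full ICP; scale is assumed to be consistent between the two trajectories.

```python
import numpy as np

def align_trajectories(cam_positions, sensor_positions):
    # cam_positions, sensor_positions: N x 3 arrays of corresponding trajectory points.
    mu_c = cam_positions.mean(axis=0)
    mu_s = sensor_positions.mean(axis=0)
    H = (cam_positions - mu_c).T @ (sensor_positions - mu_s)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T        # rotation: camera coordinate system -> sensor coordinate system
    t = mu_s - R @ mu_c       # translation of the coordinate transformation
    return R, t
```

  • The fourth feature points can then be converted with the returned (R, t), e.g. X_sensor = R @ X_cam + t, to obtain the second feature points in the first space coordinate system.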
  • Step S34 Transform the fourth feature point by using the coordinate transformation parameters to obtain the second feature point.
  • In some embodiments, the coordinate information of the fourth feature point in the second space coordinate system of the camera can be multiplied by the coordinate transformation parameters to obtain the second feature point corresponding to the first feature point in the first space coordinate system of the sensor.
  • Step S35 Using the reference conversion parameter and the first pose parameter, obtain the third feature point projected from the second feature point to the original image.
  • the second feature point may be projected by using the internal parameters of the camera, the reference transformation parameter and the first pose parameter, so as to obtain the third feature point projected from the second feature point to the original image.
  • Step S36 Based on the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point to the original image, adjust the reference conversion parameters to obtain pose conversion parameters.
  • step S36 may refer to step S14.
  • In this way, the second pose parameter of the original image and the fourth feature point of the first feature point in the second space coordinate system of the camera are obtained; on this basis, the first pose parameter and the second pose parameter are aligned to obtain the coordinate transformation parameters between the first space coordinate system and the second space coordinate system, so that the fourth feature point can be converted using the coordinate transformation parameters to obtain the second feature point. That is, the pose transformation parameters between the sensor and the camera can be optimized and adjusted according to the consistency between the first pose parameter of the sensor and the three-dimensional reconstruction of the original image, which is beneficial to improving the calibration accuracy.
  • FIG. 4 is a schematic frame diagram of a calibration device 40 provided by an embodiment of the present disclosure.
  • the calibration device 40 includes: an information acquisition part 41, a parameter acquisition part 42, a point cloud acquisition part 43 and a parameter adjustment part 44.
  • The information acquisition part 41 is configured to acquire the original image captured by the camera at a preset timing and the first pose parameter measured by the sensor at the preset timing.
  • The parameter acquisition part 42 is configured to obtain the second pose parameter of the original image based on the first feature point in the original image.
  • The point cloud acquisition part 43 is configured to obtain, based on the first pose parameter and the second pose parameter, the second feature point corresponding to the first feature point in the first space coordinate system of the sensor, and to use the reference conversion parameter and the first pose parameter to obtain the third feature point projected from the second feature point into the original image.
  • The parameter adjustment part 44 is configured to adjust the reference conversion parameter based on the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image, to obtain the pose conversion parameters.
  • In the above scheme, the original image captured by the camera at the preset timing and the first pose parameter measured by the sensor at the preset timing are acquired, and the second pose parameter of the original image is obtained based on the first feature point in the original image.
  • The second feature point corresponding to the first feature point in the first space coordinate system of the sensor is then obtained, and the reference conversion parameter and the first pose parameter are used to obtain the third feature point projected from the second feature point into the original image, so that the reference conversion parameter can be adjusted based on the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image.
  • The pose conversion parameters are thereby obtained, so the sensor and the camera can be calibrated directly without an intermediary such as lidar; the equation-solving problem constructed by traditional hand-eye calibration is converted into an optimization problem over position differences, and the calibration process no longer relies on full motion in all degrees of freedom, thereby reducing calibration cost and complexity.
  • In some embodiments, the parameter acquisition part 42 is configured to perform pose estimation based on the first feature points in the original image to obtain the second pose parameters of the original image.
  • The point cloud acquisition part 43 includes a feature point acquisition part, a coordinate alignment part and a coordinate conversion part: the feature point acquisition part is configured to obtain, based on the second pose parameter, the fourth feature point of the first feature point in the second space coordinate system of the camera; the coordinate alignment part is configured to align the first pose parameter with the second pose parameter to obtain the coordinate conversion parameters between the first space coordinate system and the second space coordinate system; and the coordinate conversion part is configured to convert the fourth feature point using the coordinate conversion parameters to obtain the second feature point.
  • In this way, the second pose parameters of the original image are obtained, and the fourth feature point of the first feature point in the second space coordinate system of the camera is obtained based on them; the first pose parameter and the second pose parameter are then aligned to obtain the coordinate conversion parameters between the two coordinate systems, so that the fourth feature point can be converted to obtain the second feature point. That is, the pose conversion parameters between the sensor and the camera can be optimized according to the consistency between the first pose parameter of the sensor and the pose estimation of the original image, which helps improve calibration accuracy.
  • In some embodiments, the parameter acquisition part 42 includes an image selection part configured to take the original image as the current target image when the original image satisfies a preset condition, wherein the shooting position is obtained using the first pose parameters; a pose estimation part configured to perform pose estimation using the first feature points in the target image to obtain the second pose parameter of the target image; and a feature point acquisition part configured to triangulate the first feature point using the second pose parameter to obtain the fourth feature point of the first feature point in the second space coordinate system of the camera.
  • In this way, the original image is taken as the current target image, the shooting position is determined using the first pose parameters, pose estimation is performed with the first feature points in the target image to obtain its second pose parameter, and the second pose parameter is used to triangulate the first feature point (a triangulation sketch follows this item) to obtain the fourth feature point in the second space coordinate system of the camera. The target image can thus be selected in combination with the first pose parameters of the sensor during three-dimensional reconstruction, which reduces the possibility of introducing redundant data in the reconstruction process and helps improve calibration accuracy.
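  • As an illustration, the following minimal sketch shows linear (DLT) triangulation from two views, one common way to realize this step; the two-view setting and the 3x4 projection matrices P1, P2 (camera intrinsics times pose) are assumptions for the example.

```python
import numpy as np

def triangulate_dlt(P1: np.ndarray, P2: np.ndarray,
                    x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Triangulate one first feature point observed at pixels x1, x2 under
    3x4 projection matrices P1, P2, yielding a fourth feature point in the
    second space coordinate system of the camera."""
    # Each observation contributes two linear constraints on the 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # homogeneous least-squares solution
    return X[:3] / X[3]             # dehomogenize
```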
  • In some embodiments, the parameter acquisition part 42 includes a preprocessing part configured to perform at least one of the following before the original image is taken as the current target image: acquiring the total number of first feature points in the original image that satisfy a matching condition; acquiring the first shooting position difference between the original image and the previous target image; and acquiring the parallax between the original image and the previous target image. The matching condition is to match an existing fourth feature point in the second space coordinate system, where the existing fourth feature points are obtained from the first feature points in the previous target images. The preset conditions include at least one of the following: the total number is greater than a first threshold, the first shooting position difference is greater than a second threshold, and the parallax is greater than a third threshold.
  • Selecting the target image according to the total number, the first shooting position difference and the parallax (a selection sketch follows this item) helps reduce the possibility of introducing redundant data in the reconstruction process and helps improve calibration accuracy.
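  • A minimal sketch of such a selection test follows; since the disclosure only requires at least one of the conditions to be configured, requiring all three at once here is an assumption for the example.

```python
def accept_as_target(num_matched: int, position_diff: float, parallax: float,
                     first_thr: int, second_thr: float, third_thr: float) -> bool:
    """Decide whether the original image becomes the current target image,
    based on the number of first feature points matching existing fourth
    feature points, the first shooting position difference, and the parallax
    with respect to the previous target image."""
    return (num_matched > first_thr
            and position_diff > second_thr
            and parallax > third_thr)
```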
  • In some embodiments, the preprocessing part includes a threshold acquisition part configured to respectively acquire the second shooting position differences between adjacent pairs of previous target images and to compute the average of these differences to obtain the second threshold (a sketch follows this item).
  • Since the second threshold is derived from the previous target images, it can be continuously updated as new target images are introduced during incremental three-dimensional reconstruction, which helps improve calibration accuracy.
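  • A minimal sketch of this adaptive threshold, assuming the shooting positions of the previous target images are stacked as an (N, 3) array; the array layout is an assumption for the example.

```python
import numpy as np

def second_threshold(target_positions: np.ndarray) -> float:
    """Average distance between the shooting positions of adjacent previous
    target images, recomputed whenever a new target image is accepted."""
    diffs = np.linalg.norm(np.diff(target_positions, axis=0), axis=1)
    return float(diffs.mean())
```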
  • In some embodiments, the preprocessing part includes a position difference obtaining part configured to obtain the first shooting position of the original image and the second shooting position of the previous target image closest to the original image, and to take the difference between the first shooting position and the second shooting position as the first shooting position difference.
  • Using the distance between the first shooting position and the second shooting position as the first shooting position difference helps improve the accuracy of the first shooting position difference.
  • In some embodiments, the parameter adjustment part 44 is configured to construct an objective function with the reference conversion parameters as the optimization object based on the position difference, and to solve the objective function to obtain the pose conversion parameters.
  • The parameter adjustment part 44 includes a parameter difference acquisition part configured to obtain the initial conversion parameters between the camera and the sensor and the parameter difference between the reference conversion parameters and the initial conversion parameters; a position difference acquisition part configured to obtain the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point into the original image; and an objective function construction part configured to construct the objective function based on the parameter difference and the position difference (a sketch of such an objective follows this item).
  • Because the objective function refers to both the parameter difference and the position difference, even if the sensor moves insufficiently during calibration, the solved pose conversion parameters remain as close to the true value as possible, which helps improve calibration accuracy.
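  • The following minimal sketch combines a position-difference (reprojection) term with a parameter-difference term; the 6-vector parameterization (rotation vector plus translation), the squared-error form and the weight are assumptions for the example, since the disclosure only requires both differences to enter the objective.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def make_transform(params: np.ndarray) -> np.ndarray:
    """Build a 4x4 sensor-to-camera transform from a 6-vector
    (rotation vector, translation) -- an assumed parameterization."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(params[:3]).as_matrix()
    T[:3, 3] = params[3:]
    return T

def objective(params, init_params, points_sensor_world, observed_px,
              K, sensor_poses, weight=1.0):
    """Objective with the reference conversion parameters as the optimization
    object: a position-difference (reprojection) term plus a parameter-difference
    term that keeps the solution near the initial conversion parameters."""
    T_ref = make_transform(np.asarray(params))
    # Parameter-difference term (regularizer toward the initial guess).
    cost = weight * float(((np.asarray(params) - init_params) ** 2).sum())
    for P, uv_obs, T_ws in zip(points_sensor_world, observed_px, sensor_poses):
        # Position-difference term: project the second feature point and
        # compare with the observed first feature point.
        P_cam = T_ref @ np.linalg.inv(T_ws) @ np.append(P, 1.0)
        uv = (K @ P_cam[:3])[:2] / P_cam[2]
        cost += float(((uv - uv_obs) ** 2).sum())
    return cost
```

Such a cost could then be handed to a generic solver, e.g. scipy.optimize.minimize with the initial conversion parameters as the starting point.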
  • In some embodiments, the second feature points are obtained based on the first feature points in at least one target image, and the target images are selected from several original images captured at the preset timings.
  • The position difference acquisition part includes: a current image selection part configured to take each target image in turn as the current image; a position information acquisition part configured to acquire the first position information of the first feature point corresponding to the second feature point in the current image, and the second position information of the third feature point projected from the second feature point into the current image, where the second position information is obtained by projecting the second feature point using the camera intrinsics, the reference conversion parameters and the first pose parameter measured at the preset timing corresponding to the current image; a pixel distance acquisition part configured to obtain the pixel distance between the first feature point and the third feature point based on the first position information and the second position information; and a position difference determination part configured to obtain the position difference based on the pixel distances of the at least one target image.
  • In this way, the pixel distances between the projections of the reconstructed second feature points and the observed first feature points can be accumulated over the target images to obtain the position difference.
  • In some embodiments, the first position information includes the first pixel coordinates of the first feature point, and the second position information includes the second pixel coordinates of the third feature point.
  • The pixel distance acquisition part is configured to obtain the pixel distance between the first feature point and the third feature point using the first pixel coordinates and the second pixel coordinates when the first feature point corresponding to the second feature point exists in the current image, and to set the pixel distance to a preset value when no such first feature point exists in the current image (a sketch follows this item).
  • Distinguishing these two cases helps improve the accuracy of the pixel distance.
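  • A minimal sketch of this pixel-distance rule; the fallback value of 0.0 is an assumption for the example, since the disclosure only requires some preset value.

```python
import numpy as np

def pixel_distance(first_px, third_px, preset: float = 0.0) -> float:
    """Pixel distance between the first feature point and the projected
    third feature point; when the current image has no first feature point
    corresponding to the second feature point, fall back to a preset value."""
    if first_px is None:
        return preset
    diff = np.asarray(first_px, dtype=float) - np.asarray(third_px, dtype=float)
    return float(np.linalg.norm(diff))
```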
  • In some embodiments, the camera and the sensor are arranged on a preset carrier, and the original images and the first pose parameters are acquired while the preset carrier undergoes non-linear motion.
  • Acquiring the original images and the first pose parameters during non-linear motion of the preset carrier reduces matrix degeneration during the calibration process, which helps improve calibration accuracy.
  • FIG. 5 is a schematic frame diagram of an electronic device provided by an embodiment of the present disclosure.
  • the electronic device 50 includes a memory 51 and a processor 52 coupled to each other, and the processor 52 is configured to execute program instructions stored in the memory 51 to implement the steps of any calibration method embodiment described above.
  • the electronic device 50 may include, but is not limited to: a microcomputer and a server.
  • the electronic device 50 may also include mobile devices such as notebook computers, tablet computers, and mobile phones, which are not limited here.
  • The processor 52 is configured to control itself and the memory 51 to implement the steps in any of the above calibration method embodiments.
  • the processor 52 may also be called a central processing unit (Central Processing Unit, CPU).
  • Processor 52 may be an integrated circuit chip with signal processing capabilities.
  • The processor 52 can also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • The processor 52 may also be implemented jointly by multiple integrated circuit chips.
  • FIG. 6 is a schematic framework diagram of a computer-readable storage medium provided by an embodiment of the present disclosure.
  • the computer-readable storage medium 60 stores program instructions 601 that can be executed by the processor, and the program instructions 601 are used to implement the steps of any calibration method embodiment described above.
  • a computer readable storage medium may be a non-transitory computer readable storage medium, or may be a volatile computer readable storage medium.
  • a computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Examples of computer-readable storage media include: portable computer disks, hard disks, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM) or flash memory, Static Random-Access Memory (SRAM), portable Compact Disc Read-Only Memory (CD-ROM), Digital Video Discs (DVDs), memory sticks, floppy disks, mechanically encoded devices such as punched cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the foregoing.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • The computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to the respective computing/processing devices, or to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, or a wireless network.
  • the network may include at least one of copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and edge servers.
  • A network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.
  • Computer program instructions for performing the operations of the present disclosure may be assembly instructions, Industry Standard Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • Electronic circuits, such as programmable logic circuits, FPGAs, or Programmable Logic Arrays (PLAs), can be customized using the state information of the computer-readable program instructions; these electronic circuits can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
  • An embodiment of the present disclosure also provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above method.
  • Embodiments of the present disclosure also provide another computer program product that carries program code, where the instructions included in the program code can be configured to execute the calibration method described in the above method embodiments; for details, refer to the above method embodiments.
  • the above-mentioned computer program product may be realized by hardware, software or a combination thereof.
  • the computer program product may be embodied as a computer storage medium, and in other embodiments, the computer program product may be embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • What the embodiments of the present disclosure involve may be realized as at least one of a system, a method, and a computer program product.
  • a computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to implement various aspects of the present disclosure.
  • the functions or modules included in the apparatus provided by the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments, and the implementation can refer to the descriptions of the above method embodiments.
  • This disclosure relates to the field of augmented reality.
  • By acquiring the image information of a target object in the real environment and then using various vision-related algorithms to detect or identify the relevant features, states and attributes of the target object, an AR effect combining virtuality and reality that matches the application can be obtained.
  • The target object may involve faces, limbs, gestures and actions related to the human body, or markers related to objects, or sand tables, display areas or display items related to venues or places.
  • Vision-related algorithms can involve visual positioning, simultaneous localization and mapping (SLAM), 3D reconstruction, image registration, background segmentation, object key point extraction and tracking, object pose or depth detection, etc.
  • Applications of the present disclosure may involve not only interactive scenes related to real scenes or objects, such as touring, navigation, explanation, reconstruction and virtual-effect overlay display, but also people-related special-effects processing, such as makeup beautification, body beautification, special-effect display and virtual model display.
  • the relevant features, states and attributes of the target object can be detected or identified through the convolutional neural network.
  • the above-mentioned convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
  • a "part" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course it may also be a unit, a module or a non-modular one.
  • the disclosed methods and devices may be implemented in other ways.
  • The device implementations described above are illustrative; for example, the division into modules or units is a logical functional division, and other divisions are possible in actual implementation: units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • A unit described as a separate component may or may not be physically separate, and a component shown as a unit may or may not be a physical unit; it may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods in the various embodiments of the present disclosure.
  • The aforementioned storage media include: USB flash drives, removable hard disks, ROM, RAM, magnetic disks, optical disks, and other media that can store program code.
  • Embodiments of the present disclosure provide a calibration method and related devices, equipment, storage media and computer program products. The calibration method includes: acquiring the original image captured by the camera at a preset time sequence and acquiring the first pose parameter measured by the sensor at the preset time sequence; obtaining the second pose parameter of the original image based on the first feature point in the original image; based on the first pose parameter and the second pose parameter, obtaining the second feature point corresponding to the first feature point in the first space coordinate system of the sensor, and using the reference conversion parameter and the first pose parameter to obtain the third feature point projected from the second feature point onto the original image; and adjusting the reference conversion parameters based on the position difference between the first feature point corresponding to the second feature point and the third feature point projected onto the original image, to obtain the pose conversion parameters. In this way, the sensor and the camera can be calibrated directly without an intermediary such as lidar; the equation-solving problem constructed by traditional hand-eye calibration is converted into an optimization problem over position differences, and the calibration process then no longer relies on full motion in all degrees of freedom, thereby reducing calibration cost and complexity.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present disclosure relate to a calibration method and a related apparatus, device, storage medium and computer program product. The calibration method comprises: acquiring an original image captured by a camera at a preset time sequence, and acquiring a first pose parameter measured by a sensor at the preset time sequence; obtaining a second pose parameter of the original image on the basis of a first feature point in the original image; on the basis of the first pose parameter and the second pose parameter, obtaining a second feature point corresponding to the first feature point in a first space coordinate system of the sensor, and using a reference conversion parameter and the first pose parameter to obtain a third feature point projected from the second feature point onto the original image; and adjusting the reference conversion parameter on the basis of the position difference between the first feature point corresponding to the second feature point and the third feature point projected from the second feature point onto the original image, to obtain a pose conversion parameter. With the described solution, calibration cost and complexity can be reduced.
PCT/CN2022/105803 2021-10-14 2022-07-14 Procédé d'étalonnage et appareil, dispositif, support de stockage et produit-programme informatique associés WO2023060964A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111199862.8A CN114022560A (zh) 2021-10-14 2021-10-14 标定方法及相关装置、设备
CN202111199862.8 2021-10-14

Publications (1)

Publication Number Publication Date
WO2023060964A1 true WO2023060964A1 (fr) 2023-04-20

Family

ID=80055995

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/105803 WO2023060964A1 (fr) 2021-10-14 2022-07-14 Procédé d'étalonnage et appareil, dispositif, support de stockage et produit-programme informatique associés

Country Status (2)

Country Link
CN (1) CN114022560A (fr)
WO (1) WO2023060964A1 (fr)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022560A (zh) * 2021-10-14 2022-02-08 浙江商汤科技开发有限公司 标定方法及相关装置、设备
CN115601450B (zh) * 2022-11-29 2023-03-31 浙江零跑科技股份有限公司 周视标定方法及相关装置、设备、系统和介质


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190188533A1 (en) * 2017-12-19 2019-06-20 Massachusetts Institute Of Technology Pose estimation
CN108876854A (zh) * 2018-04-27 2018-11-23 腾讯科技(深圳)有限公司 相机姿态追踪过程的重定位方法、装置、设备及存储介质
CN108717712A (zh) * 2018-05-29 2018-10-30 东北大学 一种基于地平面假设的视觉惯导slam方法
CN111522043A (zh) * 2020-04-30 2020-08-11 北京联合大学 一种无人车激光雷达快速重新匹配定位方法
CN113436270A (zh) * 2021-06-18 2021-09-24 上海商汤临港智能科技有限公司 传感器标定方法及装置、电子设备和存储介质
CN114022560A (zh) * 2021-10-14 2022-02-08 浙江商汤科技开发有限公司 标定方法及相关装置、设备

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934871A (zh) * 2023-07-27 2023-10-24 湖南视比特机器人有限公司 一种基于标定物的多目系统标定方法、系统及存储介质
CN116934871B (zh) * 2023-07-27 2024-03-26 湖南视比特机器人有限公司 一种基于标定物的多目系统标定方法、系统及存储介质
CN116740183A (zh) * 2023-08-15 2023-09-12 浙江大学 一种双视角舱体位姿调整方法
CN116740183B (zh) * 2023-08-15 2023-11-07 浙江大学 一种双视角舱体位姿调整方法
CN117226853A (zh) * 2023-11-13 2023-12-15 之江实验室 一种机器人运动学标定的方法、装置、存储介质、设备
CN117226853B (zh) * 2023-11-13 2024-02-06 之江实验室 一种机器人运动学标定的方法、装置、存储介质、设备

Also Published As

Publication number Publication date
CN114022560A (zh) 2022-02-08

Similar Documents

Publication Publication Date Title
WO2023060964A1 (fr) Procédé d'étalonnage et appareil, dispositif, support de stockage et produit-programme informatique associés
CN110322500B (zh) 即时定位与地图构建的优化方法及装置、介质和电子设备
CN109887003B (zh) 一种用于进行三维跟踪初始化的方法与设备
JP7236565B2 (ja) 位置姿勢決定方法、装置、電子機器、記憶媒体及びコンピュータプログラム
US10825197B2 (en) Three dimensional position estimation mechanism
JP6258953B2 (ja) 単眼視覚slamのための高速初期化
CN110458865B (zh) 平面自然特征目标的原位形成
US7554575B2 (en) Fast imaging system calibration
WO2021139176A1 (fr) Procédé et appareil de suivi de trajectoire de piéton sur la base d'un étalonnage de caméra binoculaire, dispositif informatique et support de stockage
JP2014515530A (ja) モバイルデバイスのための平面マッピングおよびトラッキング
CN107852447A (zh) 基于设备运动和场景距离使电子设备处的曝光和增益平衡
Taiana et al. Tracking objects with generic calibrated sensors: An algorithm based on color and 3D shape features
CN110349212B (zh) 即时定位与地图构建的优化方法及装置、介质和电子设备
JP5833507B2 (ja) 画像処理装置
Lowe et al. Complementary perception for handheld slam
WO2022267257A1 (fr) Procédé d'enregistrement d'image, procédé de positionnement visuel, appareil, dispositif, support et programme
WO2023151251A1 (fr) Procédé et appareil de construction de carte, procédé et appareil de détermination de pose, dispositif et produit-programme d'ordinateur
US11403781B2 (en) Methods and systems for intra-capture camera calibration
WO2023083256A1 (fr) Procédé et appareil d'affichage de pose, et système, serveur, et support de stockage
CN114882106A (zh) 位姿确定方法和装置、设备、介质
CN114600162A (zh) 用于捕捉摄像机图像的场景锁定模式
Xu et al. Real-time camera tracking for marker-less and unprepared augmented reality environments
US20220405954A1 (en) Systems and methods for determining environment dimensions based on environment pose
CN110648353A (zh) 一种基于单目传感器的机器人室内定位方法和装置
Laskar et al. Robust loop closures for scene reconstruction by combining odometry and visual correspondences

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22879911

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE