WO2020181409A1 - Method for calibrating parameters of a photographing device, apparatus, and storage medium - Google Patents

Method for calibrating parameters of a photographing device, apparatus, and storage medium

Info

Publication number
WO2020181409A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
depth information
photographing device
moment
image
Prior art date
Application number
PCT/CN2019/077475
Other languages
English (en)
Chinese (zh)
Inventor
熊策
徐彬
周游
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/077475 (WO2020181409A1)
Priority to CN201980005404.0A (CN111316325B)
Publication of WO2020181409A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras

Definitions

  • the embodiments of the present invention relate to the field of unmanned aerial vehicles, and in particular to a method, equipment and storage medium for calibration of parameters of a photographing device.
  • an intelligent movable platform is usually provided with a shooting device, such as a binocular vision module.
  • the binocular vision module can not only provide image information of the target object to the movable platform, but also provide its depth information, so as to provide a richer basis for intelligent control decisions.
  • the movable platform can be a drone, an autonomous vehicle, a driving-assistance device, a driving recorder, a smart electric vehicle, a scooter, a self-balancing vehicle, a multi-camera smartphone, etc.
  • Due to external factors such as changes in temperature and humidity, or vibration of the movable platform, it is difficult for the binocular vision module to maintain a stable state on the movable platform. Even if the binocular vision module of the movable platform has been accurately calibrated at the factory, the module may undergo minor deformations as the movable platform is continuously used, so that the pre-calibrated parameters between the two cameras may no longer be accurate, affecting the control of the movable platform.
  • the embodiments of the present invention provide a method, equipment and storage medium for parameter calibration of a photographing device, so as to improve the accuracy and efficiency of parameter calibration between the first photographing device and the second photographing device in the photographing device.
  • the first aspect of the embodiments of the present invention is to provide a method for calibrating parameters of a photographing device.
  • the photographing device is configured to be mounted on a movable platform.
  • the photographing device includes at least a first photographing device and a second photographing device.
  • the method includes: acquiring a first image and a second image including a target object captured by the first photographing device and the second photographing device at a first moment, and determining first depth information of the target object; acquiring a third image and a fourth image including the target object captured by the first photographing device and the second photographing device at a second moment, and determining second depth information of the target object; and acquiring a pose change of the movable platform between the first moment and the second moment;
  • the parameters of the photographing device are calibrated according to the first depth information, the second depth information, and the pose change.
  • a second aspect of the embodiments of the present invention is to provide a movable platform; the movable platform is equipped with a photographing device, the photographing device at least includes a first photographing device and a second photographing device, and the movable platform includes a memory and a processor;
  • the memory is used to store program codes
  • the processor calls the program code, and when the program code is executed, is used to perform the following operations: acquiring a first image and a second image including a target object captured by the first photographing device and the second photographing device at a first moment, and determining first depth information of the target object; acquiring a third image and a fourth image including the target object captured by the first photographing device and the second photographing device at a second moment, and determining second depth information of the target object; acquiring the pose change of the movable platform between the first moment and the second moment; and calibrating the parameters of the photographing device according to the first depth information, the second depth information, and the pose change.
  • a third aspect of the embodiments of the present invention is to provide a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the method described in the first aspect.
  • In the camera parameter calibration method, equipment and storage medium provided by this embodiment, the first camera and the second camera of the photographing device respectively capture, at a first moment, a first image and a second image including the target object, so as to determine the first depth information of the target object at the first moment; they respectively capture, at a second moment, a third image and a fourth image including the target object, so as to determine the second depth information of the target object at the second moment; and the rotation relationship and displacement relationship between the first camera and the second camera are calibrated according to the first depth information, the second depth information, and the pose change of the movable platform between the first moment and the second moment.
  • Compared with calibrating only on the basis of the first depth information and the second depth information, a constraint item is added and the influence of outliers on the parameter calibration is reduced,
  • so the accuracy and efficiency of parameter calibration between the first camera and the second camera are improved.
  • FIG. 1 is a flowchart of a method for calibrating camera parameters provided by an embodiment of the present invention
  • Figure 2 is a schematic diagram of a drone provided by an embodiment of the present invention.
  • Figure 3 is a schematic diagram of an application scenario provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of another application scenario provided by an embodiment of the present invention.
  • Figure 5 is a schematic diagram of yet another application scenario provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of another application scenario provided by an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of Rolling Shutter provided by an embodiment of the present invention.
  • Fig. 8 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention.
  • 70: movable platform; 71: first camera; 72: second camera; 73: memory; 74: processor.
  • When a component is said to be “fixed to” another component, it can be directly on the other component, or an intervening component may also be present. When a component is considered to be “connected to” another component, it can be directly connected to the other component, or an intervening component may be present at the same time.
  • FIG. 1 is a flowchart of a method for calibrating camera parameters provided by an embodiment of the present invention.
  • the photographing device is configured to be mounted on a movable platform; here, being mounted may include being installed on the movable platform.
  • the photographing device includes at least a first photographing device and a second photographing device.
  • the movable platform includes a drone or a vehicle.
  • This embodiment takes a drone as an example for schematic description.
  • the drone is equipped with a camera.
  • the camera at least includes a first camera and a second camera.
  • the photographing device is the binocular system of the drone, and the binocular system is composed of a first camera and a second camera.
  • As shown in FIG. 2, the drone includes a main camera 20 and a binocular system;
  • the binocular system includes a first camera 21 and a second camera 22.
  • the first photographing device 21 may be the left-eye camera of the drone
  • the second photographing device 22 may be the right-eye camera of the drone. It can be understood that this is only a schematic description, and does not limit the specific shape and structure of the drone.
  • the method for calibrating the parameters of the camera in this embodiment may include:
  • Step S101 Acquire a first image and a second image including a target object captured by the first camera and the second camera at the first moment, and determine first depth information of the target object.
  • the first photographing device 21 and the second photographing device 22 can photograph the same target object at the same time.
  • 30 represents the target object in the three-dimensional space
  • 31 represents the first image including the target object 30 captured by the first camera 21 at time t1;
  • 32 represents the second image including the target object 30 captured by the second camera 22 at time t1.
  • the processor in the drone can obtain the first image 31 taken by the first camera 21 at time t1 and the second image 32 taken by the second camera 22 at time t1, and use triangulation measurement to determine the depth information of the target object 30 and the triangulation error.
  • the depth information of the target object 30 at time t1 is recorded as the first depth information
  • the triangulation error at time t1 is recorded as the first error information.
  • the depth information of the target object 30 can be determined according to the depth information of the three-dimensional points on the target object 30.
  • the point P represents any three-dimensional point on the target object 30.
  • the depth information of the point P can be determined according to the three-dimensional position information of the point P in the three-dimensional space.
  • the three-dimensional position information of the point P in the three-dimensional space can be calculated by the triangulation method.
  • the three-dimensional space can be a world coordinate system.
  • G represents the coordinate origin of the world coordinate system
  • C 0 , C 1 , and C 2 represent the coordinate origin of the camera coordinate system when the camera is in three different poses.
  • Image 40, image 41 and image 42 are, in turn, the images taken when the camera is in the three different poses. It can be understood that the camera can shoot the same target object in different poses.
  • the camera can be the main camera, the first camera or the second camera of the drone.
  • the mapping points of the same three-dimensional point on the target object, such as the point P, may have different positions in the corresponding images.
  • the mapping point of point P in image 40 is p0
  • the mapping point in the image 41 is p1
  • the mapping point of the point P in the image 42 is p2.
  • the positions of p0, p1, and p2 in the corresponding images may be different.
  • The relationship between the three-dimensional coordinates (x_w, y_w, z_w) of a three-dimensional point on the target object in the world coordinate system and the position information of the mapping point of that three-dimensional point in the image, such as its pixel coordinates (u, v), is specifically shown in the following formula (1):
  • z_c · [u, v, 1]^T = K · (R · [x_w, y_w, z_w]^T + T)    (1)
  • z c represents the coordinates of the three-dimensional point on the Z axis of the camera coordinate system.
  • K represents the internal parameters of the camera
  • R represents the rotation matrix of the camera coordinate system relative to the world coordinate system
  • T represents the translation matrix of the camera coordinate system relative to the world coordinate system
  • R and T are the external parameters of the camera.
  • the internal parameter K of the camera is a known quantity.
  • ⁇ x fm x
  • ⁇ y fm y
  • f represents a focal length of the camera
  • x m and y m is the image corresponding to the x-axis, the number of pixels per unit distance in the y-axis direction.
  • is the distortion parameter between the x-axis and the y-axis.
  • ⁇ 0 , v 0 is the position of the optical center of the camera in the pixel plane coordinate system.
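  • As an illustration of formula (1), the following Python sketch projects a world point into pixel coordinates with the pinhole model; the numerical values of K, R and T are placeholders chosen for the example and are not values from this disclosure.

```python
import numpy as np

def project_point(K, R, T, X_w):
    """Project a world point X_w = (x_w, y_w, z_w) to pixel coordinates (u, v)
    with the pinhole model of formula (1): z_c * [u, v, 1]^T = K (R X_w + T)."""
    X_c = R @ X_w + T          # world coordinates -> camera coordinates
    z_c = X_c[2]               # depth along the camera Z axis
    uv1 = K @ X_c / z_c        # perspective division
    return uv1[:2], z_c

# Placeholder intrinsic matrix (alpha_x = alpha_y = 800, gamma = 0, u_0 = 320, v_0 = 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # example rotation (identity)
T = np.zeros(3)                # example translation (zero)
(u, v), z_c = project_point(K, R, T, np.array([0.1, -0.2, 5.0]))
```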
  • the theoretical projection point of the three-dimensional point on the target object in the image may not be exactly the same as the projection point actually observed in the image.
  • the projection point of the point P on the normalized plane of the camera whose coordinate origin is C0 is denoted as p′0.
  • the rotation matrix and the translation matrix of the camera coordinate system whose coordinate origin is C1 relative to the camera coordinate system whose coordinate origin is C0 can likewise be recorded, and so can the rotation matrix and the translation matrix of the camera coordinate system whose coordinate origin is C2 relative to the camera coordinate system whose coordinate origin is C1.
  • the error between p′0 and p0 is recorded as the reprojection error.
  • the three-dimensional coordinates (x_w, y_w, z_w) of the point P in the world coordinate system can be determined by the following formula (2):
  • (x_w, y_w, z_w) = argmin Σ_{i=1..N} ‖ (u_i, v_i)^T − π_i(x_w, y_w, z_w) ‖²    (2)
  • where N represents the number of poses (the number of images shown in FIG. 4), (u_i, v_i)^T represents the pixel coordinates of the projection of the point P (x_w, y_w, z_w) observed in the image captured at the i-th pose, and π_i(·) denotes projection into the i-th image according to formula (1).
  • Since the positions of the point P computed from the projection points p0, p1 and p2 may not coincide exactly, the above formula is solved as an optimization problem to obtain the three-dimensional coordinates (x_w, y_w, z_w) of the point P.
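  • A common linear way to approximate the minimization of formula (2) is the direct linear transformation (DLT); the following sketch triangulates one point from N views under that assumption and is not necessarily the exact solver of this disclosure.

```python
import numpy as np

def triangulate_dlt(pixels, Ks, Rs, Ts):
    """Linear (DLT) triangulation of one 3D point observed in N views.
    pixels: list of (u_i, v_i); Ks, Rs, Ts: per-view intrinsics and extrinsics.
    Returns an approximate (x_w, y_w, z_w) minimizing the algebraic error."""
    rows = []
    for (u, v), K, R, T in zip(pixels, Ks, Rs, Ts):
        P = K @ np.hstack([R, T.reshape(3, 1)])   # 3x4 projection matrix
        rows.append(u * P[2] - P[0])              # u * (P3 . X) = P1 . X
        rows.append(v * P[2] - P[1])              # v * (P3 . X) = P2 . X
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X_h = Vt[-1]                                  # homogeneous solution
    return X_h[:3] / X_h[3]
```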
  • Given the external parameters between the first camera and the second camera, the internal parameters of the first camera, and the internal parameters of the second camera, the depth information and triangulation error of the three-dimensional point P can be calculated using the triangulation measurement method.
  • [Z, Cost] = triangulate(p_L, p_R, R_LR, t_LR, K_L, K_R), where triangulate represents the triangulation measurement method, p_L represents the pixel coordinates of the mapping point of the point P in the image captured by the first camera, p_R represents the pixel coordinates of the mapping point of the point P in the image captured by the second camera, R_LR represents the rotation relationship between the first camera and the second camera, t_LR represents the displacement relationship between the first camera and the second camera, K_L represents the internal parameter of the first camera, and K_R represents the internal parameter of the second camera.
  • Z represents the depth information of the three-dimensional point P calculated by the triangulation measurement method
  • Cost represents the triangulation error, which can specifically be the error of the depth information of the three-dimensional point P. As shown in FIG. 5, the point P is a three-dimensional point in three-dimensional space,
  • and p1 and p2 are the mapping points of the point P in two different images (for example, image 51 and image 52). The two different images can be the images captured by the first camera and the second camera at the same moment.
  • The epipolar line corresponding to the point p1 lies in the image 52. When searching along this epipolar line for the point matching p1, the point found is p2′. Due to the error between p2′ and p2, the depth of the three-dimensional point P obtained by triangulation has a certain error, such as PP′ shown in FIG. 5; this error is the triangulation error.
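  • A minimal sketch of a function with the same interface as [Z, Cost] = triangulate(p_L, p_R, R_LR, t_LR, K_L, K_R) is given below; the linear depth solution, the frame convention assumed for R_LR and t_LR, and the use of a reprojection error as Cost are illustrative assumptions rather than the exact computation of this disclosure.

```python
import numpy as np

def triangulate(p_L, p_R, R_LR, t_LR, K_L, K_R):
    """Two-view triangulation returning (depth in the first camera frame, error)."""
    t_LR = np.asarray(t_LR, dtype=float)
    # Back-project both pixels to normalized rays in their own camera frames.
    x_L = np.linalg.inv(K_L) @ np.array([p_L[0], p_L[1], 1.0])
    x_R = np.linalg.inv(K_R) @ np.array([p_R[0], p_R[1], 1.0])
    # Assumed convention: X_R = R_LR X_L + t_LR. With X_L = z_L x_L and
    # X_R = z_R x_R this gives z_L (R_LR x_L) - z_R x_R = -t_LR, a 3x2
    # linear system in the two depths, solved in the least-squares sense.
    A = np.stack([R_LR @ x_L, -x_R], axis=1)
    (z_L, z_R), *_ = np.linalg.lstsq(A, -t_LR, rcond=None)
    # One possible error measure: reprojection error of the triangulated
    # point in the second image (the Cost of this disclosure may differ).
    X_R = R_LR @ (z_L * x_L) + t_LR
    proj = K_R @ (X_R / X_R[2])
    cost = float(np.linalg.norm(proj[:2] - np.asarray(p_R, dtype=float)))
    return z_L, cost
```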
  • the pixel coordinates of the projection point of the point P actually observed in the first image 31 are denoted p11;
  • the pixel coordinates of the projection point of the point P actually observed in the second image 32 are denoted p12.
  • the external parameters between the first camera and the second camera are denoted as R_LR and t_LR, where R_LR represents the rotation relationship between the first camera and the second camera; the rotation relationship is specifically the rotation matrix of the camera coordinate system of the first imaging device relative to the camera coordinate system of the second imaging device, or the rotation matrix of the camera coordinate system of the second imaging device relative to the camera coordinate system of the first imaging device.
  • t_LR represents the displacement relationship between the first camera and the second camera.
  • the displacement relationship is specifically the translation matrix of the camera coordinate system of the first camera relative to the camera coordinate system of the second camera, or the translation matrix of the camera coordinate system of the second camera relative to the camera coordinate system of the first camera.
  • the internal parameter of the first camera is marked as K_L,
  • and the internal parameter of the second camera is marked as K_R.
  • K_L and K_R are fixed values and do not need to be calibrated in this embodiment; R_LR and t_LR are examples of the parameters to be calibrated in this embodiment.
  • the depth information and triangulation error of the three-dimensional point P can be calculated by the triangulation measurement method.
  • O1 represents the optical center of the first camera
  • O2 represents the optical center of the second camera.
  • the depth information of the point P can be determined based on the first camera or the second camera. For example, when the depth information of the point P is determined with the first camera as the reference, the depth information of the point P at time t1 is recorded as z1, and the triangulation error at time t1 is recorded as Cost1.
  • [z1, Cost1] = triangulate(p11, p12, R_LR, t_LR, K_L, K_R).
  • Step S102 Acquire a third image and a fourth image including the target object captured by the first camera and the second camera at a second moment, and determine second depth information of the target object.
  • 33 represents the third image including the target object 30 captured by the first camera 21 at time t2;
  • 34 represents the fourth image including the target object 30 captured by the second camera 22 at time t2.
  • the processor in the drone can obtain the third image 33 taken by the first camera 21 and the fourth image 34 taken by the second camera 22 at time t2, and use the triangulation method to determine the depth information and triangulation error of the point P. Here, the depth information of the point P at time t2 is recorded as the second depth information, and the triangulation error at time t2 is recorded as the second error information.
  • the pixel coordinates of the projection point of the point P actually observed in the third image 33 are denoted p21;
  • the pixel coordinates of the projection point of the point P actually observed in the fourth image 34 are denoted p22.
  • the depth information of the point P determined on the basis of the first camera at time t2 is recorded as z2, and the triangulation error at time t2 is recorded as Cost2.
  • [z2, Cost2] = triangulate(p21, p22, R_LR, t_LR, K_L, K_R).
  • Step S103 Obtain the pose change of the movable platform between the first moment and the second moment.
  • the pose of the drone may change.
  • the pose change of the drone includes a position change or a posture change.
  • the drone may move and/or rotate.
  • the speedometer information can be used to determine the travel distance of the drone between t1 and t2.
  • the attitude change of the drone can be measured by the inertial measurement unit on the drone.
  • if the movement distance of the drone is d,
  • then the movement distance of the first camera 21 or the second camera 22 is also d, as shown in FIG. 6. It can be understood that when the attitude of the drone changes between time t1 and time t2, the movement distance of the first camera 21 and the movement distance of the second camera 22 may be different.
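  • A simple sketch of how the pose change between time t1 and time t2 might be assembled from speed and gyroscope measurements is given below; the constant-speed assumption and the Euler integration of angular rates are simplifications for illustration only.

```python
def platform_pose_change(speed, t1, t2, gyro_samples, dt_imu):
    """Position change as travel distance d = speed * (t2 - t1), and attitude
    change as a crude Euler integration of IMU angular-rate samples (rad/s)."""
    d = speed * (t2 - t1)
    attitude_delta = [0.0, 0.0, 0.0]
    for wx, wy, wz in gyro_samples:        # samples taken between t1 and t2
        attitude_delta[0] += wx * dt_imu
        attitude_delta[1] += wy * dt_imu
        attitude_delta[2] += wz * dt_imu
    return d, attitude_delta
```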
  • Step S104 Calibrate the parameters of the camera according to the first depth information, the second depth information, and the pose change.
  • the camera parameters include: external parameters between the first camera and the second camera.
  • the external parameters between the first camera and the second camera include: a rotation relationship and a displacement relationship between the first camera and the second camera.
  • the first depth information may be the depth information of the point P determined on the basis of the first camera at time t1, or the depth information of the point P determined on the basis of the second camera at time t1.
  • the second depth information may be the depth information of the point P determined on the basis of the first camera at time t2, or the depth information of the point P determined on the basis of the second camera at time t2.
  • Optionally, the rotation relationship and displacement relationship between the first imaging device and the second imaging device are calibrated according to the depth information of the point P determined on the basis of the first camera at time t1, the depth information of the point P determined on the basis of the first camera at time t2, and the movement distance of the first imaging device 21 between time t1 and time t2.
  • Optionally, the rotation relationship and displacement relationship between the first imaging device and the second imaging device are calibrated according to the depth information of the point P determined on the basis of the second camera at time t1, the depth information of the point P determined on the basis of the second camera at time t2, and the movement distance of the second imaging device 22 between time t1 and time t2.
  • the calibrating of the parameters of the photographing device according to the first depth information, the second depth information, and the pose change includes: calibrating the parameters of the photographing device according to the geometric constraints among the first depth information, the second depth information, and the pose change.
  • z1 represents the depth information of the point P determined on the basis of the first imaging device at time t1
  • z2 represents the depth information of the point P determined on the basis of the first imaging device at time t2.
  • d represents the movement distance of the first camera between time t1 and time t2.
  • z1, z2, and d constitute the three sides of a triangle. According to the geometric constraints among the three sides of a triangle, that is, the sum of any two sides is greater than the third side and the difference between any two sides is less than the third side, the rotation relationship and displacement relationship between the first camera and the second camera can be calibrated. Therefore, when z1, z2, and d are all accurate, the relationship among the three should satisfy the following geometric constraint, which is specifically shown in the following formula (3):
  • |z1 − z2| < d < z1 + z2    (3)
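  • Because formulas (4) and (5) are not reproduced above, the following sketch only illustrates a hinge-style penalty on violations of the constraint of formula (3); it is an illustrative stand-in and not the Cost3 defined by formulas (4) and (5).

```python
def triangle_constraint_cost(z1, z2, d):
    """Penalty that is zero when |z1 - z2| <= d <= z1 + z2 (formula (3))
    and grows linearly with the amount of constraint violation otherwise."""
    lower = abs(z1 - z2)        # difference of the two depth sides
    upper = z1 + z2             # sum of the two depth sides
    return max(0.0, lower - d) + max(0.0, d - upper)
```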
  • the calibrating of the camera parameters according to the geometric constraints among the first depth information, the second depth information, and the pose change includes: determining target error information according to the geometric constraints among the first depth information, the second depth information, and the pose change; and calibrating the camera parameters according to the target error information.
  • b represents the baseline distance between the first camera and the second camera
  • z1 represents the depth information of the point P determined on the basis of the first camera at t1
  • z2 represents the depth information of the point P determined on the basis of the first camera at t2
  • f represents the focal length of the first camera
  • the first depth information and the second depth information are determined on the basis of the first photographing device; the determining of the target error information according to the geometric constraints among the first depth information, the second depth information, and the pose change includes: determining the target error information according to the geometric constraints among the first depth information, the second depth information, and the pose change, the distance information between the first photographing device and the second photographing device, and the focal length of the first photographing device.
  • the target error information is recorded as Cost3, and Cost3 can be determined according to formula (4), and Cost3 is specifically shown in formula (5) as follows:
  • z1 represents the depth information of the point P determined on the basis of the first imaging device at time t1
  • z2 represents the depth information of the point P determined on the basis of the first imaging device at time t2.
  • d represents the movement distance of the first camera between time t1 and time t2.
  • b represents the baseline distance between the first camera and the second camera.
  • f represents the focal length of the first camera.
  • the first depth information and the second depth information are determined on the basis of the second photographing device; the determining of the target error information according to the geometric constraints among the first depth information, the second depth information, and the pose change includes: determining the target error information according to the geometric constraints among the first depth information, the second depth information, and the pose change, the distance information between the first photographing device and the second photographing device, and the focal length of the second photographing device.
  • z1' indicates the depth information of the point P determined on the basis of the second camera at t1
  • z2' indicates the depth information of the point P determined on the basis of the second camera at t2
  • d' indicates the movement distance of the second camera between time t1 and time t2.
  • z1', z2', and d' constitute the three sides of the triangle.
  • the rotation relationship and displacement relationship between the first imaging device and the second imaging device can be calibrated according to the target error information.
  • the calibrating the parameters of the shooting device according to the target error information includes: determining a cost function according to the target error information; and calibrating the parameters of the shooting device according to the cost function.
  • Cost1, Cost2, and Cost3 constitute the cost function Cost, and Cost is specifically shown in the following formula (6):
  • Cost = [Cost1, Cost2, Cost3]    (6)
  • By optimizing the cost function, the rotation relationship and the displacement relationship between the first imaging device and the second imaging device are determined.
  • the calibrating of the parameters of the photographing device according to the cost function includes: optimally solving the cost function, and determining the parameters of the photographing device that minimize the two-norm of the cost function.
  • R LR represents the rotation relationship between the first camera and the second camera
  • t LR represents the displacement relationship between the first camera and the second camera
  • ‖Cost‖_2 represents the two-norm of the cost function Cost.
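  • The following sketch shows one way to minimize the two-norm of the cost function of formula (6) over R_LR and t_LR with a standard nonlinear least-squares solver; it reuses the hypothetical triangulate and triangle_constraint_cost sketches above, and the rotation-vector parameterization and the use of a single tracked point are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, p11, p12, p21, p22, d, K_L, K_R):
    """Stack [Cost1, Cost2, Cost3] as in formula (6) for one tracked point P.
    params = [rx, ry, rz, tx, ty, tz] encodes R_LR (rotation vector) and t_LR.
    triangulate and triangle_constraint_cost are the sketches defined above."""
    R_LR = Rotation.from_rotvec(params[:3]).as_matrix()
    t_LR = params[3:]
    z1, cost1 = triangulate(p11, p12, R_LR, t_LR, K_L, K_R)   # time t1
    z2, cost2 = triangulate(p21, p22, R_LR, t_LR, K_L, K_R)   # time t2
    cost3 = triangle_constraint_cost(z1, z2, d)                # formula (3)
    return np.array([cost1, cost2, cost3])

# least_squares minimizes the two-norm of the residual vector, i.e. ||Cost||_2;
# x0 would be the factory-calibrated extrinsics used as the initial guess, e.g.:
# result = least_squares(residuals, x0, args=(p11, p12, p21, p22, d, K_L, K_R))
```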
  • the movable platform may include not only two shooting devices, such as the first shooting device and the second shooting device, but also more shooting devices.
  • The method for calibrating the parameters of the shooting device in this embodiment can be applied to the calibration of external parameters between any two shooting devices among the plurality of shooting devices.
  • In this embodiment, the first camera and the second camera on the movable platform respectively capture, at the first moment, the first image and the second image including the target object, so as to determine the first depth information of the target object at the first moment;
  • they respectively capture, at the second moment, the third image and the fourth image including the target object, so as to determine the second depth information of the target object at the second moment, and the rotation relationship and displacement relationship between the first camera and the second camera are calibrated according to the first depth information, the second depth information, and the pose change of the movable platform between the first moment and the second moment.
  • Compared with calibrating the rotation relationship and displacement relationship between the first camera and the second camera only according to the first depth information and the second depth information, a constraint item is added, the influence of outliers on the parameter calibration is reduced, and the accuracy and efficiency of parameter calibration between the first camera and the second camera are improved.
  • the embodiment of the present invention provides a method for parameter calibration of a photographing device.
  • the first camera and the second camera may be global shutter (Global Shutter) cameras, or the first camera and the second camera may be rolling shutter (Rolling Shutter) cameras.
  • For a global shutter camera, the exposure time of each row of pixels is the same.
  • the position change is determined according to the time difference between the first moment and the second moment and the moving speed of the movable platform between the first moment and the second moment.
  • For a rolling shutter camera, the exposure time of each row of pixels is different.
  • a certain frame of image includes N rows of pixels, and each row starts to be exposed at a different time point.
  • the exposure start time of the first line is recorded as Start1
  • the exposure start time of the second line is recorded as Start2.
  • the start time of the third line of exposure is recorded as Start3, and so on.
  • the exposure time length of each row is the same, that is, the time interval between the exposure start time and the exposure end time of each row is the same.
  • the time interval between the exposure start time of each row and the exposure start time of the previous row is the same.
  • the time interval between Start1 and Start2 is the same as the time interval between Start2 and Start3.
  • the mapping points of the same three-dimensional point P in different images are located in different rows of the corresponding images.
  • the mapping point of the three-dimensional point P in the first image 31 is p11.
  • the mapping point of the three-dimensional point P in the third image 33 is p21.
  • the line where p11 is located in the first image 31 and the line where p21 is located in the third image 33 are different, so the exposure time of p11 and p21 may be different. As a result, the exposure time of the target object in the first image 31 and the third image 33 is different.
  • the mapping point of the three-dimensional point P in the second image 32 is p12.
  • the mapping point of the three-dimensional point P in the fourth image 34 is p22.
  • the line where p12 is located in the second image 32 and the line where p22 is located in the fourth image 34 are different, so the exposure time of p12 and p22 may be different. As a result, the exposure time of the target object in the second image 32 and the fourth image 34 is different.
  • the position change is determined according to the time difference between the first moment and the second moment, the exposure time difference of the target object in the first image and the third image, and the moving speed of the movable platform between the first moment and the second moment.
  • Due to the movement of the movable platform, the same three-dimensional point P may fall on different exposure rows in images captured at different times, so the exposure time of the same three-dimensional point P
  • in those images is also different.
  • the movement distance d in the above embodiment can be compensated according to the difference in the exposure time of the target object in the first image 31 and the third image 33.
  • the position change is determined according to the time difference between the first moment and the second moment, the exposure time difference of the target object in the second image and the fourth image, and the moving speed of the movable platform between the first moment and the second moment.
  • the movement distance d in the above embodiment can be calculated according to the difference in the exposure time of the target object in the second image 32 and the fourth image 34.
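  • A minimal sketch of this exposure-time compensation for a rolling shutter camera is given below; it assumes a roughly constant platform speed and a fixed line readout time (line_time), which are illustrative assumptions.

```python
def compensated_distance(v, t1, t2, row_t1, row_t2, line_time):
    """Movement distance of the platform between the exposures of the target
    object in two frames from the same rolling-shutter camera.
    v: platform speed; t1, t2: nominal frame times; row_t1, row_t2: image rows
    of the target's mapping point; line_time: delay between row exposure starts."""
    exposure_dt = (t2 - t1) + (row_t2 - row_t1) * line_time
    return v * exposure_dt
```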
  • This embodiment compensates for the movement distance of the movable platform by using the different exposure times of the target object in the images captured by the same camera at different moments, reduces the influence of the rolling shutter camera on the parameter calibration, and further improves
  • the accuracy of the camera parameter calibration. It also enables the camera parameter calibration method to be applicable to a rolling shutter camera, which broadens the application range of the method.
  • FIG. 8 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention.
  • the movable platform 70 includes at least a first camera 71 and a second camera 72, a memory 73 and a processor 74.
  • the memory is used to store program code; the processor 74 calls the program code, and when the program code is executed, is used to perform the following operations: acquiring a first image and a second image including a target object captured by the first camera and the second camera at a first moment, and determining first depth information of the target object; acquiring a third image and a fourth image including the target object captured by the first camera and the second camera at a second moment, and determining second depth information of the target object; acquiring the pose change of the movable platform between the first moment and the second moment; and calibrating the camera parameters according to the first depth information, the second depth information, and the pose change.
  • when calibrating the parameters of the photographing device according to the first depth information, the second depth information, and the pose change, the processor 74 is specifically configured to: calibrate the camera parameters according to the geometric constraints among the first depth information, the second depth information, and the pose change.
  • when calibrating the camera parameters according to the geometric constraints among the first depth information, the second depth information, and the pose change, the processor 74 is specifically configured to:
  • determine target error information according to the geometric constraints among the first depth information, the second depth information, and the pose change; and calibrate the camera parameters according to the target error information.
  • when calibrating the parameters of the camera according to the target error information, the processor 74 is specifically configured to: determine a cost function according to the target error information; and calibrate the camera parameters according to the cost function.
  • when calibrating the parameters of the photographing device according to the cost function, the processor 74 is specifically configured to: optimally solve the cost function to determine the camera parameters that minimize the two-norm of the cost function.
  • the camera parameters include: external parameters between the first camera and the second camera.
  • the external parameters between the first camera and the second camera include: a rotation relationship and a displacement relationship between the first camera and the second camera.
  • the first depth information and the second depth information are determined on the basis of the first camera; when determining the target error information according to the geometric constraints among the first depth information, the second depth information, and the pose change, the processor 74
  • is specifically configured to: determine the target error information according to the geometric constraints among the first depth information, the second depth information, and the pose change, the distance information between the first photographing device and the second photographing device, and the focal length of the first photographing device.
  • the first depth information and the second depth information are determined on the basis of the second camera; when determining the target error information according to the geometric constraints among the first depth information, the second depth information, and the pose change, the processor 74
  • is specifically configured to: determine the target error information according to the geometric constraints among the first depth information, the second depth information, and the pose change, the distance information between the first photographing device and the second photographing device, and the focal length of the second photographing device.
  • the pose change includes a position change or a posture change.
  • the position change is determined according to the time difference between the first moment and the second moment and the moving speed of the movable platform between the first moment and the second moment.
  • the position change is determined according to the time difference between the first moment and the second moment, the exposure time difference of the target object in the first image and the third image, and the moving speed of the movable platform between the first moment and the second moment.
  • the position change is determined according to the time difference between the first moment and the second moment, the exposure time difference of the target object in the second image and the fourth image, and the moving speed of the movable platform between the first moment and the second moment.
  • the movable platform includes a drone or a vehicle.
  • In this embodiment, the first camera and the second camera on the movable platform respectively capture, at the first moment, the first image and the second image including the target object, so as to determine the first depth information of the target object at the first moment;
  • they respectively capture, at the second moment, the third image and the fourth image including the target object, so as to determine the second depth information of the target object at the second moment, and the rotation relationship and displacement relationship between the first camera and the second camera are calibrated according to the first depth information, the second depth information, and the pose change of the movable platform between the first moment and the second moment.
  • Compared with calibrating the rotation relationship and displacement relationship between the first camera and the second camera only according to the first depth information and the second depth information, a constraint item is added, the influence of outliers on the parameter calibration is reduced, and the accuracy and efficiency of parameter calibration between the first camera and the second camera are improved.
  • this embodiment also provides a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the method for calibrating the camera parameters described in the foregoing embodiment.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
  • the above-mentioned integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium.
  • the above-mentioned software functional unit is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor execute part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present invention relate to a method for calibrating parameters of a capture device, an apparatus, and a storage medium. The method comprises: using a first capture device and a second capture device among the capture devices to respectively capture, at a first time point, a first image and a second image including a target object, and determining first depth information of the target object at the first time point; using the first capture device and the second capture device to respectively capture, at a second time point, a third image and a fourth image including the target object, and determining second depth information of the target object at the second time point; and calibrating a rotation relationship and a displacement relationship between the first capture device and the second capture device according to the first depth information, the second depth information, and a pose change of a movable platform between the first time point and the second time point. The invention adds an additional constraint and reduces the effect of outliers on parameter calibration, thereby improving the accuracy and efficiency of parameter calibration between the first capture device and the second capture device.
PCT/CN2019/077475 2019-03-08 2019-03-08 Method for calibrating parameters of a capture device, apparatus and storage medium WO2020181409A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/077475 WO2020181409A1 (fr) 2019-03-08 2019-03-08 Method for calibrating parameters of a capture device, apparatus and storage medium
CN201980005404.0A CN111316325B (zh) 2019-03-08 2019-03-08 Method for calibrating parameters of photographing device, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/077475 WO2020181409A1 (fr) 2019-03-08 2019-03-08 Method for calibrating parameters of a capture device, apparatus and storage medium

Publications (1)

Publication Number Publication Date
WO2020181409A1 true WO2020181409A1 (fr) 2020-09-17

Family

ID=71155758

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/077475 WO2020181409A1 (fr) 2019-03-08 2019-03-08 Method for calibrating parameters of a capture device, apparatus and storage medium

Country Status (2)

Country Link
CN (1) CN111316325B (fr)
WO (1) WO2020181409A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022094772A1 (fr) * 2020-11-03 2022-05-12 深圳市大疆创新科技有限公司 Position estimation method, tracking control method, device and storage medium
CN112911091B (zh) * 2021-03-23 2023-02-24 维沃移动通信(杭州)有限公司 Parameter adjustment method and apparatus for multi-point laser, and electronic device
WO2023272524A1 (fr) * 2021-06-29 2023-01-05 深圳市大疆创新科技有限公司 Binocular capture apparatus, method and apparatus for determining its observation depth, and movable platform
CN117882110A (zh) * 2022-01-28 2024-04-12 深圳市大疆创新科技有限公司 Pose estimation method for a movable platform, movable platform, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130162822A1 (en) * 2011-12-27 2013-06-27 Hon Hai Precision Industry Co., Ltd. Computing device and method for controlling unmanned aerial vehicle to capture images
CN106803271A (zh) * 2016-12-23 2017-06-06 成都通甲优博科技有限责任公司 Camera calibration method and device for a visually navigated unmanned aerial vehicle
CN106774947A (zh) * 2017-02-08 2017-05-31 亿航智能设备(广州)有限公司 Aircraft and control method thereof
CN108171787A (zh) * 2017-12-18 2018-06-15 桂林电子科技大学 Three-dimensional reconstruction method based on ORB feature detection

Also Published As

Publication number Publication date
CN111316325A (zh) 2020-06-19
CN111316325B (zh) 2021-07-30

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19919340

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19919340

Country of ref document: EP

Kind code of ref document: A1