WO2024012405A1 - Calibration method and apparatus

Calibration method and apparatus

Info

Publication number
WO2024012405A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2023/106553
Other languages
English (en)
Chinese (zh)
Inventor
王栋
李明
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2024012405A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Definitions

  • the present application relates to the field of computer vision technology, and in particular to a calibration method and device.
  • Large-scene venues refer to large-scale sports competition venues, such as football fields, basketball venues, volleyball venues, skating and skiing venues, and venues of similar size to competition venues, such as squares, exhibition halls, conference venues, etc.
  • The current calibration solution is to arrange multiple calibration columns in the shooting area according to a predetermined placement pattern and to physically measure the positional relationship between the calibration columns so as to unify the feature points on all calibration columns in the same world coordinate system; images including the calibration columns are then collected from multiple cameras, and direct linear transformation is used to obtain the internal and external parameters of each camera by identifying the coordinates of the calibration points in the images.
  • This requires accurately measuring the spatial distances between the calibration columns and ensuring that the calibration columns lie on the same horizontal plane, which imposes strict requirements on the flatness of the site and makes the solution difficult to apply to scenarios where the site is not very flat, such as football fields.
  • Embodiments of the present application provide a calibration method and device, which impose no restriction on the flatness of the calibration site and improve the accuracy and applicability of calibration.
  • Embodiments of the present application provide a calibration method, including: acquiring multiple video streams collected by multiple acquisition devices, where the multiple acquisition devices are deployed in a set space of a sports venue and the multiple video streams are obtained by the multiple acquisition devices shooting synchronously while a target calibration object moves on the sports field; the movement trajectory of the target calibration object on the sports field at least covers a set area of the sports field, the target calibration object includes at least two non-coplanar calibration surfaces, and each calibration surface includes at least two calibration points; the video stream collected by each acquisition device includes multiple image frames; performing calibration point detection on the image frames collected by each of the multiple acquisition devices to obtain the pixel coordinates of multiple calibration points on the target calibration object in the image frames collected by each acquisition device; and, according to the pixel coordinates of the multiple calibration points in the image frames collected by each acquisition device and the three-dimensional coordinates of the multiple calibration points in the calibration object coordinate system of the target calibration object, estimating the internal parameter matrix of each acquisition device to obtain the first internal parameter estimate of each acquisition device.
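  • As a rough, non-authoritative sketch of this intrinsic-estimation step, the example below calibrates one acquisition device from a planar grid of calibration points on a single calibration surface (Zhang-style calibration). The grid layout, poses, sensor resolution, and the use of OpenCV's cv2.calibrateCamera are illustrative assumptions, not the patent's prescribed implementation; the observed pixel coordinates are synthesized here by projecting with a known ground-truth matrix.

```python
import numpy as np
import cv2

# Planar calibration points on one calibration surface (z = 0 in its plane).
grid_w, grid_h, spacing = 6, 4, 0.25            # hypothetical layout, metres
surf = np.zeros((grid_w * grid_h, 3), np.float32)
surf[:, :2] = np.mgrid[0:grid_w, 0:grid_h].T.reshape(-1, 2) * spacing

K_true = np.array([[1200., 0., 960.], [0., 1200., 540.], [0., 0., 1.]])
image_size = (1920, 1080)                       # assumed sensor resolution

object_points, image_points = [], []
rng = np.random.default_rng(0)
for _ in range(15):                             # 15 observed moving positions
    rvec = rng.normal(0.0, 0.2, 3)              # varied surface orientations
    tvec = np.array([rng.normal(0, 0.3), rng.normal(0, 0.3), rng.uniform(4, 8)])
    px, _ = cv2.projectPoints(surf, rvec, tvec, K_true, None)
    object_points.append(surf)
    image_points.append(px.reshape(-1, 2).astype(np.float32))

# Estimate the internal parameter matrix (the "first internal parameter
# estimate" for this device) together with distortion and per-frame poses.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None)
print("reprojection RMS:", rms)
print("estimated internal parameter matrix:\n", K)
```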
  • The method further includes:
  • estimating the distortion coefficient of each acquisition device to obtain a first distortion coefficient estimate of each acquisition device.
  • In this way, the distortion coefficient can also be estimated on the basis of estimating the internal and external parameters.
  • According to the image set collected by the i-th acquisition device, the internal parameter matrix of the i-th acquisition device is estimated to obtain the second internal parameter estimate of the i-th acquisition device;
  • the image set includes M1 image frames that contain the target calibration object in the video stream collected by the i-th acquisition device, and the M1 image frames correspond one-to-one to M1 moving positions among the M moving positions of the target calibration object; M1 is a positive integer, and M is an integer greater than M1;
  • the pixel coordinates of the calibration points on the target calibration object in the image frames of the image set collected by the i-th acquisition device, and the three-dimensional coordinates of the multiple calibration points in the calibration object coordinate system, are used to estimate the pose set corresponding to the i-th acquisition device; the pose set corresponding to the i-th acquisition device includes the poses of the target calibration object at the M1 moving positions relative to the i-th acquisition device; the value of i is a positive integer less than or equal to N, where N is the number of acquisition devices deployed in the set space of the sports venue;
  • the ranges of moving positions corresponding to the image frames collected by different acquisition devices are different;
  • based on the three-dimensional coordinates of the multiple calibration points in the calibration object coordinate system and the poses of the target calibration object corresponding to the N acquisition devices respectively, and starting from the initially set distortion coefficients and the second internal parameter estimates corresponding to the N acquisition devices, the internal parameter matrices and distortion coefficients of the N acquisition devices are globally and iteratively adjusted over multiple rounds to obtain the first internal parameter estimates and the first distortion coefficient estimates of the N acquisition devices.
  • In the above design, the internal parameters and distortion coefficients are optimized globally based on the principle of minimizing reprojection error, which can improve the accuracy of the calibrated internal parameters and distortion coefficients.
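  • A minimal sketch of what such a global, multi-round adjustment can look like, assuming known calibration-object poses, one radial distortion coefficient per device, and synthetic observations; the variable names, the 5-parameter-per-device layout, and the use of scipy.optimize.least_squares are assumptions. A real system would also refine the poses and use the patent's full distortion model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
N = 2                                   # number of acquisition devices
# per-device parameters: fx, fy, u0, v0, k1 (ground truth for synthesis)
true = np.tile([1200.0, 1200.0, 960.0, 540.0, -0.1], N)

def project(pts_cam, p):
    """Project camera-frame points with intrinsics and one radial term k1."""
    fx, fy, u0, v0, k1 = p
    x = pts_cam[:, 0] / pts_cam[:, 2]
    y = pts_cam[:, 1] / pts_cam[:, 2]
    r2 = x * x + y * y
    x, y = x * (1 + k1 * r2), y * (1 + k1 * r2)
    return np.stack([fx * x + u0, fy * y + v0], axis=1)

# synthetic observations: calibration points seen by each device at 6 poses
obs = []
for dev in range(N):
    for _ in range(6):
        pts = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
        obs.append((dev, pts, project(pts, true[dev * 5:dev * 5 + 5])))

def residuals(params):
    # total reprojection error over all devices and observations
    return np.concatenate([
        (project(pts, params[dev * 5:dev * 5 + 5]) - px).ravel()
        for dev, pts, px in obs])

# initial values play the role of the second internal parameter estimates
x0 = np.tile([1000.0, 1000.0, 940.0, 560.0, 0.0], N)
fit = least_squares(residuals, x0)
print(fit.x.reshape(N, 5))              # refined intrinsics + k1 per device
```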
  • Globally and iteratively adjusting the internal parameter matrices and distortion coefficients of the N acquisition devices over multiple rounds to obtain the first internal parameter estimates and the first distortion coefficient estimates of the N acquisition devices includes:
  • estimating the pixel coordinates of the multiple calibration points in the image coordinate system of each acquisition device according to the pose set of the target calibration object corresponding to each acquisition device, the second internal parameter estimate corresponding to each acquisition device, and the initially set distortion coefficient;
  • adjusting the pose set of the target calibration object corresponding to each acquisition device, the second internal parameter estimate corresponding to each acquisition device, and the initially set distortion coefficient, to obtain the internal parameter estimate and distortion coefficient corresponding to each acquisition device after the current round of adjustment;
  • the internal parameter estimate and distortion coefficient corresponding to each acquisition device after the current round of adjustment are used as the basis for the next round of adjustment, until C rounds of adjustment are completed, to obtain the first internal parameter estimates and the first distortion coefficient estimates of the N acquisition devices.
  • Estimating the pixel coordinates of the multiple calibration points in the image coordinate system of each acquisition device according to the second internal parameter estimates and the initially set distortion coefficients includes:
  • determining the distorted coordinates of the multiple calibration points;
  • estimating, based on the distorted coordinates and the second internal parameter estimate of the i-th acquisition device, the pixel coordinates of the multiple calibration points projected into the image coordinate system of the i-th acquisition device.
  • Estimating and determining the first external parameter estimate of each acquisition device according to the matching feature point set corresponding to each acquisition device group and the three-dimensional coordinates of the multiple calibration points included in the target calibration object in the calibration object coordinate system includes:
  • obtaining, according to the matching feature point set corresponding to each acquisition device group and the three-dimensional coordinates of the multiple calibration points included in the target calibration object in the calibration object coordinate system, the second relative poses, relative to a reference acquisition device, of the acquisition devices other than the reference acquisition device among the multiple acquisition devices; the reference acquisition device is any one of the multiple acquisition devices;
  • determining a scale factor, where the scale factor is the ratio between a first distance and a second distance;
  • the first distance is the distance between two calibration points on the target calibration object;
  • the second distance is the distance between the corresponding two calibration points in the same image coordinate system, and the two calibration points are located on the same calibration surface of the target calibration object;
  • the first external parameter estimate of each acquisition device is obtained according to the second relative pose of each acquisition device and the scale factor.
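  • A minimal sketch of the scale-factor computation described above, assuming both distances are Euclidean norms in their respective coordinate systems (an up-to-scale reconstruction stands in for the second distance here); the function name and sample coordinates are hypothetical.

```python
import numpy as np

def scale_factor(p_obj_a, p_obj_b, p_rec_a, p_rec_b):
    """Ratio of the first distance (between two calibration points on the same
    calibration surface of the target calibration object) to the second
    distance (between the corresponding two points as measured in a common,
    up-to-scale coordinate system)."""
    first_distance = np.linalg.norm(np.subtract(p_obj_a, p_obj_b))
    second_distance = np.linalg.norm(np.subtract(p_rec_a, p_rec_b))
    return first_distance / second_distance

# Two points 0.5 m apart on one calibration surface come out 0.125 units apart
# in the reconstruction, so the scale factor 4.0 converts units to metres.
print(scale_factor([0, 0, 0], [0.5, 0, 0], [0, 0, 2], [0.125, 0, 2]))
```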
  • The extrinsic parameters of each acquisition device can also be globally optimized based on the principle of minimizing reprojection error.
  • Obtaining, according to the matching feature point set corresponding to each acquisition device group and the three-dimensional coordinates of the multiple calibration points included in the target calibration object in the calibration object coordinate system, the second relative poses of the acquisition devices other than the reference acquisition device relative to the reference acquisition device includes:
  • determining the essential matrix between a first acquisition device and the reference acquisition device according to the matching feature point set corresponding to a first acquisition device group, where the first acquisition device and the reference acquisition device belong to the first acquisition device group, and the first acquisition device group is one of the multiple acquisition device groups;
  • determining, according to the essential matrix, the second relative pose between the first acquisition device and the reference acquisition device.
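  • A sketch of this step under simplifying assumptions: synthetic matched calibration points, a single shared camera matrix K for both devices, and OpenCV's findEssentialMat/recoverPose. The decomposition yields the rotation exactly but the translation only up to scale, which is why the scale factor above is needed.

```python
import numpy as np
import cv2

rng = np.random.default_rng(1)
K = np.array([[1200., 0., 960.], [0., 1200., 540.], [0., 0., 1.]])
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.3, 0.0]))   # reference -> device
t_true = np.array([1.0, 0.0, 0.2])

# Matched pixel coordinates of the same calibration points at the same times,
# synthesized by projecting scene points into both devices.
pts3d = rng.uniform([-2, -2, 6], [2, 2, 12], size=(60, 3))
proj_ref, _ = cv2.projectPoints(pts3d, np.zeros(3), np.zeros(3), K, None)
proj_dev, _ = cv2.projectPoints(pts3d, cv2.Rodrigues(R_true)[0], t_true, K, None)
pts_ref = proj_ref.reshape(-1, 2)
pts_dev = proj_dev.reshape(-1, 2)

# Essential matrix from the matching feature point set, then relative pose.
E, inliers = cv2.findEssentialMat(pts_ref, pts_dev, K,
                                  method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts_ref, pts_dev, K, mask=inliers)
print("relative rotation:\n", R)               # close to R_true
print("unit-scale translation:", t.ravel())    # direction of t_true only
```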
  • Obtaining the first external parameter estimate of each acquisition device based on the second relative pose of each acquisition device and the scale factor includes:
  • determining, based on the second relative pose of each acquisition device, the three-dimensional coordinates of the multiple calibration points in a local coordinate system when the target calibration object moves to each of M2 moving positions, where the local coordinate system is the camera coordinate system of the reference acquisition device, and any one of the M2 moving positions is located in the common viewing area of at least two acquisition devices in the g-th acquisition device group;
  • estimating, based on the three-dimensional coordinates of the multiple calibration points at the M2 moving positions in the local coordinate system, the second relative poses of the acquisition devices included in the g-th acquisition device group, and the first internal parameter estimates, the pixel coordinates of the multiple calibration points at the M2 moving positions projected into the image coordinate systems of the acquisition devices included in the g-th acquisition device group;
  • the internal parameter estimates and relative poses corresponding to the acquisition devices in the g-th acquisition device group after the current round of adjustment are used as the basis for the next round of adjustment, until D rounds of adjustment are completed, to obtain the third relative poses of the acquisition devices included in the g-th acquisition device group;
  • the first external parameter estimate of each acquisition device included in the g-th acquisition device group is obtained by applying the scale factor on the basis of the third relative pose of the acquisition devices included in the g-th acquisition device group.
  • In the above design, the extrinsic parameters of each acquisition device are globally optimized based on the principle of minimizing reprojection error, which can improve the accuracy of the calibrated external parameters. The internal parameters are also optimized, which can further improve the accuracy of the calibrated internal parameters.
  • Obtaining the first external parameter estimate of each acquisition device based on the second relative pose of each acquisition device and the scale factor includes:
  • determining, based on the second relative pose of each acquisition device, the three-dimensional coordinates of the multiple calibration points in a local coordinate system when the target calibration object moves to each of M2 moving positions, where the local coordinate system is the camera coordinate system of the reference acquisition device, and any one of the M2 moving positions is located in the common viewing area of at least two acquisition devices in the g-th acquisition device group;
  • estimating, based on the second relative poses of the acquisition devices included in the g-th acquisition device group, the pixel coordinates of the multiple calibration points at the M2 moving positions projected into the image coordinate systems of the acquisition devices included in the g-th acquisition device group;
  • the first external parameter estimate of each acquisition device included in the g-th acquisition device group is obtained by applying the scale factor on the basis of the third relative pose of the acquisition devices included in the g-th acquisition device group.
  • In the above design, the extrinsic parameters of each acquisition device are globally optimized based on the principle of minimizing reprojection error, which can improve the accuracy of the calibrated external parameters. The internal parameters and distortion coefficients are also optimized, which can further improve the accuracy of the calibrated internal parameters and distortion coefficients.
  • Each of the multiple acquisition device groups includes two acquisition devices, and estimating the first external parameter estimate of each acquisition device according to the first internal parameter estimates of the at least two acquisition devices included in each of the multiple acquisition device groups, the matching feature point set corresponding to each acquisition device group, and the three-dimensional coordinates of the multiple calibration points included in the target calibration object in the calibration object coordinate system includes:
  • determining the relative pose based on the matching feature point set corresponding to at least one acquisition device group, the three-dimensional coordinates of the multiple calibration points included in the target calibration object in the calibration object coordinate system, and the first internal parameter estimates of the two acquisition devices included in the at least one acquisition device group;
  • The camera parameters of each acquisition device are globally optimized according to the coordinates of the multiple calibration points included in the target calibration object in the calibration object coordinate system and the pixel coordinates of the calibration points of the target calibration object in the image frames collected by each acquisition device.
  • The camera parameters include an internal parameter matrix and an external parameter matrix, or the camera parameters include an internal parameter matrix, an external parameter matrix and a distortion coefficient.
  • In the global optimization, the camera parameters of each acquisition device and the poses, in the calibration object coordinate system, of the calibration surfaces on which the multiple calibration points are respectively located are taken as the quantities to be optimized, and the first internal parameter estimate of each acquisition device is used as the initial value of the internal parameter matrix of each acquisition device.
  • The three-dimensional coordinates of each calibration point in space are determined by setting a basic moving position (reference point) in space, and then, based on the principle of minimizing reprojection error, the camera parameters of each acquisition device are determined through global optimization, which can improve calibration accuracy.
  • The relative pose of a first moving position pair satisfies the following condition:
  • T12 represents the relative pose between the first moving position and the second moving position;
  • the at least one acquisition device group includes a first acquisition device group, and the first acquisition device group includes a first acquisition device and a second acquisition device;
  • one pose from the first moving position to the second moving position is determined based on the pixel coordinates of the calibration points in the image frame collected by the first acquisition device when the target calibration object moves to the first moving position and the pixel coordinates of the calibration points in the image frame collected when the target calibration object moves to the second moving position;
  • one pose from the second moving position to the first moving position is determined based on the pixel coordinates of the calibration points in the image frame collected by the second acquisition device when the target calibration object moves to the second moving position and the pixel coordinates of the calibration points in the image frame collected when the target calibration object moves to the first moving position;
  • I represents the identity matrix; the pose from the first moving position to the second moving position is determined by the pixel coordinates of the calibration points in the image frames collected by the first acquisition device in the l-th acquisition device group when the target calibration object moves to the first moving position and to the second moving position, and the pose from the second moving position to the first moving position is determined by the pixel coordinates of the calibration points in the correspondingly collected image frames; l1 represents the first acquisition device in the l-th acquisition device group, and l2 represents the second acquisition device in the l-th acquisition device group.
  • Determining the poses of the M moving positions in the world coordinate system based on the coordinates of the basic moving position in the world coordinate system and the relative poses of multiple moving position pairs includes:
  • determining the shortest path from a third moving position to the basic moving position, where the shortest path is the path with the smallest credibility weight among all paths from the third moving position to the basic moving position, and the credibility weight of any path is the sum of the credibility weights of the moving position pairs that the path passes through;
  • the pose of the third moving position is determined based on the relative poses of the moving position pairs that the shortest path passes through.
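  • A sketch of the shortest-path selection described above, treating moving positions as graph nodes and moving-position pairs as edges weighted by their credibility weights (Dijkstra's algorithm); the graph data are hypothetical, and composing the relative poses along the returned path is omitted.

```python
import heapq

def shortest_credibility_path(edges, source, target):
    """edges: {node: [(neighbor, weight), ...]}, weights = credibility weights."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr, w in edges.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1], dist[target]

# basic moving position "base"; lower weight = more trustworthy pose estimate
edges = {
    "base": [("p1", 0.2), ("p2", 0.9)],
    "p1":   [("base", 0.2), ("p2", 0.3), ("p3", 0.8)],
    "p2":   [("base", 0.9), ("p1", 0.3), ("p3", 0.4)],
    "p3":   [("p1", 0.8), ("p2", 0.4)],
}
print(shortest_credibility_path(edges, "p3", "base"))
# -> (['p3', 'p2', 'p1', 'base'], 0.9): the most trustworthy chain of
#    moving-position pairs along which to compose the relative poses.
```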
  • Embodiments of the present application further provide a calibration device, including:
  • an acquisition unit, configured to acquire multiple video streams collected by multiple acquisition devices, where the multiple acquisition devices are deployed in a set space of a sports venue and the multiple video streams are obtained by the multiple acquisition devices shooting synchronously while a target calibration object moves on the sports field; the movement trajectory of the target calibration object on the sports field at least covers a set area of the sports field; the target calibration object includes at least two non-coplanar calibration surfaces, and each calibration surface includes at least two calibration points; the video stream collected by each acquisition device includes multiple image frames;
  • a processing unit, configured to perform calibration point detection on the image frames collected by each of the multiple acquisition devices to obtain the pixel coordinates of multiple calibration points on the target calibration object in the image frames collected by each acquisition device;
  • to estimate, according to the pixel coordinates of the multiple calibration points included in the target calibration object in the image frames collected by each acquisition device and the three-dimensional coordinates of the multiple calibration points in the calibration object coordinate system of the target calibration object, the internal parameter matrix of each acquisition device to obtain the first internal parameter estimate of each acquisition device; and to determine, according to the first internal parameter estimates of the at least two acquisition devices included in each of the multiple acquisition device groups, the matching feature point set corresponding to each acquisition device group, and the three-dimensional coordinates of the multiple calibration points included in the target calibration object in the calibration object coordinate system, the first external parameter estimate of each acquisition device;
  • the matching feature point set includes multiple matching feature point groups, and each matching feature point group includes at least two matching pixel coordinates; the at least two matching pixel coordinates are the pixel coordinates of the same calibration point detected in image frames collected at the same time by different acquisition devices belonging to the same acquisition device group; the multiple acquisition device groups are obtained by grouping the multiple acquisition devices, and any two acquisition device groups among the multiple acquisition device groups include at least one identical acquisition device.
  • the processing unit is also used to:
  • estimate the distortion coefficient of each acquisition device to obtain a first distortion coefficient estimate of each acquisition device.
  • the processing unit is specifically used for:
  • According to the image set collected by the i-th acquisition device, the internal parameter matrix of the i-th acquisition device is estimated to obtain the second internal parameter estimate of the i-th acquisition device;
  • the image set includes M1 image frames that contain the target calibration object in the video stream collected by the i-th acquisition device, and the M1 image frames correspond one-to-one to M1 moving positions among the M moving positions of the target calibration object; M1 is a positive integer, and M is an integer greater than M1;
  • the pixel coordinates of the calibration points on the target calibration object in the image frames of the image set collected by the i-th acquisition device, and the three-dimensional coordinates of the multiple calibration points in the calibration object coordinate system, are used to estimate the pose set corresponding to the i-th acquisition device; the pose set corresponding to the i-th acquisition device includes the poses of the target calibration object at the M1 moving positions relative to the i-th acquisition device; the value of i is a positive integer less than or equal to N, where N is the number of acquisition devices deployed in the set space of the sports venue;
  • the ranges of moving positions corresponding to the image frames collected by different acquisition devices are different;
  • based on the three-dimensional coordinates of the multiple calibration points in the calibration object coordinate system and the poses of the target calibration object corresponding to the N acquisition devices respectively, and starting from the initially set distortion coefficients and the second internal parameter estimates corresponding to the N acquisition devices, the internal parameter matrices and distortion coefficients of the N acquisition devices are globally and iteratively adjusted over multiple rounds to obtain the first internal parameter estimates and the first distortion coefficient estimates of the N acquisition devices.
  • the processing unit is specifically used for:
  • estimating the pixel coordinates of the multiple calibration points in the image coordinate system of each acquisition device according to the pose set of the target calibration object corresponding to each acquisition device, the second internal parameter estimate corresponding to each acquisition device, and the initially set distortion coefficient;
  • adjusting the pose set of the target calibration object corresponding to each acquisition device, the second internal parameter estimate corresponding to each acquisition device, and the initially set distortion coefficient, to obtain the internal parameter estimate and distortion coefficient corresponding to each acquisition device after the current round of adjustment;
  • the internal parameter estimate and distortion coefficient corresponding to each acquisition device after the current round of adjustment are used as the basis for the next round of adjustment, until C rounds of adjustment are completed, to obtain the first internal parameter estimates and the first distortion coefficient estimates of the N acquisition devices.
  • the processing unit is specifically used for:
  • determining the distorted coordinates of the multiple calibration points;
  • estimating, based on the distorted coordinates and the second internal parameter estimate of the i-th acquisition device, the pixel coordinates of the multiple calibration points projected into the image coordinate system of the i-th acquisition device.
  • the processing unit is specifically used for:
  • obtaining, according to the matching feature point set corresponding to each acquisition device group and the three-dimensional coordinates of the multiple calibration points included in the target calibration object in the calibration object coordinate system, the second relative poses, relative to a reference acquisition device, of the acquisition devices other than the reference acquisition device among the multiple acquisition devices; the reference acquisition device is any one of the multiple acquisition devices;
  • determining a scale factor, where the scale factor is the ratio between a first distance and a second distance;
  • the first distance is the distance between two calibration points on the target calibration object;
  • the second distance is the distance between the corresponding two calibration points in the same image coordinate system, and the two calibration points are located on the same calibration surface of the target calibration object;
  • the first external parameter estimate of each acquisition device is obtained according to the second relative pose of each acquisition device and the scale factor.
  • the processing unit is specifically used for:
  • determining the essential matrix between a first acquisition device and the reference acquisition device according to the matching feature point set corresponding to a first acquisition device group, where the first acquisition device and the reference acquisition device belong to the first acquisition device group, and the first acquisition device group is one of the multiple acquisition device groups;
  • determining, according to the essential matrix, the second relative pose between the first acquisition device and the reference acquisition device.
  • the processing unit is specifically used for:
  • determining, based on the second relative pose of each acquisition device, the three-dimensional coordinates of the multiple calibration points in a local coordinate system when the target calibration object moves to each of M2 moving positions, where the local coordinate system is the camera coordinate system of the reference acquisition device, and any one of the M2 moving positions is located in the common viewing area of at least two acquisition devices in the g-th acquisition device group;
  • estimating, based on the three-dimensional coordinates of the multiple calibration points at the M2 moving positions in the local coordinate system, the second relative poses of the acquisition devices included in the g-th acquisition device group, and the first internal parameter estimates, the pixel coordinates of the multiple calibration points at the M2 moving positions projected into the image coordinate systems of the acquisition devices included in the g-th acquisition device group;
  • the internal parameter estimates and relative poses corresponding to the acquisition devices in the g-th acquisition device group after the current round of adjustment are used as the basis for the next round of adjustment, until D rounds of adjustment are completed, to obtain the third relative poses of the acquisition devices included in the g-th acquisition device group;
  • the first external parameter estimate of each acquisition device included in the g-th acquisition device group is obtained by applying the scale factor on the basis of the third relative pose of the acquisition devices included in the g-th acquisition device group.
  • the processing unit is specifically used for:
  • determining, based on the second relative pose of each acquisition device, the three-dimensional coordinates of the multiple calibration points in a local coordinate system when the target calibration object moves to each of M2 moving positions, where the local coordinate system is the camera coordinate system of the reference acquisition device, and any one of the M2 moving positions is located in the common viewing area of at least two acquisition devices in the g-th acquisition device group;
  • estimating, based on the second relative poses of the acquisition devices included in the g-th acquisition device group, the pixel coordinates of the multiple calibration points at the M2 moving positions projected into the image coordinate systems of the acquisition devices included in the g-th acquisition device group;
  • the internal parameter estimates and relative poses corresponding to the acquisition devices in the g-th acquisition device group after the current round of adjustment are used as the basis for the next round of adjustment, until D rounds of adjustment are completed, to obtain the third relative poses of the acquisition devices included in the g-th acquisition device group;
  • the first external parameter estimate of each acquisition device included in the g-th acquisition device group is obtained by applying the scale factor on the basis of the third relative pose of the acquisition devices included in the g-th acquisition device group.
  • Each of the multiple acquisition device groups includes two acquisition devices, and the processing unit is specifically configured to:
  • determine the relative pose based on the matching feature point set corresponding to at least one acquisition device group, the three-dimensional coordinates of the multiple calibration points included in the target calibration object in the calibration object coordinate system, and the first internal parameter estimates of the two acquisition devices included in the at least one acquisition device group;
  • The camera parameters of each acquisition device are globally optimized according to the coordinates of the multiple calibration points included in the target calibration object in the calibration object coordinate system and the pixel coordinates of the calibration points of the target calibration object in the image frames collected by each acquisition device.
  • The camera parameters include an internal parameter matrix and an external parameter matrix, or the camera parameters include an internal parameter matrix, an external parameter matrix and a distortion coefficient.
  • In the global optimization, the camera parameters of each acquisition device and the poses, in the calibration object coordinate system, of the calibration surfaces on which the multiple calibration points are respectively located are taken as the quantities to be optimized, and the first internal parameter estimate of each acquisition device is used as the initial value of the internal parameter matrix of each acquisition device.
  • The relative pose of a first moving position pair satisfies the following condition:
  • T12 represents the relative pose between the first moving position and the second moving position;
  • the at least one acquisition device group includes a first acquisition device group, and the first acquisition device group includes a first acquisition device and a second acquisition device;
  • one pose from the first moving position to the second moving position is determined based on the pixel coordinates of the calibration points in the image frame collected by the first acquisition device when the target calibration object moves to the first moving position and the pixel coordinates of the calibration points in the image frame collected when the target calibration object moves to the second moving position;
  • one pose from the second moving position to the first moving position is determined based on the pixel coordinates of the calibration points in the image frame collected by the second acquisition device when the target calibration object moves to the second moving position and the pixel coordinates of the calibration points in the image frame collected when the target calibration object moves to the first moving position;
  • I represents the identity matrix; the pose from the first moving position to the second moving position is determined by the pixel coordinates of the calibration points in the image frames collected by the first acquisition device in the l-th acquisition device group when the target calibration object moves to the first moving position and to the second moving position, and the pose from the second moving position to the first moving position is determined by the pixel coordinates of the calibration points in the correspondingly collected image frames; l1 represents the first acquisition device in the l-th acquisition device group, and l2 represents the second acquisition device in the l-th acquisition device group.
  • Determining the poses of the M moving positions in the world coordinate system based on the coordinates of the basic moving position in the world coordinate system and the relative poses of multiple moving position pairs includes:
  • determining the shortest path from a third moving position to the basic moving position, where the shortest path is the path with the smallest credibility weight among all paths from the third moving position to the basic moving position, and the credibility weight of any path is the sum of the credibility weights of the moving position pairs that the path passes through;
  • the pose of the third moving position is determined based on the relative poses of the moving position pairs that the shortest path passes through.
  • embodiments of the present application provide a calibration device, including a memory and a processor.
  • the memory is used to store programs or instructions; the processor is used to call the program or instructions to execute the method described in the first aspect or any design of the first aspect.
  • The present application provides a computer-readable storage medium in which computer programs or instructions are stored. When the computer programs or instructions are executed by a processor, the processor is caused to execute the method in the first aspect or any possible design of the first aspect.
  • The present application provides a computer program product. The computer program product includes a computer program or instructions. When the computer program or instructions are executed by a processor, the method in the first aspect or any possible implementation of the first aspect is implemented.
  • Figure 1 is a schematic diagram of the image coordinate system provided by the embodiment of the present application.
  • Figure 2 is a schematic diagram of the camera coordinate system provided by the embodiment of the present application.
  • Figure 3 is a schematic diagram of an information system architecture provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of another information system architecture provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of a camera deployment method for track and field venues provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of another camera deployment method for track and field venues provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of another camera deployment method for track and field venues provided by the embodiment of the present application.
  • Figure 8 is a schematic diagram of a camera deployment method for a football field provided by an embodiment of the present application.
  • Figure 9 is a schematic flow chart of the calibration method provided by the embodiment of the present application.
  • Figure 10 is a schematic diagram of the target calibration object provided by the embodiment of the present application.
  • Figure 11 is a schematic diagram of the calibration tower provided by the embodiment of the present application.
  • Figure 12A is a schematic diagram of the movement trajectory of a target calibration object provided by an embodiment of the present application.
  • Figure 12B is a schematic diagram of another target calibration object movement trajectory provided by an embodiment of the present application.
  • Figure 12C is a schematic diagram of another target calibration object movement trajectory provided by an embodiment of the present application.
  • Figure 12D is a schematic diagram of another target calibration object movement trajectory provided by an embodiment of the present application.
  • Figure 13 is a schematic diagram of feature point screening provided by the embodiment of the present application.
  • Figure 14 is a schematic flow chart of the first possible external parameter determination method provided by the embodiment of the present application.
  • Figure 15 is a schematic flowchart of optimizing internal parameters and relative poses provided by the embodiment of the present application.
  • Figure 16 is a schematic flowchart of optimizing internal parameters, relative poses and distortion coefficients provided by the embodiment of the present application.
  • Figure 17 is a schematic diagram of the moving position of the target calibration object represented by the graphic model provided by the embodiment of the present application.
  • Figure 18 is a schematic flow chart of the second possible external parameter determination method provided by the embodiment of the present application.
  • Figure 19 is a schematic structural diagram of a calibration device provided by an embodiment of the present application.
  • Figure 20 is a schematic structural diagram of another calibration device provided by an embodiment of the present application.
  • Camera internal parameters: the distortion parameters (k1, k2, k3, p1, p2), the focal length (fx, fy), and the center point (u0, v0) in the pinhole camera model.
  • The internal parameter matrix involved in the embodiments of this application refers to the matrix composed of the focal length and the center point.
  • The distortion of a camera refers to the degree of deformation of the image formed by the camera's optical system relative to the object itself; it is an inherent characteristic of the optical lens, and its direct cause is that the magnification at the edge of the lens differs from that at its center. Camera distortion mainly includes radial distortion and tangential distortion.
  • Radial distortion is mainly caused by the different magnifications of different parts of the camera lens and is divided into two types: pincushion distortion and barrel distortion.
  • Tangential distortion is mainly caused by the camera lens not being mounted perpendicular to the imaging plane, and resembles a perspective effect (near objects appear large and far objects small; a circle becomes an ellipse, etc.).
  • The distortion model of the camera is given by formulas (1-1) and (1-2) below, which relate the pixel coordinates (x′, y′) after distortion to the coordinates (x, y) before distortion.
  • x′ = x(1 + k1·r² + k2·r⁴ + k3·r⁶) + [2·p1·x·y + p2·(r² + 2x²)]    (1-1)
  • y′ = y(1 + k1·r² + k2·r⁴ + k3·r⁶) + [p1·(r² + 2y²) + 2·p2·x·y]    (1-2)
  • x and y are the normalized coordinates of a three-dimensional point projected into the camera coordinate system, x′ and y′ are the coordinates after distortion, and r² = x² + y².
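  • A direct transcription of formulas (1-1) and (1-2) into code; the coefficient values in the call are placeholders.

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Apply radial (k1, k2, k3) and tangential (p1, p2) distortion to
    normalized coordinates, following formulas (1-1) and (1-2)."""
    r2 = x * x + y * y                    # r^2 = x^2 + y^2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)   # (1-1)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y   # (1-2)
    return x_d, y_d

print(distort(0.1, -0.05, k1=-0.28, k2=0.07, k3=0.0, p1=1e-4, p2=-2e-4))
```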
  • Camera external parameters: the rotation and translation transformation of the camera in the pinhole camera model relative to a certain coordinate system (such as the world coordinate system or the pose of a certain reference camera), that is, the camera's 6-degrees-of-freedom (6DoF) pose in that coordinate system, representing translation along three directions and rotation about three axes.
  • Image physical coordinate system (referred to as image coordinate system).
  • Point Oi is the intersection of the camera optical axis and the camera imaging plane and is the origin of the image physical coordinate system.
  • (u, v) represents the column and row indices of a pixel, where (Op, u, v) constitutes the pixel plane coordinate system.
  • The origin Op of the pixel plane coordinate system is located at the upper-left corner of the camera imaging plane, and its two coordinate axes (the Op-u axis and the Op-v axis) point to the right and downward respectively.
  • The origin Oi of the image physical coordinate system is located at the center of the pixel plane coordinate system, with coordinates (u0, v0), and its two coordinate axes (the Oi-x axis and the Oi-y axis) point to the right and downward respectively.
  • Let dx and dy represent the physical dimensions of a pixel along the u axis and the v axis respectively; the relationship between the pixel plane coordinate system and the image physical coordinate system is then as shown in formula (2).
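  • For reference, the standard relation that formula (2) expresses, given the pixel sizes dx, dy and the center point (u0, v0), can be written as follows (a reconstruction consistent with the surrounding definitions; the patent's exact typesetting may differ):

```latex
\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} =
\begin{pmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
\qquad\text{i.e.}\qquad
u = \frac{x}{dx} + u_0, \quad v = \frac{y}{dy} + v_0
```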
  • A specific example of the camera coordinate system can be seen in Figure 2.
  • The camera coordinate system is a spatial coordinate system whose origin Oc is located at the optical center of the camera.
  • Point Oi is the intersection of the camera optical axis and the camera imaging plane and is the origin of the image physical coordinate system.
  • The Oc-xc axis and the Oc-yc axis of the camera coordinate system are parallel to the Oi-x axis and the Oi-y axis of the image physical coordinate system respectively.
  • The Oc-zc axis passes through point Oi, and the distance from Oi to Oc is the focal length, denoted f.
  • A coordinate system fixed relative to the scene is chosen as a reference; this coordinate system can be called the world coordinate system.
  • The relationship between the camera coordinate system and the world coordinate system can be described by a rotation matrix R and a translation parameter T. Therefore, the homogeneous coordinates of a point P in space in the world coordinate system and in the camera coordinate system, (xw, yw, zw) and (xc, yc, zc) respectively, satisfy the relationship shown in formula (5).
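  • With R and T as defined below, formula (5) takes the standard homogeneous form (reconstructed here, consistent with the surrounding definitions):

```latex
\begin{pmatrix} x_c \\ y_c \\ z_c \\ 1 \end{pmatrix} =
\begin{pmatrix} R & T \\ 0^{\top} & 1 \end{pmatrix}
\begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}
```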
  • R is a 3 ⁇ 3 rotation matrix
  • T is a 3 ⁇ 1 translation parameter
  • dx and dy represent how many physical length units a pixel occupies in the x direction and the y direction respectively, that is, the physical size represented by one pixel; they are the key to converting between the camera coordinate system and the image coordinate system.
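  • Tying the pieces together, the sketch below projects one world point to pixel coordinates using formula (5), the distortion model of formulas (1-1)/(1-2), and the internal parameters; all numeric values are illustrative assumptions.

```python
import numpy as np

def project(Pw, R, T, fx, fy, u0, v0, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    Pc = R @ Pw + T                          # formula (5): world -> camera
    x, y = Pc[0] / Pc[2], Pc[1] / Pc[2]      # normalized coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)   # (1-1)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y   # (1-2)
    return fx * x_d + u0, fy * y_d + v0      # internal parameter matrix

R = np.eye(3)                                # camera aligned with world axes
T = np.array([0.0, 0.0, 5.0])                # 5 m in front of the origin
print(project(np.array([1.0, 0.5, 10.0]), R, T,
              fx=1200.0, fy=1200.0, u0=960.0, v0=540.0, k1=-0.1))
```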
  • This application provides a calibration method and device for calibrating the camera parameters of acquisition devices deployed in the set space of a set site.
  • The camera parameters include the internal parameter matrix and the external parameter matrix (and, optionally, the distortion coefficients).
  • The set venue can be a circular venue, such as a circular running track or a circular speed skating track, or a linear venue.
  • The sports venue may also take other forms, such as a football field, which is not specifically limited in the embodiments of this application.
  • The information system includes multiple acquisition devices and a data processing server.
  • N acquisition devices are taken as an example, where N is a positive integer.
  • The number of cameras included in the information system can be configured according to the size of the sports field.
  • The acquisition device can be a camera, a video camera, etc.
  • The multiple acquisition devices can be deployed in the set space where the sports venue is located.
  • For example, if the sports venue is a football field and the football field is located in an open-air stadium or a football stadium, the multiple acquisition devices are deployed in that open-air stadium or football stadium.
  • The viewing range of each of the multiple acquisition devices covers part of the sports field. Different acquisition devices have different viewing ranges, and there is a common viewing area between the viewing ranges of at least two spatially adjacent acquisition devices.
  • the common viewing area is the area captured by two acquisition devices at the same time.
  • the data processing server may include one or more servers. If the data processing server includes multiple servers, it can be understood that the data processing server is a server cluster composed of multiple servers.
  • The data processing server can work in two different modes: calibration mode and competition recording mode.
  • In calibration mode, the data processing server performs calibration processing and stores the calibration results.
  • the calibration process may include calibrating to obtain internal parameters and external parameters (and distortion coefficients) of multiple cameras.
  • In competition recording mode, the data processing server can extract synchronized frames from the video streams collected by the multiple acquisition devices and then perform visual algorithm processing on the synchronized frames frame by frame based on the calibration results to generate spatial video. Based on the spatial video, sports analysis can be carried out, athletes' technique can be reviewed, and exciting moments can be captured.
  • The spatial video can be sent to the broadcast vehicle, etc.
  • The information system may also include one or more routing devices, which may be used to transmit the images collected by the acquisition devices to the data processing server.
  • Routing devices can be routers, switches, etc.; switches are taken as an example, as shown in Figure 4.
  • Multiple layers of switches can be deployed in the information system. Taking two layers as an example, the switches deployed at the first layer can be used to connect one or more acquisition devices, and the switch deployed at the second layer can serve as the main switch: one end of the main switch is connected to the first-layer switches, and the other end is connected to the data processing server. For an example, see Figure 4.
  • The information system also supports sending spatial video data to the broadcast vehicle.
  • the information system also supports the acquisition of sports analysis data through terminal devices.
  • the information system also includes a mobile front end.
  • the mobile front-end includes a web page server.
  • the web page server is connected to the data processing server.
  • The mobile front end may also include a wireless router (or wired router), a broadcast vehicle, or a terminal device.
  • the terminal device can be a desktop computer, a portable computer, a mobile phone, or other electronic device that supports accessing web pages.
  • The terminal device can operate the data processing server by accessing the web page server, for example sending a synchronous acquisition signal or a stop recording signal to the multiple acquisition devices.
  • The synchronous acquisition signal is used to instruct the acquisition devices to start video recording synchronously.
  • The stop recording signal is used to instruct the acquisition devices to stop video recording. Other examples include historical video playback, sports information display, etc.
  • the calibration method provided by the embodiments of the present application will be described in detail below with reference to the examples.
  • The acquisition devices are deployed in the set space belonging to the set venue, such as a sports venue or a conference venue.
  • In the following, a sports venue is taken as an example of the venue, and a camera is taken as an example of the acquisition device.
  • The deployment depends on the mounting points allowed in the set space of the sports venue, such as whether there are columns, trusses, or suspended ceilings.
  • each camera can cover part of the entire track, such as a length of track.
  • Spatially adjacent cameras have a common viewing area, for example a common viewing area covering 1/2 or 1/3 of the image.
  • A truss refers to a planar or spatial structure composed of straight rods, generally arranged in triangular units, and can be used as a custom mounting structure for cameras.
  • Figures 5 to 8 show schematic diagrams of possible camera deployment methods.
  • Cameras deployed on columns are taken as an example.
  • (b) in Figure 5 is a top view of the camera deployment.
  • (c) in Figure 5 is a side view of the cameras deployed on columns. Cameras are deployed on the extensions of the straights and on the sides of the curves; each camera captures a 40-meter range, and two adjacent cameras share a 20-meter common field of view.
  • A total of 20 cameras cover a 400-meter track (5 cameras × 2 on the straights, 5 cameras × 2 on the curves).
  • the focus, orientation or field of view of the camera can be adjusted so that each camera focuses on a part of the track and there is a common view area between adjacent camera positions.
  • The camera group is connected to two switches: cameras 1-10 are connected to one switch and cameras 11-20 to the other, and the video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
  • See (a) in Figure 6, taking 20 camera positions deployed along the track of a track and field venue as an example, with the cameras deployed on a ceiling truss. Each camera is located above the track and shoots the track from a high position; the axis of the camera lens forms an acute angle with the ground rather than being perpendicular to it, in order to cover a larger shooting range.
  • Figure 6(b) is a top view of the camera deployment.
  • (c) in Figure 6 is a side view of the camera deployed on the ceiling truss.
  • the focus, orientation or field of view of the camera can be adjusted so that each camera focuses on a part of the track and there is a common view area between adjacent camera positions.
  • The camera group is connected to two switches: cameras 1-10 are connected to one switch and cameras 11-20 to the other, and the video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
  • Figure 7 shows a side view of cameras 1-5 deployed on columns.
  • The camera group is connected to two switches: cameras 1-10 are connected to one switch and cameras 11-20 to the other, and the video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
  • Athletes participating in events such as sprints, middle-distance running, hurdles, high jump, and long jump can be analyzed to obtain the athletes' sports information or exciting moments.
  • Cameras can be deployed on columns, trusses or ceilings, or at set locations in the stands. For example, see Figure 8, which takes the deployment of 20 camera positions along the track as an example.
  • The camera group is connected to two switches: cameras 1-10 are connected to one switch and cameras 11-20 to the other, and the video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
  • the internal parameter matrix, external parameter matrix and distortion coefficient of each camera need to be calibrated.
  • When calibrating the internal parameter matrix, external parameter matrix and distortion coefficient of each camera, each camera collects a video stream while the target calibration object is moving.
  • the calibration process of camera parameters is described in detail below in conjunction with Figure 9.
  • the method provided in Figure 9 can be executed by the data processing server, or by a processor or processor system in the data processing server.
  • the target calibrator may include one or a group of calibrators.
  • a set of calibration objects may be composed of multiple calibration objects.
  • Each calibration object may include at least two calibration surfaces, and each calibration surface may include at least one calibration point.
  • Calibration points have stable visual characteristics that do not change over time.
  • the target calibration object has a specific pattern, and the intersection points of the lines in the pattern can be used as calibration points.
  • Alternatively, the target calibration object may have a luminous screen, and the displayed luminous points serve as the calibration points.
  • other methods can also be used to set the calibration points on the calibration object, and the embodiments of this application do not specifically limit this.
  • Figure 10 shows a schematic structural diagram of a possible target calibration object.
  • the target calibration object includes a group of calibration objects.
  • Each calibration object in the set of calibration objects can be a cabinet-like rack.
  • The four side faces of the rack carry specific patterns, and the patterns on different faces are different.
  • The specific patterns also vary from box to box.
  • Figure 10 takes QR-code-like patterns as an example. Points at the corners of the QR code can be chosen as calibration points, or the corner points of a specific grid on the box, or the two lower corner points of the box, or the two corners of the lower edge of the rectangle containing the QR code, etc.; this is not specifically limited in the embodiments of the present application.
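  • A sketch of detecting such pattern corners as calibration points, using ArUco markers as a stand-in for the QR-code-like patterns of Figure 10; the dictionary choice, the file name, and the pre-4.7 OpenCV ArUco API are assumptions (newer OpenCV versions use cv2.aruco.ArucoDetector instead).

```python
import cv2

image = cv2.imread("frame_0001.png")             # placeholder frame path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)

# Each detected marker contributes four corner pixel coordinates usable as
# calibration points; ids identify which face/pattern was observed.
if ids is not None:
    for marker_id, c in zip(ids.ravel(), corners):
        print("marker", marker_id, "corner pixels:", c.reshape(-1, 2))
```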
  • the target calibration object may adopt a tower-like structure, so the target calibration object may also be called a calibration tower, as shown in FIG. 11 .
  • the target calibration object can be placed on a wheeled flatbed truck.
  • the movement trajectory of the target calibration object uniformly covers the shooting area (ie, the sports field).
  • Motion trajectories include but are not limited to regular paths (such as polygonal shapes, spiral shapes), random paths, etc.
  • See Figure 12A for a spiral path.
  • FIG. 12B and FIG. 12C show zigzag paths.
  • See Figure 12D for a random path.
  • the black dots in Figures 12A-12D represent cameras.
  • the embodiment of the present application does not place any specific restrictions on the trajectory direction of the path.
  • the starting point can be located at any point on the field. In order to facilitate memory and use in applications, it is often set at a landmark point on the field. Taking the football field as an example, it can be a corner kick point, a penalty kick point, or a certain field line corner point.
  • synchronous sampling processing is performed on the video streams collected by the multiple cameras.
  • it can be understood as extracting frames from the video stream collected by each camera.
  • the target calibration object is included in multiple image frames sampled from the first camera.
  • the image frames that do not include the target calibration object can be removed from the extracted image frames, thereby forming an image set corresponding to each acquisition device. It can be understood that the moving positions of the target calibration objects corresponding to different image frames included in the image set corresponding to each collection device are different.
  • since the target calibration object moves around the sports field, for a given camera there are time periods during which the object is outside that camera's field of view, so during those periods the camera captures no footage including the target calibration object. Image frames that do not include the target calibration object can therefore be removed from the extracted frames.
  • when extracting frames for different cameras, frames may be extracted based on the moving position of the target calibration object. For example, multiple positions can be set on the movement trajectory of the target calibration object, such as position 1 to position m, so that image frames at those positions are sampled. As another example, a frame can be extracted every set time period, where the set duration is chosen according to the movement rate of the target calibration object in the set area (a minimal frame-sampling sketch is given below).
  • multiple locations can be marked within a set area.
  • multiple cameras are controlled to collect an image frame respectively, thus forming an image frame set for each camera.
  • the number of images collected by each camera is the same as the number of marked locations.
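  • As an illustration of the time-based sampling described above, the following minimal Python sketch (using OpenCV; the function name and parameters are illustrative assumptions, not part of the embodiments) extracts one frame every set time period from a video stream:

```python
import cv2

def sample_frames(video_path, interval_s=1.0):
    """Extract one frame every interval_s seconds from a video stream.

    The sampling interval would be chosen from the movement rate of the
    target calibration object in the set area, as described above.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0      # fall back if FPS unknown
    step = max(1, int(round(fps * interval_s)))  # frames between samples
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```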
  • Calibration point detection is performed on the image frames collected by each of the plurality of cameras to obtain feature points; different feature points in the same image frame correspond to different calibration points. The detected feature points are further screened to remove those with low reliability, and the remaining feature points express the pixel coordinates of the calibration points, thereby obtaining the pixel coordinates of multiple calibration points in the image frames collected by each camera.
  • the distance from a feature point to the image boundary is used as a filtering condition to remove feature points with low reliability. For example, if the distance between a certain feature point and the image boundary is less than a certain set value, the feature point will be filtered out.
  • the calibration surface has a fixed shape, such as a rectangle or circle. For example, if the calibration surface is a rectangle, feature points with low reliability can be removed based on the diagonal angle of the feature surface in the collected image: if the diagonal angle is less than a set threshold, the feature points belonging to that feature surface are removed.
  • if the calibration surface is circular, whether its feature points are of low reliability can be determined from the curvature of the circular feature surface in the collected image; for example, if the curvature is greater than a set curvature value, the feature points of that feature surface are determined to be unreliable and removed.
  • the ratio of the minimum radius to the maximum radius can also be used to determine whether the feature point of the feature surface is a reliable feature point. For example, if the ratio of the minimum radius to the maximum radius is less than the set ratio, the feature points on the feature surface are determined to be unreliable feature points and removed.
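  • The border-distance screening condition can be sketched as follows (a hedged illustration; the margin value and array layout are assumptions, not specified by the embodiments):

```python
import numpy as np

def filter_by_border(points_px, image_size, min_margin=20):
    """Remove feature points lying within min_margin pixels of the image
    boundary, treating them as low-reliability detections."""
    w, h = image_size
    x, y = points_px[:, 0], points_px[:, 1]
    keep = ((x >= min_margin) & (x <= w - min_margin) &
            (y >= min_margin) & (y <= h - min_margin))
    return points_px[keep]
```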
  • when performing calibration point detection on the image frames sampled from a camera, global calibration point detection may be performed on the first image frame.
  • for subsequent image frames, a tracking algorithm can be used to narrow the detection range of the target calibration object; this improves the detection speed of feature points.
  • the internal parameter matrix of each camera is estimated to obtain the first internal parameter estimate value of each camera.
  • according to the matching feature point set corresponding to each camera group and the three-dimensional coordinates, in the calibration object coordinate system, of the multiple calibration points included in the target calibration object, the first external parameter estimate of each camera is determined.
  • the matching feature point set includes multiple matching feature point groups; each matching feature point group includes at least two matching pixel coordinates, which are the pixel coordinates of the same calibration point detected in image frames collected at the same time by different cameras belonging to the same camera group. The multiple camera groups are obtained by grouping the multiple cameras, and any two of the plurality of camera groups include at least one identical camera.
  • the internal parameter matrix of each camera can be estimated from the pixel coordinates of the multiple calibration points of the target calibration object in the image frames collected by each camera, together with the three-dimensional coordinates of those calibration points in the calibration object coordinate system, combined with direct linear transform (DLT).
  • the projection of a three-dimensional point P_w in space onto the camera pixel plane can be expressed by the following formula (6), where P_proj represents the projection matrix:

    P_uv = P_proj · P_w    formula (6)

  • the projection matrix is determined by the internal parameters of the camera and the pose of the camera (in the calibration object coordinate system): let I_c and [R|t]_c represent the internal parameters and the pose of the camera respectively; then P_proj = I_c [R|t]_c.
  • the mapping relationship between the calibration object coordinate system of the target calibration object and the image coordinate system can then be expressed as formula (7):

    P_uv = I_c [R|t]_c P_w = P_proj P_w    formula (7)
  • DLT calculates the projection matrix P_proj by substituting a large number of known pairs (P_uv, P_w) into the above formula (6); I_c and [R|t]_c are then obtained by decomposing the matrix P_proj. Specifically, P_proj can be calculated by solving a system of linear equations, and QR decomposition of the matrix further yields I_c.
  • the projection matrix P_proj can be expressed as a 3×4 matrix with elements l_1 to l_12, so that the following formula (8) is established:

    P_proj = [ l_1  l_2  l_3  l_4
               l_5  l_6  l_7  l_8
               l_9  l_10 l_11 l_12 ]    formula (8)

  where the internal parameter matrix contains f_x = f/dx and f_y = f/dy, f is the focal length, dx and dy represent how many length units a pixel occupies in the x and y directions respectively (that is, the actual physical size represented by a pixel, which is the key to converting between the camera coordinate system and the image coordinate system), and u_0, v_0 represent the number of horizontal and vertical pixels between the center pixel coordinates of the image and the image origin pixel coordinates. The elements l_1 to l_12 in P_proj are calculated by solving a system of linear equations, and QR decomposition is then performed to obtain the internal parameter matrix of the camera.
  • the coordinates of each calibration point on the target calibration object in the calibration object coordinate system remain unchanged, and since the internal parameters of the camera express the conversion relationship between the camera coordinate system and the image coordinate system, the internal parameters remain unchanged no matter how the calibration object moves. Based on this, one or more projection matrices can be calculated from the feature points in each image frame collected by a camera, and QR decomposition of each projection matrix yields an internal parameter matrix and an external parameter matrix. Since the internal parameter matrices decomposed from each calculated projection matrix should be the same, the first internal parameter estimate is determined from all of the obtained internal parameter matrices, for example by fitting them with a Gaussian distribution.
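  • A minimal numpy sketch of the DLT estimation and QR-based decomposition described above (the RQ routine and the minimum of six correspondences are standard computer-vision practice, stated here as assumptions rather than quoted from the embodiments):

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Solve formula (6), P_uv = P_proj * P_w, for the 3x4 projection
    matrix from at least six 2D-3D correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)          # l_1 ... l_12, up to scale

def decompose_projection(P):
    """Split P_proj into the internal parameter matrix I_c and pose [R|t]_c
    via an RQ decomposition built from numpy's QR."""
    M = P[:, :3]
    Pr = np.eye(3)[::-1]                 # row-reversal permutation
    Q, R = np.linalg.qr((Pr @ M).T)
    K = Pr @ R.T @ Pr                    # upper-triangular intrinsics
    Rot = Pr @ Q.T                       # orthogonal rotation
    D = np.diag(np.sign(np.diag(K)))     # force positive focal terms
    K, Rot = K @ D, D @ Rot              # (det(Rot) may still need a flip)
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], Rot, t
```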
  • in another implementation, the internal parameter matrix of each camera can be estimated with the DLT algorithm from the pixel coordinates of the multiple calibration points of the target calibration object in the image frames collected by each acquisition device, together with the three-dimensional coordinates of those calibration points in the calibration object coordinate system, to obtain the second internal parameter estimate; the internal parameter matrix of each camera is then globally optimized based on the minimum projection error principle.
  • before optimization, some redundant data may be filtered out of the feature data of the calibration points. It can also be understood that redundant pixel coordinates are removed from the pixel coordinates of the multiple calibration points of the target calibration object identified in the image frames collected by each camera. For example, for any camera, the distribution in the image plane of the feature points (corresponding to calibration point positions) across the image frames of the collected image set is counted, and feature points that overlap in the image plane are deleted. The image plane can also be meshed, and redundancy processing performed based on the distribution of feature points in the grid; see Figure 13.
  • the target calibration object may appear multiple times in a grid cell; only one of these appearances is retained, and for the other appearances the pixel coordinates of the calibration points of the recognized target calibration object in the collected image frames are removed.
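  • The grid-based redundancy processing can be sketched as follows (the detection representation and grid size are assumptions made for illustration):

```python
def deduplicate_by_grid(detections, image_size, grid=(8, 8)):
    """Keep at most one appearance of the calibration object per image-plane
    grid cell; detections is a list of (frame_idx, (cx, cy)) pixel centers."""
    w, h = image_size
    gx, gy = grid
    occupied, kept = set(), []
    for frame_idx, (cx, cy) in detections:
        cell = (min(int(cx / w * gx), gx - 1),
                min(int(cy / h * gy), gy - 1))
        if cell not in occupied:         # first appearance in this cell
            occupied.add(cell)
            kept.append((frame_idx, (cx, cy)))
    return kept
```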
  • the number of cameras deployed in the set space of the sports venue is N.
  • to optimize the internal parameter matrix of camera i, that is, to perform optimization on the basis of the second internal parameter estimate to obtain the first internal parameter estimate: based on the second internal parameter estimate of camera i, the pixel coordinates of the calibration points of the target calibration object in the image frames of the image set collected by camera i, and the three-dimensional coordinates of the multiple calibration points in the calibration object coordinate system, the pose set corresponding to camera i is estimated.
  • the pose set corresponding to camera i includes the pose of the target calibration object relative to camera i at M1 moving positions; the value of i is a positive integer less than or equal to N.
  • the moving positions of the target calibration object captured by the N cameras number M in total, and the moving positions captured by camera i are less than or equal to M, denoted M1 here, because the field of view of camera i may not cover all M moving positions.
  • as an example, the internal parameter matrix of camera 1 is optimized, that is, optimization is performed on the basis of the second internal parameter estimate to obtain the first internal parameter estimate.
  • based on the pixel coordinates of the calibration points and the coordinates of the calibration points of the target calibration object in the calibration object coordinate system, the poses of the target calibration object at different positions relative to camera 1 can be determined. Through the above method, the pose of the calibration object coordinate system relative to each camera coordinate system at each position can be determined.
  • take as an example N cameras deployed in a sports venue and a target calibration object whose movement trajectory passes through position 1 to position M, so that the moving positions captured by the N cameras number M.
  • for example, based on the pixel coordinates of each calibration point in the image frame captured by camera 1 when the target calibration object moves to position 1, the 3D coordinates of each calibration point of the target calibration object in the calibration object coordinate system, and the first internal parameter estimate of camera 1, combined with the above formula (7), the PnP algorithm is used to estimate the pose [R|t]_{1-1} of the target calibration object at position 1 relative to camera 1. It should be noted that the field of view of each camera does not cover all positions, so the pose relative to a camera at an uncovered position cannot be obtained; this is represented by "None" in Table 1.
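  • The per-position PnP step can be illustrated with OpenCV's solvePnP (a sketch; the helper name and the zero initial distortion are assumptions):

```python
import cv2
import numpy as np

def estimate_pose_pnp(obj_pts, img_pts, K, dist=None):
    """Pose [R|t] of the calibration object coordinate system relative to
    one camera, from the calibration points' 3D coordinates and their
    detected pixel coordinates in a single image frame."""
    if dist is None:
        dist = np.zeros(5)               # distortion not yet calibrated
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(obj_pts, dtype=np.float64),
        np.asarray(img_pts, dtype=np.float64),
        np.asarray(K, dtype=np.float64), dist)
    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> 3x3 matrix
    return R, tvec.reshape(3)
```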
  • the camera parameters can be globally optimized based on a preset nonlinear optimization algorithm to minimize the projection error on the image frames of each camera.
  • the preset nonlinear optimization algorithm is, for example, the Levenberg-Marquardt (LM) algorithm.
  • based on the three-dimensional coordinates of the calibration points on the target calibration object at position j, the pose of position j relative to camera i, and formula (9), the coordinates of the calibration points on the target calibration object at position j projected into the camera coordinate system of camera i can be estimated.
  • after normalization, the normalized coordinates of the calibration points in the camera coordinate system of camera i are obtained. The distorted normalized coordinates are then estimated from these normalized coordinates through formula (1-1) and formula (1-2), and, based on the second internal parameter estimate of camera i, the pixel coordinates of the calibration points on the target calibration object at position j projected into the image coordinate system of camera i are calculated; see formula (10).
  • the error between the estimated pixel coordinates and the pixel coordinates of the calibration point actually recognized for the image frame collected by camera i is calculated.
  • the next round of optimization is performed based on the optimized internal parameter matrix and distortion coefficient of camera i.
  • specifically, the coordinates of the calibration points on the target calibration object at position j projected into the camera coordinate system of camera i are re-estimated based on their three-dimensional coordinates, the pose of position j relative to camera i after the previous round of optimization, and formula (9). The pixel coordinates of the calibration points on the target calibration object at position j projected into the image coordinate system of camera i are then further determined based on formula (10) and the optimized internal parameter matrix of camera i, and the error between the estimated pixel coordinates and the pixel coordinates of the calibration points actually recognized in the image frames collected by camera i is calculated.
  • Formula (11) can estimate the pixel coordinates of the calibration point on the target calibration object at position j projected into the image coordinate system of camera i.
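  • The LM-based refinement can be sketched with scipy's least_squares. The sketch below reduces the problem to a single camera and a single pose, and assumes a two-term radial distortion model in place of formulas (1-1) and (1-2), which are not reproduced here:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, obj_pts, img_pts):
    """params packs [fx, fy, u0, v0, k1, k2, rvec(3), tvec(3)];
    returns the pixel reprojection errors to be minimized."""
    fx, fy, u0, v0, k1, k2 = params[:6]
    rvec, tvec = params[6:9], params[9:12]
    theta = np.linalg.norm(rvec)         # Rodrigues rotation of the points
    if theta < 1e-12:
        Xc = obj_pts + tvec
    else:
        k = rvec / theta
        Xc = (obj_pts * np.cos(theta)
              + np.cross(k, obj_pts) * np.sin(theta)
              + np.outer(obj_pts @ k, k) * (1 - np.cos(theta))) + tvec
    x, y = Xc[:, 0] / Xc[:, 2], Xc[:, 1] / Xc[:, 2]  # normalized coords
    r2 = x * x + y * y
    d = 1 + k1 * r2 + k2 * r2 * r2       # assumed radial distortion model
    u, v = fx * x * d + u0, fy * y * d + v0
    return np.concatenate([u - img_pts[:, 0], v - img_pts[:, 1]])

# result = least_squares(reprojection_residuals, x0,
#                        args=(obj_pts, img_pts), method='lm')
```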
  • the second relative pose of the other cameras among the plurality of cameras except the reference camera with respect to the reference camera may be first obtained, and the reference camera is any one of the plurality of cameras. Then a scale factor is added based on the relative pose to determine the external parameter matrix of each camera.
  • the basic mobile position and the common view relationship between cameras are combined to determine the pose of each mobile position in the world coordinate system.
  • the external parameter matrix of each camera is determined using the coordinates, in the calibration object coordinate system, of the multiple calibration points included in the target calibration object and the pixel coordinates of the calibration points on the target calibration object in the image frames collected by each acquisition device.
  • the determined external parameter matrix is used as the initial value to further globally optimize the external parameter matrix.
  • the internal parameter matrix and distortion coefficient can be further optimized.
  • Multiple cameras can be grouped based on the common viewing relationship between the multiple cameras deployed in the venue, for example into multiple camera groups. Each camera group includes at least two cameras, there is a common viewing area between the at least two cameras in a group, and any two of the plurality of camera groups include at least one identical camera.
  • according to the first internal parameter estimates of the at least two cameras included in each camera group, the matching feature point set corresponding to each camera group, and the three-dimensional coordinates in the calibration object coordinate system of the multiple calibration points included in the target calibration object, the second relative pose with respect to the reference camera of each camera other than the reference camera is obtained; the reference camera is any one of the multiple cameras.
  • the matching feature point set includes multiple matching feature point groups; each matching feature point group includes at least two matching pixel coordinates, which are the pixel coordinates of the same calibration point detected in image frames collected at the same time by different cameras belonging to the same camera group.
  • each camera pair may be determined, where a camera pair includes two cameras that have a common view relationship (that is, a common viewing area). The number of camera pairs in which each camera appears can then be counted, and the camera appearing in the largest number of camera pairs is used as the reference camera.
  • the scale factor is the ratio between a first distance and a second distance, where the first distance is the distance between two calibration points on the target calibration object, the second distance is the distance between the same two calibration points in the same image coordinate system, and the two calibration points are located on the same calibration surface of the target calibration object.
  • the first distance is obtained by measurement.
  • the side length of the set side of the calibration surface on the target calibration object can be measured to obtain the side length measurement value.
  • the endpoints at both ends of this side can also be understood as two calibration points.
  • Identify the calibration surface in the image frame collected by the reference camera (camera 1) and calculate the length of the set side of the calibration surface. Taking the rectangular calibration surface as an example, one of the sides can be set.
  • the four corner points of the calibration surface in the image frame collected by camera 1 can be identified, and the coordinates of the two corner points corresponding to the set edge in the camera coordinate system of camera 1 are determined based on their pixel coordinates and the internal parameters of camera 1.
  • the side length is calculated based on the coordinates of the two corner points in the camera coordinate system of camera 1 to obtain the side length estimate.
  • if the calibration surface is circular, the set edge can be the diameter of the circle.
  • the ratio of the measured side length to the estimated side length is the adjustment ratio between the camera coordinate system and the world coordinate system. In the embodiment of this application, this adjustment ratio is called scale.
  • the scale is represented by S.
  • alternatively, any two calibration points on the same calibration surface may be used; the embodiments of the present application do not specifically limit this.
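  • A sketch of the scale computation and its application (variable names are illustrative; the camera-frame corner coordinates come from the step described above):

```python
import numpy as np

def scale_factor(measured_len, p1_cam, p2_cam):
    """Ratio of the measured side length of a calibration surface to the
    same side's length estimated in the reference camera's coordinate
    system; this is the scale S between local and metric world units."""
    est_len = np.linalg.norm(np.asarray(p1_cam) - np.asarray(p2_cam))
    return measured_len / est_len

# Applying S: each camera's translation is rescaled, [R | t] -> [R | S*t],
# so that all relative poses share metric units.
```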
  • the second relative pose of other cameras that have a common view relationship with the reference camera and the reference camera may be determined.
  • the second relative pose of a camera that has no common view relationship with the reference camera can be calculated from the second relative pose of a camera that does have a common view relationship with the reference camera, together with the relative pose between those two cameras.
  • camera 1 is the reference camera.
  • Camera 2 and camera 1 have a common view relationship, and the second relative pose of camera 2 relative to camera 1 can be calculated.
  • Camera 3 does not have a common view relationship with camera 1, but has a common view relationship with camera 2.
  • the relative pose of camera 3 relative to camera 2 can be calculated, and then the second relative pose of camera 3 relative to camera 1 is determined by combining the relative pose of camera 3 with respect to camera 2 and the relative pose of camera 2 with respect to camera 1 .
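  • Chaining relative poses, as in the camera 1 / camera 2 / camera 3 example, amounts to composing homogeneous transforms (a minimal sketch):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, np.asarray(t).reshape(3)
    return T

# If T_12 maps camera-2 coordinates into camera-1 coordinates and T_23 maps
# camera-3 into camera-2, then camera 3's pose relative to camera 1 is:
# T_13 = T_12 @ T_23
```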
  • the two cameras have a common viewing area. According to the common view relationship between the two cameras, the relative pose of the two cameras can be calculated.
  • Take camera 1 and camera 2 as an example.
  • Camera 1 and camera 2 have a common viewing area.
  • Camera 1 and camera 2 can be understood as a camera group (or camera pair).
  • the essential matrix between camera 1 and camera 2 is determined based on the matching feature point set corresponding to the camera group; singular value decomposition is then performed on the essential matrix to obtain the second relative pose between camera 1 and camera 2.
  • the essential matrix may also be called an eigenmatrix, which is not specifically limited in the embodiment of the present application.
  • the epipolar constraint between matched points can be written as formula (12):

    P_x2y2^T · E · P_x1y1 = 0    formula (12)

  where P_x1y1 represents the normalized coordinates of the calibration point in the camera coordinate system of camera 1, P_x2y2 represents the normalized coordinates of the same calibration point in the camera coordinate system of camera 2, and E represents the essential matrix. The matrix E describes the pose relationship between the cameras: it contains the rotation and translation information relating the two cameras in physical space.
  • the normalized coordinates of a calibration point in the camera coordinate system of camera 1 can be determined based on the pixel coordinates of the calibration point in the image coordinate system of camera 1 (that is, the pixel coordinates of the calibration point included in the matching feature point set in the image coordinate system of camera 1) and the second internal parameter estimate of camera 1; similarly, the normalized coordinates of the calibration point in the camera coordinate system of camera 2 can be determined based on the pixel coordinates of the calibration point in the image coordinate system of camera 2 and the second internal parameter estimate of camera 2.
  • for the specific calculation method, please refer to the description of the conversion relationship between the image coordinate system and the camera coordinate system in formula (2).
  • E can be decomposed to obtain the rotation matrix R and the translation parameter t.
  • if the rotation matrix of the reference camera is set to the identity matrix and its translation parameter is set to 0, then the rotation matrix R and translation parameter t obtained by decomposition are the second relative pose of the other camera.
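  • The SVD-based decomposition of the essential matrix can be sketched as follows; selecting the physically valid candidate requires a cheirality check (triangulated points in front of both cameras), which is omitted here:

```python
import numpy as np

def decompose_essential(E):
    """Return the four candidate (R, t) pairs encoded by an essential
    matrix; t is recovered only up to scale."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:             # keep proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```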
  • the scale factor can be added to the second relative pose of each camera to obtain the extrinsic parameter matrix of each camera.
  • for example, if the determined second relative pose of camera 1 is [R|t]_1, the external parameter matrix after adding the scale factor S to the second relative pose of camera 1 can be expressed as [R|S·t]_1. The external parameters of the other cameras can be adjusted in the same way.
  • in another implementation, after the second relative pose of each camera relative to the reference camera is determined, the estimated internal parameter matrix and the relative pose (and distortion coefficient) of each camera are globally optimized; the scale factor is then added based on the optimized relative poses to obtain the external parameter matrix of each camera.
  • the optimized distortion coefficients and internal parameter matrices of each camera are used as the distortion coefficients and internal parameter matrices of the final calibrated camera.
  • the local coordinate system is the camera coordinate system of the reference camera; any one of the M2 moving positions is located at least in the common viewing area of two of the cameras in the g-th camera group.
  • the internal parameter estimates and second relative poses corresponding to the cameras in the g-th camera group after the current round of adjustment are used as the basis for the next round of adjustment, until round D of adjustment is completed, to obtain the third relative poses and third internal parameter estimates of the cameras included in the g-th camera group.
  • the local coordinate system is the camera coordinate system of the reference acquisition device; any one of the M2 moving positions is located at least in the common view area of two of the acquisition devices in the g-th acquisition device group.
  • based on the second relative poses, the first internal parameter estimates, and the first distortion coefficient estimates of the collection devices included in the g-th collection device group, the pixel coordinates of the multiple calibration points at the M2 moving positions projected into the image coordinate systems of the collection devices included in the g-th collection device group are estimated.
  • the internal parameter estimates and relative poses corresponding to the acquisition devices in the g-th acquisition device group after the current round of adjustment are used as the basis for the next round of adjustment, until round D of adjustment is completed, to obtain the third relative poses, third internal parameter estimates, and second distortion coefficients of the acquisition devices included in the g-th acquisition device group.
  • the above-mentioned multiple camera groups include different numbers of cameras.
  • the relative poses and intrinsic parameter matrices of the cameras included in the camera group can be optimized in the order of the number included in the camera group.
  • the first optimized camera group includes two cameras.
  • the second camera group includes 3 cameras.
  • the three cameras include two cameras in the first camera group.
  • the second camera group is to add a camera to the first camera group.
  • the added camera has a common viewing area with at least one camera in the first camera group.
  • the second relative poses and internal parameter matrices of the cameras in the first camera group are optimized first. The relative pose of the newly added camera in the second camera group is then calculated based on the optimized relative poses and internal parameter matrices of the two cameras of the first camera group, after which the relative poses and internal parameter matrices of the cameras included in the second camera group are further optimized, and so on.
  • Select one camera from the two selected cameras as the base camera.
  • the angle between the optical axes of the two cameras is within the set range, for example, the angle between the optical axes is less than 5 degrees.
  • the angle between the optical axes can be determined by the second internal parameter estimates corresponding to the two cameras and the calibration points in the captured images.
  • the two cameras have a common viewing area. According to the common view relationship of the two cameras, the relative pose of the two cameras is calculated. For example, the camera coordinate system of the reference camera is used as the local coordinate system. The method for determining the relative poses of the two cameras is as described above and will not be described again here.
  • P_u1v1 represents the pixel coordinates of a calibration point in the image frame corresponding to camera 1 in the image frame pair, and P_u2v2 represents the pixel coordinates of the same calibration point in the image frame corresponding to camera 2, where P_u1v1 = [u1, v1, 1]^T and P_u2v2 = [u2, v2, 1]^T.
  • the second internal parameter estimate of camera 1 is represented by I_1 and that of camera 2 by I_2; the first external parameter estimate of camera 1 is represented by [R|t]_1 and that of camera 2 by [R|t]_2. Since camera 1 is the reference camera, its first external parameter estimate is the identity matrix, and the first external parameter estimate of camera 2 can be obtained through the above decomposition.
  • the coordinates of each calibration point in the common viewing area of camera 1 and camera 2 in the local coordinate system are estimated based on the following formula (13).
  • since camera 1 is the reference camera, [R|t]_1 is the identity transform; the coordinates of each calibration point in the common view area in the local coordinate system can thus be estimated, and from them the first pixel coordinate estimate of each calibration point in the common view area in the image coordinate system of camera 2 is estimated.
  • camera distortion is taken into account.
  • the coordinates of each calibration point in the common view area in the camera coordinate system of camera 1 can be estimated based on their coordinates in the local coordinate system and the following formula (15).
    P_u1v1 = I_1 · P_x1y1    formula (15)
  • from the coordinates of each calibration point in the common view area in the local coordinate system, combined with the following formula (17), the coordinate estimate P_x2y2 of each calibration point in the common view area in the camera coordinate system of camera 2 is obtained.
  • normalization processing can be performed to obtain the normalized coordinate estimate value of each calibration point in the camera coordinate system of camera 2.
  • the error between the estimated pixel coordinates and P_u2v2 is used to adjust the internal parameter matrix of camera 1, the distortion coefficient of camera 1, the internal parameter matrix of camera 2, the relative pose of camera 2, and the distortion coefficient of camera 2.
  • the next round of iterative adjustment is performed based on the adjusted internal parameter matrix of camera 1, the distortion coefficient of camera 1, the internal parameter matrix of camera 2, the relative pose of camera 2, and the distortion coefficient of camera 2.
  • the coordinates of each calibration point in the common view area in the local coordinate system are further recalculated.
  • the estimated pixel coordinates of each calibration point in the common view area in the image coordinate system of camera 2 are estimated. Further calculate the error between the estimated pixel coordinates and the actual pixel coordinates to adjust the internal parameter matrix of camera 1, the distortion coefficient of camera 1, the internal parameter matrix of camera 2, the relative pose of camera 2 and the distortion coefficient of camera 2. By analogy, multiple rounds of adjustments are performed.
  • the number of rounds of iterative adjustment may be pre-configured, and when the configured number of rounds of iterative adjustment is reached, the iterative adjustment is stopped.
  • the error threshold can also be pre-configured. When the calculated error during a certain round of iterative adjustment is less than or equal to the error threshold, the iterative adjustment can be stopped.
  • another camera is then added to the two cameras, and the three cameras have a common viewing area.
  • the three cameras form camera group 2. Joint optimization is achieved for the internal parameter matrices and external parameter matrices (and distortion coefficients) of the three cameras.
  • Three image frames captured by three cameras at the same time constitute an image frame group. It can be understood that the coordinates of the calibration points in the common view area taken at the same time in the local coordinate system of the reference camera should be the same, that is, the coordinates of the calibration points in the common view area of the three cameras in the local coordinate system should be the same. Based on this, the internal parameters, relative poses, and distortion coefficients of the three cameras are optimized based on the pixel coordinates of each calibration point in each image frame group captured by the three cameras at the same time.
  • some calibration points are not located in the common viewing area of three cameras, but are located in the common viewing area of two cameras. These feature points can participate in the adjustment of the three cameras.
  • the extrinsic parameter matrix of each camera is obtained by adding a scale factor based on the optimized third relative pose.
  • the optimized second distortion coefficient and third internal parameter estimate value of each camera are used as the final calibrated internal parameter matrix and distortion coefficient of each camera.
  • the external parameter matrix, the second distortion coefficient, and the third internal parameter estimate of each camera can also be used for further global optimization.
  • the coordinates of each calibration point in the local coordinate system can be converted to the world coordinate system based on scale.
  • the normalized coordinates of a calibration point in the world coordinate system are expressed as P′_w = [X, Y, Z, s]^T.
  • the estimated pixel coordinates of each calibration point in the image coordinate system are estimated based on the normalized coordinates of each calibration point in the world coordinate system, the internal parameters of camera i, and the external parameters of camera i in the world coordinate system. For example, it can be calculated based on formula (19).
    P′_uivi = I_i [R′|t′]_i P′_wi    formula (19)

  • in formula (19), I_i represents the internal parameters of camera i, [R′|t′]_i represents the external parameters of camera i (the pose in the world coordinate system), P′_wi represents the coordinates in the world coordinate system of a calibration point within the field of view of camera i, and P′_uivi represents the estimated pixel coordinates of the calibration point in the image frame collected by camera i.
  • let P_uivi represent the pixel coordinates of the calibration point in the image frame collected by camera i, that is, the pixel coordinates obtained by identifying the calibration point in the image frame collected by camera i. The error between P′_uivi and P_uivi is then determined.
  • the pixel coordinates of the calibration points in the image frames collected by each camera are estimated for each camera, and the error between the estimated pixel coordinates of the calibration points and the pixel coordinates obtained by identifying the calibration points is determined.
  • the internal and external parameters of each camera are further adjusted based on the error, completing the current round of adjustment; the next round of iterative adjustment is then performed based on the adjusted internal and external parameters.
  • in the next round, the pixel coordinates of the calibration points in the image frames collected by each camera are re-estimated, and the error between the estimated pixel coordinates and the pixel coordinates obtained by identifying the calibration points is determined. The internal and external parameters of each camera are further adjusted based on the error, completing that round of adjustment; multiple rounds of adjustment are performed in this way.
  • the error between the pixel coordinates of the distorted calibration point and the pixel coordinates obtained by identifying the calibration point is determined by combining formula (1-1) and formula (1-2).
  • the internal parameter matrix, external parameter matrix, and distortion coefficient of each camera are further adjusted according to the error; specifically, the second distortion coefficient of each camera determined above is used as the initial value of the distortion coefficient. After the current round of adjustment is completed, the next round of iterative adjustment is performed based on the adjusted internal parameter matrix, external parameter matrix, and distortion coefficient.
  • in the next round, the distorted pixel coordinates of the calibration points in the image frames collected by each camera are re-estimated, the error between the estimated distorted pixel coordinates and the pixel coordinates obtained by identifying the calibration points is determined, and the parameters are further adjusted according to the error.
  • multiple rounds of adjustment are performed in this way.
  • a calibration position can be set up in the sports field.
  • This calibration position serves as the origin of the world coordinate system.
  • the target calibration object passes through this calibration position during its movement on the sports field; this position can also be understood as the basic moving position of the target calibration object.
  • the position with the largest number of common-view cameras can be selected as the calibration position.
  • the target calibration object passes through multiple moving positions including position 1 to position M during its movement on the sports field.
  • Each position of the target calibration object can be represented in the form of a graphical model, for example, as shown in Figure 17 below.
  • the specific implementation process is shown in Figure 18.
  • the relative pose of the first moving position pair is determined based on the matching feature point set corresponding to at least one collection device group, the three-dimensional coordinates in the calibration object coordinate system of the multiple calibration points included in the target calibration object, and the first internal parameter estimates of the two acquisition devices included in the at least one collection device group.
  • cameras 1 to 9 are nine cameras deployed in a ring in the set space where the sports venue is located.
  • S0 to S5 represent the six positions through which the target calibration object moves
  • S0 represents the calibration position
  • the coordinates of the calibration position in the world coordinate system are known, that is, the origin.
  • the relative posture between the positions can be calculated based on the common view relationship between each position and different cameras (shown as a dotted line connection in Figure 17). If two moving positions are located within the common viewing area of two cameras, the two moving positions constitute a moving position pair.
  • S0 and S5 are a moving position pair.
  • S0 and S3 are a moving position pair.
  • S5 and S1 are a moving position pair, and so on.
  • a moving position pair can be located in the common viewing area of multiple pairs of cameras.
  • the moving position pair composed of S0 and S5 is located in the common viewing area of camera 1 and camera 2, and is also located in the common viewing area of camera 2 and camera 3.
  • the basic moving position is one of the M moving positions.
  • the camera parameters of each acquisition device are globally optimized.
  • the camera parameters include an intrinsic parameter matrix and an extrinsic parameter matrix, or the camera parameters include an intrinsic parameter matrix, an extrinsic parameter matrix and a distortion coefficient.
  • the camera parameters of each acquisition device and the poses, in the calibration object coordinate system, of the calibration surfaces where the multiple calibration points are respectively located are used as the quantities to be optimized; in the global optimization, the first internal parameter estimate of each acquisition device is used as the initial value of the internal parameter matrix of each acquisition device.
  • the relative posture of a pair of moving positions composed of two moving positions can satisfy the conditions shown in the following formula (20).
  • suppose the moving position pair is located in the common view area of l camera pairs (camera groups); a relative pose is calculated based on each of these camera pairs.
  • the pose from position S_i to position S_j as seen from camera a can be determined from the pose of the calibration object coordinate system relative to the camera coordinate system of camera a at position S_i and the pose of the calibration object coordinate system relative to the camera coordinate system of camera a at position S_j. Likewise, the pose from position S_j to position S_i as seen from camera b can be determined from the poses of the calibration object coordinate system relative to the camera coordinate system of camera b at positions S_i and S_j.
  • the image frames collected by camera a and camera b when the target calibration object moves to position S_i are respectively obtained, as are the image frames collected by camera a and camera b when the target calibration object moves to position S_j; the image frames are identified to obtain the pixel coordinates of the calibration points.
  • the PnP algorithm can be used to determine the pose of the calibration object coordinate system relative to the camera coordinate system of camera a at position S_i, and likewise at position S_j.
  • the internal parameters of each camera can be optimized first based on the embodiments corresponding to Figure 15 or Figure 16 to obtain the third internal parameter estimate of each camera. Then the pose at each position is further determined based on the third internal parameter estimate.
  • alternatively, if the internal parameters of each camera are not further optimized based on the embodiments corresponding to Figure 15 or Figure 16, the pose at each position can be determined based on the second internal parameter estimate.
  • similarly, the PnP algorithm can be used to determine the pose of the calibration object coordinate system relative to the camera coordinate system of camera b at position S_i, and likewise at position S_j.
  • the pose of the calibration object coordinate system relative to each camera coordinate system at different positions can be determined. See Table 2 for the poses of the target calibration object at different positions relative to different cameras.
  • Table 2 takes b cameras deployed in a sports venue and a target calibration object whose moving trajectory passes through position 1 to position M as an example.
  • the moving position pair may be located within the common viewing area of multiple camera pairs (or camera groups). For each such camera pair, the relative pose from one moving position of the pair to the other, and the reverse relative pose, are calculated; one relative pose is then selected from the relative poses obtained from the multiple camera pairs as the relative pose of the moving position pair, for example according to the following formula (23).
  • I is the identity matrix.
  • the poses of the M moving positions in the world coordinate system are determined based on the coordinates of the basic moving position in the world coordinate system and the relative poses of multiple pairs of moving positions. This can be determined as follows:
  • A1: determine the credibility weight of each moving position pair among the multiple moving position pairs, based on the above example; then execute A2.
  • A2: determine the shortest path from the third moving position to the basic moving position based on the credibility weight of each moving position pair; the shortest path is the path with the smallest credibility weight among all paths from the third moving position to the basic moving position, where the credibility weight of any path is the sum of the credibility weights of the moving position pairs that the path passes through. Then execute A3.
  • A3: determine the pose of the third moving position based on the relative poses of the moving position pairs that the shortest path passes through.
  • Each moving position of the target calibration object is used as the vertex of the graph model.
  • the edge connecting any two vertices is defined by the credibility weight.
  • the path can be determined using Dijkstra's algorithm, such as calculating the shortest path based on the credibility weight.
  • the path corresponding to the minimum value of the sum of weights among all paths from position S i to reference position S 0 is regarded as the shortest path.
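  • A minimal Dijkstra sketch over the moving-position graph (edge weights are the credibility weights of the moving position pairs; names are illustrative):

```python
import heapq

def shortest_paths_to_base(edges, base='S0'):
    """edges: iterable of (position_a, position_b, weight). Returns the
    minimum path weight to the base position and each vertex's predecessor
    on its shortest path."""
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    dist, prev = {base: 0.0}, {}
    heap = [(0.0, base)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev
```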
  • the coordinates of each calibration point in the world coordinate system at each moving position can be calculated. Then based on the coordinates of each calibration point in the world coordinate system (which can also be called the global coordinate system), the extrinsic parameter matrix of each camera is roughly calculated. Then the internal parameter matrix, external parameter matrix and distortion coefficient of each camera are globally optimized.
  • the coordinates in the global coordinate system of each calibration point on the target calibration object at different positions can be obtained from the poses T_i at those positions and the coordinates of each calibration point in the calibration object coordinate system.
  • the external parameters of camera i are determined based on the coordinates in the global coordinate system of each calibration point on the target calibration object at different positions, the internal parameters of camera i, and the pixel coordinates of the calibration points in the image frames collected by camera i of the target calibration object at those positions; the external parameters of all cameras are obtained in this way.
  • the internal parameters and external parameters of each camera can be optimized based on the coordinates of each calibration point in the global coordinate system on the target calibration object at different locations.
  • the coordinates of each calibration point on the target calibration object at different locations in the global coordinate system can be used to estimate the pixel coordinates of the calibration points on the target calibration object projected to each camera at different locations.
  • the specific determination method is shown in formula (24).
  • in formula (24), P_wj represents the coordinates in the calibration object coordinate system of a calibration point on the target calibration object at position S_j, B_m is the pose in the calibration object coordinate system of the calibration plate where the calibration point is located, T_j represents the pose of the target calibration object at position S_j, and I_i and [R|t]_i respectively represent the internal parameters and external parameters of camera i (the pose in the global coordinate system).
  • the above B m can be obtained by measurement.
  • the target calibration objects are assembled according to the designed size, as shown in Figure 11.
  • the printing size error of each plate is negligible.
  • the design posture B m of each calibration plate is obtained according to the design size.
  • the design pose of each calibration plate can be used as a variable to be optimized, thereby improving the calibration accuracy of the internal parameters, external parameters, and distortion coefficients of the camera.
  • the pixel coordinates of the calibration points on the target calibration object at different positions in the image frames collected by each camera are estimated through the above formula (24), and the internal parameters, external parameters, and B_m of each camera are adjusted according to the error between the estimated pixel coordinates and the pixel coordinates identified in the image frames. In the next round, these pixel coordinates are estimated again, and the internal parameters, external parameters, and B_m of each camera are adjusted again based on the error against the pixel coordinates identified in the image frames.
  • when distortion is considered, formula (1-1) and formula (1-2) are combined to estimate the distorted pixel coordinates of the calibration points on the target calibration object at different positions in the image frames collected by each camera. The internal parameters, external parameters, B_m, and distortion coefficient of each camera are then adjusted according to the error between the estimated distorted pixel coordinates and the pixel coordinates identified in the image frames, and multiple rounds of adjustment are performed to obtain the internal parameters, external parameters, and distortion coefficient of each camera.
  • the number of rounds of iterative adjustment may be pre-configured, and when the configured number of rounds of iterative adjustment is reached, the iterative adjustment is stopped.
  • the error threshold can also be pre-configured. When the calculated error during a certain round of iterative adjustment is less than or equal to the error threshold, the iterative adjustment can be stopped.
  • the embodiments of this application can be applied in motion analysis scenarios.
  • Each camera captures a video stream of the athletes' movements, and motion information is then calculated based on the calibrated camera parameters, for example the athlete's running distance, speed, and number of steps; deeper information such as team-level and technical-tactical analysis can also be obtained.
  • the athlete's spatial position is accurately restored from the image frames captured by each camera and the calibrated camera parameters of each camera. Then further obtain motion information.
  • the pixel coordinates of human bone points detected in the image frame can be used to calculate the three-dimensional space coordinates of all bone points using calibrated camera parameters.
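  • The skeleton-point reconstruction step can be illustrated with linear (DLT) triangulation (a sketch; robust systems would additionally weight or reject outlier views):

```python
import numpy as np

def triangulate_point(proj_mats, pixels):
    """Given each camera's calibrated 3x4 projection matrix and one
    skeleton point's pixel coordinates in two or more cameras, solve for
    the point's 3D position."""
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize
```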
  • the quality of camera calibration directly affects the calculation accuracy of three-dimensional position, which in turn affects the reliability of subsequent motion analysis.
  • with accurate calibration, the reconstructed skeleton points have a higher degree of coincidence with the real human body.
  • the mobility solution provided by the embodiments of the present application is not limited by the placement location, can cover a wider area, and the calibration results can more fully reflect the spatial relationship of the site area.
  • the image edges can be corrected, reducing the adverse effects of lens distortion.
  • the embodiments of this application can also be applied to large-scene sports events and can be used to provide a 6-degree-of-freedom (6DoF) experience.
  • the audience can freely choose the viewing position and angle, and can enter the scene to achieve close-up and distant views, which can bring users an immersive visual experience.
  • to achieve the 6DoF video effect, high-precision camera calibration parameters are needed first; the shooting scene is then three-dimensionally reconstructed through the correlation between the content and features of the images captured by each camera.
  • the calibration results using the calibration scheme provided by the embodiments of this application are more accurate, making the three-dimensional reconstruction effect better and closer to the real effect.
  • the data processing server includes hardware structures and/or software modules corresponding to each function.
  • modules and method steps of each example described in conjunction with the embodiments disclosed in this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or computer software driving the hardware depends on the specific application scenarios and design constraints of the technical solution.
  • FIG. 19 is a schematic structural diagram of a calibration device provided by an embodiment of the present application.
  • the device can be applied to a data processing server.
  • the calibration device includes an acquisition unit 1901 and a processing unit 1902.
  • the acquisition unit 1901 is used to acquire multiple video streams collected by multiple acquisition devices deployed in the setting space of the sports field.
  • the multiple video streams are obtained by the plurality of acquisition devices shooting synchronously while the target calibration object moves on the sports field; the movement trajectory of the target calibration object on the sports field at least covers the set area of the sports field, the target calibration object includes at least two non-coplanar calibration surfaces, each calibration surface includes at least two calibration points, and the video stream collected by each acquisition device includes multiple image frames;
  • the processing unit 1902 is configured to perform calibration point detection on the image frames collected by each collection device among the plurality of collection devices, so as to obtain the pixel coordinates of multiple calibration points on the target calibration object in the image frames collected by each collection device.
  • the matching feature point set includes multiple matching feature point groups, and each matching feature point group includes at least two matching pixel coordinates, the at least two matching pixel coordinates being the pixel coordinates of the same calibration point detected in image frames collected at the same time by different collection devices belonging to the same collection device group; the plurality of collection device groups are obtained by grouping the plurality of collection devices, and any two collection device groups among the plurality of collection device groups include at least one identical collection device.
• In a possible implementation, the processing unit 1902 is further used to estimate the distortion coefficient of each acquisition device, so as to obtain a first distortion coefficient estimate of each acquisition device.
• In a possible implementation, the processing unit 1902 is specifically used to:
• estimate the internal parameter matrix of the i-th acquisition device based on the image set of the i-th acquisition device, obtaining the second internal parameter estimate of the i-th acquisition device; the image set includes M1 image frames containing the target calibration object in the video stream collected by the i-th acquisition device, and the M1 image frames correspond one-to-one to M1 moving positions among the M moving positions of the target calibration object, where M1 is a positive integer and M is an integer greater than M1;
• estimate the pose set corresponding to the i-th acquisition device according to the pixel coordinates, in the image frames of the image set collected by the i-th acquisition device, of the calibration points on the target calibration object, and the three-dimensional coordinates of the multiple calibration points in the calibration object coordinate system; the pose set corresponding to the i-th acquisition device includes the poses of the target calibration object, at each of the M1 moving positions, relative to the i-th acquisition device; the value of i is a positive integer less than or equal to N, where N is the number of acquisition devices deployed in the set space of the sports venue, and the ranges of moving positions corresponding to the image frames collected by different acquisition devices may differ;
• according to the three-dimensional coordinates of the multiple calibration points in the calibration object coordinate system and the poses of the target calibration object corresponding to the N acquisition devices, globally and iteratively adjust the internal parameter matrices and distortion coefficients of the N acquisition devices over multiple rounds, starting from the initially set distortion coefficients and the second internal parameter estimates corresponding to the N acquisition devices, to obtain the first internal parameter estimates and the first distortion coefficient estimates of the N acquisition devices.
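• As an illustration of the per-device estimation step, one could sketch it with OpenCV as follows; the crude initial focal-length guess and the variable names are assumptions of the sketch, and an initial guess is passed because cv2.calibrateCamera requires one when the calibration object is non-planar.

```python
import numpy as np
import cv2

def estimate_intrinsics(object_pts, image_pts, image_size):
    """Rough per-device intrinsic estimation (the "second internal parameter
    estimate") from the frames in the device's image set.

    object_pts: list of (K, 3) float32 arrays, calibration-point coordinates
        in the calibration object coordinate system, one array per frame.
    image_pts:  list of (K, 2) float32 arrays, detected pixel coordinates.
    image_size: (width, height) of the device's images.
    """
    w, h = image_size
    K0 = np.array([[0.8 * w, 0, w / 2],
                   [0, 0.8 * w, h / 2],
                   [0, 0, 1]], dtype=np.float64)  # crude focal-length guess
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_pts, image_pts, image_size, K0, None,
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)
    # rvecs/tvecs are the target's poses relative to this device, i.e. the
    # "pose set" referred to above.
    return K, dist, rvecs, tvecs, rms
```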
• In a possible implementation, the processing unit 1902 is specifically used to:
• in each round of adjustment, adjust, on the basis of the pose set of the target calibration object corresponding to each acquisition device, the second internal parameter estimate corresponding to each acquisition device, and the initially set distortion coefficient, the internal parameter estimate and distortion coefficient of each acquisition device, obtaining the internal parameter estimate and distortion coefficient corresponding to each acquisition device after the current round of adjustment;
• use the internal parameter estimate and distortion coefficient corresponding to each acquisition device after the current round of adjustment as the basis for the next round, until C rounds of adjustment are completed, obtaining the first internal parameter estimates and the first distortion coefficient estimates of the N acquisition devices.
• In a possible implementation, the processing unit 1902 is specifically used to: determine the three-dimensional coordinates of the multiple calibration points relative to the i-th acquisition device; and estimate the pixel coordinates of the multiple calibration points projected into the image coordinate system of the i-th acquisition device based on the distortion coefficient and the second internal parameter estimate of the i-th acquisition device.
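• A minimal sketch of this projection step, assuming OpenCV's distortion model and hypothetical variable names; the reprojection error against the detected pixel coordinates is what the iterative rounds above would drive down.

```python
import numpy as np
import cv2

def reprojection_error(points_obj, rvec, tvec, K_i, dist_i, detected_px):
    """Project the calibration points into device i's image plane using its
    current intrinsic estimate K_i and distortion estimate dist_i, then
    measure the mean pixel error against the detected coordinates.

    points_obj: (K, 3) float array in the calibration object coordinate system.
    rvec, tvec: pose of the target relative to device i.
    detected_px: (K, 2) float array of detected pixel coordinates.
    """
    projected, _ = cv2.projectPoints(points_obj, rvec, tvec, K_i, dist_i)
    projected = projected.reshape(-1, 2)
    return np.linalg.norm(projected - detected_px, axis=1).mean()
```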
• In a possible implementation, the processing unit 1902 is specifically used to:
• obtain, according to the matching feature point set corresponding to each acquisition device group and the three-dimensional coordinates, in the calibration object coordinate system, of the multiple calibration points included in the target calibration object, the second relative poses, relative to the reference acquisition device, of the acquisition devices other than the reference acquisition device among the multiple acquisition devices; the reference acquisition device is any one of the multiple acquisition devices;
• determine a scale factor, where the scale factor is the ratio between a first distance and a second distance; the first distance is the distance between two calibration points on the target calibration object, the second distance is the distance between the same two calibration points in the reconstructed coordinate system, and the two calibration points are located on the same calibration surface of the target calibration object;
• obtain the first external parameter estimate of each acquisition device according to the second relative pose of each acquisition device and the scale factor.
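• A minimal sketch of the scale-factor computation, reading the "second distance" as the distance between the same two calibration points in the up-to-scale reconstruction; the names are illustrative assumptions.

```python
import numpy as np

def scale_factor(d_true, p1, p2):
    """d_true: physically known distance between two calibration points on
    one calibration surface of the target calibration object.
    p1, p2: the same two points triangulated in the (unit-scale)
    reconstruction, e.g. in the reference camera's coordinate system.
    """
    d_reconstructed = np.linalg.norm(np.asarray(p1) - np.asarray(p2))
    return d_true / d_reconstructed  # multiplies translations to metric scale
```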
• In a possible implementation, the processing unit 1902 is specifically used to:
• determine the essential matrix between the first acquisition device and the reference acquisition device according to the matching feature point set corresponding to the first acquisition device group, where the first acquisition device and the reference acquisition device belong to the first acquisition device group, and the first acquisition device group is one of the multiple acquisition device groups;
• determine, from the essential matrix, a second relative pose between the first acquisition device and the reference acquisition device.
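• By way of example, such a step could be sketched with OpenCV as follows; normalizing the matched pixels with each device's intrinsics first is one common choice, and the RANSAC threshold and names are assumptions of this sketch.

```python
import numpy as np
import cv2

def relative_pose_from_matches(pts1, pts2, K1, K2):
    """Recover the relative pose of one device w.r.t. the reference device
    from the group's matching feature point set (pixel coordinates in each
    image).  Translation is recovered only up to scale; the scale factor
    described above restores metric units.
    """
    # Normalize with each camera's intrinsics so a single essential-matrix
    # model applies even when the two devices have different intrinsics.
    p1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2).astype(np.float64), K1, None).reshape(-1, 2)
    p2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2).astype(np.float64), K2, None).reshape(-1, 2)
    E, inliers = cv2.findEssentialMat(p1, p2, np.eye(3), method=cv2.RANSAC,
                                      prob=0.999, threshold=1e-3)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, np.eye(3), mask=inliers)
    return R, t  # pose of camera 2 relative to camera 1, with |t| = 1
```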
• In a possible implementation, the processing unit 1902 is specifically used to:
• determine, based on the second relative pose of each acquisition device, the three-dimensional coordinates in the local coordinate system of the multiple calibration points when the target calibration object moves to each of M2 moving positions, where the local coordinate system is the camera coordinate system of the reference acquisition device, and any one of the M2 moving positions lies in the common viewing area of at least two acquisition devices of the g-th acquisition device group;
• estimate, based on the three-dimensional coordinates of the multiple calibration points at the M2 moving positions in the local coordinate system, the second relative poses of the acquisition devices included in the g-th acquisition device group, and the first internal parameter estimates, the pixel coordinates of the multiple calibration points at the M2 moving positions projected into the image coordinate systems of the acquisition devices included in the g-th acquisition device group;
• use the internal parameter estimates and relative poses corresponding to the acquisition devices in the g-th acquisition device group after the current round of adjustment as the basis for the next round, until D rounds of adjustment are completed, obtaining the third relative poses of the acquisition devices included in the g-th acquisition device group;
• obtain the first external parameter estimate of each acquisition device included in the g-th acquisition device group by applying the scale factor to the third relative pose of that acquisition device.
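• A minimal triangulation sketch for the first of these steps, assuming two devices of the g-th group with current estimates (K_ref, K_dev, R, t) and matched pixel arrays of shape (n, 2); the function name is an assumption.

```python
import numpy as np
import cv2

def triangulate_local(px_ref, px_dev, K_ref, K_dev, R, t):
    """Triangulate calibration points seen by the reference device and one
    other device of the g-th group, expressing the result in the reference
    camera's coordinate system (the "local coordinate system").

    px_ref, px_dev: (n, 2) float arrays of matched pixel coordinates.
    (R, t): second relative pose of the other device w.r.t. the reference.
    """
    P_ref = K_ref @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_dev = K_dev @ np.hstack([R, t.reshape(3, 1)])
    pts4 = cv2.triangulatePoints(P_ref, P_dev,
                                 px_ref.T.astype(np.float64),
                                 px_dev.T.astype(np.float64))
    return (pts4[:3] / pts4[3]).T  # (n, 3) points in the reference frame
```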
• In another possible implementation, the processing unit 1902 is specifically used to:
• determine, based on the second relative pose of each acquisition device, the three-dimensional coordinates in the local coordinate system of the multiple calibration points when the target calibration object moves to each of M2 moving positions, where the local coordinate system is the camera coordinate system of the reference acquisition device, and any one of the M2 moving positions lies in the common viewing area of at least two acquisition devices of the g-th acquisition device group;
• estimate, based on the second relative poses of the acquisition devices included in the g-th acquisition device group, the pixel coordinates of the multiple calibration points at the M2 moving positions projected into the image coordinate systems of the acquisition devices included in the g-th acquisition device group;
• use the internal parameter estimates and relative poses corresponding to the acquisition devices in the g-th acquisition device group after the current round of adjustment as the basis for the next round, until D rounds of adjustment are completed, obtaining the third relative poses of the acquisition devices included in the g-th acquisition device group;
• obtain the first external parameter estimate of each acquisition device included in the g-th acquisition device group by applying the scale factor to the third relative pose of that acquisition device.
• In a possible implementation, each of the multiple acquisition device groups includes two acquisition devices, and the processing unit 1902 is specifically used to:
• determine the relative pose based on the matching feature point set corresponding to at least one acquisition device group, the three-dimensional coordinates, in the calibration object coordinate system, of the multiple calibration points included in the target calibration object, and the first internal parameter estimates of the two acquisition devices included in the at least one acquisition device group;
• globally optimize the camera parameters of each acquisition device according to the coordinates, in the calibration object coordinate system, of the multiple calibration points included in the target calibration object and the pixel coordinates of the calibration points of the target calibration object in the image frames collected by each acquisition device; the camera parameters include an internal parameter matrix and an external parameter matrix, or an internal parameter matrix, an external parameter matrix and a distortion coefficient;
• in the global optimization, the camera parameters of each acquisition device and the poses, in the calibration object coordinate system, of the calibration surfaces on which the multiple calibration points are respectively located are taken as the quantities to be optimized, and the first internal parameter estimate of each acquisition device is used as the initial value of the internal parameter matrix of that acquisition device; a minimal optimization sketch follows.
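• The global optimization can be pictured as a joint reprojection-error minimization. The following SciPy sketch uses a simplified parameterization (per-device pinhole intrinsics plus per-position target poses, no distortion); the flat parameter layout and the observation list are assumptions of this sketch, not the exact formulation of the embodiments.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def residuals(x, obs, n_dev):
    """x stacks [fx, fy, cx, cy] per device followed by [rvec, tvec] per
    target pose; obs is a list of (dev, pose_idx, pts_obj, pts_px) tuples,
    where pts_obj are calibration-point coordinates in the calibration
    object coordinate system and pts_px the detected pixel coordinates."""
    res = []
    for dev, pose_idx, pts_obj, pts_px in obs:
        fx, fy, cx, cy = x[4 * dev: 4 * dev + 4]
        K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])
        base = 4 * n_dev + 6 * pose_idx
        rvec, tvec = x[base: base + 3], x[base + 3: base + 6]
        proj, _ = cv2.projectPoints(pts_obj, rvec, tvec, K, None)
        res.append((proj.reshape(-1, 2) - pts_px).ravel())
    return np.concatenate(res)

# x0 would stack the first internal parameter estimates as initial intrinsics
# followed by initial pose parameters, then:
# result = least_squares(residuals, x0, args=(obs, n_dev), method="trf")
```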
• In a possible implementation, the relative pose of the first moving position pair satisfies the following condition, in which $T_{12}$ represents the relative pose between the first moving position and the second moving position; the at least one acquisition device group includes a first acquisition device group, and the first acquisition device group includes a first acquisition device and a second acquisition device.
• $T_{12}^{l_1}$ represents the pose from the first moving position to the second moving position determined based on the pixel coordinates of the calibration points in the image frame collected by the first acquisition device when the target calibration object moves to the first moving position and in the image frame collected when it moves to the second moving position; $T_{21}^{l_2}$ represents the pose from the second moving position to the first moving position determined based on the pixel coordinates of the calibration points in the image frame collected by the second acquisition device when the target calibration object moves to the second moving position and in the image frame collected when it moves to the first moving position.
• The first acquisition device group satisfies $T_{12}^{l_1} \cdot T_{21}^{l_2} = I$, where $I$ represents the identity matrix, $l_1$ denotes the first acquisition device in the first acquisition device group, and $l_2$ denotes the second acquisition device in the first acquisition device group.
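• As a numeric illustration of the condition above (names assumed), the deviation of the composed loop from the identity can serve as the credibility weight of a moving-position pair used in the shortest-path step below.

```python
import numpy as np

def consistency_weight(T12_l1, T21_l2):
    """T12_l1, T21_l2: 4x4 homogeneous poses estimated through devices l1
    and l2.  For perfectly consistent estimates their product is the
    identity, so the residual norm measures how trustworthy the pair is."""
    return np.linalg.norm(T12_l1 @ T21_l2 - np.eye(4))
```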
• In a possible implementation, determining the poses of the M moving positions in the world coordinate system based on the coordinates of the basic moving position in the world coordinate system and the relative poses of multiple moving position pairs includes:
• determining the shortest path from a third moving position to the basic moving position, where the shortest path is the path with the smallest credibility weight among all paths from the third moving position to the basic moving position, and the credibility weight of any path is the sum of the credibility weights of the moving position pairs that the path passes through;
• determining the pose of the third moving position based on the relative poses of the moving position pairs that the shortest path passes through.
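• A minimal sketch of this step as a Dijkstra-style search over the moving-position graph; the graph encoding and function name are assumptions of the sketch.

```python
import heapq
import numpy as np

def propagate_poses(graph, base_pose, base):
    """graph: pos -> list of (neighbor, weight, T_rel), where weight is the
    credibility weight of the moving-position pair and T_rel the 4x4 relative
    pose from pos to neighbor.  Returns each position's pose in the world
    coordinate system, chained along its most credible path to the base."""
    dist = {base: 0.0}
    pose = {base: base_pose}
    heap = [(0.0, base)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w, T_rel in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                pose[v] = pose[u] @ T_rel  # chain relative poses along the path
                heapq.heappush(heap, (nd, v))
    return pose
```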
• Each functional unit in the embodiments of the present application may be integrated into one processing unit, may exist physically alone, or two or more units may be integrated into one unit.
• The above integrated units can be implemented in the form of hardware or software functional units. One or more of the units in FIG. 19 may be implemented in software, hardware, firmware, or a combination thereof.
• The software or firmware includes, but is not limited to, computer program instructions or code, and may be executed by a hardware processor.
• The hardware includes, but is not limited to, various types of integrated circuits, such as central processing units (CPUs), digital signal processors (DSPs), field programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs).
• Embodiments of the present application also provide a calibration device for implementing the calibration method provided by the embodiments of the present application.
• As shown in FIG. 20, the apparatus may include one or more processors 2001, a memory 2002, and one or more computer programs (not shown in the figure).
• The above components may be coupled through one or more communication lines 2003.
• The one or more computer programs are stored in the memory 2002 and include instructions; the processor 2001 calls the instructions stored in the memory 2002 so that the device executes the calibration method provided by the embodiments of the present application.
• The processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute each method, step and logical block diagram disclosed in the embodiments of this application.
• A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in conjunction with the embodiments of the present application may be executed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
• The memory may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
• The non-volatile memory can be read-only memory (ROM), programmable ROM (PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (EEPROM) or flash memory.
• The volatile memory can be random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous link dynamic random access memory (SLDRAM) and direct rambus RAM (DR RAM).
• The device may also include a communication interface 2004 for communicating with other devices through a transmission medium; for example, the communication interface 2004 may be used to communicate with an acquisition device to receive the image frames collected by the acquisition device.
• The communication interface 2004 may be a transceiver, a circuit, a bus, a module, or another type of communication interface.
• When the communication interface 2004 is a transceiver, the transceiver may include an independent receiver and an independent transmitter, or may be a transceiver with integrated transceiver functions or an interface circuit.
• The processor 2001, the memory 2002 and the communication interface 2004 can be connected to each other through the communication line 2003; the communication line 2003 can be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
• The communication line 2003 can be divided into an address bus, a data bus, a control bus, etc. For ease of presentation, only one thick line is used in FIG. 20, but this does not mean that there is only one bus or one type of bus.
• "At least one" refers to one or more, and "plurality" refers to two or more.
• "And/or" describes an association between associated objects, indicating that three relationships can exist; for example, "A and/or B" can mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B can be singular or plural. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single items or plural items.
• For example, "at least one of a, b or c" can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c can each be single or multiple.
• The character "/" generally indicates that the related objects are in an "or" relationship; in a formula, the character "/" indicates that the related objects are in a "division" relationship.
• The word "exemplarily" is used to mean an example, illustration or explanation. Any embodiment or design described herein as an "example" is not to be construed as preferred or advantageous over other embodiments or designs; rather, the use of the word "example" is intended to present concepts in a concrete manner and does not constitute a limitation on this application.
• An embodiment of the present application provides a computer-readable medium for storing a computer program, the computer program including instructions for executing the method steps in the above method embodiments.
• Embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, etc.) having computer-usable program code embodied therein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Disclosed are a calibration method and apparatus applied to the technical field of computer vision. The present invention is not limited by the flatness of a calibration site, and improves the accuracy and applicability of calibration. A plurality of acquisition devices are deployed at a site to be calibrated, and calibration is then performed by moving a target calibration object. In this way, the influence of the site's inherent visual features/calibration points is eliminated, the application scenarios are broader, and the method can also be applied to large-venue scenarios. According to the present invention, camera parameters of the acquisition devices are determined by means of a common-view relationship obtained by moving the calibration object, so that the requirement on site flatness is low and the calibration accuracy in this type of scenario can be improved.
PCT/CN2023/106553 2022-07-11 2023-07-10 Procédé et appareil d'étalonnage WO2024012405A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210815609.9A CN117422770A (zh) Calibration method and device
CN202210815609.9 2022-07-11

Publications (1)

Publication Number Publication Date
WO2024012405A1 true WO2024012405A1 (fr) 2024-01-18

Family

ID=89528924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/106553 WO2024012405A1 (fr) 2022-07-11 2023-07-10 Procédé et appareil d'étalonnage

Country Status (2)

Country Link
CN (1) CN117422770A (fr)
WO (1) WO2024012405A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876608A (zh) * 2024-03-11 2024-04-12 魔视智能科技(武汉)有限公司 Three-dimensional image reconstruction method and apparatus, computer device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030048357A1 (en) * 2001-08-29 2003-03-13 Geovantage, Inc. Digital imaging system for airborne applications
CN103035008A (zh) * 2012-12-15 2013-04-10 北京工业大学 一种多相机系统的加权标定方法
CN109242915A (zh) * 2018-09-29 2019-01-18 合肥工业大学 基于多面立体靶标的多相机系统标定方法
CN111369608A (zh) * 2020-05-29 2020-07-03 南京晓庄学院 一种基于图像深度估计的视觉里程计方法
CN114004901A (zh) * 2022-01-04 2022-02-01 南昌虚拟现实研究院股份有限公司 多相机标定方法、装置、终端设备及可读存储介质
CN114399554A (zh) * 2021-12-08 2022-04-26 凌云光技术股份有限公司 一种多相机系统的标定方法及系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030048357A1 (en) * 2001-08-29 2003-03-13 Geovantage, Inc. Digital imaging system for airborne applications
CN103035008A (zh) * 2012-12-15 2013-04-10 北京工业大学 一种多相机系统的加权标定方法
CN109242915A (zh) * 2018-09-29 2019-01-18 合肥工业大学 基于多面立体靶标的多相机系统标定方法
CN111369608A (zh) * 2020-05-29 2020-07-03 南京晓庄学院 一种基于图像深度估计的视觉里程计方法
CN114399554A (zh) * 2021-12-08 2022-04-26 凌云光技术股份有限公司 一种多相机系统的标定方法及系统
CN114004901A (zh) * 2022-01-04 2022-02-01 南昌虚拟现实研究院股份有限公司 多相机标定方法、装置、终端设备及可读存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876608A (zh) * 2024-03-11 2024-04-12 魔视智能科技(武汉)有限公司 Three-dimensional image reconstruction method and apparatus, computer device and storage medium

Also Published As

Publication number Publication date
CN117422770A (zh) 2024-01-19

Similar Documents

Publication Publication Date Title
CN108765498B (zh) Monocular visual tracking method and apparatus, and storage medium
CN111750820B (zh) Image positioning method and system thereof
US10271036B2 (en) Systems and methods for incorporating two dimensional images captured by a moving studio camera with actively controlled optics into a virtual three dimensional coordinate system
CN106228538B (zh) Binocular vision indoor positioning method based on logos
US20180213218A1 (en) Equipment and method for promptly performing calibration and verification of intrinsic and extrinsic parameters of a plurality of image capturing elements installed on electronic device
CN103874193B (zh) Method and system for positioning a mobile terminal
CN107665483B (zh) Calibration-free, convenient distortion correction method for monocular-lens fisheye images
CN105069795B (zh) Moving object tracking method and apparatus
CN107431796A (zh) Omnidirectional stereoscopic capture and rendering of panoramic virtual reality content
CN111127559B (zh) Calibration rod detection method, apparatus, device and storage medium in an optical motion capture system
CN113160325B (zh) Multi-camera high-precision automatic calibration method based on an evolutionary algorithm
JP6616967B2 (ja) Map creation apparatus and map creation method
CN106803913A (zh) Detection method and apparatus for automatically detecting students standing up to speak
CN108010086A (zh) Camera calibration method, apparatus and medium based on intersections of tennis court marking lines
CN106886976B (zh) Image generation method based on intrinsic-parameter correction of a fisheye camera
CN108921889A (zh) Indoor three-dimensional positioning method based on augmented reality applications
CN109141432B (zh) Indoor positioning and navigation method based on image space and panorama assistance
CN106713740A (zh) Positioning, tracking and shooting method and system
CN103500471A (zh) Method for realizing a high-resolution augmented reality system
CN109902675A (zh) Object pose acquisition method, and scene reconstruction method and apparatus
CN109712249B (zh) Geographic element augmented reality method and apparatus
WO2024012405A1 (fr) Calibration method and apparatus
CN114037923A (zh) Target activity heat map drawing method, system, device and storage medium
Szelag et al. Real-time camera pose estimation based on volleyball court view
WO2021208630A1 (fr) Calibration method, calibration apparatus and electronic device using same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23838897

Country of ref document: EP

Kind code of ref document: A1