CN111598956A - Calibration method, device and system - Google Patents

Calibration method, device and system

Info

Publication number
CN111598956A
Authority
CN
China
Prior art keywords
image
pixel position
target
homography matrix
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010367126.8A
Other languages
Chinese (zh)
Inventor
马政
黄瑞
闫国行
石建萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sensetime Group Ltd
Original Assignee
Sensetime Group Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sensetime Group Ltd filed Critical Sensetime Group Ltd
Priority to CN202010367126.8A priority Critical patent/CN111598956A/en
Publication of CN111598956A publication Critical patent/CN111598956A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30204 - Marker
    • G06T2207/30208 - Marker matrix

Abstract

Embodiments of this specification provide a calibration method, apparatus, and system. An image to be processed that includes a calibration plate is captured by an image acquisition device to be calibrated, and a first pixel position of a target point on the calibration plate in the image to be processed is detected. A target pixel position closest to the first pixel position and the target homography matrix corresponding to that target pixel position are then retrieved directly from a pre-established homography matrix set, and the homography matrix of the image acquisition device to be calibrated is determined according to the retrieved target homography matrix.

Description

Calibration method, device and system
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a calibration method, apparatus, and system.
Background
In scenarios such as assisted driving and autonomous driving, distance measurement is a very important step. To reduce cost, many monocular ranging algorithms have been proposed; one of them measures distance using a homography matrix. The homography matrix is commonly solved by calibration with manually selected points, which is inefficient.
Disclosure of Invention
The disclosure provides a calibration method, device and system.
Specifically, the present disclosure is realized by the following technical solutions:
according to a first aspect of the embodiments of the present disclosure, there is provided a calibration method, the method including: acquiring an image to be processed shot by image acquisition equipment to be calibrated, wherein the image to be processed comprises a calibration plate which is preset relative to the image acquisition equipment to be calibrated; detecting a first pixel position of a target point on the calibration plate on the image to be processed; determining a target pixel position closest to the first pixel position, and determining a homography matrix of the image acquisition equipment to be calibrated according to a target homography matrix corresponding to the target pixel position in a homography matrix set; the target pixel position corresponding to one target homography matrix is the pixel position of a target point on the calibration plate in a training image shot by a reference image acquisition device in one pose.
An image to be processed that includes the calibration plate is captured by the image acquisition device to be calibrated, and the first pixel position of a target point on the calibration plate in the image to be processed is detected. The target pixel position closest to the first pixel position and its corresponding target homography matrix are then retrieved directly from the homography matrix set, and the homography matrix of the image acquisition device to be calibrated is determined according to the retrieved target homography matrix. No manual point selection is needed during calibration, so the homography matrix can be calibrated in a short time and calibration efficiency is improved. In addition, no reference object needs to be placed during calibration, which reduces the space required at the calibration site.
Optionally, the homography matrix set is determined by the following steps: acquiring training images shot by reference image acquisition equipment at different poses, wherein the training images comprise a reference object and a calibration plate which are preset relative to the reference image acquisition equipment; for each training image, respectively acquiring a target pixel position of a target point on the calibration plate on the training image, and determining a target homography matrix corresponding to the training image according to a second pixel position of the reference object on the training image and a physical position of the reference object in a physical space; and establishing a corresponding relation between the target homography matrix under each pose and the target pixel position under the pose to obtain the homography matrix set.
By establishing the homography matrix set, the corresponding target homography matrix can be obtained only by retrieving the position of the target pixel without manual point selection in the calibration process, so that the calibration efficiency is improved; meanwhile, a reference object is not required to be arranged during calibration, so that the space of a calibration site is reduced, and the method is suitable for being deployed on a production line.
Optionally, the determining a target homography matrix corresponding to the training image according to the second pixel position of the reference object on the training image and the physical position of the reference object in the physical space includes: clustering the positions of all target pixels according to a preset clustering radius to obtain a plurality of clusters; respectively determining the clustering centers of all clusters; for any cluster, determining a target pixel position closest to the cluster center of the cluster from the target pixel positions of the cluster as a candidate target pixel position of the cluster, and extracting candidate training images corresponding to the candidate target pixel positions of all clusters from the training images; determining a target homography matrix corresponding to each candidate training image according to the second pixel position of the reference object on each candidate training image and the physical position of the reference object in a physical space; the establishing of the corresponding relationship between the target homography matrix under each pose and the target pixel position under the pose to obtain the homography matrix set comprises the following steps: and establishing a corresponding relation between each candidate target homography matrix and each candidate target pixel position to obtain the homography matrix set.
The homography matrix set is constructed by extracting the training images corresponding to the candidate target pixel positions closest to the cluster centers, so that the calibration efficiency is improved, and the calibration cost is reduced.
Optionally, the specific pixel position is determined using the following steps: carrying out angular point detection on an image to be detected, and acquiring a third pixel position of an angular point on the calibration plate on the image to be detected; the image to be detected comprises an image of the calibration plate; calculating the specific pixel position of the target point on the image to be detected according to the third pixel position; when the image to be detected is the image to be processed, the specific pixel position is the first pixel position; and when the image to be detected is the training image, the specific pixel position is the target pixel position.
Optionally, performing corner detection on the image to be detected, and acquiring a third pixel position of a corner on the calibration board on the image to be detected, including: detecting the confidence coefficient that each pixel point in the image to be detected or the interested region on the image to be detected is an angular point; and determining pixel points with the confidence degrees larger than a preset confidence degree threshold value as third pixel positions of corner points on the calibration board on the image to be detected, wherein the confidence degree threshold value is set according to the illumination intensity when the image to be detected is collected.
Setting the confidence threshold according to the illumination intensity improves the accuracy of corner detection. Performing corner detection only within the region of interest improves detection efficiency and reduces detection errors.
Optionally, the training images include a first training image captured by the reference image capturing device at a first preset position, and a second training image captured by the reference image capturing device after leaving and returning to the first preset position. By the method, the randomness of the training sample can be increased, errors caused by the deviation of the position of the image acquisition equipment in the process of shooting the training image are reduced, and the richness of the sample in the homography matrix set is improved, so that the calibration accuracy is improved.
According to a second aspect of the embodiments of the present disclosure, there is provided a calibration apparatus, the apparatus including: the device comprises a first acquisition module, a second acquisition module and a calibration module, wherein the first acquisition module is used for acquiring an image to be processed shot by image acquisition equipment to be calibrated, and the image to be processed comprises a calibration plate which is preset relative to the image acquisition equipment to be calibrated; the first detection module is used for detecting a first pixel position of a target point on the calibration plate on the image to be processed; the calibration module is used for determining a target pixel position closest to the first pixel position and determining a homography matrix of the image acquisition equipment to be calibrated according to a target homography matrix corresponding to the target pixel position in a homography matrix set; the target pixel position corresponding to one target homography matrix is the pixel position of a target point on the calibration plate in a training image shot by a reference image acquisition device in one pose.
Optionally, the set of homography matrices is determined using the following modules: the second acquisition module is used for acquiring training images shot by reference image acquisition equipment at different poses, wherein the training images comprise reference objects and calibration plates which are preset relative to the reference image acquisition equipment; the determining module is used for respectively acquiring target pixel positions of target points on the calibration plate on the training images for each training image, and determining a target homography matrix corresponding to the training images according to a second pixel position of the reference object on the training images and a physical position of the reference object in a physical space; and the establishing module is used for establishing the corresponding relation between the target homography matrix under each pose and the target pixel position under the pose to obtain the homography matrix set.
Optionally, the determining module includes: the clustering unit is used for clustering the positions of all target pixels according to a preset clustering radius to obtain a plurality of clusters; a first determining unit, configured to determine a clustering center of each cluster, respectively; an extracting unit, configured to determine, for any one cluster, a target pixel position closest to a cluster center of the cluster from the target pixel positions of the cluster as a candidate target pixel position of the cluster, and extract a candidate training image corresponding to the candidate target pixel position of each cluster from the training images; a second determining unit, configured to determine a target homography matrix corresponding to each candidate training image according to a second pixel position of the reference object on each candidate training image and a physical position of the reference object in a physical space, respectively; the establishing module is used for: and establishing a corresponding relation between each candidate target homography matrix and each candidate target pixel position to obtain the homography matrix set.
Optionally, the specific pixel location is determined using the following modules: the second detection module is used for carrying out angular point detection on the image to be detected and acquiring a third pixel position of an angular point on the calibration plate on the image to be detected; the image to be detected comprises an image of the calibration plate; the calculating module is used for calculating the specific pixel position of the target point on the image to be detected according to the third pixel position; when the image to be detected is the image to be processed, the specific pixel position is the first pixel position; and when the image to be detected is the training image, the specific pixel position is the target pixel position.
Optionally, the second detection module includes: the detection unit is used for detecting the confidence coefficient that each pixel point in the image to be detected or the interested region on the image to be detected is an angular point; and the third determining unit is used for determining pixel points with the confidence degrees larger than a preset confidence degree threshold value as third pixel positions of corner points on the calibration board on the image to be detected, and the confidence degree threshold value is set according to the illumination intensity when the image to be detected is collected.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any of the embodiments when executing the program.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a calibration system, the system comprising: the computer apparatus of any embodiment; the image acquisition equipment to be calibrated is in communication connection with the computer equipment; the image acquisition equipment to be calibrated is used for shooting an image to be processed, and the image to be processed comprises a calibration plate which is preset relative to the image acquisition equipment to be calibrated.
Optionally, the system further comprises: a reference image acquisition device in communication with the computer device; the reference image acquisition equipment is used for shooting a group of training images shot under different poses, and the training images comprise reference objects and the calibration plates which are preset relative to the reference image acquisition equipment.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the method of any of the embodiments.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart of a calibration method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of corner detection in an embodiment of the disclosure.
Fig. 3 is a flowchart of a method for establishing a homography matrix set according to an embodiment of the disclosure.
Fig. 4 is a schematic diagram of a clustering process of an embodiment of the disclosure.
Fig. 5 is a schematic diagram of a scene arrangement for establishing a homography matrix set according to an embodiment of the disclosure.
Fig. 6 is a schematic diagram of a calibration scenario arrangement in an embodiment of the disclosure.
Fig. 7 is a block diagram of a calibration apparatus of an embodiment of the present disclosure.
FIG. 8 is a schematic diagram of a computer device for implementing the disclosed method, in an embodiment of the present disclosure.
FIG. 9 is a schematic diagram of a calibration system of an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
In order to make the technical solutions in the embodiments of the present disclosure better understood and make the above objects, features and advantages of the embodiments of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings.
In scenarios such as assisted driving and autonomous driving, in order to measure the distance between a movable platform (for example, a vehicle or a mobile robot) and surrounding objects (for example, other movable platforms, pedestrians, obstacles, or buildings), an image acquisition device can be mounted on the movable platform, a two-dimensional image of the platform's surroundings is captured by the image acquisition device, and pixel points in the two-dimensional image are then transformed into three-dimensional physical space according to the homography matrix of the image acquisition device, which yields the actual distance in physical space between those points and the movable platform. Due to limitations of the installation process, the pose of the image acquisition device (including but not limited to at least one of pitch angle, yaw angle, and roll angle) may differ somewhat between movable platforms; for example, the actual pitch angle of the image acquisition device may be -2°, +5°, or some other value. As a result, the homography matrices of individual image acquisition devices differ in practice. Therefore, to measure distance accurately, the homography matrix of the image acquisition device on each movable platform needs to be calibrated, i.e., the homography matrix of the image acquisition device must be solved.
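To make the ranging step concrete, the following is a minimal sketch (not part of the claimed method) of how a pixel-to-ground-plane homography can be used for monocular ranging. The function name, the placeholder matrix values, and the example pixel are assumptions for illustration only.

```python
import numpy as np

def pixel_to_ground(H, u, v):
    """Map an image pixel (u, v) to ground-plane coordinates using a 3x3 homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # normalize homogeneous coordinates

# Placeholder homography assumed to map pixels to metres in the platform's ground frame.
H = np.array([[0.010, 0.000, -6.4],
              [0.000, 0.020, -10.4],
              [0.000, 0.001, 1.0]])
x, y = pixel_to_ground(H, 640, 520)
distance = np.hypot(x, y)  # actual distance from the platform to the observed ground point
```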
A common method for solving the homography matrix is manual point selection. This meets the precision requirement, but for mass production it has two drawbacks. On the one hand, manual calibration is inefficient: calibrating one homography matrix takes about five minutes. On the other hand, solving the homography matrix requires the physical positions of at least four points in physical space and the pixel positions of their corresponding pixel points in the image captured by the image acquisition device, so reference objects are generally placed relatively far away and the calibration site must have a large space. Taking a vehicle as an example of the movable platform, a rectangular area about 4 meters wide and about 30 meters long is generally required as the calibration site. These two limitations seriously hinder mass production on a production line.
Based on this, the present disclosure proposes a calibration method. As shown in fig. 1, the method may include:
step S101: acquiring an image to be processed shot by image acquisition equipment to be calibrated, wherein the image to be processed comprises a calibration plate which is preset relative to the image acquisition equipment to be calibrated;
step S102: detecting a first pixel position of a target point on the calibration plate on the image to be processed;
step S103: determining a target pixel position closest to the first pixel position, and determining a homography matrix of the image acquisition equipment to be calibrated according to a target homography matrix corresponding to the target pixel position in a homography matrix set; the target pixel position corresponding to one target homography matrix is the pixel position of a target point on the calibration plate in a training image shot by a reference image acquisition device in one pose.
In step S101, a calibration plate may be set in advance at a preset position relative to the image acquisition device to be calibrated. The calibration plate is a flat plate with an array of patterns at fixed spacing. The image acquisition device to be calibrated may be a device with image acquisition capability, such as a camera or a video camera, and may be mounted on a movable platform (such as a vehicle, an unmanned aerial vehicle, or a mobile robot) in scenarios such as assisted driving and autonomous driving. The mounting positions and the number of devices can be determined according to the ranging requirements. For example, when the distance between a target object in front of the movable platform and the movable platform needs to be measured, the image acquisition device to be calibrated may be mounted directly in front of the movable platform; when the distance between a target object at the side of the movable platform and the movable platform needs to be measured, the image acquisition device to be calibrated may also be mounted at the side of the movable platform; when distances between target objects in multiple directions and the movable platform need to be measured, at least one image acquisition device to be calibrated may be mounted at each of several positions on the movable platform. Since the mounting position of the image acquisition device to be calibrated on the movable platform is generally fixed, the calibration plate may, for convenience of processing, be placed at a preset position relative to the movable platform. Taking the case where the image acquisition device to be calibrated is mounted directly in front of the movable platform, the calibration plate may be placed a certain distance in front of the movable platform. During calibration, an image to be processed that includes the calibration plate is captured by the image acquisition device to be calibrated.
In step S102, a number of feature points on the image to be processed may be obtained, and then a first pixel position of the target point on the calibration board on the image to be processed may be calculated according to the pixel positions of the feature points on the image to be processed. In practical applications, the feature points may be corner points on the image to be processed. Accordingly, the first pixel position of the target point on the image to be processed can be determined by corner detection. Specifically, the corner detection may be performed on the image to be processed, so as to obtain the pixel position of the corner on the calibration board on the image to be processed; and calculating a first pixel position of the target point on the image to be processed according to the pixel position of the corner point on the calibration board on the image to be processed.
In some embodiments, the target point on the calibration plate may be the center point of the calibration plate. Fig. 2 is a schematic diagram of corner detection in an embodiment of the present disclosure. Using a corner detection algorithm, four corners on the calibration plate can be detected, shown as A, B, C, and D. The first pixel position of the center point can then be calculated as the average of the pixel positions of the four corner points A, B, C, D. Letting the pixel coordinates of the four corner points be (x_A, y_A), (x_B, y_B), (x_C, y_C), and (x_D, y_D), the coordinates (x_O, y_O) of the center point can be written as:

x_O = (x_A + x_B + x_C + x_D) / 4

y_O = (y_A + y_B + y_C + y_D) / 4
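As a small illustration of the computation above, the sketch below averages four assumed corner coordinates to obtain the centre-point position; the numeric values are placeholders.

```python
import numpy as np

# Placeholder pixel coordinates of the four detected corners A, B, C, D.
corners = np.array([[412.3, 250.1],   # A
                    [598.7, 252.4],   # B
                    [601.2, 431.8],   # C
                    [409.9, 429.5]])  # D
x_o, y_o = corners.mean(axis=0)       # (x_O, y_O): first pixel position of the centre point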
in some embodiments, when performing corner detection, a confidence that each pixel point on the image to be processed is a corner may be detected; and if the confidence coefficient is greater than a preset confidence coefficient threshold value, determining that the pixel point on the image to be processed is an angular point. The corner detection algorithm may be an existing algorithm, which is not limited by this disclosure.
The confidence threshold may be set according to the illumination intensity of the image to be processed captured by the image capturing device to be calibrated. When the illumination intensity is large, the confidence threshold value may be set to a large value; when the illumination intensity is small, the confidence threshold may be set to a small value. In this way, the accuracy of corner detection can be improved.
In order to improve the detection efficiency, a Region of Interest (ROI) may be further set, corner detection is performed on the image to be processed in the ROI on the image to be processed, a confidence level that each pixel point in the ROI is a corner is obtained, and a pixel point of which the confidence level is greater than a preset confidence level threshold is determined as a third pixel position of the corner point on the calibration board on the image to be detected. In practical application, the offset range of the angle of the image acquisition device to be calibrated and the position of the calibration plate are known, so that the ROI can be determined according to the offset range of the angle of the image acquisition device to be calibrated and the physical position of the calibration plate. Because the shot image to be processed can also comprise other background images besides the calibration plate, the corner detection is only carried out on the image to be processed in the ROI by setting the ROI, so that the detection error can be reduced, and the corner detection efficiency is improved.
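A minimal sketch of ROI-restricted corner detection with a confidence threshold is given below. The Harris detector and the normalized response used as a confidence score are assumptions; the disclosure does not prescribe a particular corner-detection algorithm.

```python
import cv2
import numpy as np

def detect_corners(image_gray, roi, confidence_threshold):
    """Detect corner pixels inside the ROI whose confidence exceeds the threshold."""
    x, y, w, h = roi                                   # ROI from the known angle offset range
    patch = image_gray[y:y + h, x:x + w]               # and the physical plate position
    response = cv2.cornerHarris(np.float32(patch), blockSize=2, ksize=3, k=0.04)
    confidence = response / response.max()             # normalized response used as confidence
    ys, xs = np.where(confidence > confidence_threshold)
    return np.stack([xs + x, ys + y], axis=1)          # corner positions in full-image coordinates

# The threshold may be set higher under strong illumination and lower under weak illumination.
```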
In step S103, a corresponding target homography matrix may be retrieved from the homography matrix set according to the first pixel position acquired in step S102. The homography matrix set contains a number of target pixel positions and target homography matrices, and each target pixel position corresponds to exactly one target homography matrix. Each target pixel position is the pixel position of the target point on the calibration plate in a training image captured by the reference image acquisition device in one pose.
Assume that the homography matrix set contains the following homography matrices H and target pixel positions Q in one-to-one correspondence: H_1:Q_1; H_2:Q_2; …; H_K:Q_K, where H_i:Q_i (1 ≤ i ≤ K) indicates that H_i and Q_i correspond one to one, H_i denotes the target homography matrix corresponding to the i-th pose, and Q_i denotes the target pixel position of the target point on the calibration plate in the training image captured in the i-th pose. The target pixel position closest to the first pixel position can then be searched for among Q_1, Q_2, …, Q_K in the homography matrix set; assume it is Q_j, j ∈ {1, 2, …, K}. The homography matrix of the image acquisition device to be calibrated is then determined according to the homography matrix H_j corresponding to Q_j.
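The retrieval step can be sketched as a nearest-neighbour lookup, as below. The representation of the homography matrix set as a list of (Q_i, H_i) pairs is an assumption for illustration.

```python
import numpy as np

def retrieve_homography(homography_set, first_pixel_position):
    """homography_set: list of (Q_i, H_i) pairs; Q_i is a 2-vector, H_i a 3x3 matrix."""
    positions = np.array([q for q, _ in homography_set])
    distances = np.linalg.norm(positions - np.asarray(first_pixel_position), axis=1)
    j = int(np.argmin(distances))        # index of the target pixel position Q_j closest
    return homography_set[j][1]          # to the first pixel position; return H_j
```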
If the image acquisition device that captured the training images (i.e., the reference image acquisition device) satisfies the following conditions: (1) the mounting position of the reference image acquisition device on its movable platform is the same as the mounting position of the image acquisition device to be calibrated on its movable platform; (2) the reference image acquisition device has the same internal parameters as the image acquisition device to be calibrated; (3) the distance between the reference image acquisition device and the calibration plate is the same as the distance between the image acquisition device to be calibrated and the calibration plate; and (4) the height of the movable platform carrying the image acquisition device to be calibrated is the same as the height of the movable platform carrying the reference image acquisition device, then the retrieved target homography matrix can be directly taken as the homography matrix of the image acquisition device to be calibrated. If at least one of these conditions is not satisfied (the unsatisfied condition is referred to as a target condition), then after the target homography matrix is found, it can be mapped to the homography matrix of the image acquisition device to be calibrated according to the target condition. For example, if the internal parameters of the reference image acquisition device differ from those of the image acquisition device to be calibrated, the target homography matrix may be mapped to the homography matrix of the image acquisition device to be calibrated according to the internal parameters of the reference image acquisition device and the internal parameters of the image acquisition device to be calibrated.
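For the case where only the internal parameters differ, one possible mapping is sketched below. It assumes that the target homography maps pixel coordinates to ground-plane coordinates and that the extrinsics are identical; this derivation is illustrative and is not the formula given in the disclosure.

```python
import numpy as np

def adapt_homography_for_intrinsics(H_ref, K_ref, K_cal):
    # Re-express a pixel of the device to be calibrated in the reference camera's pixel
    # frame (K_ref @ inv(K_cal)), then map it to the ground plane with H_ref.
    return H_ref @ K_ref @ np.linalg.inv(K_cal)
```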
By calibrating the homography matrix through retrieval, the embodiments of the present disclosure eliminate manual point selection when calibrating the homography matrix of the image acquisition device to be calibrated; calibrating one homography matrix can be completed in about 1 s, which improves calibration efficiency. Meanwhile, no reference object needs to be placed during calibration, which reduces the space required for calibration. Taking a vehicle as an example of the movable platform, with the method of the embodiments of the present disclosure the homography matrix can be calibrated in an area that is only as wide as the vehicle and about 3 meters longer than the vehicle.
Fig. 3 is a flowchart of a method for establishing a homography matrix set according to an embodiment of the disclosure. The method may comprise:
step S301: acquiring training images shot by reference image acquisition equipment at different poses, wherein the training images comprise a reference object and a calibration plate which are preset relative to the reference image acquisition equipment;
step S302: for each training image, respectively acquiring a target pixel position of the target point on the calibration plate on the training image, and determining a target homography matrix corresponding to the training image according to a second pixel position of the reference object on the training image and a physical position of the reference object in a physical space;
step S303: and respectively establishing corresponding relations between the target homography matrixes under the poses and the target pixel positions under the poses to obtain the homography matrix set.
The scheme described in this embodiment belongs to the data training phase: a homography matrix set may be trained in advance, off the production line, and then applied in the calibration process on the production line. In step S301, the reference image acquisition device is the image acquisition device used to capture training images while establishing the homography matrix set; an image acquisition device having the same internal parameters as the image acquisition device to be calibrated may be used, where the internal parameters may include, but are not limited to, the focal length and distortion parameters of the image acquisition device.
The reference image acquisition device may capture images in multiple poses, with one or more training images captured at each pose. For example, N_1 training images may be captured in pose P_1, N_2 training images in pose P_2, …, and N_M training images in pose P_M, where M, N_1, N_2, …, N_M are all positive integers. The relative position of the calibration plate to the reference image acquisition device is the same as the relative position of the calibration plate to the image acquisition device to be calibrated; this means that if the position of the calibration plate relative to the image acquisition device to be calibrated is translated by some distance in some direction, with the image acquisition device to be calibrated as the center, then the position of the calibration plate relative to the reference image acquisition device is translated by the same distance in the same direction, with the reference image acquisition device as the center. For example, if the calibration plate is located 15 meters ahead of the image acquisition device to be calibrated when the image to be processed is captured, then the calibration plate is also located 15 meters ahead of the reference image acquisition device when the training images are captured.
The number of poses selected may be determined according to the offset of the angle of the image acquisition device to be calibrated, where the angle may include at least one of a pitch angle, a yaw angle, and a roll angle, and the offset of the angle is the angular difference between the angle and a reference angle. Taking the pitch angle as an example, the reference angle of the pitch angle may generally be set to 0°. When the offset of the pitch angle is large, more poses need to be selected and more training images need to be captured; conversely, when the offset of the pitch angle is small, fewer poses need to be selected and fewer training images need to be captured. Determining the poses in which the reference image acquisition device captures training images according to the offset of the angle of the image acquisition device to be calibrated has two benefits: on the one hand, when the offset of the angle is large, the training images can cover the possible poses of the image acquisition device to be calibrated as completely as possible, which improves calibration accuracy; on the other hand, when the offset of the angle is small, the number of training images can be reduced, which improves calibration efficiency and reduces the amount of computation in the calibration process.
In some embodiments, the training images include a first training image taken by the reference image capture device at a first preset position, and a second training image taken by the reference image capture device after leaving and returning to the first preset position. In the process of shooting the training images, the training images can not be shot at the same positions every time, so that the randomness of the training samples can be increased, errors caused by the position deviation of the image acquisition equipment in the process of shooting the training images are reduced, the richness of the samples in the homography matrix set is improved, and the calibration accuracy is improved.
In step S302, a number of feature points on the training image may be obtained, and then a target pixel position of the target point on the calibration board on the training image may be calculated according to the pixel positions of the feature points on the training image. In practical applications, the feature points may be corner points on the training image. Accordingly, the target pixel position of the target point on the calibration plate on the training image can be determined by corner detection. Specifically, the corner point detection may be performed on the training image to obtain a third pixel position of the corner point on the calibration board on the training image; and calculating the target pixel position of the target point on the training image according to the third pixel position of the corner point on the calibration plate on the training image.
In the above embodiment, the target point in the training image and the target point in the image to be processed are the same point on the calibration plate. In some embodiments, the target point on the calibration plate may be a center point of the calibration plate. The way of calculating the position of the center point of the calibration plate according to the position of the corner point of the calibration plate in the training image is the same as the way of calculating the position of the center point of the calibration plate in the image to be processed, and the details are not repeated here.
In some embodiments, when performing corner detection, a confidence that each pixel point on the training image is a corner may be detected; and if the confidence coefficient is greater than a preset confidence coefficient threshold value, determining that the pixel point on the training image is an angular point.
Wherein the confidence threshold is set according to the illumination intensity of the reference image acquisition device when the training image is captured. When the illumination intensity is large, the confidence threshold may be set to a large value; when the illumination intensity is small, the confidence threshold may be set to a small value. In this way, the accuracy of corner detection can be improved. The confidence threshold used in the process of performing corner detection on the training image may be the same as or different from the confidence threshold used in the process of performing corner detection on the image to be processed.
In order to improve the detection efficiency, a Region of Interest (ROI) may be further set, corner detection is performed on the training image in the Region of Interest on the training image, a confidence level that each pixel point in the Region of Interest is a corner is obtained, and a pixel point of which the confidence level is greater than a preset confidence level threshold is determined as a third pixel position of the corner point on the calibration board on the training image. In practical applications, the offset range of the angle of the reference image-capturing device and the position of the calibration plate are known, and thus, the ROI may be determined according to the offset range of the angle of the reference image-capturing device and the position of the calibration plate. Because the shot training image can also comprise other background images besides the calibration plate, the detection error can be reduced and the corner detection efficiency can be improved by setting the ROI and only carrying out the corner detection on the training image in the ROI.
In some embodiments, the target pixel positions may be clustered according to a preset clustering radius to obtain a plurality of clusters; respectively determining the clustering centers of all clusters; for any cluster, determining a target pixel position closest to the cluster center of the cluster from the target pixel positions of the cluster as a candidate target pixel position of the cluster, and extracting candidate training images corresponding to the candidate target pixel positions of all clusters from the training images; determining a target homography matrix corresponding to each candidate training image according to the second pixel position of the reference object on each candidate training image and the physical position of the reference object in a physical space; and establishing a corresponding relation between each candidate target homography matrix and each candidate target pixel position to obtain the homography matrix set.
In some embodiments, the clustering radius is 4 or 5 pixels. Fig. 4 is a schematic diagram of the clustering process of an embodiment of the disclosure, in which "+" indicates a target pixel position on a training image and "·" indicates a cluster center. For visual clarity, the figure shows only 5 clusters; in practice the number of clusters may be much larger. Suppose the cluster centers of the clusters are S_1, S_2, S_3, S_4, S_5. The target pixel positions closest to S_1, S_2, S_3, S_4, S_5 respectively (i.e., the candidate target pixel positions) can be searched for among all target pixel positions; suppose they are found to be G_11, G_22, G_35, G_43, G_56. The 5 training images corresponding to G_11, G_22, G_35, G_43, G_56 can then be extracted from all the training images, the target pixel positions of the target point on the calibration plate in these images being G_11, G_22, G_35, G_43, G_56, and the target homography matrices corresponding to G_11, G_22, G_35, G_43, G_56 are determined according to the second pixel positions of the reference objects in the corresponding candidate training images and the physical positions of the reference objects in physical space.
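A simplified sketch of the clustering and candidate-selection step follows. The greedy radius-based grouping is an assumption; the disclosure only specifies that the target pixel positions are clustered with a preset clustering radius and that the position closest to each cluster centre is taken as the candidate.

```python
import numpy as np

def select_candidates(target_positions, radius=5.0):
    """Return indices of candidate target pixel positions (one per cluster)."""
    positions = np.asarray(target_positions, dtype=float)
    unassigned = list(range(len(positions)))
    candidates = []
    while unassigned:
        seed = unassigned[0]
        members = [i for i in unassigned
                   if np.linalg.norm(positions[i] - positions[seed]) <= radius]
        center = positions[members].mean(axis=0)                    # cluster centre
        best = min(members, key=lambda i: np.linalg.norm(positions[i] - center))
        candidates.append(best)                                     # member closest to the centre
        unassigned = [i for i in unassigned if i not in members]
    return candidates
```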
In the above embodiments, one useful reference object is a cone-shaped marker. The pixel position of the bottom of each cone marker (i.e., the second pixel position of the reference object) in each of the 5 candidate training images can be annotated. Meanwhile, in the physical world, the distance from each cone marker to the center of the movable platform can be measured in advance. A 3×3 homography matrix is then computed from at least four pairs of corresponding pixel positions and physical positions. Any existing method for solving the homography matrix may be used; the present disclosure does not limit this.
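The homography solve for one candidate training image can be sketched as below. The coordinates are placeholders, and cv2.findHomography is used only as one example of an existing solver.

```python
import cv2
import numpy as np

# Pixel positions of the cone-marker bases in one candidate training image (placeholders).
pixel_points = np.array([[320.0, 600.0], [900.0, 610.0],
                         [350.0, 450.0], [880.0, 455.0]])
# Pre-measured positions of the same markers in physical space, in metres (placeholders).
ground_points = np.array([[-1.75, 7.0], [1.75, 7.0],
                          [-1.75, 12.0], [1.75, 12.0]])
H, _ = cv2.findHomography(pixel_points, ground_points)  # 3x3 target homography
```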
Since a very large number of training images are acquired, but many of them are quite similar to each other, the calibration cost can be reduced by computing homography matrices only for training images that differ substantially and discarding training images that differ little. The number of clusters can be determined according to the offset of the angle of the image acquisition device to be calibrated: the larger the offset, the more clusters; the smaller the offset, the fewer clusters.
In step S303, a homography matrix set may be established according to the target homography matrix and the target pixel position and the corresponding relationship thereof acquired in step S302. Each element in the set includes a target homography matrix and a target pixel location corresponding to the target homography matrix.
In the above embodiment, a candidate set of homography matrices is first generated during the training process using the calibration plate and the reference objects. Then, during calibration, given a new image to be calibrated, the first pixel position of the target point in the image is obtained, this first pixel position is matched against the homography matrix set, and the target homography matrix corresponding to the target pixel position closest to the first pixel position (for example, in terms of Euclidean distance) is found, which completes the calibration. During calibration no reference object needs to be placed, only the calibration plate, which reduces the space required at the calibration site. In addition, since each target homography matrix corresponds to one target pixel position, only the first pixel position of the target point on the calibration plate needs to be obtained during calibration, and the corresponding target homography matrix can then be indexed by this first pixel position, which improves calibration efficiency.
Fig. 5 is a schematic view of a scene layout for establishing a homography matrix set according to an embodiment of the disclosure; a site may be set up outside the production line for establishing the homography matrix set. In this embodiment, the reference objects are cone-shaped markers, the movable platform is a vehicle, and the image acquisition device to be calibrated is a camera. The first row of cone markers is 7 meters from the left front wheel of the vehicle in the longitudinal direction, the longitudinal spacing between rows of cone markers is 5 meters, the lateral spacing between cone markers is 3.5 meters, and the vehicle is centered laterally. The calibration plate is also centered laterally, 3 meters from the left front wheel of the vehicle, and mounted so that its center is 1.5 meters above the ground. When capturing the training images, the pose of the camera, such as its pitch angle, is adjusted so that training images are captured with the camera in different poses. In some embodiments, the vehicle can be driven away from its original position and then returned to it. Since it cannot be guaranteed that a vehicle stops at exactly the same position every time calibration is performed on the production line, this adds some randomness, reduces the error caused by deviations in the camera position while capturing training images, and enriches the samples in the homography matrix set, thereby improving calibration accuracy.
Fig. 6 is a schematic diagram of a calibration scenario arrangement in an embodiment of the present disclosure. At this point, cone markers are no longer needed for collecting calibration data; the calibration plate only needs to be placed 3 meters in front of the vehicle's front wheels, centered laterally, and a small site can be set up on the production line for calibration. When data acquisition and calibration are performed on the production line, each vehicle equipped with a camera to be calibrated can stop at the same position, and a batch of vehicles of the same model is calibrated each time.
The above values can be adjusted according to the actual camera parameters; the goal of the adjustment is to make the calibration plate and the reference objects appear as clearly as possible in the image without sacrificing the covered distance.
This scheme can be applied to autonomous driving and assisted driving scenarios. For example, in an assisted driving scenario, situations such as the following distance being too short or the vehicle deviating from the center of the lane can be detected in time through distance measurement, and corresponding measures can be taken, such as raising an alarm or automatically adjusting the driving route.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict execution order or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
As shown in fig. 7, the present disclosure also provides a calibration apparatus, the apparatus including:
the system comprises a first acquisition module 701, a second acquisition module and a calibration module, wherein the first acquisition module 701 is used for acquiring an image to be processed shot by an image acquisition device to be calibrated, and the image to be processed comprises a calibration plate which is preset relative to the image acquisition device to be calibrated;
a first detecting module 702, configured to detect a first pixel position of a target point on the calibration board on the image to be processed;
a calibration module 703, configured to determine a target pixel position closest to the first pixel position, and determine a homography matrix of the image acquisition device to be calibrated according to a target homography matrix in a homography matrix set corresponding to the target pixel position; the target pixel position corresponding to one target homography matrix is the pixel position of a target point on the calibration plate in a training image shot by a reference image acquisition device in one pose.
In some embodiments, the first detection module comprises: a first detection unit, configured to perform corner detection on the image to be processed and acquire a second pixel position of a corner point on the calibration plate on the image to be processed; and a first calculation unit, configured to calculate the first pixel position of the target point on the image to be processed according to the second pixel position.
In some embodiments, the set of homography matrices is determined using the following modules: the second acquisition module is used for acquiring training images shot by reference image acquisition equipment at different poses, wherein the training images comprise reference objects and calibration plates which are preset relative to the reference image acquisition equipment; the determining module is used for respectively acquiring target pixel positions of target points on the calibration plate on the training images for each training image, and determining a target homography matrix corresponding to the training images according to a second pixel position of the reference object on the training images and a physical position of the reference object in a physical space; and the establishing module is used for establishing the corresponding relation between the target homography matrix under each pose and the target pixel position under the pose to obtain the homography matrix set.
In some embodiments, the determining module comprises: the clustering unit is used for clustering the positions of all target pixels according to a preset clustering radius to obtain a plurality of clusters; a first determining unit, configured to determine a clustering center of each cluster, respectively; an extracting unit, configured to determine, for any one cluster, a target pixel position closest to a cluster center of the cluster from the target pixel positions of the cluster as a candidate target pixel position of the cluster, and extract a candidate training image corresponding to the candidate target pixel position of each cluster from the training images; a second determining unit, configured to determine a target homography matrix corresponding to each candidate training image according to a second pixel position of the reference object on each candidate training image and a physical position of the reference object in a physical space, respectively; the establishing module is used for: and establishing a corresponding relation between each candidate target homography matrix and each candidate target pixel position to obtain the homography matrix set.
In some embodiments, the particular pixel location is determined using the following modules: the second detection module is used for carrying out angular point detection on the image to be detected and acquiring a third pixel position of an angular point on the calibration plate on the image to be detected; the image to be detected comprises an image of the calibration plate; the calculating module is used for calculating the specific pixel position of the target point on the image to be detected according to the third pixel position; when the image to be detected is the image to be processed, the specific pixel position is the first pixel position; and when the image to be detected is the training image, the specific pixel position is the target pixel position.
In some embodiments, the second detection module comprises: the detection unit is used for detecting the confidence coefficient that each pixel point in the image to be detected or the interested region on the image to be detected is an angular point; and the third determining unit is used for determining pixel points with the confidence degrees larger than a preset confidence degree threshold value as third pixel positions of corner points on the calibration board on the image to be detected, and the confidence degree threshold value is set according to the illumination intensity when the image to be detected is collected.
In some embodiments, the training images include a first training image taken by the reference image capture device at a first preset position, and a second training image taken by the reference image capture device after leaving and returning to the first preset position.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiments of the apparatus of the present specification can be applied to a computer device, such as a server or a terminal device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the apparatus, as a logical device, is formed by the processor of the computer device in which it is located reading the corresponding computer program instructions from the non-volatile memory into the memory for execution. From a hardware perspective, fig. 8 shows the hardware structure of the computer device in which the apparatus of this specification is located; in addition to the processor 801, the memory 802, the network interface 803, and the non-volatile memory 804 shown in fig. 8, the server or electronic device in which the apparatus is located may also include other hardware according to the actual functions of the computer device, which will not be described again here.
Accordingly, the embodiments of the present disclosure also provide a computer storage medium on which a computer program is stored, which when executed by a processor implements the method according to any of the embodiments.
Accordingly, embodiments of the present disclosure also provide a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method according to any of the embodiments when executing the program.
As shown in fig. 9, an embodiment of the present disclosure further provides a calibration system, where the system includes:
the computer device 901 as described in any of the above embodiments; and
the image acquisition equipment 902 to be calibrated is in communication connection with the computer equipment 901;
the image capturing device 902 to be calibrated is configured to capture an image to be processed, where the image to be processed includes a calibration board preset with respect to the image capturing device to be calibrated.
The image capture device 902 to be calibrated may be mounted on a movable platform and used to measure the distance between the movable platform and surrounding objects. In this embodiment, a calibration plate may be set at a preset position A in advance, the movable platform is driven to a preset position B, and the image to be processed, which includes the calibration plate, is then captured by the image capture device 902 to be calibrated. The image capture device 902 to be calibrated may send the captured image to be processed to the computer device 901, which obtains the homography matrix of the image capture device 902 to be calibrated. The manner in which the computer device 901 obtains the homography matrix is the same as in the above method embodiments and is not described again here.
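Under these assumptions, calibrating the image capture device 902 to be calibrated reduces to a nearest-neighbour lookup in the pre-established homography matrix set, as the sketch below illustrates. The list-of-pairs representation of the set and the plain Euclidean distance are illustrative assumptions.

```python
import numpy as np

def calibrate(first_pixel_position, homography_set):
    """Look up the homography matrix for the image acquisition device to be calibrated.

    first_pixel_position: (u, v) of the target point on the image to be processed.
    homography_set: list of (target_pixel_position, homography) pairs, where
                    target_pixel_position is (u, v) and homography is a 3x3 array.
    Returns the target homography whose target pixel position is closest to the input.
    """
    p = np.asarray(first_pixel_position, dtype=np.float64)
    positions = np.asarray([pos for pos, _ in homography_set], dtype=np.float64)
    nearest = int(np.argmin(np.linalg.norm(positions - p, axis=1)))
    return homography_set[nearest][1]
```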
In some embodiments, the system further comprises: a reference image capture device 903 communicatively connected to the computer device 901; the reference image capture device 903 is configured to capture a group of training images under different poses, the training images including a reference object and the calibration plate that are preset relative to the reference image capture device 903.
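For completeness, one entry of the homography matrix set could be built from a training image roughly as sketched below. The use of cv2.findHomography with RANSAC over the reference object's pixel/physical correspondences, and the assumption that the reference points lie on a common plane (e.g. the ground plane), are illustrative choices rather than requirements of the embodiments; the target pixel position is assumed to have been obtained beforehand, for instance with the corner-detection sketch above.

```python
import cv2
import numpy as np

def build_set_entry(target_pixel_position, ref_pixel_positions, ref_physical_positions):
    """Produce one (target pixel position, target homography matrix) pair for the set.

    target_pixel_position:  (u, v) of the target point on the calibration plate in the training image.
    ref_pixel_positions:    (K, 2) second pixel positions of the reference object on the image.
    ref_physical_positions: (K, 2) corresponding physical positions of the reference object
                            in physical space (assumed here to lie on a plane).
    """
    # Homography mapping the reference object's pixel positions to its physical positions.
    H, _ = cv2.findHomography(
        np.asarray(ref_pixel_positions, dtype=np.float32),
        np.asarray(ref_physical_positions, dtype=np.float32),
        method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return tuple(target_pixel_position), H

# Building the whole set over training images captured at different poses might then look like:
# homography_set = [build_set_entry(p, pix, phys) for p, pix, phys in training_samples]
```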
The present disclosure may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.

Claims (15)

1. A calibration method, characterized in that the method comprises:
acquiring an image to be processed shot by image acquisition equipment to be calibrated, wherein the image to be processed comprises a calibration plate which is preset relative to the image acquisition equipment to be calibrated;
detecting a first pixel position of a target point on the calibration plate on the image to be processed;
determining a target pixel position closest to the first pixel position, and determining a homography matrix of the image acquisition equipment to be calibrated according to a target homography matrix corresponding to the target pixel position in a homography matrix set; the target pixel position corresponding to one target homography matrix is the pixel position of a target point on the calibration plate in a training image shot by a reference image acquisition device in one pose.
2. The method of claim 1, wherein the set of homography matrices is determined by:
acquiring training images shot by reference image acquisition equipment at different poses, wherein the training images comprise a reference object and a calibration plate which are preset relative to the reference image acquisition equipment;
for each training image, respectively acquiring a target pixel position of a target point on the calibration plate on the training image, and determining a target homography matrix corresponding to the training image according to a second pixel position of the reference object on the training image and a physical position of the reference object in a physical space;
and establishing a corresponding relation between the target homography matrix under each pose and the target pixel position under the pose to obtain the homography matrix set.
3. The method of claim 2, wherein determining the target homography matrix corresponding to the training image according to the second pixel position of the reference object on the training image and the physical position of the reference object in the physical space comprises:
clustering the positions of all target pixels according to a preset clustering radius to obtain a plurality of clusters;
respectively determining the clustering centers of all clusters;
for any cluster, determining a target pixel position closest to the cluster center of the cluster from the target pixel positions of the cluster as a candidate target pixel position of the cluster, and extracting candidate training images corresponding to the candidate target pixel positions of all clusters from the training images;
determining a target homography matrix corresponding to each candidate training image according to the second pixel position of the reference object on each candidate training image and the physical position of the reference object in a physical space;
the establishing of the corresponding relationship between the target homography matrix under each pose and the target pixel position under the pose to obtain the homography matrix set comprises the following steps:
and establishing a corresponding relation between each candidate target homography matrix and each candidate target pixel position to obtain the homography matrix set.
4. The method according to any one of claims 1 to 3, wherein a specific pixel position is determined using the following steps:
performing corner point detection on an image to be detected, and acquiring a third pixel position of a corner point on the calibration plate on the image to be detected; the image to be detected comprises an image of the calibration plate;
calculating the specific pixel position of the target point on the image to be detected according to the third pixel position;
when the image to be detected is the image to be processed, the specific pixel position is the first pixel position; and when the image to be detected is the training image, the specific pixel position is the target pixel position.
5. The method according to claim 4, wherein performing corner detection on the image to be detected to obtain a third pixel position of a corner on the calibration board on the image to be detected comprises:
detecting the confidence that each pixel point in the image to be detected, or in a region of interest on the image to be detected, is a corner point;
and determining pixel points whose confidence is greater than a preset confidence threshold as third pixel positions of corner points on the calibration plate on the image to be detected, wherein the confidence threshold is set according to the illumination intensity when the image to be detected is collected.
6. The method according to any one of claims 1 to 5, wherein the training images comprise a first training image taken by the reference image capturing device at a first preset position, and a second training image taken by the reference image capturing device after leaving and returning to the first preset position.
7. A calibration apparatus, characterized in that the apparatus comprises:
the device comprises a first acquisition module, a second acquisition module and a calibration module, wherein the first acquisition module is used for acquiring an image to be processed shot by image acquisition equipment to be calibrated, and the image to be processed comprises a calibration plate which is preset relative to the image acquisition equipment to be calibrated;
the first detection module is used for detecting a first pixel position of a target point on the calibration plate on the image to be processed;
the calibration module is used for determining a target pixel position closest to the first pixel position and determining a homography matrix of the image acquisition equipment to be calibrated according to a target homography matrix corresponding to the target pixel position in a homography matrix set; the target pixel position corresponding to one target homography matrix is the pixel position of a target point on the calibration plate in a training image shot by a reference image acquisition device in one pose.
8. The apparatus of claim 7, wherein the set of homography matrices is determined using:
the second acquisition module is used for acquiring training images shot by reference image acquisition equipment at different poses, wherein the training images comprise reference objects and calibration plates which are preset relative to the reference image acquisition equipment;
the determining module is used for respectively acquiring target pixel positions of target points on the calibration plate on the training images for each training image, and determining a target homography matrix corresponding to the training images according to a second pixel position of the reference object on the training images and a physical position of the reference object in a physical space;
and the establishing module is used for establishing the corresponding relation between the target homography matrix under each pose and the target pixel position under the pose to obtain the homography matrix set.
9. The apparatus of claim 8, wherein the determining module comprises:
the clustering unit is used for clustering the positions of all target pixels according to a preset clustering radius to obtain a plurality of clusters;
a first determining unit, configured to determine a clustering center of each cluster, respectively;
an extracting unit, configured to determine, for any one cluster, a target pixel position closest to a cluster center of the cluster from the target pixel positions of the cluster as a candidate target pixel position of the cluster, and extract a candidate training image corresponding to the candidate target pixel position of each cluster from the training images;
a second determining unit, configured to determine a target homography matrix corresponding to each candidate training image according to a second pixel position of the reference object on each candidate training image and a physical position of the reference object in a physical space, respectively;
the establishing module is used for:
and establishing a corresponding relation between each candidate target homography matrix and each candidate target pixel position to obtain the homography matrix set.
10. The apparatus of any one of claims 7 to 9, wherein a specific pixel position is determined using:
the second detection module is used for performing corner point detection on the image to be detected and acquiring a third pixel position of a corner point on the calibration plate on the image to be detected; the image to be detected comprises an image of the calibration plate;
the calculating module is used for calculating the specific pixel position of the target point on the image to be detected according to the third pixel position;
when the image to be detected is the image to be processed, the specific pixel position is the first pixel position; and when the image to be detected is the training image, the specific pixel position is the target pixel position.
11. The apparatus of claim 10, wherein the second detection module comprises:
the detection unit is used for detecting the confidence that each pixel point in the image to be detected, or in the region of interest on the image to be detected, is a corner point;
and the third determining unit is used for determining pixel points whose confidence is greater than a preset confidence threshold as third pixel positions of corner points on the calibration plate on the image to be detected, wherein the confidence threshold is set according to the illumination intensity when the image to be detected is collected.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 6 when executing the program.
13. A calibration system, the system comprising:
the computer device of claim 12; and
the image acquisition equipment to be calibrated is in communication connection with the computer equipment;
the image acquisition equipment to be calibrated is used for shooting an image to be processed, and the image to be processed comprises a calibration plate which is preset relative to the image acquisition equipment to be calibrated.
14. The system of claim 13, further comprising:
a reference image acquisition device in communication with the computer device;
the reference image acquisition equipment is used for shooting training images under different poses, and the training images comprise reference objects and the calibration plates which are preset relative to the reference image acquisition equipment.
15. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 6.
CN202010367126.8A 2020-04-30 2020-04-30 Calibration method, device and system Pending CN111598956A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010367126.8A CN111598956A (en) 2020-04-30 2020-04-30 Calibration method, device and system

Publications (1)

Publication Number Publication Date
CN111598956A true CN111598956A (en) 2020-08-28

Family

ID=72186872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010367126.8A Pending CN111598956A (en) 2020-04-30 2020-04-30 Calibration method, device and system

Country Status (1)

Country Link
CN (1) CN111598956A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570899A (en) * 2015-10-08 2017-04-19 腾讯科技(深圳)有限公司 Target object detection method and device
WO2018196391A1 (en) * 2017-04-28 2018-11-01 华为技术有限公司 Method and device for calibrating external parameters of vehicle-mounted camera
CN108053375A (en) * 2017-12-06 2018-05-18 智车优行科技(北京)有限公司 Image data correction method, device and its automobile
CN111047649A (en) * 2018-10-15 2020-04-21 华东交通大学 Camera high-precision calibration method based on optimal polarization angle
CN110580723A (en) * 2019-07-05 2019-12-17 成都智明达电子股份有限公司 method for carrying out accurate positioning by utilizing deep learning and computer vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
武彦林; 全燕鸣; 郭清达: "Research on calibration of structured light systems based on local homography matrices" (基于局部单应性矩阵的结构光系统标定研究) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052900A (en) * 2021-04-23 2021-06-29 深圳市商汤科技有限公司 Position determination method and device, electronic equipment and storage medium
WO2022222379A1 (en) * 2021-04-23 2022-10-27 深圳市商汤科技有限公司 Position determination method and apparatus, electronic device and storage medium
WO2023028939A1 (en) * 2021-09-02 2023-03-09 深圳市大疆创新科技有限公司 Information acquisition system, calibration method and apparatus therefor, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination