CN111986265A - Method, apparatus, electronic device and medium for calibrating camera - Google Patents


Publication number
CN111986265A
Authority
CN
China
Prior art keywords: image, value, vehicle, data, parameter value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010773033.5A
Other languages
Chinese (zh)
Other versions
CN111986265B (en)
Inventor
李帅杰
骆沛
倪凯
Current Assignee
Heduo Technology Guangzhou Co ltd
Original Assignee
HoloMatic Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by HoloMatic Technology Beijing Co Ltd filed Critical HoloMatic Technology Beijing Co Ltd
Priority to CN202010773033.5A priority Critical patent/CN111986265B/en
Publication of CN111986265A publication Critical patent/CN111986265A/en
Application granted granted Critical
Publication of CN111986265B publication Critical patent/CN111986265B/en
Legal status: Active

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis > G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/10 Segmentation; Edge detection > G06T7/136 involving thresholding
    • G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/10 Image acquisition modality > G06T2207/10004 Still image; Photographic image


Abstract

Embodiments of the present disclosure disclose methods, apparatuses, electronic devices, and computer-readable media for calibrating a camera. One embodiment of the method comprises: acquiring an image set captured by a vehicle-mounted camera, and acquiring the vehicle sensor data corresponding to each image in the image set; determining the external parameter value of the vehicle-mounted camera as an initial parameter value; segmenting the image set and the data set to generate at least one image subset and at least one data subset, respectively; optimizing the initial parameter value to generate optimized parameter values; selecting a predetermined number of optimized parameter values from each optimized parameter value sequence and determining their variance value; selecting one optimized parameter value from the predetermined number of optimized parameter values corresponding to a sufficiently small variance value as the calibration external parameter value of the vehicle-mounted camera; and calibrating the external parameter value of the vehicle-mounted camera with this calibration external parameter value to obtain the calibrated external parameter value of the vehicle-mounted camera. This avoids the dependence of traditional calibration methods on calibration objects and on settings where such objects can be placed.

Description

Method, apparatus, electronic device and medium for calibrating camera
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for calibrating a camera.
Background
Camera calibration is a method for optimizing camera parameters. Traditional camera calibration methods require a calibration object of known size: a correspondence is established between points with known coordinates on the calibration object and their image points, and the camera parameters are optimized with an associated algorithm. Such methods always need a calibration object during calibration, and the manufacturing precision of the object affects the calibration result. Moreover, some settings are unsuitable for placing a calibration object, which limits the applicability of these methods.
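To make the point-correspondence idea concrete, the sketch below projects a known calibration-object point through an assumed pinhole model; the intrinsic matrix K, pose (R, t), and point are illustrative values, not taken from this disclosure:

```python
# Pinhole projection: map a known 3-D calibration-object point to an image
# point. A traditional method collects many such (P, p) pairs and solves
# for the camera parameters that best explain them.

def project(K, R, t, P):
    """Project world point P into the image via p = [K (R P + t)]."""
    # Camera-frame coordinates: X = R @ P + t
    X = [sum(R[i][j] * P[j] for j in range(3)) + t[i] for i in range(3)]
    # Apply intrinsics row by row, then the perspective divide by X[2]
    u = (K[0][0] * X[0] + K[0][1] * X[1] + K[0][2] * X[2]) / X[2]
    v = (K[1][0] * X[0] + K[1][1] * X[1] + K[1][2] * X[2]) / X[2]
    return (u, v)

# Illustrative intrinsics (fx, fy, cx, cy) and an identity rotation
K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 2.0]  # calibration object 2 m in front of the camera

p = project(K, R, t, [0.1, 0.0, 0.0])  # a corner 10 cm right of the origin
```

A solver would adjust K, R, and t until `project` reproduces the observed image points for all correspondences.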
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose methods, apparatuses, devices and computer readable media for calibrating a camera to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method for calibrating a camera, the method comprising: acquiring an image set captured by a vehicle-mounted camera, and acquiring the data of a vehicle sensor corresponding to each image in the image set to obtain a data set, wherein the data includes a vehicle-mounted camera external parameter value; determining the vehicle-mounted camera external parameter value as an initial parameter value; segmenting the image set and the data set according to a predetermined number threshold to generate at least one image subset and at least one data subset, respectively; generating an image data sequence based on each image subset of the at least one image subset and the data subset corresponding to that image subset in the at least one data subset, wherein each image datum is a binary group comprising an image and the corresponding data of the image in the data subset; for each generated image data sequence, inputting each image datum in the sequence into an optimization objective function to optimize the initial parameter value and generate an optimized parameter value, obtaining an optimized parameter value sequence; for each obtained optimized parameter value sequence, selecting a predetermined number of optimized parameter values from the sequence and determining the variance value of those optimized parameter values, obtaining a variance value sequence; in response to at least one variance value in the variance value sequence being smaller than a predetermined variance threshold, selecting one optimized parameter value from the predetermined number of optimized parameter values corresponding to that variance value as the vehicle-mounted camera calibration external parameter value; and calibrating the external parameter value of the vehicle-mounted camera with the vehicle-mounted camera calibration external parameter value to obtain the calibrated external parameter value of the vehicle-mounted camera.
In a second aspect, some embodiments of the present disclosure provide an apparatus for calibrating a camera, the apparatus comprising: an acquisition unit configured to acquire an image set captured by a vehicle-mounted camera and a data set of a vehicle sensor corresponding to each image, wherein the data of the vehicle sensor includes a vehicle-mounted camera external parameter value; a determination unit configured to determine the vehicle-mounted camera external parameter value as an initial parameter value; a first generation unit configured to segment the image set and the data set according to a predetermined number threshold to generate at least one image subset and at least one data subset, respectively; a second generation unit configured to generate an image data sequence based on each image subset of the at least one image subset and the data subset corresponding to that image subset in the at least one data subset, wherein each image datum is a binary group comprising an image and the corresponding data of the image in the data subset; a third generation unit configured to, for each generated image data sequence, input each image datum in the sequence into an optimization objective function to optimize the initial parameter value and generate an optimized parameter value, obtaining an optimized parameter value sequence; a fourth generation unit configured to, for each obtained optimized parameter value sequence, select a predetermined number of optimized parameter values from the sequence and determine their variance value, obtaining a variance value sequence; a selection unit configured to, in response to at least one variance value in the variance value sequence being smaller than a predetermined variance threshold, select one optimized parameter value from the predetermined number of optimized parameter values corresponding to that variance value as the vehicle-mounted camera calibration external parameter value; and a calibration unit configured to calibrate the external parameter value of the vehicle-mounted camera with the vehicle-mounted camera calibration external parameter value to obtain the calibrated external parameter value of the vehicle-mounted camera.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; a storage device having one or more programs stored thereon; and a camera configured to capture images; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described in the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method as described in the first aspect.
One of the above embodiments of the present disclosure has the following advantageous effects. First, an image set captured by the vehicle-mounted camera and a data set of the vehicle sensor are acquired, and the vehicle-mounted camera external parameter value in the vehicle sensor data is determined as the initial parameter value. Then, the image set and the data set are each segmented using a predetermined number threshold to generate at least one image subset and at least one data subset, which avoids overloading the system by processing a large amount of data at once. Next, an image data sequence is generated from each image subset and its corresponding data subset, fixing the correspondence between each image and each data item. After that, an optimized parameter value sequence is generated for each image data sequence using the optimization objective function. In addition, for each obtained optimized parameter value sequence, a predetermined number of optimized parameter values are selected and their variance value determined, yielding a variance value sequence. Finally, in response to at least one variance value in the sequence being smaller than a predetermined variance threshold, one optimized parameter value among those corresponding to that variance value is selected as the vehicle-mounted camera calibration external parameter value, and the external parameter value of the vehicle-mounted camera is calibrated with it to obtain the calibrated external parameter value.
In this way, the calibrated camera parameter value is obtained by processing only images captured by the vehicle-mounted camera and the data values of the vehicle sensors, removing the traditional methods' dependence on calibration objects and on settings where such objects can be placed.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of an application scenario for a method of calibrating a camera according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a method for calibrating a camera according to the present disclosure;
FIG. 3 is a schematic block diagram of some embodiments of an apparatus for calibrating a camera according to the present disclosure;
FIG. 4 is a schematic block diagram of an electronic device suitable for implementing a method for calibrating a camera according to the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, which are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a" and "an" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will appreciate that they should be read as "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of one application scenario for a method of calibrating a camera according to some embodiments of the present disclosure.
In the application scenario of fig. 1, the computing device 101 may first acquire an image set 1021 captured by a vehicle-mounted camera and a data set 1022 of vehicle sensors. The camera external parameter values in the data set 1022 are taken as the initial parameter values 103. Then, the image set 1021 is divided into a plurality of image groups 104 using a predetermined number threshold. Next, each image in an image group 104 and its corresponding vehicle sensor data in the data set 1022 are combined as an image data binary group, yielding the image data sequence 105. Further, based on the image data sequence 105, an optimized parameter value sequence 106 is obtained using the initial parameter values 103 and the optimization objective function f(x). The variance values of a predetermined number of optimized parameter values in each optimized parameter value sequence 106 are then determined, giving a variance value sequence 107. When a variance value in the variance value sequence 107 is smaller than the predetermined variance threshold, one of the predetermined number of optimized parameter values in the sequence 106 corresponding to that variance value is taken as the calibrated camera parameter value 108, giving the calibrated external parameter value of the vehicle-mounted camera.
The computing device 101 may be hardware or software. When it is hardware, it may be implemented as a distributed cluster of multiple servers or terminal devices, or as a single server or terminal device. When it is software, it may be installed in any of the hardware devices listed above and implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to FIG. 2, a flow 200 of some embodiments of a method for calibrating a camera in accordance with the present disclosure is shown. The method for calibrating a camera comprises the following steps:
step 201, acquiring an image set shot by a vehicle-mounted camera and a data set of a vehicle sensor corresponding to each image.
In some embodiments, the execution subject of the method for calibrating a camera (e.g., the computing device 101 shown in fig. 1) may acquire the image set captured by the vehicle-mounted camera and the data set of the vehicle sensor from the vehicle sensor through a wired or wireless connection. The data of the vehicle sensor includes the vehicle-mounted camera external parameter value. Specifically, each image in the image set corresponds one-to-one to a data item in the data set.
In some optional implementations of some embodiments, the vehicle sensor data includes, but is not limited to, at least one of: the vehicle-mounted camera internal parameter matrix, the vehicle acceleration value, the vehicle angular velocity value, the measurement data of an inertial measurement unit IMU (inertial measurement unit), the coordinate data of an inertial measurement unit coordinate system and the coordinate data of a world coordinate system.
Step 202, determining the vehicle-mounted camera external parameter value as an initial parameter value.
In some embodiments, the execution subject may take the vehicle-mounted camera external parameter value as the initial parameter value. Specifically, the camera external parameters comprise a rotation matrix and a translation vector. For example, the rotation matrix may take the form

R = | r11 r12 r13 |
    | r21 r22 r23 |
    | r31 r32 r33 |

and the translation vector may take the form

t = (t1, t2, t3)^T
step 203, segmenting the image set and the data set respectively according to a predetermined quantity threshold to generate at least one image subset and at least one data subset.
In some embodiments, the execution subject may segment the image set and the data set using a predetermined number threshold to generate at least one image subset and at least one data subset. The predetermined number threshold may be set manually; for example, it may be set to 1000, in which case the images in the image set are divided into image subsets of 1000 images each. Images remaining in the image set that fall short of the predetermined number threshold may be left unused.
As an example, the predetermined number threshold may be 100, and the number of images in the image set may be 1234, with 1234 corresponding data items in the data set. After segmentation there are then 12 image subsets and 12 data subsets.
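The segmentation in step 203 amounts to fixed-size chunking that discards any remainder below the threshold; a minimal sketch (function and variable names are illustrative):

```python
def split_into_subsets(items, threshold):
    """Split items into consecutive subsets of exactly `threshold` elements;
    a trailing remainder shorter than the threshold is dropped."""
    return [items[i:i + threshold]
            for i in range(0, len(items) - threshold + 1, threshold)]

images = list(range(1, 1235))            # an image set of 1234 images
subsets = split_into_subsets(images, 100)
# 12 subsets of 100 images each; the last 34 images are not used
```

The same function applied to the data set with the same threshold yields data subsets that stay aligned with the image subsets.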
In some optional implementations of some embodiments, the executing subject may segment the image set and the data set according to a predetermined number threshold to generate at least one corresponding image subset and at least one corresponding data subset, respectively, including the following steps:
the method comprises the following steps of firstly, dividing the image set to obtain at least one image subset, wherein the number of images of each image subset in the at least one image subset is equal to the preset number threshold.
As an example, the predetermined image number threshold may be: "100". The number of images in the image set may be 1111, and then a total of 11 image subsets after segmentation may be:
[1,2,3,...,100],[101,102,103,...,200],...,[1001,1002,1003,...,1100]。
In a second step, the data set is segmented to obtain at least one data subset, wherein the number of data items in each data subset equals the predetermined number threshold.
As an example, the 11 segmented data subsets may be: [a1, a2, a3, ..., a100], [a101, a102, a103, ..., a200], ..., [a1001, a1002, a1003, ..., a1100].
Step 204, generating an image data sequence based on each image subset of the at least one image subset and the data subset corresponding to the image subset of the at least one data subset, wherein the image data is a binary group, and the binary group includes an image and the corresponding data of the image in the data subset.
In some embodiments, the execution subject may generate an image data sequence based on each image subset of the at least one image subset and the data subset corresponding to that image subset in the at least one data subset, where each image datum is a binary group comprising an image and the corresponding data of the image in the data subset. Specifically, each image and the data returned by the vehicle sensor for that image are combined as a binary group, resulting in an image data binary group.
As an example, there may be 11 image subsets and 11 data subsets. The 11 image subsets may be:
[1, 2, 3, ..., 100], [101, 102, 103, ..., 200], ..., [1001, 1002, 1003, ..., 1100].
The 11 data subsets may be:
[a1, a2, a3, ..., a100], [a101, a102, a103, ..., a200], ..., [a1001, a1002, a1003, ..., a1100].
The resulting image data binary groups may then be:
[1:a1, 2:a2, 3:a3, ..., 100:a100],
[101:a101, 102:a102, 103:a103, ..., 200:a200], ...,
[1001:a1001, 1002:a1002, 1003:a1003, ..., 1100:a1100].
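Pairing each image subset with its aligned data subset, as in the example above, is a per-subset zip; a sketch using small stand-in subsets (the names and values are illustrative):

```python
def make_image_data_sequences(image_subsets, data_subsets):
    """For each (image subset, data subset) pair, form a sequence of
    (image, data) binary groups preserving the one-to-one correspondence."""
    return [list(zip(imgs, data))
            for imgs, data in zip(image_subsets, data_subsets)]

image_subsets = [[1, 2, 3], [4, 5, 6]]
data_subsets = [["a1", "a2", "a3"], ["a4", "a5", "a6"]]
sequences = make_image_data_sequences(image_subsets, data_subsets)
# sequences[0] pairs image 1 with a1, image 2 with a2, image 3 with a3
```

Each resulting sequence is one "image data sequence" fed to the optimization objective function.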
step 205, inputting the image data sequence into an optimization objective function to optimize the initial parameter values to generate an optimized parameter value sequence.
In some embodiments, for each generated image data sequence, the execution subject may input each image datum in the sequence into the optimization objective function to optimize the initial parameter value and generate an optimized parameter value, obtaining an optimized parameter value sequence. Specifically, the image data of each image data sequence are input into the optimization objective function to optimize the initial parameter value and obtain the optimized parameter values.
In some optional implementations of some embodiments, the executing subject may input each image data in the image data sequence to an optimization objective function so as to optimize an initial parameter value in the image data to generate an optimized parameter value, and obtain an optimized parameter value sequence, including the following steps:
in a first step, a list of optimized parameter values and a predetermined threshold of optimized parameter quantities are determined.
As an example, the list of optimized parameter values may be an empty list: []. The predetermined optimization parameter quantity threshold may be: 10.
and a second step of executing the following generation steps based on the initial parameter values and the image data sequence:
sequentially selecting a plurality of image data from the image data sequence as image data to be optimized;
inputting the image data to be optimized and the initial parameter value into an optimization objective function to generate an optimization parameter value;
adding the optimized parameter values to an optimized parameter value list;
in response to the fact that the number of parameter values in the parameter value list is equal to the preset optimization parameter number threshold, taking the parameter value list as an optimization parameter value sequence and outputting the optimization parameter value sequence;
and in response to the number of parameter values in the parameter value list being smaller than the predetermined optimization parameter number threshold, taking the optimized parameter value as the initial parameter value and the image data sequence with the selected image data removed as the image data sequence, and executing the generation steps again. Specifically, an empty optimized parameter value list is first determined; each optimized parameter value is added to the list, and when the number of optimized parameter values in the list reaches the predetermined optimization parameter number threshold, the list is returned.
As an example, the initial parameter values may be: 5. the selected parameter value sequence to be optimized may be: [1: a1,2: a2,3: a 3. The selected images may be: "1,2". The data values may be: "a 1, a 2". The optimized parameter values obtained by using the optimization objective function may then be: "4.99". The parameter value list after adding the optimized parameter value may be: [4.99]. Since the number of optimized parameter values in the parameter value list is 1, which is less than the predetermined number threshold 10. Then, the parameter values will be optimized: "4.99" as the initial parameter value. Removing the selected image data binary group to obtain an image data sequence: [3: a 3., 100: a100] the above-described generating step is performed again as a sequence of image data.
As another example, the initial parameter value may by now be 4.91, and the image data sequence to be optimized may be [10:a10, 11:a11, 12:a12, ...]. The selected images may be 10 and 11, with data a10 and a11. The optimized parameter value obtained with the optimization objective function may then be 4.90, and the parameter value list after adding it may be [4.99, 4.98, 4.97, 4.96, 4.95, 4.94, 4.93, 4.92, 4.91, 4.90]. Since the number of optimized parameter values in the list is 10, equal to the predetermined number threshold of 10, the parameter value list is taken as the optimized parameter value sequence [4.99, 4.98, 4.97, 4.96, 4.95, 4.94, 4.93, 4.92, 4.91, 4.90] and output.
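The generation steps above can be sketched as a loop in which each optimization output seeds the next round. The stub `optimize` below (which merely nudges the value, reproducing the worked example's numbers) stands in for the objective-function minimization and is purely illustrative:

```python
def optimize(image_data_batch, initial_value):
    """Stub for the optimization objective function: nudge the parameter.
    In the actual method this would minimize the reprojection error."""
    return round(initial_value - 0.01, 2)

def run_generation_step(image_data, initial_value, batch_size, count_threshold):
    """Repeatedly optimize on successive batches of image data until the
    optimized parameter value list reaches the predetermined count threshold."""
    optimized_values = []
    value = initial_value
    while len(optimized_values) < count_threshold and image_data:
        batch, image_data = image_data[:batch_size], image_data[batch_size:]
        value = optimize(batch, value)   # optimized value is produced...
        optimized_values.append(value)   # ...appended to the list...
    return optimized_values              # ...and seeds the next round

pairs = [(n, f"a{n}") for n in range(1, 101)]  # 100 image data binary groups
seq = run_generation_step(pairs, initial_value=5.0, batch_size=2, count_threshold=10)
# seq reproduces the running example: 4.99 down to 4.90
```

When the threshold is not yet met but data remains, the loop continues exactly as the "executing the generation steps again" branch describes.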
In some optional implementations of some embodiments, the executing subject may input the image data to be optimized and the initial parameter value to an optimization objective function to generate an optimized parameter value, including the following steps:
First, corner point detection is performed on the image in the image data to be optimized to generate an image corner point sequence. Specifically, the corner detection method may be the FAST (Features from Accelerated Segment Test) corner detection algorithm, which is used to detect the corner points of the image in the image data to be optimized and generate the image corner point sequence.
As an example, the corner point sequence may be: [b1, b2, b3, ...].
Second, an inertial measurement unit coordinate system and a world coordinate system are established using the coordinate data of the inertial measurement unit coordinate system and the coordinate data of the world coordinate system. Specifically, the inertial measurement unit coordinate system is constructed from the obtained coordinate data of that system, and the world coordinate system from the coordinate data of the world coordinate system. For example, in the inertial measurement unit coordinate system the vertical axis may point upward, the origin may be the inertial measurement unit, the horizontal and vertical axes may follow the right-hand rule, and the forward direction may be that of the target vehicle's motion. The world coordinate system may be the inertial measurement unit coordinate system corresponding to the first image captured by the vehicle-mounted camera, and may remain fixed thereafter.
Third, based on the vehicle acceleration and vehicle angular velocity corresponding to the image data to be optimized, a vehicle position and attitude value corresponding to the image in the inertial measurement unit coordinate system is generated using the inertial measurement unit. Specifically, from the vehicle acceleration and vehicle angular velocity in the data corresponding to an image, together with the inertial measurement unit coordinate system, the vehicle position and attitude value corresponding to that image in the inertial measurement unit coordinate system can be obtained.
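A crude planar sketch of how acceleration and angular velocity readings could be integrated into a position and heading; this is generic dead reckoning for illustration, not the disclosure's IMU propagation (which operates in full 3-D):

```python
import math

def integrate_pose(accel, gyro, dt):
    """Euler-integrate body-frame forward acceleration (m/s^2) and yaw rate
    (rad/s) samples into a planar position (x, y) and heading (yaw)."""
    x = y = yaw = v = 0.0
    for a, w in zip(accel, gyro):
        yaw += w * dt              # accumulate heading from angular velocity
        v += a * dt                # accumulate speed from acceleration
        x += v * math.cos(yaw) * dt
        y += v * math.sin(yaw) * dt
    return x, y, yaw

# One second of constant 1 m/s^2 forward acceleration, no turning
x, y, yaw = integrate_pose([1.0] * 10, [0.0] * 10, dt=0.1)
```

Straight-line motion keeps y and yaw at zero while x advances, matching the intuition that pose values accumulate from the raw sensor stream.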
Fourth, a projection point of each image corner point in the image corner point sequence in the world coordinate system is determined, a number pair of the image corresponding to the corner point and the projection point is generated, and a set of number pairs of images and projection points is obtained. Specifically, an image may correspond to several projection points, each being the projection of one of its corner points in the world coordinate system.
As an example, the projection points of the corner points of images a, b and c in the world coordinate system may be: [[a1, a2, a3], [b1, b2, b3], [c1, c2, c3], ...]. The set of number pairs of images and projection points may then be: [[a:a1, a:a2, a:a3], [b:b1, b:b2, b:b3], [c:c1, c:c2, c:c3], ...].
Thirdly, optimizing the initial parameter values by using the following optimization objective function based on the image corner sequence set, the vehicle-mounted camera internal reference matrix, the vehicle position and attitude values and the initial parameter values to generate optimized parameter values:
$$
J=\mathop{\arg\min}_{{}^{i}E_{c}}\sum_{(m,n)\in D}e_{m}^{n},\qquad
e_{m}^{n}=\left\|{}^{w}p_{m}^{n}-f\!\left(K\left({}^{w}T_{i_{n}}\;{}^{i}E_{c}\right)^{-1}{}^{w}P_{m}\right)\right\|
$$

where i denotes the inertial measurement unit coordinate system; c denotes the vehicle-mounted camera coordinate system; m denotes the corner number; n denotes the image number; w denotes the world coordinate system; P denotes a corner coordinate value; p denotes a projection point coordinate value; T denotes a position-attitude value; E denotes an external parameter matrix; e denotes a projection error value. ${}^{w}P_{m}$ denotes the coordinate value of the mth corner in the world coordinate system. ${}^{w}p_{m}^{n}$ denotes the coordinate value, in the nth image, of the projection point of the mth corner. ${}^{w}T_{i_{n}}$ denotes the vehicle position-attitude value, in the world coordinate system, of the inertial measurement unit coordinate system corresponding to the nth image. ${}^{i}E_{c}$ denotes the initial external parameter matrix between the vehicle-mounted camera coordinate system and the inertial measurement unit coordinate system. K denotes the vehicle-mounted camera internal reference matrix. $e_{m}^{n}$ denotes the projection error value of the mth corner in the nth image. D denotes the set of number pairs formed by the projection point corresponding to each corner in an image and that image. J denotes the optimization index. $(\,)^{-1}$ denotes the matrix inverse. $f(\cdot)$ denotes a dimension reduction operation.
${}^{i}E_{c}=[{}^{i}R_{c}\;\;{}^{i}t_{c}]$, where ${}^{i}R_{c}$ is the rotation matrix, expressed with Euler angles, between the inertial measurement unit coordinate system and the vehicle-mounted camera coordinate system to be optimized, and ${}^{i}t_{c}$ is the translation matrix between the inertial measurement unit coordinate system and the vehicle-mounted camera coordinate system. Specifically, an Euler-angle representation is a set of three independent angular parameters used to uniquely determine the orientation of a rigid body rotating about a fixed point. Parameterizing the attitude part of the external parameters with Euler angles allows targeted step-by-step optimization and overcomes the problem that the attitude fails to converge when optimizing data collected in high-speed scenes.
$f(a)$ denotes the dimension reduction of $a$. For a three-dimensional vector $a=(a_{1},a_{2},a_{3})^{\mathrm{T}}$:

$$
f(a)=\left[\frac{a}{a_{3}}\right]_{1:2}
$$

that is, each component of $a$ is divided by the third component and the first two components are kept, which realizes the dimension reduction. The initial external parameter matrix carries the initial parameter values described above. Each piece of image data to be optimized is input to the optimization objective function to optimize the initial parameters, and the resulting optimization index reflects the optimization effect of the initial parameters.
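A minimal numerical sketch of the dimension reduction operation and the per-corner projection error, assuming a standard homogeneous pinhole model with NumPy; the function names and the identity matrices used in the demonstration are illustrative assumptions, not calibration results:

```python
import numpy as np

def dim_reduce(a):
    """f(a): divide by the third component and keep the first two."""
    a = np.asarray(a, dtype=float)
    return a[:2] / a[2]

def projection_error(p_obs, P_world, K, T_w_i, E_i_c):
    """e^n_m = p^n_m - f(K (T_w_i . E_i_c)^(-1) P_world).

    p_obs   : observed 2-D corner in the n-th image
    P_world : 3-D corner in the world coordinate system
    K       : 3x3 camera internal reference matrix
    T_w_i   : 4x4 pose of the IMU frame in the world frame
    E_i_c   : 4x4 external parameter matrix (IMU to camera)
    """
    T_w_c = T_w_i @ E_i_c                       # camera pose in world frame
    P_cam = np.linalg.inv(T_w_c) @ np.append(P_world, 1.0)
    p_proj = dim_reduce(K @ P_cam[:3])          # project and dimension-reduce
    return np.asarray(p_obs, dtype=float) - p_proj

# Identity pose/extrinsics and unit intrinsics: a point at (0.2, -0.1, 2)
# projects to (0.1, -0.05), so the error against that observation is zero.
K = np.eye(3)
e = projection_error([0.1, -0.05], [0.2, -0.1, 2.0], K, np.eye(4), np.eye(4))
print(np.linalg.norm(e))  # 0.0
```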
Step 206, for each optimized parameter value sequence in the obtained optimized parameter value sequences, selecting a predetermined number of optimized parameter values from the optimized parameter value sequences and determining variance values of the predetermined number of optimized parameter values to obtain variance value sequences.
In some embodiments, the execution subject may, for each optimized parameter value sequence in the obtained optimized parameter value sequences, select a predetermined number of optimized parameter values from the sequence and determine their variance value, obtaining a variance value sequence. Specifically, the predetermined number may be an artificially set number that does not exceed the total number of optimized parameter values in the optimized parameter value sequence. For example, if there are 100 optimized parameter values in the sequence, the predetermined number may be 10. The first 10 optimized parameter values (the 1st to the 10th) are selected in order to calculate a variance, then the next 10 (the 2nd to the 11th) are selected to calculate a variance, and so on, yielding a variance value sequence.
As an example, the sequence of optimized parameter values may be: [5.01,4.98,4.97,4.95,4.945]. Then, the predetermined number of optimization parameters may be: 3. there may be 3 variance values, and the variance value sequence may be: [0.00043,0.00023,0.00017].
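The sliding-window computation of step 206 can be sketched as follows; note that the example variances above match the sample variance (divisor n-1), so this sketch uses Python's `statistics.variance`, which does the same:

```python
from statistics import variance

def sliding_variances(values, window):
    """Sample variance over each length-`window` slice of `values`."""
    if window > len(values):
        raise ValueError("window must not exceed the number of values")
    return [variance(values[i:i + window])
            for i in range(len(values) - window + 1)]

# The optimized parameter values and window size from the example above:
optimized = [5.01, 4.98, 4.97, 4.95, 4.945]
vars_ = sliding_variances(optimized, 3)
# Three windows over five values -> three variance values.
print([round(v, 5) for v in vars_])
```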
Step 207, in response to that at least one variance value in the variance value sequence is smaller than a predetermined variance threshold, selecting an optimized parameter value from a predetermined number of optimized parameters corresponding to the variance value as a calibration external parameter value of the vehicle-mounted camera.
In some embodiments, the execution subject may, in response to at least one variance value in the variance value sequence being smaller than a predetermined variance threshold, select an optimized parameter value from the predetermined number of optimized parameter values corresponding to that variance value as the vehicle-mounted camera calibration external parameter value. Specifically, when at least one variance value in the variance value sequence is smaller than the predetermined threshold, one such variance value is taken, and one optimized parameter value is selected from the predetermined number of optimized parameter values corresponding to it as the vehicle-mounted camera calibration external parameter value.
In some optional implementations of some embodiments, in response to at least one variance value in the variance value sequence being smaller than a predetermined variance threshold, the executing subject selecting an optimized parameter value from a predetermined number of optimized parameters corresponding to the variance value as an on-vehicle camera calibration external parameter value, may include the following steps:
first, a predetermined variance threshold is obtained.
As an example, the predetermined variance threshold may be: 0.0002.
and secondly, in response to at least one variance value in the variance value sequence being smaller than the predetermined variance threshold, selecting an optimized parameter value from the predetermined number of optimized parameter values corresponding to that variance value as the calibration external parameter value of the vehicle-mounted camera.
As an example, the variance value sequence may be: [0.00043, 0.00023, 0.00017], and the predetermined variance threshold may be 0.0002. The variance value 0.00017 is smaller than the predetermined variance threshold 0.0002, so at least one variance value satisfies the condition. One of the predetermined number of optimized parameter values [4.97, 4.95, 4.945] corresponding to the variance value 0.00017 is then determined as the optimal parameter value and used as the calibration external parameter value of the vehicle-mounted camera.
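A sketch of the convergence check and selection in step 207; which optimized value to take from the converged window is not fixed by the text, so taking the newest (last) value of that window is an assumption of this illustration, as is pairing each variance with the window starting at the same index:

```python
def select_calibrated_value(optimized, variances, window, threshold):
    """Return a value from the first window whose variance is below
    `threshold`, or None when no variance value is small enough."""
    for i, var in enumerate(variances):
        if var < threshold:
            converged_window = optimized[i:i + window]
            return converged_window[-1]   # assumption: take the newest value
    return None

# Values from the example above: the third variance 0.00017 < 0.0002,
# so selection happens in the window [4.97, 4.95, 4.945].
optimized = [5.01, 4.98, 4.97, 4.95, 4.945]
variances = [0.00043, 0.00023, 0.00017]
value = select_calibrated_value(optimized, variances, 3, 0.0002)
print(value)  # 4.945
```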
And 208, calibrating the external reference value of the vehicle-mounted camera by using the external reference value calibrated by the vehicle-mounted camera to obtain the calibrated external reference value of the vehicle-mounted camera.
In some embodiments, the execution main body may calibrate the external parameter value of the vehicle-mounted camera by using the vehicle-mounted camera calibration external parameter value to obtain the calibrated external parameter value of the vehicle-mounted camera.
In some optional implementations of some embodiments, the execution main body may calibrate the external parameter value of the vehicle-mounted camera by using the vehicle-mounted camera calibration external parameter value to obtain the calibrated external parameter value of the vehicle-mounted camera. Specifically, the external parameter value of the vehicle-mounted camera is replaced with the vehicle-mounted camera calibration external parameter value, yielding the calibrated external parameter value. For example, the original external parameter values of the vehicle-mounted camera may be a rotation matrix R and a translation matrix t, and the vehicle-mounted camera calibration external parameter values may be a rotation matrix R1 and a translation matrix t1. Replacing the external parameter matrix of the vehicle-mounted camera then yields the calibrated external parameter values R1 and t1. Thus, the camera external parameter calibration is completed.
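Step 208 amounts to replacing the stored extrinsic pair (R, t) with the calibrated pair (R1, t1); a minimal sketch, with dictionary storage and the function name as illustrative assumptions:

```python
import numpy as np

def apply_calibration(camera_extrinsics, calibrated_extrinsics):
    """Replace the camera's stored (R, t) with the calibrated (R1, t1)."""
    camera_extrinsics.update(calibrated_extrinsics)
    return camera_extrinsics

# Original extrinsics R, t and calibrated extrinsics R1, t1 (toy values).
camera = {"R": np.eye(3), "t": np.zeros(3)}
calibrated = {"R": np.eye(3), "t": np.array([0.1, 0.0, 0.2])}
camera = apply_calibration(camera, calibrated)
print(camera["t"])  # [0.1 0.  0.2]
```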
One of the above-described embodiments of the present disclosure has the following beneficial effects: an image set shot by the vehicle-mounted camera and a data set of the vehicle sensor are acquired, and the vehicle-mounted camera external parameter value in the data of the vehicle sensor is determined as an initial parameter value. Then, the image set and the data set are each grouped using a predetermined number threshold to obtain a data group corresponding to each image group, which avoids system overload caused by a large data volume in subsequent processing. Then, binary groups of each image group and each data group are obtained, determining the correspondence between each image and its data. Thus, an optimization objective function is used to generate an optimized parameter value sequence for each group of image-data binary groups. In addition, the variance values of a predetermined number of optimized parameter values in each optimized parameter value sequence are determined to obtain a variance value sequence. Finally, each variance value in the variance value sequence is compared with a predetermined variance threshold to determine a variance value smaller than the threshold. One of the predetermined number of optimized parameter values corresponding to that variance value is then used as the calibration external parameter value of the vehicle-mounted camera, yielding the calibrated vehicle-mounted camera external parameter. In this way, calibrated camera external parameters are obtained by processing the images shot by the vehicle-mounted camera and the data of the vehicle sensors, which solves the problem that traditional calibration methods are limited by calibration objects and occasions.
With further reference to fig. 3, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of an apparatus for calibrating a camera, which correspond to the method embodiments described above for fig. 2, and which may be applied in particular to various electronic devices.
As shown in fig. 3, the apparatus 300 for calibrating a camera of some embodiments includes: an acquisition unit 301, a determination unit 302, a first generation unit 303, a second generation unit 304, a third generation unit 305, a fourth generation unit 306, a selection unit 307, and a calibration unit 308. The acquiring unit 301 is configured to acquire an image set captured by the vehicle-mounted camera and a data set of a vehicle sensor corresponding to each image, where the data of the vehicle sensor includes vehicle-mounted camera external parameters. A determination unit 302 configured to determine the vehicle-mounted camera external parameter value as an initial parameter value. A first generating unit 303 configured to segment the image set and the data set according to a predetermined number threshold to generate at least one image subset and at least one data subset, respectively. A second generating unit 304 configured to generate an image data sequence based on each of the at least one image subset and each of the at least one data subset, wherein the image data is a binary group comprising an image and the data of the image in the corresponding data subset. A third generating unit 305 configured to input the image data sequence to an optimization objective function to optimize the initial parameter values to generate an optimized parameter value sequence. A fourth generating unit 306 configured to determine a variance value between each pair of adjacent optimized parameter values in the optimized parameter value sequence, resulting in a variance value sequence. A selecting unit 307 configured to select one optimized parameter value from the optimized parameter value sequence set as the vehicle-mounted camera calibration external parameter value in response to each variance value of the selected predetermined number of variance values in the variance value sequence being smaller than a predetermined variance threshold value.
And a calibration unit 308 configured to calibrate the external reference value of the vehicle-mounted camera by using the external reference value of the vehicle-mounted camera to obtain a calibrated external reference value of the vehicle-mounted camera.
It will be understood that the units described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 300 and the units included therein, and are not described herein again.
Referring now to fig. 4, a schematic diagram of an electronic device (e.g., terminal device 101 of fig. 1)400 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 4 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus for calibrating a camera; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: and acquiring an image set shot by the vehicle-mounted camera and a data set of a vehicle sensor corresponding to each image, wherein the data of the vehicle sensor comprises vehicle-mounted camera external parameter values. And determining the vehicle-mounted camera external parameter value as an initial parameter value. The image set and the data set are segmented, respectively, according to a predetermined number threshold to generate at least one image subset and at least one data subset. Generating a sequence of image data based on each of the at least one image subset and each of the at least one data subset, wherein the image data is a binary set comprising an image and data of the image in the corresponding data subset. And inputting the image data sequence into an optimization objective function to optimize the initial parameter values so as to generate an optimized parameter value sequence. And determining a variance value between each pair of adjacent optimization parameter values in the optimization parameter value sequence to obtain a variance value sequence. And selecting one optimized parameter value from the optimized parameter value sequence set as a calibration external parameter value of the vehicle-mounted camera in response to the fact that each variance value in the variance value sequence with the preset number is smaller than a preset variance threshold value. And calibrating the external reference value of the vehicle-mounted camera by using the external reference value calibrated by the vehicle-mounted camera to obtain the calibrated external reference value of the vehicle-mounted camera.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C + +, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor comprises an acquisition unit, a determination unit, a first generation unit, a second generation unit, a third generation unit, a fourth generation unit, a selection unit and a calibration unit. Where the names of these units do not in some cases constitute a limitation of the unit itself, the acquisition unit may also be described as a "unit acquiring a vehicle image set and an image data set", for example.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A method for calibrating a camera, comprising:
acquiring an image set shot by a vehicle-mounted camera, and acquiring data of a vehicle sensor corresponding to each image in the image set to obtain a data set, wherein the data comprises vehicle-mounted camera external parameter values;
determining the vehicle-mounted camera external parameter value as an initial parameter value;
segmenting the image set and the data set to generate at least one image subset and at least one data subset, respectively, according to a predetermined number threshold;
generating a sequence of image data based on each of the at least one subset of images and a subset of data of the at least one subset of data corresponding to the subset of images, wherein the image data is a binary set comprising an image and the corresponding data of the image in the subset of data;
for each image data sequence in the generated image data sequences, inputting each image data in the image data sequences into an optimization objective function so as to optimize initial parameter values in the image data to generate optimized parameter values, and obtaining optimized parameter value sequences;
for each optimized parameter value sequence in the obtained optimized parameter value sequences, selecting a preset number of optimized parameter values from the optimized parameter value sequences and determining variance values of the preset number of optimized parameter values to obtain variance value sequences;
in response to that at least one variance value in the variance value sequence is smaller than a preset variance threshold value, selecting an optimization parameter value from a preset number of optimization parameters corresponding to the variance value as a vehicle-mounted camera calibration external parameter value;
and calibrating the external reference value of the vehicle-mounted camera by using the external reference value calibrated by the vehicle-mounted camera to obtain the calibrated external reference value of the vehicle-mounted camera.
2. The method of claim 1, wherein said segmenting said image set and said data set to generate at least one image subset and at least one data subset, respectively, according to a predetermined number threshold comprises:
dividing the image set to obtain at least one image subset, wherein the number of images of each image subset in the at least one image subset is equal to the predetermined number threshold;
and dividing the data set to obtain at least one data subset, wherein the data quantity of each data subset in the at least one data subset is equal to the predetermined quantity threshold value.
3. The method of claim 2, wherein the data of the vehicle sensors further comprises at least one of: the vehicle-mounted camera internal reference matrix, the vehicle acceleration value, the vehicle angular velocity value, the measurement data of the inertial measurement unit, the coordinate data of the coordinate system of the inertial measurement unit and the coordinate data of the world coordinate system.
4. The method of claim 3, wherein the inputting each image data in the sequence of image data to an optimization objective function to optimize initial parameter values in the image data generates an optimized parameter value, resulting in a sequence of optimized parameter values, comprises:
determining an optimization parameter value list and a preset optimization parameter quantity threshold;
based on the initial parameter values and the sequence of image data, performing the generating steps of:
sequentially selecting a plurality of image data from the image data sequence as image data to be optimized;
inputting the image data to be optimized and the initial parameter value into an optimization objective function to generate an optimized parameter value;
adding the optimized parameter value to an optimized parameter value list;
in response to the number of parameter values in the parameter value list being equal to the predetermined optimization parameter number threshold, taking the parameter value list as an optimization parameter value sequence and outputting the optimization parameter value sequence;
and in response to the fact that the number of parameter values in the parameter value list is smaller than a preset optimization parameter number threshold value, taking the optimization parameter values as initial parameter values, and taking the image data sequence without the selected image data as an image data sequence to execute the generating step again.
5. The method of claim 4, wherein said selecting an optimization parameter value from a predetermined number of optimization parameters corresponding to the variance value as an in-vehicle camera calibration external parameter value in response to at least one variance value in the sequence of variance values being less than a predetermined variance threshold comprises:
acquiring a preset variance threshold;
and in response to that at least one variance value in the variance value sequence is smaller than a preset variance threshold value, selecting an optimization parameter value from a preset number of optimization parameters corresponding to the variance value as a vehicle-mounted camera calibration external parameter value.
6. The method of claim 5, wherein the calibrating the vehicle-mounted camera external reference value by using the vehicle-mounted camera external reference value to obtain a calibrated vehicle-mounted camera external reference value comprises:
and replacing the external parameter value of the vehicle-mounted camera with the vehicle-mounted camera calibration external parameter value to obtain the calibrated external parameter of the vehicle-mounted camera.
7. The method of claim 6, wherein the inputting the image data to be optimized and initial parameter values to an optimization objective function to generate optimized parameter values comprises:
performing corner detection on the image in the image data to be optimized to generate an image corner sequence;
constructing an inertial measurement unit coordinate system and a world coordinate system by using the coordinate data of the inertial measurement unit coordinate system and the coordinate data of the world coordinate system;
based on the vehicle acceleration and the vehicle angular velocity corresponding to the image data to be optimized, generating a vehicle position attitude value corresponding to an image in an inertial measurement unit coordinate system by using the inertial measurement unit;
determining a projection point of each image corner point in the image corner point sequence in the world coordinate system, generating an image corresponding to the image corner point and a number pair of the projection point, and obtaining a number pair set of the image and the projection point;
based on the image corner sequence set, the vehicle-mounted camera internal reference matrix, the vehicle position and posture value and the initial parameter value, optimizing the initial parameter value by using the following optimization objective function to generate an optimized parameter value:
$$
J=\operatorname*{arg\,min}_{{}^{i}E_{c}}\;\sum_{(m,n)\in D}\left\|e_{m}^{n}\right\|^{2},\qquad
e_{m}^{n}=p_{m}^{n}-\pi\!\left(K\left({}^{w}T_{i_{n}}\cdot{}^{i}E_{c}\right)^{-1}{}^{w}P_{m}\right)
$$

wherein i represents the inertial measurement unit coordinate system; c represents the vehicle-mounted camera coordinate system; m represents the corner point number; n represents the image number; w represents the world coordinate system; P represents a corner point coordinate value; p represents a projection point coordinate value; T represents a position and attitude matrix; E represents an initial external parameter matrix; e represents a projection error value; ${}^{w}P_{m}$ represents the coordinate value of the m-th corner point in the world coordinate system; $p_{m}^{n}$ represents the coordinate value, in the n-th image, of the projection point of the m-th corner point in the world coordinate system; ${}^{w}T_{i_{n}}$ represents the vehicle position and attitude matrix, in the world coordinate system, of the inertial measurement unit coordinate system corresponding to the n-th image; ${}^{i}E_{c}$ represents the initial external parameter matrix between the vehicle-mounted camera coordinate system and the inertial measurement unit coordinate system; K represents the internal reference matrix of the vehicle-mounted camera; $e_{m}^{n}$ represents the projection error value of the m-th corner point in the n-th image; D represents the set of number pairs consisting of each corner point in the image and its corresponding projection point; J represents the optimization index; ${}^{i}E_{c}=[{}^{i}R_{c}\mid{}^{i}t_{c}]$, wherein ${}^{i}R_{c}$ represents the rotation matrix, expressed in Euler angles, between the inertial measurement unit coordinate system and the vehicle-mounted camera coordinate system to be optimized, and ${}^{i}t_{c}$ represents the translation matrix between the inertial measurement unit coordinate system and the vehicle-mounted camera coordinate system; $(\cdot)^{-1}$ represents the matrix inverse; and $\pi(\cdot)$ represents a dimension reduction operation.
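The projection-error objective of claim 7 can be sketched in a few lines of code. This is an illustrative reading of the claim, not the patent's implementation: the helper names, the rigid-transform inverse shortcut, and the example matrices in the usage note are assumptions.

```python
# Sketch of the claim-7 reprojection cost J = sum ||e_m^n||^2 with
# e_m^n = p_m^n - pi(K · (w_T_in · i_E_c)^{-1} · w_P_m).
# All matrices are row-major lists of lists; pure-Python, no dependencies.

def mat_mul(A, B):
    """Row-major matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_vec(M, v):
    """Matrix-vector product."""
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

def invert_rigid(T):
    """Inverse of a 4x4 rigid transform [R|t], i.e. [R^T | -R^T t]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]
    mt = mat_vec(Rt, [-T[0][3], -T[1][3], -T[2][3]])
    return [Rt[i] + [mt[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

def dim_reduce(x):
    """pi(.): perspective division from homogeneous to pixel coordinates."""
    return (x[0] / x[2], x[1] / x[2])

def projection_error(K, T_wi, E_ic, P_w, p_obs):
    """e_m^n for one corner: observed point minus its projection."""
    X_c = mat_vec(invert_rigid(mat_mul(T_wi, E_ic)), P_w)  # world -> camera
    u, v = dim_reduce(mat_vec(K, X_c[:3]))
    return (p_obs[0] - u, p_obs[1] - v)

def reprojection_cost(K, poses, E_ic, D):
    """J: sum of squared projection errors over the number-pair set D,
    where D holds (world point, observed point, image index) triples."""
    total = 0.0
    for P_w, p_obs, n in D:
        ex, ey = projection_error(K, poses[n], E_ic, P_w, p_obs)
        total += ex * ex + ey * ey
    return total
```

An optimizer would vary the six degrees of freedom of `E_ic` (Euler angles plus translation) to drive `reprojection_cost` to a minimum; with identity poses and extrinsics, a world point at depth 2 projects exactly to the observed pixel and the cost is zero.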
8. An apparatus for calibrating a camera, comprising:
an acquisition unit configured to acquire an image set captured by an on-vehicle camera and a data set of a vehicle sensor corresponding to each image, wherein the data of the vehicle sensor includes an on-vehicle camera external parameter;
a determination unit configured to determine the vehicle-mounted camera external parameter value as an initial parameter value;
a first generation unit configured to segment the image set and the data set according to a predetermined number threshold to generate at least one image subset and at least one data subset, respectively;
a second generating unit configured to generate a sequence of image data based on each image subset of the at least one image subset and the data subset of the at least one data subset corresponding to the image subset, wherein each image data is a two-tuple comprising an image and the data corresponding to the image in the data subset;
a third generating unit configured to, for each of the generated image data sequences, input each of the image data in the image data sequence to an optimization objective function so as to optimize an initial parameter value in the image data to generate an optimized parameter value, resulting in an optimized parameter value sequence;
a fourth generating unit, configured to, for each of the obtained optimized parameter value sequences, select a predetermined number of optimized parameter values from the optimized parameter value sequences and determine variance values of the predetermined number of optimized parameter values to obtain a variance value sequence;
a selecting unit configured to, in response to at least one variance value in the variance value sequence being smaller than a predetermined variance threshold, select one optimized parameter value from the predetermined number of optimized parameter values corresponding to the variance value as a vehicle-mounted camera calibration external parameter value;
and a calibration unit configured to calibrate the vehicle-mounted camera external parameter value by using the vehicle-mounted camera calibration external parameter value to obtain a calibrated vehicle-mounted camera external parameter value.
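The variance-gated selection carried out by the fourth generating unit and the selecting unit of claim 8 can be sketched as follows. The window size, the threshold, and the choice of the last value in a stable window are illustrative assumptions; the claim leaves the exact selection rule open.

```python
# Sketch: scan windows of optimized parameter values; when a window's
# variance drops below the threshold, the estimate is considered
# converged and one value from that window is taken as the calibration
# external parameter value.

import statistics

def select_calibration_value(optimized_values, window, variance_threshold):
    """Return one optimized value from the first low-variance window,
    or None when no window is stable enough."""
    for i in range(len(optimized_values) - window + 1):
        chunk = optimized_values[i:i + window]
        if statistics.pvariance(chunk) < variance_threshold:
            return chunk[-1]  # illustrative choice of representative
    return None
```

For example, a sequence that starts noisy and then settles near 1.2 yields a value from the settled window, while a sequence that never stabilizes yields `None`, signalling that calibration should continue with more image data.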
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
a camera configured to capture an image;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202010773033.5A 2020-08-04 2020-08-04 Method, apparatus, electronic device and medium for calibrating camera Active CN111986265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010773033.5A CN111986265B (en) 2020-08-04 2020-08-04 Method, apparatus, electronic device and medium for calibrating camera

Publications (2)

Publication Number Publication Date
CN111986265A true CN111986265A (en) 2020-11-24
CN111986265B CN111986265B (en) 2021-10-12

Family

ID=73446007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010773033.5A Active CN111986265B (en) 2020-08-04 2020-08-04 Method, apparatus, electronic device and medium for calibrating camera

Country Status (1)

Country Link
CN (1) CN111986265B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120154577A1 (en) * 2010-12-15 2012-06-21 Canon Kabushiki Kaisha Image processing apparatus, method of controlling the same, distance measurement apparatus, and storage medium
CN102663727A (en) * 2012-03-09 2012-09-12 Tianjin University Method for calibrating parameters by dividing regions in a camera based on CMM moving target
CN103198524A (en) * 2013-04-27 2013-07-10 Tsinghua University Three-dimensional reconstruction method for large-scale outdoor scene
CN105023291A (en) * 2015-05-22 2015-11-04 Yanshan University Criminal scene reconstructing apparatus and method based on stereoscopic vision
CN106204574A (en) * 2016-07-07 2016-12-07 Lanzhou University of Technology Camera pose self-calibrating method based on objective plane motion feature
CN107256570A (en) * 2017-06-12 2017-10-17 Zhejiang Sci-Tech University A kind of external parameters of cameras scaling method based on optimum estimation
CN107850436A (en) * 2015-05-23 2018-03-27 SZ DJI Technology Co., Ltd. Merged using the sensor of inertial sensor and imaging sensor
CN109903341A (en) * 2019-01-25 2019-06-18 Southeast University Join dynamic self-calibration method outside a kind of vehicle-mounted vidicon
CN110007293A (en) * 2019-04-24 2019-07-12 HoloMatic Technology (Beijing) Co., Ltd. The online calibration method of the multi-thread beam laser radar in field end
CN110880189A (en) * 2018-09-06 2020-03-13 Sunny Optical (Zhejiang) Research Institute Co., Ltd. Combined calibration method and combined calibration device thereof and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHU, HUAN: "Research on Road Detection Algorithms Based on Monocular Vision", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565683A (en) * 2022-03-02 2022-05-31 HoloMatic Technology (Beijing) Co., Ltd. Precision determination method, device, equipment, medium and product
CN114565683B (en) * 2022-03-02 2022-09-27 HoloMatic Technology (Beijing) Co., Ltd. Precision determination method, device, equipment, medium and product
CN114708336A (en) * 2022-03-21 2022-07-05 HoloMatic Technology (Beijing) Co., Ltd. Multi-camera online calibration method and device, electronic equipment and computer readable medium
CN114708336B (en) * 2022-03-21 2023-02-17 HoloMatic Technology (Beijing) Co., Ltd. Multi-camera online calibration method and device, electronic equipment and computer readable medium
CN116691694A (en) * 2023-05-29 2023-09-05 HoloMatic Technology (Beijing) Co., Ltd. Parking space information generation method, device, electronic equipment and computer readable medium

Also Published As

Publication number Publication date
CN111986265B (en) 2021-10-12

Similar Documents

Publication Publication Date Title
CN111986265B (en) Method, apparatus, electronic device and medium for calibrating camera
CN112733820B (en) Obstacle information generation method and device, electronic equipment and computer readable medium
CN112348029B (en) Local map adjusting method, device, equipment and computer readable medium
CN113255619B (en) Lane line recognition and positioning method, electronic device, and computer-readable medium
CN113327318B (en) Image display method, image display device, electronic equipment and computer readable medium
CN112328731B (en) Vehicle lane level positioning method and device, electronic equipment and computer readable medium
CN113607185B (en) Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN114399589B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN112561990B (en) Positioning information generation method, device, equipment and computer readable medium
CN113787522A (en) Hand-eye calibration method for eliminating accumulated errors of mechanical arm
CN112183627A (en) Method for generating predicted density map network and vehicle annual inspection mark number detection method
WO2022252873A1 (en) Calibration and verification method and apparatus for intrinsic camera parameter, device, and medium
CN112598731B (en) Vehicle positioning method and device, electronic equipment and computer readable medium
CN111965383B (en) Vehicle speed information generation method and device, electronic equipment and computer readable medium
CN114419298A (en) Virtual object generation method, device, equipment and storage medium
CN112132909A (en) Parameter acquisition method and device, media data processing method and storage medium
CN115170674B (en) Camera principal point calibration method, device, equipment and medium based on single image
CN114842448B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN116630436B (en) Camera external parameter correction method, camera external parameter correction device, electronic equipment and computer readable medium
CN114399555B (en) Data online calibration method and device, electronic equipment and computer readable medium
CN113204661B (en) Real-time road condition updating method, electronic equipment and computer readable medium
CN112991542B (en) House three-dimensional reconstruction method and device and electronic equipment
CN112330711B (en) Model generation method, information extraction device and electronic equipment
CN114332379A (en) Three-dimensional model construction method and device and mobile terminal
CN117308929A (en) Method, device, equipment and medium for determining posture of optical positioner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 201, 202, 301, No. 56-4 Fenghuang South Road, Huadu District, Guangzhou City, Guangdong Province, 510806

Patentee after: Heduo Technology (Guangzhou) Co.,Ltd.

Address before: 100095 101-15, 3rd floor, building 9, yard 55, zique Road, Haidian District, Beijing

Patentee before: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd.