CN112465920A - Vision sensor calibration method and device - Google Patents
- Publication number: CN112465920A
- Application number: CN202011424274.5A
- Authority: CN (China)
- Prior art keywords: coefficient, calibration, image, mapping, sub
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Abstract
The invention discloses a visual sensor calibration method and device. The method comprises: when the vehicle runs to a preset calibration position, acquiring an image of a preset area through a preset visual sensor, the preset area comprising a plurality of sub-regions, each sub-region containing a plurality of targets, and all targets lying in the same horizontal plane; acquiring the image position of each sub-region in the image; calculating a first mapping coefficient between each image position and the corresponding sub-region; calculating a second mapping coefficient between the image and the preset area; and determining the calibration coefficient of the visual sensor according to the plurality of first mapping coefficients and the second mapping coefficient. The invention weakens the influence of the intrinsic and extrinsic parameters of the visual sensor on calibration, determines the calibration coefficient through the mapping relation, and improves the calibration accuracy of the visual sensor.
Description
Technical Field
The invention relates to the technical field of calibration, in particular to a method and a device for calibrating a visual sensor.
Background
In scenarios that rely on a vision system for assisted operation, such as automatic driving and automatic parking, the pixel position of a target in an image often needs to be mapped to a position in the world coordinate system so that the motion track of the vehicle can be planned from the image. The vision sensor therefore needs to be calibrated to obtain the mapping relation between the image and the world.
A common prior-art method calibrates the extrinsic parameters of the vision sensor with one or more targets and, combined with the intrinsic parameters of the camera, computes the relation between the pixel coordinates and the world coordinates of the whole image. However, this calibration method is sensitive to the accuracy of the intrinsic and extrinsic calibration and often cannot achieve high precision.
Disclosure of Invention
The invention provides a visual sensor calibration method and device to solve, at least in part, the technical problem that existing calibration methods are sensitive to the accuracy of intrinsic and extrinsic parameter calibration and often cannot achieve high precision.
The invention provides a method for calibrating a visual sensor, wherein the visual sensor is arranged on a vehicle; the method comprises the following steps:
when the vehicle runs to a preset calibration position, acquiring an image in a preset area through a preset vision sensor; the preset area comprises a plurality of sub-areas; a plurality of targets are arranged in the subarea; the targets are positioned in the same horizontal plane;
acquiring the image position of the subregion in the image;
calculating a first mapping coefficient between the image position and the corresponding sub-region;
calculating a second mapping coefficient of the image and the preset area;
and determining a calibration coefficient of the vision sensor according to the plurality of first mapping coefficients and the second mapping coefficient.
Optionally, the image location comprises pixel coordinates of the target in the sub-region; the step of calculating a first mapping coefficient between the image position and the corresponding sub-region comprises:
acquiring a first world position coordinate of the preset calibration position;
obtaining second world position coordinates of the target within the sub-region;
and calculating a first mapping coefficient between the image position and the corresponding sub-region according to the first world position coordinate, the pixel coordinate of the target in the sub-region and the corresponding second world position coordinate.
Optionally, the step of calculating a second mapping coefficient between the image and the preset region includes:
and calculating a second mapping coefficient of the image and the preset area according to the first world position coordinate, the pixel coordinate of the target in the preset area and the corresponding second world position coordinate.
Optionally, the step of determining a calibration coefficient of the vision sensor according to a plurality of the first mapping coefficients and the second mapping coefficients includes:
calculating a first deviation value between every two first mapping coefficients and a second deviation value between the second mapping coefficient and each first mapping coefficient;
determining error mapping coefficients based on the first deviation values and the second deviation values, and removing the error mapping coefficients to obtain a plurality of approximate calibration coefficients;
and determining the approximate calibration coefficient with the largest repetition number as the calibration coefficient.
The invention also provides a visual sensor calibration device, wherein the visual sensor is arranged on the vehicle; the device comprises:
the image acquisition module is used for acquiring images in a preset area through a preset vision sensor when the vehicle runs to a preset calibration position; the preset area comprises a plurality of sub-areas; a plurality of targets are arranged in the subarea; the targets are positioned in the same horizontal plane;
the image position acquisition module is used for acquiring the image position of the subregion in the image;
a first mapping coefficient calculation module for calculating a first mapping coefficient between the image position and the corresponding sub-region;
the second mapping coefficient calculation module is used for calculating a second mapping coefficient of the image and the preset area;
and the calibration coefficient determining module is used for determining the calibration coefficient of the visual sensor according to the plurality of first mapping coefficients and the second mapping coefficient.
Optionally, the image location comprises pixel coordinates of the target in the sub-region; the first mapping coefficient calculation module includes:
the first world position coordinate obtaining submodule is used for obtaining a first world position coordinate of the preset calibration position;
a second world position coordinate acquisition submodule for acquiring second world position coordinates of the target in the sub-region;
and the first mapping coefficient calculation submodule is used for calculating a first mapping coefficient between the image position and the corresponding sub-region according to the first world position coordinate, the pixel coordinate of the target in the sub-region and the corresponding second world position coordinate.
Optionally, the second mapping coefficient calculating module includes:
and the second mapping coefficient calculation submodule is used for calculating a second mapping coefficient of the image and the preset area according to the first world position coordinate, the pixel coordinate of the target in the preset area and the corresponding second world position coordinate.
Optionally, the calibration coefficient determining module includes:
the deviation value calculation sub-module is used for calculating a first deviation value between every two first mapping coefficients and a second deviation value between the second mapping coefficient and each first mapping coefficient;
an approximate calibration coefficient determining submodule, configured to determine error mapping coefficients based on the first deviation values and the second deviation values, and to remove the error mapping coefficients to obtain a plurality of approximate calibration coefficients;
and the calibration coefficient determining submodule is used for determining the approximate calibration coefficient with the largest repetition times as the calibration coefficient.
The invention also provides an electronic device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute, according to instructions in the program code, the visual sensor calibration method described in any one of the above.
The present invention also provides a computer readable storage medium for storing program code for performing a method of visual sensor calibration as described in any one of the above.
According to the technical scheme, the invention has the following advantages. A plurality of targets lying in the same horizontal plane are arranged in a preset area, and an image of the preset area is captured from a preset calibration position. From the position information of each sub-region, both in the image and in the preset area, a first mapping coefficient between each sub-region and its corresponding position in the image is calculated, together with a second mapping coefficient between the whole image and the whole preset area; the calibration coefficient of the visual sensor is then determined from the first mapping coefficients and the second mapping coefficient. This weakens the influence of the intrinsic and extrinsic parameters of the visual sensor on calibration, determines the calibration coefficient through the mapping relation, and improves the calibration accuracy of the visual sensor.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart illustrating steps of a method for calibrating a vision sensor according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a calibration scenario provided in an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps of a method for calibrating a vision sensor according to another embodiment of the present invention;
fig. 4 is a block diagram of a visual sensor calibration apparatus according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a visual sensor calibration method and device, which are used to solve, at least in part, the technical problem that existing calibration methods are sensitive to the accuracy of intrinsic and extrinsic parameter calibration and often cannot achieve high precision.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a method for calibrating a vision sensor according to an embodiment of the present invention.
The invention provides a calibration method of a visual sensor, which specifically comprises the following steps:
A visual sensor is a device that acquires image information of the external environment using optical elements and an imaging device; its performance is usually described by image resolution. The accuracy of a visual sensor depends not only on its resolution but also on its distance to the measured object: the farther the sensor is from the object, the poorer the absolute positional accuracy. The visual sensor may be a laser scanner, a linear- or area-array CCD camera, a TV camera, a digital camera, etc. The embodiment of the present invention does not specifically limit the choice of visual sensor.
It should be noted that the vision sensor in the embodiment of the present invention is mounted on the vehicle body. The mounting position is not limited, but once mounted, its position, angle, and parameters are fixed; the sensor cannot be moved or deflected. This ensures that the calibration coefficient of the visual sensor does not change and avoids repeated recalibration. The number of vision sensors is likewise not limited, and those skilled in the art may choose according to actual application needs; for example, a vision sensor may be mounted at symmetrical positions on the left and right sides of the vehicle body to acquire image data of the left front and right front of the vehicle, respectively.
Step 101: when the vehicle runs to the preset calibration position, the vehicle-mounted system acquires an image of the preset area through the vision sensor.
It should be noted that, in order to avoid the influence of the difference in the horizontal heights of the targets on the shooting angles of the vision sensors, all the targets and the vehicles involved in the embodiment of the present invention are in the same horizontal plane.
In one example, the targets in the preset area are positioned equidistantly, and, in view of the effect of viewing angle on visual imaging, the targets in each sub-region are not collinear. At least four targets are used; any four targets that are not on the same straight line can form a sub-region for correcting the viewing angle of the image.
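The non-collinearity requirement above can be checked mechanically. The following Python sketch (the function names and the triangle-area test are illustrative choices, not part of the patent) verifies that four candidate targets can form a usable sub-region, using the stricter condition, common in perspective-mapping estimation, that no three of the four points are collinear:

```python
from itertools import combinations

def collinear(p, q, r, tol=1e-9):
    """Return True if three 2-D points lie on one straight line."""
    # Twice the signed area of triangle pqr; (near) zero means collinear.
    area2 = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return abs(area2) < tol

def forms_valid_subregion(targets):
    """Four targets can define a sub-region suitable for estimating a
    perspective mapping if no three of them are collinear."""
    assert len(targets) == 4
    return not any(collinear(p, q, r) for p, q, r in combinations(targets, 3))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
degenerate = [(0, 0), (1, 1), (2, 2), (0, 1)]  # three points on the line y = x
print(forms_valid_subregion(square))       # True
print(forms_valid_subregion(degenerate))   # False
```

A layout that passes this check for every sub-region satisfies the "any four targets not on the same straight line" condition stated above.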
For easy understanding, please refer to fig. 2, a schematic diagram of a calibration scenario according to an embodiment of the present invention. In FIG. 2, A is the calibration position, B is the vehicle, C is the preset area, C1-C9 are the sub-regions, and d1-d32 are the targets; the positions of A, B, and d1-d32 are fixed. In the embodiment of the invention, the visual sensor is started to acquire the image only when vehicle B has driven to calibration position A, so that the distance between the visual sensor and each target is known accurately.
Step 102: acquiring the image position of each sub-region in the image.
In the embodiment of the invention, each sub-region has a corresponding image position in the image. By analyzing the pixels, the position information of each target in the image can be determined, and thus the image position of each sub-region in the image.
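The patent leaves the pixel-analysis step open. As one hedged illustration (assuming each target has already been segmented into a boolean mask; `target_image_position` is a hypothetical helper, not named by the patent), a target's image position can be taken as the centroid of its pixels:

```python
import numpy as np

def target_image_position(mask):
    """Locate one target in the image as the centroid of its pixels.
    `mask` is a boolean HxW array marking the target's pixels; how the
    mask is produced (thresholding, pattern matching, etc.) is not
    specified by the patent.
    """
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())  # (u, v) pixel coordinates

mask = np.zeros((10, 10), dtype=bool)
mask[4:6, 2:4] = True                          # a 2x2 target blob
print(target_image_position(mask))             # (2.5, 4.5)
```

The four target positions of a sub-region, collected this way, make up that sub-region's image position.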
Step 103: calculating a first mapping coefficient between the image position and the corresponding sub-region.
According to the principle of visual imaging, the lines connecting each corner point of the image position with the corresponding corner point of the sub-region intersect at a single point. From the imaging angle and the positional relation between the image corner points and the sub-region corner points, the first mapping coefficient between the image position and the sub-region can be determined.
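One concrete realization of such a mapping coefficient is a planar homography estimated from a sub-region's four target correspondences. The sketch below uses the direct linear transform (DLT); this is a standard technique consistent with the description, not an algorithm the patent itself prescribes:

```python
import numpy as np

def fit_homography(pixel_pts, world_pts):
    """Estimate the 3x3 homography H mapping pixel points to world
    points (two lists of at least four (x, y) pairs) via the DLT:
    stack two linear constraints per correspondence and take the
    null vector of the system as the matrix entries.
    """
    A = []
    for (u, v), (x, y) in zip(pixel_pts, world_pts):
        A.append([-u, -v, -1, 0, 0, 0, x * u, x * v, x])
        A.append([0, 0, 0, -u, -v, -1, y * u, y * v, y])
    # The homography is the singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale so H[2, 2] == 1

# A unit square in pixel coordinates mapped onto a 2 m x 2 m ground patch.
px = [(0, 0), (1, 0), (1, 1), (0, 1)]
wd = [(0, 0), (2, 0), (2, 2), (0, 2)]
H = fit_homography(px, wd)  # close to diag(2, 2, 1) for this pure scaling
```

With four exact, non-degenerate correspondences the system has an exact solution; with more targets per sub-region the same code returns a least-squares estimate.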
Step 104: calculating a second mapping coefficient between the image and the preset area. Based on the same principle as step 103, the embodiment of the present invention can find a second mapping coefficient between the entire image and the preset area.
Step 105: determining the calibration coefficient of the visual sensor according to the plurality of first mapping coefficients and the second mapping coefficient.
After the first mapping coefficient and the second mapping coefficient are determined, calculation errors in the mapping calculation process are removed, and the calibration coefficient of the visual sensor can be obtained.
In the present invention, a plurality of targets lying in the same horizontal plane are arranged in a preset area, and an image of the preset area is captured from a preset calibration position. From the position information of each sub-region, both in the image and in the preset area, a first mapping coefficient between each sub-region and its corresponding position in the image is calculated, together with a second mapping coefficient between the whole image and the whole preset area; the calibration coefficient of the visual sensor is then determined from the first mapping coefficients and the second mapping coefficient. This weakens the influence of the intrinsic and extrinsic parameters of the visual sensor on calibration, determines the calibration coefficient through the mapping relation, and improves the calibration accuracy of the visual sensor.
Referring to fig. 3, fig. 3 is a flowchart illustrating steps of a method for calibrating a vision sensor according to another embodiment of the present invention.
The embodiment of the invention provides a visual sensor calibration method, which specifically comprises the following steps:
The description of steps 301 and 302 is similar to that of steps 101 and 102 above and is not repeated here.
In the embodiment of the invention, the image position includes the pixel coordinates of each target in the sub-region, and step 303 may include the following sub-steps:
acquiring a first world position coordinate of a preset calibration position;
acquiring second world position coordinates of the target in the sub-area;
and calculating a first mapping coefficient between the image position and the corresponding sub-region according to the first world position coordinate, the pixel coordinate of the target in the sub-region and the corresponding second world position coordinate.
In the embodiment of the invention, in order to ensure calibration accuracy, the first world position coordinate of the calibration position can be measured in advance, and the vehicle is then driven to that calibration position, which reduces calibration misalignment caused by positioning deviation of the vehicle.
Similarly, the second world position coordinates of each target can also be calculated in advance, ensuring that the relative angle and relative distance between the vehicle and each target are accurate.
Next, based on the first world position coordinates, the second world position coordinates, and the pixel coordinates of each target, the first mapping coefficient between each sub-region and its corresponding position in the image can be calculated. In the embodiment of the present invention, the calculation of the first mapping coefficient includes, but is not limited to, calculating a homography matrix, optimizing Euler angles, and calculating an extrinsic parameter matrix.
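Once a mapping coefficient H is available (for example a homography matrix, one of the options listed above), mapping a pixel to the ground plane is a homogeneous multiplication followed by dehomogenization. A minimal sketch, with an assumed pure-scaling H purely for illustration:

```python
import numpy as np

def pixel_to_world(H, u, v):
    """Apply a 3x3 mapping coefficient to pixel (u, v) and return the
    corresponding ground-plane world coordinate."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]  # divide out the homogeneous scale

# With a pure 2x-scale mapping, pixel (3, 4) lands at world (6, 8).
H = np.diag([2.0, 2.0, 1.0])
print(pixel_to_world(H, 3, 4))  # (6.0, 8.0)
```

This pixel-to-world step is what the calibrated coefficient is ultimately used for when planning the vehicle's motion track from the image.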
Step 304: calculating a second mapping coefficient between the image and the preset area. Through an operation similar to step 303, the embodiment of the present invention can obtain the mapping coefficient between the image and the preset area.
In a specific application, the mapping coefficients calculated on the same image, whether between the image and each sub-region or between the image and the complete preset area, are theoretically identical. In practice, they show small and reasonable deviations under the influence of parameter errors of the visual sensor; a mapping coefficient changes significantly only when disturbed by factors such as errors in determining the imaging position. Therefore, in the embodiment of the present invention, a calculation error in a first or second mapping coefficient can be identified by calculating a first deviation value between every two first mapping coefficients and a second deviation value between the second mapping coefficient and each first mapping coefficient.
Step 307: determining the approximate calibration coefficient with the largest number of repetitions as the calibration coefficient.
In the embodiment of the invention, after the mapping coefficients containing calculation errors are removed, the mapping coefficient that occurs repeatedly the most times is selected as the calibration coefficient of the visual sensor, which can then guide the operation of the vehicle.
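The deviation-and-repetition selection described above can be sketched as follows. The deviation metric (Frobenius norm of scale-normalized matrices) and the agreement threshold are illustrative assumptions; the patent specifies the rejection-and-repetition logic but does not pin down a metric:

```python
import numpy as np

def select_calibration(coeffs, tol=0.05):
    """From candidate 3x3 mapping coefficients (the per-sub-region first
    coefficients plus the whole-image second coefficient), remove error
    coefficients and return the most-repeated remaining value.

    "Repetition" is read here as agreement within `tol` under the
    Frobenius norm after scale normalization -- one plausible choice.
    """
    Hs = [H / H[2, 2] for H in coeffs]  # bring all candidates to a common scale
    n = len(Hs)
    # For each candidate, count how many candidates (itself included) agree with it.
    agree = [sum(np.linalg.norm(Hs[i] - Hs[j]) < tol for j in range(n))
             for i in range(n)]
    # Candidates that agree with nothing but themselves are error coefficients.
    keep = [i for i in range(n) if agree[i] > 1]
    best = max(keep, key=lambda i: agree[i])  # the most-repeated approximate value
    return Hs[best]

good = np.diag([2.0, 2.0, 1.0])
bad = np.diag([5.0, 1.0, 1.0])  # a grossly wrong coefficient
calib = select_calibration([good, good + 0.01, bad, good - 0.01])
print(np.allclose(calib, good, atol=1e-9))  # True
```

Here the outlier `bad` agrees with no other candidate and is discarded as an error mapping coefficient, while the three near-identical candidates form the repeated value that becomes the calibration coefficient.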
In the present invention, a plurality of targets lying in the same horizontal plane are arranged in a preset area, and an image of the preset area is captured from a preset calibration position. From the position information of each sub-region, both in the image and in the preset area, a first mapping coefficient between each sub-region and its corresponding position in the image is calculated, together with a second mapping coefficient between the whole image and the whole preset area; the calibration coefficient of the visual sensor is then determined from the first mapping coefficients and the second mapping coefficient. This weakens the influence of the intrinsic and extrinsic parameters of the visual sensor on calibration, determines the calibration coefficient through the mapping relation, and improves the calibration accuracy of the visual sensor.
Referring to fig. 4, fig. 4 is a block diagram of a calibration apparatus for a vision sensor according to an embodiment of the present invention.
The embodiment of the invention provides a visual sensor calibration device, wherein a visual sensor is arranged on a vehicle; the device may specifically comprise the following modules:
the image acquisition module 401 is configured to acquire an image in a preset area through a preset vision sensor when the vehicle runs to a preset calibration position; the preset area comprises a plurality of sub-areas; a plurality of targets are arranged in the subareas; the targets are positioned in the same horizontal plane;
an image position acquiring module 402, configured to acquire an image position of the sub-region in the image;
a first mapping coefficient calculation module 403, configured to calculate a first mapping coefficient between an image position and a corresponding sub-region;
a second mapping coefficient calculating module 404, configured to calculate a second mapping coefficient between the image and the preset region;
a calibration coefficient determining module 405, configured to determine the calibration coefficient of the visual sensor according to the plurality of first mapping coefficients and the second mapping coefficient.
In an embodiment of the invention, the image location comprises pixel coordinates of the target in the sub-region; the first mapping coefficient calculating module 403 includes:
the first world position coordinate acquisition submodule is used for acquiring a first world position coordinate of a preset calibration position;
the second world position coordinate acquisition submodule is used for acquiring second world position coordinates of the targets in the sub-area;
and the first mapping coefficient calculation submodule is used for calculating a first mapping coefficient between the image position and the corresponding sub-region according to the first world position coordinate, the pixel coordinate of the target in the sub-region and the corresponding second world position coordinate.
In this embodiment of the present invention, the second mapping coefficient calculating module 404 includes:
and the second mapping coefficient calculation submodule is used for calculating a second mapping coefficient of the image and the preset area according to the first world position coordinate, the pixel coordinate of the target in the preset area and the corresponding second world position coordinate.
In this embodiment of the present invention, the calibration coefficient determining module 405 includes:
the deviation value calculation sub-module is used for calculating a first deviation value between every two first mapping coefficients and a second deviation value between the second mapping coefficient and each first mapping coefficient;
the approximate calibration coefficient determining submodule is used for determining an error mapping coefficient based on the first deviation value and the second deviation value and removing the error mapping coefficient to obtain a plurality of approximate calibration coefficients;
and the calibration coefficient determining submodule is used for determining the approximate calibration coefficient with the largest number of repetitions as the calibration coefficient.
An embodiment of the present invention further provides an electronic device, where the device includes a processor and a memory:
the memory is used for storing the program codes and transmitting the program codes to the processor;
the processor is configured to execute the visual sensor calibration method according to the embodiments of the present invention according to instructions in the program code.
The embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium is used for storing a program code, and the program code is used for executing the visual sensor calibration method according to the embodiment of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of implementing the solution of the present embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A visual sensor calibration method is characterized in that a visual sensor is arranged on a vehicle; the method comprises the following steps:
when the vehicle runs to a preset calibration position, acquiring an image in a preset area through a preset vision sensor; the preset area comprises a plurality of sub-areas; a plurality of targets are arranged in the subarea; the targets are positioned in the same horizontal plane;
acquiring the image position of the subregion in the image;
calculating a first mapping coefficient between the image position and the corresponding sub-region;
calculating a second mapping coefficient of the image and the preset area;
and determining a calibration coefficient of the vision sensor according to the plurality of first mapping coefficients and the second mapping coefficient.
2. The method of claim 1, wherein the image position comprises pixel coordinates of the targets in the sub-region, and the step of calculating a first mapping coefficient between the image position and the corresponding sub-region comprises:
acquiring first world position coordinates of the preset calibration position;
acquiring second world position coordinates of the targets within the sub-region; and
calculating the first mapping coefficient between the image position and the corresponding sub-region according to the first world position coordinates, the pixel coordinates of the targets in the sub-region, and the corresponding second world position coordinates.
3. The method according to claim 2, wherein the step of calculating a second mapping coefficient between the image and the preset area comprises:
calculating the second mapping coefficient between the image and the preset area according to the first world position coordinates, the pixel coordinates of the targets in the preset area, and the corresponding second world position coordinates.
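Claims 2 and 3 describe fitting a mapping from pixel coordinates to world coordinates of coplanar targets. The claims do not fix the form of these mapping coefficients; a common choice for points lying on a single horizontal plane is a 3×3 planar homography estimated by the direct linear transform (DLT). The sketch below is illustrative only — the function names, the DLT formulation, and the NumPy usage are assumptions, not part of the claims:

```python
import numpy as np

def fit_mapping_coefficient(pixel_pts, world_pts):
    """Estimate a 3x3 planar homography H mapping pixel coordinates to
    world ground-plane coordinates via the direct linear transform (DLT).
    pixel_pts, world_pts: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (u, v), (x, y) in zip(pixel_pts, world_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The solution is the right singular vector of A with the smallest
    # singular value (the last row of Vt).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def map_to_world(H, u, v):
    """Apply the homography to a single pixel coordinate."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```

Under this reading, a first mapping coefficient could be fit from the targets of one sub-region and the second mapping coefficient from all targets across the preset area, with the first world position coordinate of the calibration position serving to express the target coordinates relative to the vehicle.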
4. The method of claim 1, wherein the step of determining a calibration coefficient of the vision sensor according to the plurality of first mapping coefficients and the second mapping coefficient comprises:
calculating a first deviation value between every pair of first mapping coefficients, and a second deviation value between the second mapping coefficient and each first mapping coefficient;
determining erroneous mapping coefficients based on the first deviation values and the second deviation values, and removing the erroneous mapping coefficients to obtain a plurality of approximate calibration coefficients; and
determining the approximate calibration coefficient with the highest repetition count as the calibration coefficient.
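Claim 4's selection step can be read as a consensus vote over the candidate mappings: coefficients that deviate strongly from the rest are discarded as erroneous, and the value that recurs most often among the survivors becomes the calibration coefficient. A rough sketch of that idea, assuming 3×3 mapping matrices; the Frobenius-norm deviation metric and the threshold value are illustrative assumptions, not specified by the claims:

```python
import numpy as np

def select_calibration_coefficient(first_coeffs, second_coeff, threshold=0.05):
    """Discard mapping coefficients that deviate strongly from the rest,
    then return the surviving coefficient with the most near-duplicates.
    first_coeffs: list of 3x3 arrays; second_coeff: 3x3 array."""
    coeffs = list(first_coeffs) + [second_coeff]

    def deviation(a, b):
        return np.linalg.norm(a - b)  # assumed metric: Frobenius norm

    # A coefficient is treated as erroneous if its median deviation from
    # all the other coefficients exceeds the threshold.
    survivors = []
    for i, c in enumerate(coeffs):
        devs = [deviation(c, other) for j, other in enumerate(coeffs) if j != i]
        if np.median(devs) <= threshold:
            survivors.append(c)

    # Among the survivors, pick the one that agrees (within the threshold)
    # with the largest number of others -- the "most repeated" value.
    def support(c):
        return sum(deviation(c, other) <= threshold for other in survivors)

    return max(survivors, key=support)
```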
5. A vision sensor calibration device, characterized in that the vision sensor is arranged on a vehicle; the device comprises:
an image acquisition module, configured to acquire an image of a preset area through the vision sensor when the vehicle drives to a preset calibration position; the preset area comprises a plurality of sub-regions; a plurality of targets are arranged in each sub-region; the targets lie in the same horizontal plane;
an image position acquisition module, configured to acquire an image position of each sub-region in the image;
a first mapping coefficient calculation module, configured to calculate a first mapping coefficient between each image position and the corresponding sub-region;
a second mapping coefficient calculation module, configured to calculate a second mapping coefficient between the image and the preset area; and
a calibration coefficient determination module, configured to determine a calibration coefficient of the vision sensor according to the plurality of first mapping coefficients and the second mapping coefficient.
6. The apparatus of claim 5, wherein the image location comprises pixel coordinates of the target in the sub-region; the first mapping coefficient calculation module includes:
the first world position coordinate obtaining submodule is used for obtaining a first world position coordinate of the preset calibration position;
a second world position coordinate acquisition submodule for acquiring second world position coordinates of the target in the sub-region;
and the first mapping coefficient calculation submodule is used for calculating a first mapping coefficient between the image position and the corresponding sub-region according to the first world position coordinate, the pixel coordinate of the target in the sub-region and the corresponding second world position coordinate.
7. The apparatus of claim 6, wherein the second mapping coefficient calculation module comprises:
and the second mapping coefficient calculation submodule is used for calculating a second mapping coefficient of the image and the preset area according to the first world position coordinate, the pixel coordinate of the target in the preset area and the corresponding second world position coordinate.
8. The apparatus of claim 5, wherein the calibration coefficient determination module comprises:
a deviation value calculation sub-module, configured to calculate a first deviation value between every pair of first mapping coefficients, and a second deviation value between the second mapping coefficient and each first mapping coefficient;
an approximate calibration coefficient determination sub-module, configured to determine erroneous mapping coefficients based on the first deviation values and the second deviation values, and remove the erroneous mapping coefficients to obtain a plurality of approximate calibration coefficients; and
a calibration coefficient determination sub-module, configured to determine the approximate calibration coefficient with the highest repetition count as the calibration coefficient.
9. An electronic device, comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the vision sensor calibration method of any one of claims 1-4 according to the instructions in the program code.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium is configured to store program code for performing the vision sensor calibration method of any of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011424274.5A CN112465920A (en) | 2020-12-08 | 2020-12-08 | Vision sensor calibration method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112465920A true CN112465920A (en) | 2021-03-09 |
Family
ID=74800285
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011424274.5A Pending CN112465920A (en) | 2020-12-08 | 2020-12-08 | Vision sensor calibration method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112465920A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114623856A (en) * | 2022-02-08 | 2022-06-14 | Wuhan Lotus Cars Co., Ltd. | Offline composite calibration system and method for multiple sensors |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018119771A1 (en) * | 2016-12-28 | 2018-07-05 | Shenzhen University | Efficient phase-three-dimensional mapping method and system based on fringe projection profilometry |
CN108765495A (en) * | 2018-05-22 | 2018-11-06 | Shandong University | Rapid calibration method and system based on binocular vision detection technology |
CN109215083A (en) * | 2017-07-06 | 2019-01-15 | Huawei Technologies Co., Ltd. | Method and apparatus for calibrating external parameters of an onboard sensor |
CN109712194A (en) * | 2018-12-10 | 2019-05-03 | Shenzhen Kaiyang Electronics Co., Ltd. | Vehicle-mounted viewing system, stereo calibration method thereof, and computer-readable storage medium |
CN109741455A (en) * | 2018-12-10 | 2019-05-10 | Shenzhen Kaiyang Electronics Co., Ltd. | Vehicle-mounted stereoscopic panoramic display method, computer-readable storage medium and system |
CN110264520A (en) * | 2019-06-14 | 2019-09-20 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method, device, equipment and medium for calibrating the position and orientation relation between an onboard sensor and a vehicle |
WO2020062434A1 (en) * | 2018-09-30 | 2020-04-02 | Momenta (Suzhou) Technology Co., Ltd. | Static calibration method for external parameters of a camera |
CN111899307A (en) * | 2020-07-30 | 2020-11-06 | Zhejiang University | Space calibration method, electronic device and storage medium |
CN111914049A (en) * | 2020-07-29 | 2020-11-10 | Beijing Tianrui Kongjian Technology Co., Ltd. | Method for mapping longitude and latitude coordinates to image coordinates |
Similar Documents
Publication | Title |
---|---|
CN107633536B (en) | Camera calibration method and system based on two-dimensional plane template | |
CN110689579B (en) | Rapid monocular vision pose measurement method and measurement system based on cooperative target | |
CN111263142B (en) | Method, device, equipment and medium for testing optical anti-shake of camera module | |
CN109544643B (en) | Video camera image correction method and device | |
US20060215881A1 (en) | Distance measurement apparatus, electronic device, distance measurement method, distance measurement control program and computer-readable recording medium | |
CN112070845A (en) | Calibration method and device of binocular camera and terminal equipment | |
JP4055998B2 (en) | Distance detection device, distance detection method, and distance detection program | |
CN113418543A (en) | Method and device for detecting automatic driving sensor, electronic equipment and storage medium | |
CN111508027A (en) | Method and device for calibrating external parameters of camera | |
CN110415286B (en) | External parameter calibration method of multi-flight time depth camera system | |
CN112489136A (en) | Calibration method, position determination method, device, electronic equipment and storage medium | |
CN110345875B (en) | Calibration and ranging method, device, electronic equipment and computer readable storage medium | |
CN112556994A (en) | Optical information detection method, device and equipment | |
CN112465920A (en) | Vision sensor calibration method and device | |
CN110188665B (en) | Image processing method and device and computer equipment | |
CN116704048A (en) | Double-light registration method | |
CN111336938A (en) | Robot and object distance detection method and device thereof | |
JP2013239905A (en) | Calibration apparatus for in-vehicle camera | |
CN114494466B (en) | External parameter calibration method, device and equipment and storage medium | |
WO2018230340A1 (en) | Stereo camera device | |
CN115713564A (en) | Camera calibration method and device | |
CN114979956A (en) | Unmanned aerial vehicle aerial photography ground target positioning method and system | |
CN115471566A (en) | Binocular calibration method and system | |
CN112233185A (en) | Camera calibration method, image registration method, camera device and storage device | |
CN113470118A (en) | Target size estimation system based on trinocular vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||