CN115170674B - Camera principal point calibration method, device, equipment and medium based on single image - Google Patents
- Publication number
- CN115170674B (application CN202210852920.0A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- camera
- principal point
- line segment
- corner
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
The embodiments of the disclosure disclose a camera principal point calibration method, device, equipment and medium based on a single image. One embodiment of the method comprises: performing corner detection processing on a target image to generate a two-dimensional corner set; generating a two-dimensional line segment set based on the two-dimensional corner set; setting initial camera principal point coordinates of a target camera according to the width and the height of the target image, wherein the target image is an image captured by the target camera; constructing a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates; and calibrating the camera principal point of the target camera based on the camera principal point optimization function. The embodiment shortens the time for calibrating the camera principal point and improves the efficiency of camera principal point calibration.
Description
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a camera principal point calibration method, apparatus, device and medium based on a single image.
Background
Computer vision algorithms commonly used in autonomous driving, such as visual localization, environmental perception, and map reconstruction, rely on accurate camera intrinsic parameters, including principal point coordinates, focal length, distortion coefficients, and the like. At present, the commonly adopted approach to camera principal point calibration is to calibrate the camera principal point using multiple images.
However, the inventors found that when the camera principal point is calibrated in the above manner, the following technical problems often occur:
First, calibrating the camera principal point with multiple images takes a long time;
Second, distortion of the image captured by the camera is not considered during the calibration process, so the accuracy of the camera principal point calibration is low.
The above information disclosed in this Background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a single-image-based camera principal point calibration method, apparatus, electronic device, computer-readable medium, and program product to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method for calibrating a camera principal point based on a single image, the method including: carrying out corner detection processing on the target image to generate a two-dimensional corner set; generating a two-dimensional line segment set based on the two-dimensional corner set; setting initial camera principal point coordinates of a target camera according to the width and the height of the target image, wherein the target image is an image shot by the target camera; constructing a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates; and calibrating the camera principal point of the target camera based on the camera principal point optimization function.
In a second aspect, some embodiments of the present disclosure provide a camera principal point calibration apparatus based on a single image, the apparatus including: the detection unit is configured to perform corner detection processing on the target image to generate a two-dimensional corner set; a generating unit configured to generate a two-dimensional line segment set based on the two-dimensional corner set; a setting unit configured to set initial camera principal point coordinates of a target camera according to a width and a height of the target image, wherein the target image is an image captured by the target camera; a construction unit configured to construct a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates; and the calibration unit is configured to calibrate the camera principal point of the target camera based on the camera principal point optimization function.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium on which a computer program is stored, wherein the program when executed by a processor implements the method described in any implementation of the first aspect.
In a fifth aspect, some embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantages: the camera principal point calibration method based on a single image shortens the time required for camera principal point calibration and improves its efficiency. In particular, the reason why camera principal point calibration is time-consuming is that it is performed with multiple images. Based on this, in the single-image-based camera principal point calibration method of some embodiments of the present disclosure, corner detection processing is first performed on a target image to generate a two-dimensional corner set. A two-dimensional corner set of a single image is thus generated, providing data support for the construction of the camera principal point optimization function. Second, a two-dimensional line segment set is generated based on the two-dimensional corner set, which likewise provides data support for constructing the camera principal point optimization function. Then, initial camera principal point coordinates of the target camera are set according to the width and height of the target image, the target image being an image captured by the target camera. Next, a camera principal point optimization function is constructed according to the two-dimensional line segment set and the initial camera principal point coordinates. In this way, a camera principal point optimization function for a single image can be generated from the initial camera principal point coordinates and the two-dimensional line segment set of that single image, providing data support for calibrating the camera principal point of the target camera. Finally, the camera principal point of the target camera is calibrated based on the camera principal point optimization function. The time for camera principal point calibration is thereby shortened, and the efficiency of camera principal point calibration is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that the elements are not necessarily drawn to scale.
FIG. 1 is a flow diagram of some embodiments of a single image based camera principal point calibration method according to the present disclosure;
FIG. 2 is a schematic block diagram of some embodiments of a single-image-based camera principal point calibration apparatus according to the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a single-image-based camera principal point calibration method according to the present disclosure. The camera principal point calibration method based on a single image comprises the following steps:
Step 101, performing corner detection processing on a target image to generate a two-dimensional corner set.
In some embodiments, an execution body (e.g., a server) of the single-image-based camera principal point calibration method may perform corner detection processing on the target image to generate a two-dimensional corner set. The target image may be an image captured by a target camera, and the target camera may be a camera requiring camera principal point calibration. The corner detection processing may include, but is not limited to: corner detection based on grayscale images, corner detection based on binary images, and corner detection based on contour curves. For example, the corner detection processing may be checkerboard corner detection based on a grayscale image.
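As an illustration of this step, the following is a minimal sketch of checkerboard corner detection on a grayscale image using OpenCV; the pattern size, the sub-pixel refinement window, and the termination criteria are illustrative assumptions rather than values specified by this disclosure.

```python
import cv2

def detect_corners(image_bgr, pattern_size=(9, 6)):
    """Detect checkerboard corners in a single target image.

    Returns an (N, 2) array of two-dimensional corner coordinates.
    pattern_size is the number of inner corners per row and column and is
    an assumed example value, not a value prescribed by the disclosure.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        raise RuntimeError("checkerboard corners not found in the target image")
    # Refine the detected corners to sub-pixel accuracy on the grayscale image.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    return corners.reshape(-1, 2)  # the two-dimensional corner set
```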
Step 102, generating a two-dimensional line segment set based on the two-dimensional corner set.
In some embodiments, the execution subject may generate a two-dimensional line segment set based on the two-dimensional corner set.
In practice, based on the two-dimensional corner set, the execution body may generate the two-dimensional line segment set through the following steps:
First, the two-dimensional corner set is sorted according to the corner coordinates corresponding to the two-dimensional corner set to generate a two-dimensional corner sequence. Each corner coordinate may be a coordinate generated during corner detection. Specifically, the execution body may first sort the two-dimensional corner set in ascending order of the abscissa of each corner coordinate to generate an initial two-dimensional corner sequence. Then, for the two-dimensional corners in the initial two-dimensional corner sequence whose corner coordinates share the same abscissa, those corners are sorted in ascending order of the ordinate of their corner coordinates, so as to generate the two-dimensional corner sequence.
Second, the two-dimensional corner sequence is grouped according to the abscissa of each corner coordinate to generate a first two-dimensional corner group sequence, where the first two-dimensional corners within each first two-dimensional corner group in the sequence share the same abscissa. In practice, the execution body may group the two-dimensional corners in the two-dimensional corner sequence whose corner coordinates have the same abscissa to generate the first two-dimensional corner group sequence.
Third, the two-dimensional corner sequence is grouped according to the ordinate of each corner coordinate to generate a second two-dimensional corner group sequence, where the second two-dimensional corners within each second two-dimensional corner group in the sequence share the same ordinate. In practice, the execution body may group the two-dimensional corners in the two-dimensional corner sequence whose corner coordinates have the same ordinate to generate the second two-dimensional corner group sequence.
Fourth, for each first two-dimensional corner group in the first two-dimensional corner group sequence, the first two-dimensional corners in the group are connected in sequence to generate a first two-dimensional line segment.
Fifth, for each second two-dimensional corner group in the second two-dimensional corner group sequence, the second two-dimensional corners in the group are connected in sequence to generate a second two-dimensional line segment.
Sixth, the generated first two-dimensional line segments and the generated second two-dimensional line segments are merged to generate the two-dimensional line segment set.
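A minimal sketch of these sorting and grouping steps is given below. It assumes, as in the idealized description above, that corners in the same checkerboard column share an abscissa and corners in the same row share an ordinate; real detections would need a tolerance or the known grid shape instead of exact equality. Each generated segment is represented here only by its two extreme corners, since the later back-projection uses the end points.

```python
from collections import defaultdict

def build_segments(corners):
    """Group the two-dimensional corner set into first (same-abscissa) and
    second (same-ordinate) two-dimensional line segments."""
    # First step: sort by abscissa, then ordinate, to obtain the corner sequence.
    seq = sorted(map(tuple, corners), key=lambda p: (p[0], p[1]))

    by_x, by_y = defaultdict(list), defaultdict(list)
    for x, y in seq:
        by_x[x].append((x, y))  # first two-dimensional corner groups
        by_y[y].append((x, y))  # second two-dimensional corner groups

    # Each corner group is connected in sequence; a segment is kept as its
    # two extreme corners (end points).
    first_segments = [(g[0], g[-1]) for g in by_x.values() if len(g) >= 2]
    second_segments = [(g[0], g[-1]) for g in by_y.values() if len(g) >= 2]
    return first_segments, second_segments
```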
Step 103, setting initial camera principal point coordinates of the target camera according to the width and the height of the target image.
In some embodiments, the execution body may set the initial camera principal point coordinates of the target camera according to the width and height of the target image. The target image is an image captured by the target camera.
In practice, according to the width and height of the target image, the execution body may set the initial camera principal point coordinates of the target camera through the following steps:
First, half of the number of pixels along the width of the target image is determined as the abscissa of the initial camera principal point. The number of pixels along the width may be the number of pixels in any one horizontal row of pixels.
Second, half of the number of pixels along the height of the target image is determined as the ordinate of the initial camera principal point. The number of pixels along the height may be the number of pixels in any one vertical column of pixels.
Third, the abscissa of the initial camera principal point and the ordinate of the initial camera principal point are combined into the initial camera principal point coordinates.
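A sketch of this step is straightforward; it only assumes that the image is available as a height-by-width array.

```python
def initial_principal_point(image):
    """Set the initial camera principal point coordinates at half the image
    width (abscissa) and half the image height (ordinate)."""
    height, width = image.shape[:2]
    return (width / 2.0, height / 2.0)  # (c_x, c_y)
```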
Step 104, constructing a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates.
In some embodiments, the execution body may construct a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates. The two-dimensional line segment set comprises a first two-dimensional line segment sequence and a second two-dimensional line segment sequence. In practice, according to the two-dimensional line segment set and the initial camera principal point coordinates, the execution body may construct the camera principal point optimization function through the following steps:
First, for each first two-dimensional line segment in the first two-dimensional line segment sequence, the following processing steps are performed:
In the first sub-step, the two end points of the distorted line segment corresponding to the first two-dimensional line segment are respectively back-projected to generate two first back-projected rays in the camera coordinate system of the target camera. The distorted line segment may be a line segment that appears bent in the image due to lens distortion when the target camera captures the target image.
In the second sub-step, an outer product of the two first back-projected rays is computed to generate a first normal vector of the plane spanned by the two back-projected rays.
Second, a homogeneous system of equations is constructed from the generated first normal vectors to generate a longitudinal vanishing direction vector.
Third, for each second two-dimensional line segment in the second two-dimensional line segment sequence, the following processing steps are performed:
In the first sub-step, the two end points of the distorted line segment corresponding to the second two-dimensional line segment are respectively back-projected to generate two second back-projected rays in the camera coordinate system of the target camera.
In the second sub-step, an outer product of the two second back-projected rays is computed to generate a second normal vector of the plane spanned by the two back-projected rays.
Fourth, a homogeneous system of equations is constructed from the generated second normal vectors to generate a transverse vanishing direction vector.
Fifth, a camera principal point optimization function is generated according to the initial camera principal point coordinates, the longitudinal vanishing direction vector, and the transverse vanishing direction vector. The camera principal point optimization function may be constructed from the inner product of the longitudinal vanishing direction vector and the transverse vanishing direction vector, and the initial camera principal point coordinates may serve as the initial value of the camera principal point optimization function. The camera principal point optimization function here may be:
f(c_x, c_y) = ⟨d_h, d_v⟩
where c_x denotes the abscissa of the initial camera principal point coordinates, c_y denotes the ordinate of the initial camera principal point coordinates, and (c_x, c_y) denotes the initial camera principal point coordinates; d_h denotes the transverse vanishing direction vector, d_v denotes the longitudinal vanishing direction vector, and ⟨d_h, d_v⟩ denotes the inner product of the transverse vanishing direction vector and the longitudinal vanishing direction vector.
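The sketch below follows the construction above: each segment's end points are back-projected into camera-coordinate rays, the outer (cross) product of each ray pair gives a plane normal, the homogeneous system stacked from the normals is solved by SVD to obtain a vanishing direction vector, and the optimization function is the inner product of the two vanishing directions. The disclosure does not spell out the distortion-aware back-projection or the focal length used, so a plain pinhole back-projection with a nominal focal length is assumed here; `backproject`, `vanishing_direction`, and `principal_point_objective` are illustrative names.

```python
import numpy as np

def backproject(pt, cx, cy, f=1000.0):
    """Back-project a 2D end point into a ray in the camera coordinate system.
    A pinhole model with an assumed nominal focal length f stands in for the
    unspecified distortion-aware back-projection."""
    x, y = pt
    return np.array([(x - cx) / f, (y - cy) / f, 1.0])

def vanishing_direction(segments, cx, cy):
    """Stack one normal vector per segment (cross product of its two
    back-projected rays) and solve the homogeneous system N d = 0 by SVD;
    the right singular vector with the smallest singular value is the
    vanishing direction vector."""
    normals = []
    for p0, p1 in segments:
        r0, r1 = backproject(p0, cx, cy), backproject(p1, cx, cy)
        normals.append(np.cross(r0, r1))  # normal of the plane spanned by the rays
    _, _, vt = np.linalg.svd(np.vstack(normals))
    d = vt[-1]
    return d / np.linalg.norm(d)

def principal_point_objective(params, first_segments, second_segments):
    """Camera principal point optimization function f(c_x, c_y): the inner
    product of the longitudinal and transverse vanishing direction vectors."""
    cx, cy = params
    d_long = vanishing_direction(first_segments, cx, cy)
    d_trans = vanishing_direction(second_segments, cx, cy)
    return float(np.dot(d_long, d_trans))
```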
The content of step 104 above is an inventive point of the present disclosure, and it solves the second technical problem mentioned in the background: distortion of the image captured by the camera is not considered in the camera principal point calibration process, so the accuracy of the calibration is low. If this factor is addressed, the accuracy of camera principal point calibration can be improved. To achieve this effect, first, for each first two-dimensional line segment in the first two-dimensional line segment sequence, the following processing steps are performed: first, the two end points of the distorted line segment corresponding to the first two-dimensional line segment are respectively back-projected to generate two first back-projected rays in the camera coordinate system of the target camera; second, an outer product of the two first back-projected rays is computed to generate a first normal vector of the plane spanned by the two back-projected rays; third, a homogeneous system of equations is constructed from the generated first normal vectors to generate a longitudinal vanishing direction vector. In this way, the longitudinal vanishing direction vector is constructed from the distorted line segments corresponding to the first two-dimensional line segments, which reduces the longitudinal deviation of the camera principal point calibration caused by image distortion. Next, for each second two-dimensional line segment in the second two-dimensional line segment sequence, the following processing steps are performed: first, the two end points of the distorted line segment corresponding to the second two-dimensional line segment are respectively back-projected to generate two second back-projected rays in the camera coordinate system of the target camera; second, an outer product of the two second back-projected rays is computed to generate a second normal vector of the plane spanned by the two back-projected rays; third, a homogeneous system of equations is constructed from the generated second normal vectors to generate a transverse vanishing direction vector. In this way, the transverse vanishing direction vector is constructed from the distorted line segments corresponding to the second two-dimensional line segments, which reduces the transverse deviation of the camera principal point calibration caused by image distortion. Finally, a camera principal point optimization function is generated according to the initial camera principal point coordinates, the longitudinal vanishing direction vector, and the transverse vanishing direction vector. The construction of the camera principal point optimization function is thereby completed, which enhances the accuracy of camera principal point calibration.
Step 105, calibrating the camera principal point of the target camera based on the camera principal point optimization function.
In some embodiments, the execution body may calibrate the camera principal point of the target camera based on the camera principal point optimization function.
In practice, based on the camera principal point optimization function, the execution body may calibrate the camera principal point of the target camera through the following steps:
the first step is to carry out camera principal point coordinate optimization processing on the camera principal point optimization function to generate optimized camera principal point coordinates. The optimization process may be performed on the camera principal point optimization function by using a Levenberg-Marquardt (LM) algorithm.
Second, the camera principal point of the target camera is calibrated according to the optimized camera principal point coordinates. In practice, the execution body may replace the camera principal point coordinates of the camera principal point of the target camera with the optimized camera principal point coordinates.
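Putting the pieces together, the following is a sketch of this calibration step, reusing the helpers sketched earlier (`initial_principal_point`, `principal_point_objective`). Rather than relying on a library solver, it runs a small hand-rolled Levenberg-Marquardt loop on the scalar residual; the step size, damping schedule, and iteration count are illustrative assumptions.

```python
import numpy as np

def calibrate_principal_point(image, first_segments, second_segments,
                              n_iters=50, lam=1e-3, eps=1e-3):
    """Optimize (c_x, c_y) with a damped Gauss-Newton (Levenberg-Marquardt)
    loop on the residual r(c) = <d_long, d_trans>, starting from the image
    center, and return the optimized camera principal point coordinates."""
    c = np.asarray(initial_principal_point(image), dtype=float)

    def residual(p):
        return principal_point_objective(p, first_segments, second_segments)

    for _ in range(n_iters):
        r = residual(c)
        # Numerical 1x2 Jacobian of the scalar residual (central differences).
        J = np.array([(residual(c + eps * e) - residual(c - eps * e)) / (2 * eps)
                      for e in np.eye(2)])
        # Damped normal equations: (J^T J + lam * I) delta = -J^T r
        A = np.outer(J, J) + lam * np.eye(2)
        delta = np.linalg.solve(A, -J * r)
        if abs(residual(c + delta)) < abs(r):
            c, lam = c + delta, lam * 0.5   # accept the step, relax damping
        else:
            lam *= 10.0                      # reject the step, increase damping
        if np.linalg.norm(delta) < 1e-8:
            break
    return tuple(c)  # calibrated camera principal point of the target camera
```

A usage sequence under these assumptions would chain the earlier sketches on the single target image: detect_corners, then build_segments, then calibrate_principal_point.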
The above embodiments of the present disclosure have the following beneficial effects: the camera principal point calibration method based on a single image shortens the time required for camera principal point calibration and improves its efficiency. Specifically, the reason why camera principal point calibration is inefficient is that it is performed with multiple images, which takes a long time. Based on this, in the single-image-based camera principal point calibration method of some embodiments of the present disclosure, corner detection processing is first performed on the target image to generate a two-dimensional corner set. A two-dimensional corner set of a single image is thus generated, providing data support for the construction of the camera principal point optimization function. Second, a two-dimensional line segment set is generated based on the two-dimensional corner set, which likewise provides data support for constructing the camera principal point optimization function. Then, initial camera principal point coordinates of the target camera are set according to the width and height of the target image, the target image being an image captured by the target camera. Next, a camera principal point optimization function is constructed according to the two-dimensional line segment set and the initial camera principal point coordinates. In this way, a camera principal point optimization function for a single image can be generated from the initial camera principal point coordinates and the two-dimensional line segment set of that single image, providing data support for calibrating the camera principal point of the target camera. Finally, the camera principal point of the target camera is calibrated based on the camera principal point optimization function. The time for camera principal point calibration is thereby shortened, and the efficiency of camera principal point calibration is improved.
With further reference to fig. 2, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of a single-image-based camera principal point calibration apparatus, which correspond to the method embodiments shown in fig. 1, and which can be applied in various electronic devices.
As shown in fig. 2, the single-image-based camera principal point calibration apparatus 200 of some embodiments includes: a detection unit 201, a generation unit 202, a setting unit 203, a construction unit 204, and a calibration unit 205. Wherein the detection unit 201 is configured to perform corner detection processing on the target image to generate a two-dimensional corner set; the generating unit 202 is configured to generate a two-dimensional line segment set based on the two-dimensional corner point set; the setting unit 203 is configured to set initial camera principal point coordinates of a target camera according to a width and a height of the target image, wherein the target image is an image captured by the target camera; the construction unit 204 is configured to construct a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates; the calibration unit 205 is configured to calibrate the camera principal point of the target camera based on the camera principal point optimization function.
It will be understood that the units described in the apparatus 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 200 and the units included therein, and are not described herein again.
Referring now to FIG. 3, a block diagram of an electronic device (e.g., server) 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device in some embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing device (e.g., central processing unit, graphics processor, etc.) 301 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage device 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic device 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication device 309 may allow the electronic device 300 to communicate with other devices, wirelessly or by wire, to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: carrying out corner detection processing on the target image to generate a two-dimensional corner set; generating a two-dimensional line segment set based on the two-dimensional corner set; setting initial camera principal point coordinates of a target camera according to the width and the height of the target image, wherein the target image is an image shot by the target camera; constructing a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates; and calibrating the camera principal point of the target camera based on the camera principal point optimization function.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor includes a detection unit, a generation unit, a setting unit, a construction unit, and a calibration unit. The names of the units do not form a limitation on the units themselves in some cases, and for example, the calibration unit may be further described as a "unit that calibrates the camera principal point of the target camera based on the camera principal point optimization function".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex Programmable Logic Devices (CPLDs), and the like.
Some embodiments of the present disclosure also provide a computer program product comprising a computer program, which when executed by a processor implements any of the above-described single-image-based camera principal point calibration methods.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept as defined above. For example, the above features and (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure are mutually replaced to form the technical solution.
Claims (6)
1. A camera principal point calibration method based on a single image comprises the following steps:
carrying out corner detection processing on the target image to generate a two-dimensional corner set;
generating a two-dimensional line segment set based on the two-dimensional corner set, wherein the generating the two-dimensional line segment set based on the two-dimensional corner set comprises:
sequencing the two-dimensional corner set according to the corner coordinates corresponding to the two-dimensional corner set to generate a two-dimensional corner sequence;
grouping the two-dimensional corner sequence according to the abscissa of each corner coordinate to generate a first two-dimensional corner group sequence, wherein the abscissas of the first two-dimensional corner groups in the first two-dimensional corner group sequence are the same;
according to the ordinate of each corner coordinate, grouping the two-dimensional corner sequence to generate a second two-dimensional corner group sequence, wherein the ordinate of each second two-dimensional corner of a second two-dimensional corner group in the second two-dimensional corner group sequence is the same;
for each first two-dimensional corner group included in the first two-dimensional corner group sequence, sequentially connecting each first two-dimensional corner included in the first two-dimensional corner group to generate a first two-dimensional line segment;
for each second two-dimensional corner group included in the second two-dimensional corner group sequence, sequentially connecting each second two-dimensional corner included in the second two-dimensional corner group to generate a second two-dimensional line segment;
merging each generated first two-dimensional line segment and each generated second two-dimensional line segment to generate a two-dimensional line segment set;
setting initial camera principal point coordinates of a target camera according to the width and the height of the target image, wherein the target image is an image shot by the target camera;
constructing a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates, wherein the two-dimensional line segment set comprises a first two-dimensional line segment sequence and a second two-dimensional line segment sequence, and the constructing a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates comprises:
for each first two-dimensional line segment of the first sequence of two-dimensional line segments, performing the following processing steps:
respectively carrying out back projection on two end points of a distorted line segment corresponding to the first two-dimensional line segment so as to generate two first back projection rays under a camera coordinate system of the target camera;
performing outer product processing on the two first back projection rays to generate a first normal vector of a plane formed by the two back projection rays;
constructing a homogeneous equation set for each generated first normal vector to generate a longitudinal vanishing direction vector;
for each second two-dimensional line segment of the sequence of second two-dimensional line segments, performing the following processing steps:
respectively carrying out back projection on two end points of the distorted line segment corresponding to the second two-dimensional line segment so as to generate two second back projection rays under the camera coordinate system of the target camera;
performing outer product processing on the two second back projection rays to generate a second normal vector of a plane formed by the two back projection rays;
constructing a homogeneous equation set for each generated second normal vector to generate a transverse vanishing direction vector;
generating a camera principal point optimization function according to the initial camera principal point coordinates, the longitudinal vanishing direction vector and the transverse vanishing direction vector;
and calibrating the camera principal point of the target camera based on the camera principal point optimization function.
2. The method of claim 1, wherein said setting initial camera principal point coordinates of a target camera according to a width and a height of the target image comprises:
determining half of the number of each pixel point in the width direction of the target image as the abscissa of the initial camera principal point;
determining half of the number of each pixel point in the high direction of the target image as an initial camera principal point ordinate;
and combining the initial camera principal point horizontal coordinate and the initial camera principal point vertical coordinate into an initial camera principal point coordinate.
3. The method of claim 1, wherein the calibrating the camera principal point of the target camera based on the camera principal point optimization function comprises:
performing camera principal point coordinate optimization processing on the camera principal point optimization function to generate optimized camera principal point coordinates;
and calibrating the camera principal point of the target camera according to the optimized camera principal point coordinate.
4. A camera principal point calibration device based on a single image comprises:
a detection unit configured to perform corner detection processing on a target image to generate a two-dimensional corner set;
a generating unit configured to generate a two-dimensional line segment set based on the two-dimensional corner set, wherein the generating a two-dimensional line segment set based on the two-dimensional corner set comprises:
sequencing the two-dimensional corner set according to the corner coordinates corresponding to the two-dimensional corner set to generate a two-dimensional corner sequence;
grouping the two-dimensional corner sequence according to the abscissa of each corner coordinate to generate a first two-dimensional corner group sequence, wherein the abscissas of the first two-dimensional corner groups in the first two-dimensional corner group sequence are the same;
grouping the two-dimensional corner sequence according to the vertical coordinates of the corner coordinates to generate a second two-dimensional corner group sequence, wherein the vertical coordinates of the second two-dimensional corners of a second two-dimensional corner group in the second two-dimensional corner group sequence are the same;
for each first two-dimensional corner group included in the first two-dimensional corner group sequence, sequentially connecting each first two-dimensional corner included in the first two-dimensional corner group to generate a first two-dimensional line segment;
for each second two-dimensional corner group included in the second two-dimensional corner group sequence, sequentially connecting each second two-dimensional corner included in the second two-dimensional corner group to generate a second two-dimensional line segment;
merging each generated first two-dimensional line segment and each generated second two-dimensional line segment to generate a two-dimensional line segment set;
a setting unit configured to set initial camera principal point coordinates of a target camera according to a width and a height of the target image, wherein the target image is an image photographed by the target camera;
a construction unit configured to construct a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates, wherein the two-dimensional line segment set comprises a first two-dimensional line segment sequence and a second two-dimensional line segment sequence, and the constructing a camera principal point optimization function according to the two-dimensional line segment set and the initial camera principal point coordinates comprises:
for each first two-dimensional line segment of the sequence of first two-dimensional line segments, performing the following processing steps:
respectively carrying out back projection on two end points of the distorted line segment corresponding to the first two-dimensional line segment to generate two first back projection rays under a camera coordinate system of the target camera;
performing outer product processing on the two first back projection rays to generate a first normal vector of a plane formed by the two back projection rays;
constructing a homogeneous equation set for each generated first normal vector to generate a longitudinal vanishing direction vector;
for each second two-dimensional line segment of the sequence of second two-dimensional line segments, performing the following processing steps:
respectively carrying out back projection on two end points of the distorted line segment corresponding to the second two-dimensional line segment so as to generate two second back projection rays under the camera coordinate system of the target camera;
performing outer product processing on the two second back projection rays to generate a second normal vector of a plane formed by the two back projection rays;
constructing a homogeneous equation set for each generated second normal vector to generate a transverse vanishing direction vector;
generating a camera principal point optimization function according to the initial camera principal point coordinates, the longitudinal vanishing direction vectors and the transverse vanishing direction vectors;
a calibration unit configured to calibrate a camera principal point of the target camera based on the camera principal point optimization function.
5. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-3.
6. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210852920.0A CN115170674B (en) | 2022-07-20 | 2022-07-20 | Camera principal point calibration method, device, equipment and medium based on single image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210852920.0A CN115170674B (en) | 2022-07-20 | 2022-07-20 | Camera principal point calibration method, device, equipment and medium based on single image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115170674A CN115170674A (en) | 2022-10-11 |
CN115170674B true CN115170674B (en) | 2023-04-14 |
Family
ID=83494550
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210852920.0A Active CN115170674B (en) | 2022-07-20 | 2022-07-20 | Camera principal point calibration method, device, equipment and medium based on single image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115170674B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108876749A (en) * | 2018-07-02 | 2018-11-23 | 南京汇川工业视觉技术开发有限公司 | A kind of lens distortion calibration method of robust |
JP2020191624A (en) * | 2019-05-17 | 2020-11-26 | キヤノン株式会社 | Electronic apparatus and control method for the same |
CN112330752A (en) * | 2020-11-13 | 2021-02-05 | 深圳先进技术研究院 | Multi-camera combined calibration method and device, terminal equipment and readable storage medium |
WO2022120567A1 (en) * | 2020-12-08 | 2022-06-16 | 深圳先进技术研究院 | Automatic calibration system based on visual guidance |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101814186A (en) * | 2010-02-04 | 2010-08-25 | 上海交通大学 | Method utilizing curve-fitting to calibrate radial distortion of camera |
CN109859272B (en) * | 2018-12-18 | 2023-05-19 | 像工场(深圳)科技有限公司 | Automatic focusing binocular camera calibration method and device |
CN109816733B (en) * | 2019-01-14 | 2023-08-18 | 京东方科技集团股份有限公司 | Camera parameter initialization method and device, camera parameter calibration method and device and image acquisition system |
CN111243035B (en) * | 2020-04-29 | 2020-08-14 | 成都纵横自动化技术股份有限公司 | Camera calibration method and device, electronic equipment and computer-readable storage medium |
CN112489141B (en) * | 2020-12-21 | 2024-01-30 | 像工场(深圳)科技有限公司 | Production line calibration method and device for single-board single-image strip relay lens of vehicle-mounted camera |
CN113963065A (en) * | 2021-10-19 | 2022-01-21 | 杭州蓝芯科技有限公司 | Lens internal reference calibration method and device based on external reference known and electronic equipment |
CN115018920B (en) * | 2022-04-21 | 2024-10-29 | 成都数字天空科技有限公司 | Camera array calibration method and device, electronic equipment and storage medium |
2022-07-20: application CN202210852920.0A filed in China; granted as patent CN115170674B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN115170674A (en) | 2022-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110413812B (en) | Neural network model training method and device, electronic equipment and storage medium | |
CN113177888A (en) | Hyper-resolution restoration network model generation method, image hyper-resolution restoration method and device | |
CN113327318B (en) | Image display method, image display device, electronic equipment and computer readable medium | |
CN112330788A (en) | Image processing method, image processing device, readable medium and electronic equipment | |
CN112418249A (en) | Mask image generation method and device, electronic equipment and computer readable medium | |
CN110705536A (en) | Chinese character recognition error correction method and device, computer readable medium and electronic equipment | |
CN113191257B (en) | Order of strokes detection method and device and electronic equipment | |
CN111915532B (en) | Image tracking method and device, electronic equipment and computer readable medium | |
CN111338827B (en) | Method and device for pasting form data and electronic equipment | |
CN115170674B (en) | Camera principal point calibration method, device, equipment and medium based on single image | |
CN114821540B (en) | Parking space detection method and device, electronic equipment and computer readable medium | |
CN116309137A (en) | Multi-view image deblurring method, device and system and electronic medium | |
CN110209851B (en) | Model training method and device, electronic equipment and storage medium | |
CN111680754B (en) | Image classification method, device, electronic equipment and computer readable storage medium | |
CN114419298A (en) | Virtual object generation method, device, equipment and storage medium | |
CN114399627A (en) | Image annotation method and device, electronic equipment and computer readable medium | |
CN110189279B (en) | Model training method and device, electronic equipment and storage medium | |
CN110348374B (en) | Vehicle detection method and device, electronic equipment and storage medium | |
CN112233207A (en) | Image processing method, device, equipment and computer readable medium | |
CN113066166A (en) | Image processing method and device and electronic equipment | |
CN110825480A (en) | Picture display method and device, electronic equipment and computer readable storage medium | |
CN116630436B (en) | Camera external parameter correction method, camera external parameter correction device, electronic equipment and computer readable medium | |
CN112215774B (en) | Model training and image defogging methods, apparatus, devices and computer readable media | |
CN115796637B (en) | Information processing method, device, equipment and medium based on angle steel tower material | |
CN111797932B (en) | Image classification method, apparatus, device and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CP03 | Change of name, title or address | |
Address after: 201, 202, 301, No. 56-4 Fenghuang South Road, Huadu District, Guangzhou City, Guangdong Province, 510806 Patentee after: Heduo Technology (Guangzhou) Co.,Ltd. Address before: 100099 101-15, 3rd floor, building 9, yard 55, zique Road, Haidian District, Beijing Patentee before: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd. |