CN117710488A - Camera internal parameter calibration method, device, computer equipment and storage medium - Google Patents
- Publication number: CN117710488A
- Application number: CN202410067702.5A
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The application relates to a camera internal parameter calibration method and apparatus, computer equipment, and a storage medium. The method comprises the following steps: acquiring a calibration video captured by a camera to be calibrated, the calibration video being shot while the calibration pattern is held at different positions and different angles; extracting a plurality of calibration images from the calibration video and detecting the corner points in each calibration image; clustering all the corner points in each calibration image to obtain a plurality of target clusters; and selecting a preset number of corner points from each target cluster and calibrating the internal parameters of the camera to be calibrated based on the selected corner points. The method ensures that the same number of corner points is selected from each target cluster, so the selected corner points are uniformly distributed, and the accuracy of the calibration result is therefore ensured when the internal parameters of the camera to be calibrated are calibrated based on the selected corner points.
Description
Technical Field
The present application relates to the field of computer vision, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for calibrating internal parameters of a camera.
Background
Camera internal parameters (intrinsics) describe the internal properties of the camera, including the focal length, the principal point (optical center) coordinates, and the distortion coefficients. The camera intrinsics are the basis for correcting the distortion of images taken by the camera; in particular, in advanced driver assistance system (Advanced Driver Assistance System, ADAS) applications, the accuracy of the camera intrinsics has a great influence on the safety of the end application and on the user experience.
In the related art, camera intrinsics are calibrated as follows: a calibration operator collects a certain number of images containing calibration patterns, a calibration algorithm identifies all trusted corner points on all the images, and the algorithm finally converges to a set of parameters that minimizes the error between the 2D positions obtained by mapping the actual 3D positions of the calibration pattern into the image through the intrinsics and the positions detected on the actually collected 2D images.
However, the current intrinsic calibration process must ensure the diversity of the calibration data and the size and position distribution of the calibration patterns in the picture; the operation is cumbersome and the workload of the calibration process is large, so the calibration efficiency is low.
Disclosure of Invention
Based on this, it is necessary to provide a camera intrinsic calibration method, apparatus, computer device, computer-readable storage medium, and computer program product that solve the technical problem of the low calibration efficiency of existing intrinsic calibration methods.
In a first aspect, the present application provides a method for calibrating an internal reference of a camera. The method comprises the following steps:
acquiring a calibration video acquired by a camera to be calibrated; the calibration video is a video shot when the calibration pattern is positioned at different positions and different angles;
extracting a plurality of calibration images from the calibration video, and detecting corner points in each calibration image;
clustering all the corner points in each calibration image to obtain a plurality of target clusters;
and respectively selecting a preset number of corner points from each target cluster, and calibrating the internal parameters of the camera to be calibrated based on the selected corner points.
In one embodiment, the clustering of all the corner points in each calibration image to obtain a plurality of target clusters includes:
determining, for each calibration image, at least one feature among: the distance and pose information of the calibration pattern contained in the calibration image relative to the camera coordinate system corresponding to the camera to be calibrated, and the center point coordinates of the calibration pattern contained in the calibration image; the center point coordinates are the coordinates of the center point of the calibration pattern in the pixel coordinate system corresponding to the calibration image;
and clustering all the corner points in each calibration image according to the at least one feature among the distance, the pose information, and the center point coordinates corresponding to each calibration image, to obtain a plurality of target clusters.
In one embodiment, the method further comprises:
when clustering is carried out on all the corner points in each calibration image according to the distance corresponding to each calibration image, a plurality of preset target distances are taken as clustering centers, and clustering is carried out on all the corner points in each calibration image according to the distance corresponding to each calibration image, so that a plurality of target clustering clusters are obtained.
In one embodiment, the method further comprises:
when clustering all the corner points in each calibration image according to the center point coordinates corresponding to each calibration image, uniformly selecting a plurality of target pixel points from each calibration image according to the center point coordinates corresponding to each calibration image;
and clustering all the corner points in each calibration image by taking each target pixel point as a clustering center to obtain a plurality of target clusters.
In one embodiment, the method further comprises:
when clustering all the corner points in each calibration image according to the pose information corresponding to each calibration image, taking a plurality of preset target poses as cluster centers and clustering all the corner points in each calibration image according to the pose information corresponding to each calibration image, to obtain a plurality of target clusters.
In one embodiment, the method further comprises:
when clustering all the corner points in each calibration image according to the distance, the pose information, and the center point coordinates corresponding to each calibration image, clustering all the corner points in each calibration image according to the distance, to obtain a plurality of first clusters;
clustering the corner points in each first cluster according to the center point coordinates, to obtain a plurality of second clusters;
and clustering the corner points in each second cluster according to the pose information, to obtain a plurality of target clusters.
In one embodiment, the method further comprises:
in the case that any target cluster has no corner points or its number of corner points is smaller than a threshold, performing supplementary video shooting based on the information of that target cluster;
calibrating the intrinsics of the camera to be calibrated according to the supplementary video and the calibration video;
wherein the information of the target cluster comprises at least one of the distance, the pose information, and the center point coordinates of the calibration pattern of the corner points in the target cluster relative to the camera coordinate system.
In a second aspect, the present application further provides a camera internal parameter calibration device. The device comprises:
the acquisition module is used for acquiring calibration videos acquired by the camera to be calibrated; the calibration video is a video shot when the calibration pattern is positioned at different positions and different angles;
the detection module is used for extracting a plurality of calibration images from the calibration video and detecting corner points in each calibration image;
the clustering module is used for carrying out clustering processing on all the corner points in each calibration image to obtain a plurality of target clustering clusters;
and the calibration module is used for respectively selecting a preset number of corner points from each target cluster, and calibrating the internal parameters of the camera to be calibrated based on the selected corner points.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a calibration video acquired by a camera to be calibrated; the calibration video is a video shot when the calibration pattern is positioned at different positions and different angles;
extracting a plurality of calibration images from the calibration video, and detecting corner points in each calibration image;
clustering all the corner points in each calibration image to obtain a plurality of target clusters;
and respectively selecting a preset number of corner points from each target cluster, and calibrating the internal parameters of the camera to be calibrated based on the selected corner points.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a calibration video acquired by a camera to be calibrated; the calibration video is a video shot when the calibration pattern is positioned at different positions and different angles;
extracting a plurality of calibration images from the calibration video, and detecting corner points in each calibration image;
clustering all the corner points in each calibration image to obtain a plurality of target clusters;
and respectively selecting a preset number of corner points from each target cluster, and calibrating the internal parameters of the camera to be calibrated based on the selected corner points.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a calibration video acquired by a camera to be calibrated; the calibration video is a video shot when the calibration pattern is positioned at different positions and different angles;
extracting a plurality of calibration images from the calibration video, and detecting corner points in each calibration image;
clustering all the corner points in each calibration image to obtain a plurality of target clusters;
and respectively selecting a preset number of corner points from each target cluster, and calibrating the internal parameters of the camera to be calibrated based on the selected corner points.
According to the camera intrinsic calibration method, apparatus, computer device, storage medium, and computer program product, a calibration video is collected by the camera to be calibrated, and a plurality of calibration images is then extracted automatically from the calibration video instead of collecting images at designated positions and designated angles, which reduces the workload of the calibration operator. After the corner points in each calibration image are detected, all the corner points in each calibration image are clustered to obtain a plurality of target clusters, and a preset number of corner points is selected from each target cluster. The number of corner points selected from each target cluster is therefore the same, ensuring a uniform distribution of the selected corner points, so the accuracy of the calibration result is ensured when the intrinsics of the camera to be calibrated are calibrated based on the selected corner points.
Drawings
FIG. 1 is a schematic view of an embodiment in which corner points are uniformly and non-uniformly distributed;
FIG. 2 is a flow chart of a method for calibrating camera parameters according to an embodiment;
FIG. 3 is a schematic diagram of clustering corner points based on the distance between calibration patterns included in calibration images and a camera coordinate system corresponding to a camera to be calibrated in one embodiment;
FIG. 4 is a schematic diagram of clustering corner points based on center point coordinates of calibration patterns included in a calibration image in one embodiment;
FIG. 5 is a schematic diagram of clustering corner points based on pose information of calibration patterns included in calibration images relative to a camera coordinate system corresponding to a camera to be calibrated in one embodiment;
FIG. 6 is a flowchart of a method for calibrating camera parameters according to another embodiment;
FIG. 7 is a block diagram of a camera internal parameter calibration device in one embodiment;
fig. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein.
It will be appreciated that intrinsic calibration requires the calibration operator to collect a certain number of images containing the calibration pattern; the calibration algorithm identifies all trusted corner points on all the images (n in number) and finally converges to a set of parameters (the intrinsic matrix K and the distortion coefficients coef) that minimizes the reprojection error

E = Σ_{i=1}^{n} || proj(K, coef, P_i) - P'_i ||^2

where P_i is the actual 3D position of the i-th corner of the calibration pattern, proj(K, coef, P_i) = p_i is its 2D position mapped into the image through the intrinsics, and P'_i is the corresponding position detected on the actually collected 2D image.
From the above formula it can be seen that the K and coef that minimize the projection error of the n corner points are the final output of the calibration algorithm, so the distribution of the n corner points in the image has a large influence on the final values of K and coef.
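The reprojection-error objective above can be sketched in a few lines. This is a minimal illustration, assuming a pinhole model with two radial distortion coefficients; the helper names `project` and `reprojection_error` are hypothetical and not taken from the patent:

```python
import numpy as np

def project(K, coef, P):
    # Pinhole projection of 3D points P (N, 3), given in camera coordinates,
    # to pixel positions, with a simple radial distortion model coef = (k1, k2).
    x = P[:, 0] / P[:, 2]
    y = P[:, 1] / P[:, 2]
    r2 = x * x + y * y
    k1, k2 = coef
    d = 1.0 + k1 * r2 + k2 * r2 * r2
    u = K[0, 0] * x * d + K[0, 2]
    v = K[1, 1] * y * d + K[1, 2]
    return np.stack([u, v], axis=1)

def reprojection_error(K, coef, P, P_obs):
    # E: sum of squared pixel errors between projected corners proj(K, coef, P_i)
    # and the corner positions P'_i detected in the collected 2D image.
    diff = project(K, coef, P) - P_obs
    return float(np.sum(diff * diff))
```

A calibration algorithm searches over K and coef to minimize this E across all n detected corners.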
For example, as shown in fig. 1, a schematic view of uniformly and non-uniformly distributed corner points. The corner points in diagram (a) of fig. 1 are ideally uniformly distributed, so the calibrated intrinsics are applicable to all positions of the image; provided the uniform distribution also covers different distances, the intrinsics are likewise adapted to different distances, and the overall performance of the calibrated intrinsics in the final application stage will be better. The corner points in diagram (b) of fig. 1, in contrast, are unevenly distributed: the resulting intrinsics perform well in the corner-gathering area on the right and at that specific distance, but their accuracy drops greatly in other areas of the image and at other distances. Such intrinsics lead to unstable performance in the application phase and are therefore not usable.
Therefore, in order to solve the above problems, the application provides a camera intrinsic calibration method that automatically balances the viewing angles. The calibration operator does not need to perform complex operations but only needs to shoot a video of the calibration pattern with the camera (covering the common calibration positions); a program automatically screens the images usable for intrinsic calibration and finally calibrates with a traditional calibration method. In addition, by configuring different screening strategies, the method can satisfy intrinsic calibration in different ADAS application scenarios. Compared with the traditional calibration method, the method reduces the operating difficulty for the calibration operator, guarantees a certain intrinsic calibration precision, and does not add any hardware cost.
Referring to fig. 2, a flowchart of a method for calibrating camera parameters is shown in an embodiment. The embodiment is described by taking the application of the method to a server as an example, and comprises the following steps:
step S210, obtaining a calibration video acquired by a camera to be calibrated; the calibration video is a video shot when the calibration pattern is located at different positions and different angles.
Specifically, when the camera to be calibrated collects the calibration video, the range [d_min, d_max] of the distance between the calibration pattern and the camera to be calibrated can be determined, and shooting positions are selected within this range, e.g., d_0, d_1, ..., d_{n-1}. At each fixed distance, video is collected by the camera to be calibrated while the calibration pattern is translated, so that the collected calibration video covers different areas. Meanwhile, while translating the calibration pattern at each fixed distance, the pose of the calibration pattern can be continuously adjusted, so that the collected calibration video contains the calibration pattern at different positions and different angles. The calibration video collected by the camera to be calibrated can then be sent to the server by wired or wireless communication, and the server processes the calibration video to complete the calibration of the camera intrinsics.
In this step, the individually collected calibration images of the traditional method are replaced by a calibration video, and the calibration images can be screened automatically by the server; images therefore no longer need to be shot at specific positions and specific angles, the distribution requirements over different positions and different angles no longer need to be considered manually, and the workload of the calibration operator is reduced.
Step S220, extracting a plurality of calibration images from the calibration video, and detecting corner points in each calibration image.
Here, corner points are points of significant variation in the camera image, typically intersections of edges or places where the texture changes strongly. In camera intrinsic calibration, corner points are used for camera calibration and correction to determine the intrinsics of the camera.
In a specific implementation, because the frame rate of the calibration video is high, adjacent frames are highly similar. The collected calibration video can therefore be subjected to frame extraction, and a subset of the video frames can be extracted as calibration images, improving the calibration efficiency of the camera to be calibrated. Frames can be extracted at fixed intervals.
After the calibration images are extracted, the corner points in each calibration image can be detected. During corner detection, a frame-skipping method can be used to accelerate the process: corners are detected only every several frames, e.g., every 30 or 60 frames, and the skipped frames do not undergo corner detection, improving the detection efficiency.
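The fixed-interval frame extraction described above can be sketched as follows. The function names are hypothetical; OpenCV's `cv2.VideoCapture` is shown as one common way to read the video (imported lazily so the index helper stays dependency-free), and `cv2.findChessboardCorners` is mentioned as one typical corner detector, not necessarily the one the patent uses:

```python
def sampled_indices(n_frames, interval):
    # Fixed-interval frame extraction: adjacent frames of a high-frame-rate
    # calibration video are near-duplicates, so keep every `interval`-th frame.
    return list(range(0, n_frames, interval))

def extract_calibration_frames(video_path, interval=30):
    # Sketch using OpenCV: frames at the sampled indices become calibration
    # images; corner detection (e.g. cv2.findChessboardCorners) runs only on
    # these, so the skipped frames cost no detection time.
    import cv2  # lazy import: only needed when actually reading a video
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % interval == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```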
And step S230, clustering all the corner points in each calibration image to obtain a plurality of target clusters.
In a specific implementation, after the corner points of each calibration image are determined, the distance and pose information of the calibration pattern contained in each calibration image relative to the camera coordinate system corresponding to the camera to be calibrated, and the center point coordinates of the calibration pattern contained in the calibration image, can be determined respectively. All the corner points in each calibration image are then clustered using at least one of the distance, the pose information, and the center point coordinates as the clustering feature, obtaining a plurality of target clusters.
And step S240, respectively selecting a preset number of corner points from each target cluster, and calibrating the internal parameters of the camera to be calibrated based on the selected corner points.
In a specific implementation, a preset number of corner points can be selected from each target cluster, so that the number of corner points selected from each target cluster is the same and the selected corner points are uniformly distributed. The unselected corner points of the calibration patterns are discarded, and the camera intrinsics are calibrated based on the remaining corner points, so the intrinsics of the camera to be calibrated are calibrated from uniformly distributed corner points.
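A minimal sketch of this per-cluster selection step. The name `select_uniform` is hypothetical, and random sampling without replacement is an assumed selection rule; the patent only requires that the same preset number be taken from every cluster (clusters smaller than the threshold instead trigger supplementary shooting, per the embodiment above):

```python
import random

def select_uniform(clusters, k, seed=0):
    # From each target cluster pick exactly k corner points (sampled without
    # replacement when the cluster is larger than k). Undersized clusters are
    # kept whole here; in the patent they trigger supplementary video capture.
    rng = random.Random(seed)
    selected = []
    for pts in clusters:
        if len(pts) <= k:
            selected.extend(pts)
        else:
            selected.extend(rng.sample(pts, k))
    return selected
```

Because every sufficiently large cluster contributes exactly k corners, the corner set fed to the calibration step is balanced across the clustered dimensions.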
According to this camera intrinsic calibration method, a calibration video is collected by the camera to be calibrated and a plurality of calibration images is then extracted from it automatically, instead of collecting images at designated positions and designated angles, which reduces the workload of the calibration operator. After the corner points in each calibration image are detected, all the corner points are clustered into a plurality of target clusters and a preset number of corner points is selected from each cluster, so the number of corner points selected from each target cluster is the same and the selected corner points are uniformly distributed; the accuracy of the calibration result is therefore ensured when the intrinsics of the camera to be calibrated are calibrated based on the selected corner points.
In an exemplary embodiment, the step S230 performs clustering processing on all the corner points in each calibration image to obtain a plurality of target clusters, which specifically includes:
step S231, determining, for each calibration image, distance and posture information of a calibration pattern included in the calibration image relative to a camera coordinate system corresponding to a camera to be calibrated, and at least one feature of center point coordinates of the calibration pattern included in the calibration image; the center point coordinates are coordinates of the center point of the calibration pattern under a pixel coordinate system corresponding to the calibration image.
And S232, clustering all the corner points in each calibration image according to at least one feature of the distance, the gesture information and the center point coordinates corresponding to each calibration image to obtain a plurality of target clusters.
In a specific implementation, the basis for clustering all the corner points in each calibration image may be: the distance and posture information of the calibration pattern contained in the calibration image relative to the camera coordinate system corresponding to the camera to be calibrated, and the center point coordinate of the calibration pattern contained in the calibration image. For each calibration image, it can be determined that: distance, attitude information and center point coordinates.
The distance of each calibration image corresponding to the camera coordinate system can be determined according to the distance d when the calibration video is acquired. The center-point coordinates (center-u, center-v) corresponding to each calibration image can be obtained according to the corner sets in the calibration image. The pose information represents information of three directions of roll (roll), pitch (pitch) and yaw (yaw) of the calibration pattern relative to the camera coordinate system, and can be determined based on the principle of internal reference calibration.
Clustering all the corner points in each calibration image can be based on one or more of: the distance and pose information of the calibration pattern contained in the calibration image relative to the camera coordinate system corresponding to the camera to be calibrated, and the center point coordinates of the calibration pattern contained in the calibration image.
When clustering with two or more features among the distance, the pose information, and the center point coordinates, hierarchical clustering is required: all the corner points are first clustered by one feature, obtaining a plurality of first clusters; within each first cluster, clustering is then performed with a second feature; and so on, until a plurality of target clusters is obtained.
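The hierarchical clustering just described can be sketched generically as repeated grouping, one feature per level. The function `hierarchical_clusters` and the dict-based grouping are illustrative assumptions; each `key` function quantizes one feature of a corner (distance bin, center-point cell, or pose bin), in the order the embodiment prescribes:

```python
def hierarchical_clusters(corners, key_funcs):
    # corners: list of per-corner feature records (dicts here, for illustration).
    # key_funcs: ordered feature-quantization functions; the first splits all
    # corners into first clusters, the second splits each first cluster into
    # second clusters, and so on, yielding the final target clusters.
    clusters = [corners]
    for key in key_funcs:
        next_level = []
        for cluster in clusters:
            groups = {}
            for c in cluster:
                groups.setdefault(key(c), []).append(c)
            next_level.extend(groups.values())
        clusters = next_level
    return clusters
```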
In this embodiment, by clustering all the corner points in each calibration image through one or more of the distance and pose information of the calibration pattern contained in each calibration image relative to the camera coordinate system corresponding to the camera to be calibrated and the center point coordinates of the calibration pattern contained in the calibration image, the subsequent selection of corner points can be performed over multiple dimensions, ensuring the uniformity of the selected corner points in those dimensions and thus improving the accuracy of the subsequent calibration of the camera intrinsics.
In an exemplary embodiment, the step S232 further includes: when clustering is carried out on all the corner points in each calibration image according to the distance corresponding to each calibration image, clustering is carried out on all the corner points in each calibration image according to the distance corresponding to each calibration image by taking a plurality of preset target distances as clustering centers, and a plurality of target clusters are obtained.
In a specific implementation, the clustering of all the corner points in each calibration image can be based only on the distance between the calibration pattern contained in the calibration image and the camera coordinate system corresponding to the camera to be calibrated. Specifically, a plurality of preset target distances can serve as cluster centers; for example, the distances d_0, d_1, ..., d_{n-1} at which the calibration video was collected can be used directly as cluster centers. After the distance between the calibration pattern contained in each calibration image and the camera coordinate system corresponding to the camera to be calibrated is determined, each corner point is clustered according to the distance corresponding to its calibration image.
Referring to fig. 3, a schematic diagram of clustering corner points based on the distance between the calibration patterns contained in the calibration images and the camera coordinate system corresponding to the camera to be calibrated is shown in one embodiment. When the shooting distances d_0, d_1, ..., d_{n-1} of the calibration video are used as cluster centers, the calibration images are clustered to these centers according to their corresponding distances. For example, for the cluster center at distance d_0, the calibration images whose corresponding distance lies near d_0 are determined first, and those images are assigned to the cluster center d_0; similarly, the calibration images whose corresponding distance lies near d_1 are clustered to the cluster center d_1; and so on, every calibration image is clustered to one of the centers d_0, d_1, ..., d_{n-1}, realizing the clustering of the corner points of each calibration image and obtaining N target clusters.
In this embodiment, all the corner points in each calibration image are clustered according to the distance between the calibration pattern contained in each calibration image and the camera coordinate system corresponding to the camera to be calibrated, so the subsequent corner selection can take the distance dimension of the corner distribution into account, ensuring the uniformity of the distances between the selected corner points and the camera coordinate system.
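Assigning each calibration image (and hence all its corner points) to the nearest preset target distance reduces to a one-dimensional nearest-center step. The function name below is hypothetical:

```python
def assign_to_distance_center(image_distances, centers):
    # Each preset shooting distance d_0..d_{n-1} acts as a fixed cluster
    # center; every calibration image joins the cluster whose center is
    # nearest to that image's pattern-to-camera distance.
    return [min(range(len(centers)), key=lambda i: abs(d - centers[i]))
            for d in image_distances]
```

Because the centers are fixed in advance, no iterative center update (as in k-means) is needed; a single nearest-center pass yields the N target clusters.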
In an exemplary embodiment, the step S232 further includes: when clustering all the corner points in each calibration image according to the center point coordinates corresponding to each calibration image, uniformly selecting a plurality of target pixel points from each calibration image according to the center point coordinates corresponding to each calibration image; and clustering all the corner points in each calibration image with each target pixel point as a clustering center, to obtain a plurality of target clusters.
In a specific implementation, the clustering of all the corner points in each calibration image can also be realized based solely on the distribution of the corner points within the calibration image. Specifically, according to the center point coordinates of the calibration pattern contained in each calibration image, a plurality of target pixel points are uniformly selected from each calibration image as clustering centers, and all the corner points in each calibration image are clustered with each target pixel point as a clustering center, yielding a plurality of target clusters.
More specifically, for each calibration image, the calibration image may be uniformly divided into a plurality of regions according to the center point coordinates corresponding to the calibration image, with the center pixel point of each region serving as a target pixel point. For example, after the calibration image is divided into 4, 9 or 16 equal regions, the center pixel point of each region is used as a clustering center and the corner points are clustered accordingly. The manner of selecting target pixel points and their number are the same for every calibration image, i.e., each calibration image is divided in the same way, so that the corner points clustered to the target pixel point at the same position across all calibration images can be taken as one cluster.
Referring to fig. 4, a schematic diagram of clustering corner points based on the center point coordinates of the calibration pattern contained in the calibration image is shown in an embodiment. Taking a 9-fold division as an example: for each calibration image, according to the center point coordinates of the calibration pattern contained in it, the image is divided into 9 equal regions, the center pixel point of each region is used as a clustering center, and the corner points in the image are clustered to those center pixel points, so that 9 clusters are obtained per image. Then, the corner points clustered to the same one of the 9 regions across all calibration images are gathered together, yielding 9 target clusters.
In this embodiment, a plurality of target pixel points uniformly selected from the calibration images are used as clustering centers, and all the corner points in each calibration image are clustered, so that corner points can subsequently be selected along the dimension of their distribution within the calibration images, ensuring that the selected corner points are uniformly distributed across the images.
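A minimal sketch of the region-based clustering, under the assumption that pixel coordinates (u, v) run from the top-left of the image (the function name and grid parameterization are illustrative):

```python
def cluster_corners_by_region(corners_per_image, image_size, grid=(3, 3)):
    """Divide every calibration image into grid[0] x grid[1] equal regions
    (3 x 3 gives the 9-fold division of the example) and gather the corners
    falling into the same region position across all images into one cluster."""
    width, height = image_size
    rows, cols = grid
    clusters = {(r, c): [] for r in range(rows) for c in range(cols)}
    for corners in corners_per_image:
        for u, v in corners:
            # region index of this corner; clamp to handle the far edge
            c = min(int(u * cols / width), cols - 1)
            r = min(int(v * rows / height), rows - 1)
            clusters[(r, c)].append((u, v))
    return clusters
```

For a 90 x 90 image divided 3 x 3, corners at (5, 5), (45, 45) and (85, 85) land in the top-left, middle and bottom-right clusters respectively.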
In an exemplary embodiment, the step S232 further includes: when clustering all the corner points in each calibration image according to the pose information corresponding to each calibration image, clustering all the corner points in each calibration image with a plurality of preset target poses as clustering centers, according to the pose information corresponding to each calibration image, to obtain a plurality of target clusters.
In a specific implementation, the clustering of all the corner points in each calibration image can be realized based solely on the pose of the calibration pattern contained in the calibration image relative to the camera coordinate system corresponding to the camera to be calibrated. Specifically, a plurality of target poses can be preset as clustering centers; after the pose information of the calibration pattern contained in each calibration image relative to the camera coordinate system corresponding to the camera to be calibrated is determined, each corner point is clustered according to the pose information corresponding to its calibration image.
For example, 7 target poses may be selected as clustering centers: [0, 0, 0], [-π/4, 0, 0], [π/4, 0, 0], [0, -π/4, 0], [0, π/4, 0], [0, 0, -π/4] and [0, 0, π/4]. When clustering, the corner points in each calibration image are clustered into a pose region according to the pose information corresponding to the image, so that 7 target clusters are obtained. For example, for the clustering center at target pose [0, 0, 0], the calibration images whose corresponding pose information lies near [0, 0, 0] are determined, and the corner points in those images are clustered to [0, 0, 0] to obtain 1 target cluster; similarly, the corner points in the calibration images whose pose information lies near [-π/4, 0, 0] are clustered to [-π/4, 0, 0], and so on, until all the corner points in every calibration image have been clustered to one of the 7 centers, yielding 7 target clusters. Referring to fig. 5, a schematic diagram of clustering corner points based on the pose information of the calibration pattern contained in each calibration image relative to the camera coordinate system corresponding to the camera to be calibrated is shown in an embodiment. Fig. 5 depicts a two-dimensional plane covering only the pitch and yaw dimensions: five regions are shown, namely a square in the middle, an inverted trapezoid at the top, an upright trapezoid at the bottom, and a rotated trapezoid on each of the left and right sides; together with the two regions of the roll dimension that are not shown, there are 7 regions in total.
In this embodiment, all the corner points in each calibration image are clustered according to the pose information of the calibration pattern contained in each calibration image relative to the camera coordinate system corresponding to the camera to be calibrated, so that corner points can subsequently be selected along the pose dimension, ensuring that the angles of the selected corner points relative to the camera coordinate system are uniformly spread.
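A hedged sketch of the pose-based clustering, using the 7 target poses named in the text and nearest-neighbor assignment in (roll, pitch, yaw) space (the distance metric is an assumption of this example):

```python
import numpy as np

# The 7 preset target poses from the text: identity plus +/- pi/4 about each axis.
TARGET_POSES = np.array([
    [0, 0, 0],
    [-np.pi / 4, 0, 0], [np.pi / 4, 0, 0],
    [0, -np.pi / 4, 0], [0, np.pi / 4, 0],
    [0, 0, -np.pi / 4], [0, 0, np.pi / 4],
])

def cluster_corners_by_pose(corners_per_image, poses):
    """Cluster all corners of each image to the target pose nearest
    (in Euclidean roll/pitch/yaw space) to that image's pose information."""
    clusters = {i: [] for i in range(len(TARGET_POSES))}
    for corners, pose in zip(corners_per_image, poses):
        nearest = int(np.argmin(
            np.linalg.norm(TARGET_POSES - np.asarray(pose, dtype=float), axis=1)))
        clusters[nearest].extend(map(tuple, corners))
    return clusters
```

An image with pose (0.01, 0, 0) is near the identity and clusters to center 0, while an image with pose (-0.8, 0, 0) is near [-π/4, 0, 0] and clusters to center 1.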
In an exemplary embodiment, the step S232 further includes: when clustering is carried out on all the corner points in each calibration image according to the distance, the gesture information and the center point coordinates corresponding to each calibration image, clustering is carried out on all the corner points in each calibration image according to the distance, and a plurality of first clustering clusters are obtained; clustering the corner points in each first cluster according to the coordinates of the central points to obtain a plurality of second clusters; and clustering the corner points in each second cluster according to the attitude information to obtain a plurality of target clusters.
In the specific implementation, when clustering is performed on all the corner points in each calibration image according to the distance, the gesture information and the central point coordinate corresponding to each calibration image, hierarchical clustering can be performed on all the corner points in each calibration image according to the distance, the central point coordinate and the gesture information sequentially, so that a plurality of target cluster clusters are obtained.
More specifically, as shown in fig. 3, after the preset target distances d0, d1, ..., dn-1 are determined as clustering centers, all the corner points of all the calibration images can be clustered to those centers according to the distance corresponding to each image, yielding N first clusters. For the corner points of each first cluster, as shown in fig. 4, clustering is performed according to the uniformly selected target pixel points; with M selected target pixel points, N×M second clusters are obtained. Further, as shown in fig. 5, 7 target poses may be determined, and within each second cluster the corner points are clustered based on the pose information with the 7 target poses as 7 clustering centers, yielding N×M×7 target clusters.
When the distance, pose information and center point coordinates corresponding to each calibration image are used together as clustering features, the order of hierarchical clustering can be adjusted: clustering may proceed by distance, then center point coordinates, then pose information, or by center point coordinates, then distance, then pose information, and so on. The 3 features admit 6 orderings in total, any one of which may be chosen for hierarchical clustering.
In this embodiment, the distance and pose information of the calibration pattern contained in each calibration image relative to the camera coordinate system corresponding to the camera to be calibrated, together with the center point coordinates of the calibration pattern, are used as clustering features to hierarchically cluster all the corner points. Uniformity of distribution is thereby considered in multiple dimensions at once and view-angle balance is achieved, so that the corner points selected subsequently are uniformly distributed and the camera internal parameters calibrated from them are more accurate.
In an exemplary embodiment, the method further includes: when any one of the target clusters contains no corner points, or fewer corner points than a threshold, performing supplementary video shooting based on the information of that target cluster; and calibrating the internal parameters of the camera to be calibrated according to the supplementary video and the calibration video.
The information of any one target cluster comprises at least one of the distance, the pose information and the center point coordinates of the calibration pattern of the corner points in the target cluster relative to the camera coordinate system.
In a specific implementation, after each target cluster is determined and before the preset number of corner points is selected, the clusters are checked. If every target cluster contains at least the threshold number of corner points, i.e., the corner points in each target cluster meet the requirement, the corner points can be selected directly and used to calibrate the camera internal parameters. If instead some target cluster contains no corner points or fewer corner points than the threshold, the collected calibration images are not comprehensive enough; the information of that target cluster is then recorded, including the distance, pose information and center point coordinates of the calibration pattern of its corner points relative to the camera coordinate system corresponding to the camera to be calibrated. Supplementary video shooting is performed according to the information of the target cluster, and the internal parameters of the camera to be calibrated are calibrated according to the supplementary video together with the originally acquired calibration video.
In this embodiment, the number of corner points in each clustering center and each target cluster is checked to determine whether the feature distribution of each clustering center is reasonable and whether each category holds enough samples. When the requirement is not met, supplementary video shooting is performed and the camera internal parameters are calibrated from the supplementary video together with the calibration video, ensuring that the corner points used for internal parameter calibration are uniformly distributed.
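The adequacy check can be sketched as follows (the function name and the key layout are assumptions): it reports every expected cluster that is empty or below the threshold, and each reported key identifies the distance/region/pose combination for which supplementary shooting is still needed.

```python
def clusters_needing_recapture(clusters, expected_keys, threshold):
    """Return the keys of every expected target cluster that is empty or
    holds fewer corner points than the threshold; an empty result means
    the corner points in each target cluster meet the requirement."""
    return [k for k in expected_keys
            if len(clusters.get(k, [])) < threshold]
```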
In one embodiment, to facilitate understanding of embodiments of the present application by those skilled in the art, the following description is given with reference to a specific example and the drawings. Referring to fig. 6, a flow chart of a camera internal parameter calibration method is shown, comprising the following steps:
(1) And shooting the placing process of the calibration patterns at different positions and different angles through the camera to be calibrated to obtain a calibration video.
(2) And performing frame extraction processing on the calibration video to obtain a plurality of calibration images. A series of calibration images obtained by frame extraction can be taken as an image sequence and marked as image_list.
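One simple frame-extraction policy (a sketch; the even-spacing choice is an assumption, and in practice the frames themselves would be read with a video library such as OpenCV's `cv2.VideoCapture`) is to compute evenly spaced frame indices:

```python
def frame_indices(total_frames, num_images):
    """Evenly spaced indices of the frames to keep when extracting
    num_images calibration images from a video of total_frames frames."""
    if num_images <= 1:
        return [0]
    step = (total_frames - 1) / (num_images - 1)
    return [int(round(i * step)) for i in range(num_images)]
```

The frames at the returned indices form the image sequence image_list of the text.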
(3) And detecting the corner points of each calibration image, and determining the clustering characteristics of each calibration image.
Wherein the clustering features include: the distance and pose information of the calibration pattern contained in the calibration image relative to the camera coordinate system corresponding to the camera to be calibrated, and the center point coordinates of the calibration pattern contained in the calibration image. The distance may be denoted d, the pose information (roll, pitch, yaw), and the center point coordinates (center_u, center_v); each image then yields the clustering feature [center_u, center_v, Z, roll, pitch, yaw], where Z serves as the distance feature.
After the corner points of each calibration image are detected, the corner point set markers_list of each calibration image can be recorded; its structure can be expressed as: { "image_name": { "corner": [ (x0, y0), (x1, y1), ..., (xn-1, yn-1) ], "transform": [X, Y, Z, roll, pitch, yaw] } }.
Wherein (x0, y0), (x1, y1), ..., (xn-1, yn-1) each represent a corner point in the set of corner points.
Wherein X, Y, Z represent the distances of the calibration pattern along the three coordinate axes of the camera coordinate system and together describe the position of the calibration pattern under the camera coordinate system.
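A sketch of building one markers_list entry and deriving the per-image clustering feature from it (the structure follows the text; taking the pattern center (center_u, center_v) as the mean of the detected corners is an assumption of this example, and the transform would in practice come from a pose solver such as `cv2.solvePnP`, not shown here):

```python
def make_marker_record(image_name, corner_points, transform):
    """One markers_list entry: the corner set plus the pattern's
    (X, Y, Z, roll, pitch, yaw) in the camera coordinate system."""
    return {image_name: {"corner": list(corner_points),
                         "transform": list(transform)}}

def clustering_feature(record, image_name):
    """Per-image clustering feature [center_u, center_v, Z, roll, pitch, yaw]."""
    entry = record[image_name]
    us = [p[0] for p in entry["corner"]]
    vs = [p[1] for p in entry["corner"]]
    _x, _y, z, roll, pitch, yaw = entry["transform"]
    return [sum(us) / len(us), sum(vs) / len(vs), z, roll, pitch, yaw]
```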
(4) And carrying out hierarchical clustering on the corner points of each calibration image according to each clustering characteristic.
Specifically, clustering processing can be performed on all the corner points in each calibration image according to the distance to obtain a plurality of first clustering clusters.
And then, clustering the corner points in each first cluster according to the coordinates of the central points to obtain a plurality of second clusters.
And finally, clustering the corner points in each second cluster according to the attitude information to obtain a plurality of target clusters.
(5) And selecting corner points based on the principle of view angle equalization.
(6) If the selection result meets the requirement, the internal parameters of the camera to be calibrated can be calibrated through a conventional internal parameter calibration algorithm. The selection result meets the requirement when the number of corner points in every target cluster is greater than or equal to the threshold; a preset number of corner points is then selected from each target cluster, and the internal parameters of the camera to be calibrated are calibrated based on the selected corner points, where the preset number is not greater than the threshold.
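The requirement check and view-angle-balanced selection of step (6) can be sketched as follows (names and the random-draw policy are assumptions of this example); the selected corners would then feed a standard calibration routine such as OpenCV's `cv2.calibrateCamera`:

```python
import random

def select_corners(clusters, preset_number, threshold, seed=0):
    """If every target cluster holds at least `threshold` corner points,
    draw `preset_number` (<= threshold) corners from each cluster;
    otherwise return None so supplementary shooting can be triggered."""
    assert preset_number <= threshold
    if any(len(pts) < threshold for pts in clusters.values()):
        return None  # selection result does not meet the requirement
    rng = random.Random(seed)
    return {k: rng.sample(pts, preset_number) for k, pts in clusters.items()}
```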
(7) If the selection result does not meet the requirement, supplementary video shooting is performed. That is, when a cluster exists that contains no corner points or fewer corner points than the threshold, the information of that cluster is recorded, supplementary video shooting is carried out, and the process of calibrating the internal parameters of the camera to be calibrated is performed again according to the supplementary video and the calibration video acquired in step (1).
In the camera internal parameter calibration method provided by this embodiment, the camera to be calibrated collects a calibration video, and a plurality of calibration images are then extracted from it automatically, instead of collecting images at specified positions and angles, which reduces the workload of calibration personnel. After the corner points in each calibration image are detected, all the corner points are hierarchically clustered with the distance and pose information of the calibration pattern relative to the camera coordinate system corresponding to the camera to be calibrated, together with the center point coordinates of the calibration pattern, as clustering features. Uniformity of distribution is thereby considered in multiple dimensions and view-angle balance is achieved, so that the corner points selected subsequently are uniformly distributed and the accuracy of the calibration result is ensured when the internal parameters of the camera to be calibrated are calibrated based on the selected corner points.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the steps are not strictly limited to that order and may be executed in other orders. Moreover, at least some of the steps in the flowcharts described above may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a camera internal parameter calibration device for implementing the camera internal parameter calibration method described above. The implementation of the solution provided by the device is similar to that of the method described above, so for the specific limitations in the embodiments of the camera internal parameter calibration device provided below, reference may be made to the limitations of the camera internal parameter calibration method above, which are not repeated here.
In one embodiment, as shown in fig. 7, there is provided a camera internal parameter calibration apparatus, including: an acquisition module 710, a detection module 720, a clustering module 730, and a calibration module 740, wherein:
the acquisition module 710 is configured to acquire a calibration video acquired by a camera to be calibrated; the calibration video is a video shot when the calibration pattern is positioned at different positions and different angles;
the detection module 720 is used for extracting a plurality of calibration images from the calibration video and detecting corner points in each calibration image;
the clustering module 730 is configured to perform clustering processing on all the corner points in each calibration image to obtain multiple target clusters;
the calibration module 740 is configured to select a preset number of corner points from each target cluster, and calibrate an internal reference of the camera to be calibrated based on the selected corner points.
In one embodiment, the clustering module 730 is further configured to determine, for each calibration image, distance and pose information of a calibration pattern included in the calibration image relative to a camera coordinate system corresponding to a camera to be calibrated, and at least one feature of a center point coordinate of the calibration pattern included in the calibration image; the center point coordinates are coordinates of the center point of the calibration pattern under a pixel coordinate system corresponding to the calibration image; and clustering all the corner points in each calibration image according to at least one feature of the distance, the gesture information and the center point coordinates corresponding to each calibration image to obtain a plurality of target cluster clusters.
In one embodiment, the clustering module 730 is further configured to, when performing clustering processing on all the corner points in each calibration image according to the distance corresponding to each calibration image, perform clustering processing on all the corner points in each calibration image according to the distance corresponding to each calibration image by using a preset plurality of target distances as a clustering center, so as to obtain a plurality of target clusters.
In one embodiment, the clustering module 730 is further configured to uniformly select a plurality of target pixel points from each calibration image according to the center point coordinates corresponding to each calibration image when performing clustering processing on all the corner points in each calibration image according to the center point coordinates corresponding to each calibration image; and clustering all the corner points in each calibration image by taking each target pixel point as a clustering center to obtain a plurality of target clustering clusters.
In one embodiment, the clustering module 730 is further configured to, when performing clustering processing on all the corner points in each calibration image according to the pose information corresponding to each calibration image, perform clustering processing on all the corner points in each calibration image according to the pose information corresponding to each calibration image by using a plurality of preset target poses as clustering centers, so as to obtain a plurality of target clusters.
In one embodiment, the clustering module 730 is further configured to, when performing clustering processing on all the corner points in each calibration image according to the distance, the gesture information and the center point coordinates corresponding to each calibration image, perform clustering processing on all the corner points in each calibration image according to the distance, so as to obtain a plurality of first cluster clusters; clustering the corner points in each first cluster according to the coordinates of the central points to obtain a plurality of second clusters; and clustering the corner points in each second cluster according to the attitude information to obtain a plurality of target clusters.
In one embodiment, the calibration module 740 is further configured to perform, when any one of the target clusters contains no corner points or fewer corner points than a threshold, supplementary video shooting based on the information of that target cluster; and to calibrate the internal parameters of the camera to be calibrated according to the supplementary video and the calibration video; the information of any one target cluster comprises at least one of the distance, the pose information and the center point coordinates of the calibration pattern of the corner points in the target cluster relative to the camera coordinate system.
The modules in the camera internal parameter calibration device can be realized in whole or in part by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data in the camera internal parameter calibration process. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a camera internal parameter calibration method.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.
Claims (10)
1. A method for calibrating internal parameters of a camera, the method comprising:
acquiring a calibration video acquired by a camera to be calibrated; the calibration video is a video shot when the calibration pattern is positioned at different positions and different angles;
extracting a plurality of calibration images from the calibration video, and detecting corner points in each calibration image;
clustering all the corner points in each calibration image to obtain a plurality of target clusters;
And respectively selecting a preset number of corner points from each target cluster, and calibrating the internal parameters of the camera to be calibrated based on the selected corner points.
2. The method of claim 1, wherein the clustering all the corner points in each calibration image to obtain a plurality of target clusters, further comprises:
determining distance and posture information of a calibration pattern contained in each calibration image relative to a camera coordinate system corresponding to the camera to be calibrated and at least one feature of center point coordinates of the calibration pattern contained in the calibration image; the center point coordinates are coordinates of the center point of the calibration pattern under a pixel coordinate system corresponding to the calibration image;
and clustering all the corner points in each calibration image according to at least one feature of the distance, the gesture information and the center point coordinates corresponding to each calibration image to obtain a plurality of target clusters.
3. The method according to claim 2, wherein the method further comprises:
when clustering is carried out on all the corner points in each calibration image according to the distance corresponding to each calibration image, a plurality of preset target distances are taken as clustering centers, and clustering is carried out on all the corner points in each calibration image according to the distance corresponding to each calibration image, so that a plurality of target clustering clusters are obtained.
4. The method according to claim 2, wherein the method further comprises:
when clustering is carried out on all the corner points in each calibration image according to the central point coordinates corresponding to each calibration image, uniformly selecting a plurality of target pixel points from each calibration image according to the central point coordinates corresponding to each calibration image;
and clustering all the corner points in each calibration image by taking each target pixel point as a clustering center to obtain a plurality of target clusters.
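One reading of claim 4 can be sketched as follows: target pixel points are picked uniformly over the image and used as fixed clustering centers, and each corner point (represented here by its pattern's center point coordinates) is assigned to the nearest center. The grid layout and nearest-center assignment are assumptions; the claim itself only requires uniform selection and center-based clustering.

```python
import math

def uniform_centers(width, height, nx, ny):
    """Pick nx*ny target pixel points spread uniformly over the image,
    to serve as fixed clustering centers (grid layout is an assumption)."""
    xs = [(i + 0.5) * width / nx for i in range(nx)]
    ys = [(j + 0.5) * height / ny for j in range(ny)]
    return [(x, y) for y in ys for x in xs]

def cluster_by_center(points, centers):
    """Assign each point to the nearest fixed center; returns one list
    of points per center (some lists may stay empty)."""
    clusters = [[] for _ in centers]
    for p in points:
        i = min(range(len(centers)),
                key=lambda c: math.dist(p, centers[c]))
        clusters[i].append(p)
    return clusters

centers = uniform_centers(640, 480, 2, 2)
groups = cluster_by_center([(100, 100), (500, 400), (170, 130)], centers)
```

Because the centers tile the image evenly, clusters that stay empty directly reveal image regions the calibration pattern never covered.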
5. The method according to claim 2, wherein the method further comprises:
when the clustering is performed on all the corner points in each calibration image according to the posture information corresponding to each calibration image, a plurality of preset target postures are used as clustering centers, and all the corner points in each calibration image are clustered according to the posture information corresponding to each calibration image, to obtain the plurality of target clusters.
6. The method according to claim 2, wherein the method further comprises:
when the clustering is performed on all the corner points in each calibration image according to the distance, the posture information and the center point coordinates corresponding to each calibration image, all the corner points in each calibration image are first clustered according to the distance, to obtain a plurality of first clusters;
the corner points in each first cluster are then clustered according to the center point coordinates, to obtain a plurality of second clusters;
and the corner points in each second cluster are clustered according to the posture information, to obtain the plurality of target clusters.
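The three-stage scheme of claim 6 (split by distance first, then by center point coordinates within each distance group, then by posture) can be sketched as nested grouping. Quantizing each feature into discrete buckets stands in for whatever clustering algorithm the method actually uses, and the record layout and bucket functions below are assumptions for illustration.

```python
def nested_clusters(corners, by_distance, by_center, by_pose):
    """Group corner records in three successive stages, per claim 6.
    Each `by_*` argument maps a record to a discrete bucket key;
    every group from one stage is split further at the next stage."""
    def split(groups, key):
        out = []
        for g in groups:
            buckets = {}
            for c in g:
                buckets.setdefault(key(c), []).append(c)
            out.extend(buckets.values())
        return out

    groups = [list(corners)]
    for key in (by_distance, by_center, by_pose):
        groups = split(groups, key)
    return groups

corners = [
    {"dist": 0.5, "cx": 100, "pose": 0},
    {"dist": 0.5, "cx": 100, "pose": 30},
    {"dist": 0.5, "cx": 500, "pose": 0},
    {"dist": 2.0, "cx": 100, "pose": 0},
]
groups = nested_clusters(
    corners,
    by_distance=lambda c: round(c["dist"]),  # coarse distance bucket
    by_center=lambda c: c["cx"] // 300,      # left / right half of image
    by_pose=lambda c: c["pose"] // 45,       # 45-degree posture bins
)
```

Only records agreeing on all three features end up in the same final cluster, which is exactly the effect of clustering the second-stage clusters once more by posture.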
7. The method according to claim 1, wherein the method further comprises:
in the case that any one of the target clusters contains no corner points, or the number of corner points in it is smaller than a threshold, capturing a supplementary video based on the information of that target cluster;
and calibrating the internal parameters of the camera to be calibrated according to the supplementary video and the calibration video;
wherein the information of the target cluster comprises at least one of: the distance of the calibration pattern of the corner points in the cluster relative to the camera coordinate system, the posture information, and the center point coordinates.
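The check in claim 7, identifying target clusters that are empty or under-populated so that a supplementary video can be requested for the corresponding distance, posture, or center-point configuration, might look like this minimal sketch. Keying each cluster by its defining information (here, a target distance) is an assumption about the data layout.

```python
def clusters_needing_recapture(clusters, threshold):
    """Return the info key of every target cluster whose corner count
    falls below `threshold`; the caller would then prompt the operator
    to capture a supplementary video at that configuration.
    `clusters` maps cluster info (e.g. a target distance in meters)
    to the list of corner points assigned to it."""
    return [info for info, corners in clusters.items()
            if len(corners) < threshold]

clusters = {0.5: [(1, 1), (2, 2)], 1.0: [], 2.0: [(3, 3)]}
needs = clusters_needing_recapture(clusters, threshold=2)
```

Reporting the cluster's own info (rather than just a count) is what makes the supplementary capture targeted: the operator is told which distance or posture is still missing.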
8. A camera internal parameter calibration device, the device comprising:
the acquisition module is used for acquiring calibration videos acquired by the camera to be calibrated; the calibration video is a video shot when the calibration pattern is positioned at different positions and different angles;
the detection module is used for extracting a plurality of calibration images from the calibration video and detecting corner points in each calibration image;
the clustering module is used for carrying out clustering processing on all the corner points in each calibration image to obtain a plurality of target clustering clusters;
and the calibration module is used for respectively selecting a preset number of corner points from each target cluster, and calibrating the internal parameters of the camera to be calibrated based on the selected corner points.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, carries out the steps of the camera intrinsic parameter calibration method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the camera internal parameter calibration method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410067702.5A CN117710488A (en) | 2024-01-17 | 2024-01-17 | Camera internal parameter calibration method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117710488A true CN117710488A (en) | 2024-03-15 |
Family
ID=90150073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410067702.5A Pending CN117710488A (en) | 2024-01-17 | 2024-01-17 | Camera internal parameter calibration method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117710488A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107680035A (en) * | 2017-09-29 | 2018-02-09 | 广东中星微电子有限公司 | Parameter calibration method and device, server, and readable storage medium |
CN108122259A (en) * | 2017-12-20 | 2018-06-05 | 厦门美图之家科技有限公司 | Binocular camera calibration method and device, electronic device, and readable storage medium |
CN112907675A (en) * | 2019-11-19 | 2021-06-04 | 浙江商汤科技开发有限公司 | Calibration method, device, system, equipment and storage medium of image acquisition equipment |
US20220270294A1 (en) * | 2019-11-19 | 2022-08-25 | Zhejiang Sensetime Technology Development Co., Ltd. | Calibration methods, apparatuses, systems and devices for image acquisition device, and storage media |
CN114049401A (en) * | 2021-11-04 | 2022-02-15 | 苏州迪凯尔医疗科技有限公司 | Binocular camera calibration method, device, equipment and medium |
CN116563292A (en) * | 2023-07-11 | 2023-08-08 | 聚时科技(深圳)有限公司 | Measurement method, detection device, detection system, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111640181A (en) | Interactive video projection method, device, equipment and storage medium | |
CN107301402B (en) | Method, device, medium and equipment for determining key frame of real scene | |
US20190132584A1 (en) | Method and device for calibration | |
CN107240082B (en) | Splicing line optimization method and equipment | |
CN108257186B (en) | Method and device for determining calibration image, camera and storage medium | |
CN108038826B (en) | Method and device for correcting perspective deformed shelf image | |
CN111291768A (en) | Image feature matching method and device, equipment and storage medium | |
CN115526892B (en) | Image defect duplicate removal detection method and device based on three-dimensional reconstruction | |
WO2022267939A1 (en) | Image processing method and apparatus, and computer-readable storage medium | |
KR102421604B1 (en) | Image processing methods, devices and electronic devices | |
CN107067441B (en) | Camera calibration method and device | |
CN116012241A (en) | Image distortion correction method, apparatus, computer device, and storage medium | |
CN104268550B (en) | Feature extracting method and device | |
CN109857895B (en) | Stereo vision retrieval method and system based on multi-loop view convolutional neural network | |
CN113918744A (en) | Similar image retrieval method, similar image retrieval device, storage medium and computer program product | |
CN117058022A (en) | Depth image denoising method and device, computer equipment and storage medium | |
CN115457202B (en) | Method, device and storage medium for updating three-dimensional model | |
CN117710488A (en) | Camera internal parameter calibration method, device, computer equipment and storage medium | |
JP2023092446A (en) | Cargo counting method and apparatus, computer apparatus, and storage medium | |
CN115018932A (en) | Camera calibration method and device, electronic equipment and storage medium | |
CN114897990A (en) | Camera distortion calibration method and system based on neural network and storage medium | |
CN111382753A (en) | Light field semantic segmentation method and system, electronic terminal and storage medium | |
CN111862098A (en) | Individual matching method, device, equipment and medium based on light field semantics | |
CN116030450B (en) | Checkerboard corner recognition method, device, equipment and medium | |
CN115600620B (en) | Code scanning method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||