CN114612575A - Camera parameter calibration and three-dimensional data generation method and system - Google Patents


Info

Publication number
CN114612575A
Authority
CN
China
Prior art keywords
camera
image
camera parameters
feature points
data set
Prior art date
Legal status
Pending
Application number
CN202210278095.8A
Other languages
Chinese (zh)
Inventor
吴博剑
樊鲁斌
周昌
黄建强
Current Assignee
Hangzhou Alibaba Cloud Feitian Information Technology Co ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd filed Critical Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202210278095.8A
Publication of CN114612575A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 2207/10004: Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application disclose a method and system for camera parameter calibration and three-dimensional data generation. Camera parameters corresponding to each of a plurality of first view angles are determined from the first images and focal lengths captured by the camera at those view angles. Target image feature points having a matching relationship are then identified among the image feature points of a second image, captured at a second view angle, and the first images. Based on the coordinate information of these target feature points in the first and second images, a prediction data set is established for predicting the camera parameters at the second view angle from the camera parameters at the first view angles. Camera parameters at other view angles can thus be obtained from the prediction data set without any calibration object, enabling parameter prediction at continuous view angles within the acquisition range; the method is suitable for dome cameras with continuously varying focal length.

Description

Camera parameter calibration and three-dimensional data generation method and system
Technical Field
The present application relates to the technical field of data processing, and in particular to a camera parameter calibration method and apparatus, a three-dimensional data generation method and apparatus, a prediction data set generation method and apparatus, a road information analysis method and apparatus, an electronic device, a machine-readable medium, and a software product.
Background
As an efficient real-time data sensing device, the surveillance camera is widely used in municipal law enforcement and traffic scenarios to monitor pedestrians, vehicles, and the like in a target scene, helping law-enforcement personnel remotely obtain information about events and accidents and understand the real-time operating state of a city.
However, the information obtained from the two-dimensional image of a surveillance view is usually local: the two-dimensional information from a single camera position can only show data about people and vehicles in a limited area, and information is lost compared with a three-dimensional global view of the real scene. Back-projecting the two-dimensional information in the image into real three-dimensional space therefore enables refined, city-wide management and has important application value.
Disclosure of Invention
In view of the above, the present application provides a camera parameter calibration method and apparatus, a three-dimensional data generation method and apparatus, a prediction data set generation method and apparatus, a road information analysis method and apparatus, an electronic device, a machine-readable medium, and a software product that overcome, or at least partially solve, the above problems.
According to an aspect of the present application, there is provided a calibration method of camera parameters, including:
determining camera parameters of the camera at each of a plurality of first view angles based on first images acquired by the camera at the first view angles, wherein at least some of the first images correspond to different camera focal lengths;
determining target image feature points having a matching relationship among image feature points respectively corresponding to the first images and a second image acquired at a second view angle;
establishing, based on coordinate information of the target image feature points in the first and second images respectively, a prediction data set for predicting camera parameters at the second view angle from the camera parameters at the first view angles; and
obtaining the camera parameters at the second view angle according to the prediction data set.
Optionally, the camera includes a variable focal length dome camera, and the camera parameters include camera internal parameters and camera external parameters.
Optionally, the content range covered by the captured images exceeds a preset proportion of the acquisition range of the camera.
Optionally, determining the target image feature points having a matching relationship among the image feature points respectively corresponding to the second image acquired at the second view angle and at least one first image includes:
extracting the image feature points respectively corresponding to the first image and the second image; and
determining feature matching values among the image feature points, and determining the target image feature points having a matching relationship according to the feature matching values.
Optionally, establishing, based on the coordinate information of the target image feature points in the first and second images, the prediction data set for predicting the camera parameters at the second view angle from the camera parameters at the first view angle includes:
constructing a mapping function that converts the coordinate information of the target image feature points at the first view angle into coordinate information at the second view angle according to the camera parameters respectively corresponding to the first and second view angles; and
iteratively optimizing the mapping function, and using the optimized mapping function as the prediction data set for predicting the camera parameters at the second view angle.
Optionally, the method further includes:
determining initial values of the camera parameters of the camera at the second view angle according to the coordinate information of the target image feature points in the first and second images, and optimizing the prediction data set from these initial values.
Optionally, determining the initial values of the camera parameters at the second view angle according to the coordinate information of the target image feature points in the first and second images includes:
converting the coordinate information of the target image feature points in the first image into coordinate information in the world coordinate system according to the camera parameters of the camera at the first view angle;
constructing a relation function based on the coordinate information of the target image feature points in the world coordinate system, their coordinate information in the second image, and the initial values of the camera parameters at the second view angle; and
solving the relation function for the initial values of the camera parameters of the camera at the second view angle.
Optionally, the image feature points include at least one of corner feature points, SIFT feature points, and ORB feature points, and the prediction data set is optimized by using a minimum reprojection error algorithm.
According to another aspect of the present application, there is also provided a three-dimensional data generation method, including:
acquiring a prediction data set for predicting camera parameters at a second view angle from camera parameters at first view angles, wherein the prediction data set is established based on target image feature points having a matching relationship between a second image and first images acquired at a plurality of first view angles, the camera parameters respectively corresponding to the first view angles are determined based on the first images, and at least some of the first images correspond to different camera focal lengths;
obtaining the camera parameters at the second view angle according to the prediction data set; and
projecting a second image acquired at the second view angle into three-dimensional space based on the camera parameters at the second view angle to obtain three-dimensional data at the second view angle.
According to another aspect of the present application, there is also provided a method for generating a prediction data set, including:
determining camera parameters of the camera at each of a plurality of first view angles based on first images acquired by the camera at the first view angles, wherein at least some of the first images correspond to different camera focal lengths;
determining target image feature points having a matching relationship among image feature points respectively corresponding to the first images and a second image acquired at a second view angle; and
establishing, based on coordinate information of the target image feature points in the first and second images respectively, a prediction data set for predicting camera parameters at the second view angle from the camera parameters at the first view angles.
According to another aspect of the present application, there is also provided a road information analysis method applied to a variable focal length dome camera installed at a position near a road, the method including:
acquiring a prediction data set for predicting camera parameters at a second view angle from camera parameters at first view angles, wherein the prediction data set is established based on target image feature points having a matching relationship between a second image and first images acquired at a plurality of first view angles, the camera parameters respectively corresponding to the first view angles are determined based on the first images, and at least some of the first images correspond to different camera focal lengths;
obtaining the camera parameters at the second view angle according to the prediction data set;
projecting a second image acquired at the second view angle into three-dimensional space based on the camera parameters at the second view angle to obtain three-dimensional data at the second view angle; and
and analyzing the road information based on the three-dimensional data.
According to another aspect of the present application, there is also provided an electronic device, comprising: a processor; and
a memory having executable code stored thereon which, when executed, causes the processor to perform the method as described in any one of the above.
According to another aspect of the present application, there is also provided one or more machine-readable media having stored thereon executable code that, when executed by a processor, implements the method as described in any one of the above.
According to another aspect of the present application, there is also provided a software product comprising computer programs/instructions which, when executed, implement the method as described in any one of the above.
According to embodiments of the present application, camera parameters corresponding to each of a plurality of first view angles are determined from the first images and focal lengths captured by the camera at those view angles. Target image feature points having a matching relationship are then identified among the image feature points of a second image, captured at a second view angle, and the first images. Based on the coordinate information of these target feature points in the first and second images, a prediction data set is established for predicting the camera parameters at the second view angle from the camera parameters at the first view angles. Camera parameters at other view angles can thus be obtained from the prediction data set without any calibration object, enabling parameter prediction at continuous view angles within the acquisition range, which suits a dome camera with continuously varying focal length. If the content range covered by the captured images exceeds a certain proportion of the camera's acquisition range, for example if it completely covers the acquisition range of the dome camera, then parameter prediction at any view angle within the dome camera's range can be achieved.
The above description is only an overview of the technical solutions of the present application, and the present application may be implemented in accordance with the content of the description so as to make the technical means of the present application more clearly understood, and the detailed description of the present application will be given below in order to make the above and other objects, features, and advantages of the present application more clearly understood.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 illustrates an example of data interaction of the scheme of the present application;
fig. 2 is a flowchart illustrating a calibration method of camera parameters according to an embodiment of the present application;
FIG. 3 is a flow chart of a three-dimensional data generation method according to the second embodiment of the present application;
FIG. 4 is a flow chart of a method for generating a prediction data set according to the third embodiment of the present application;
fig. 5 is a flowchart illustrating a road information analysis method according to a fourth embodiment of the present application;
fig. 6 is a block diagram illustrating a calibration apparatus for camera parameters according to a fifth embodiment of the present application;
fig. 7 is a block diagram showing a three-dimensional data generating apparatus according to a sixth embodiment of the present application;
fig. 8 is a block diagram illustrating a device for generating a prediction data set according to a seventh embodiment of the present application;
fig. 9 is a block diagram showing a configuration of a road information analysis apparatus according to an eighth embodiment of the present application;
fig. 10 illustrates an exemplary system that can be used to implement various embodiments described in this disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In a road monitoring scene, the three-dimensional space to be covered is wide, while the visible range of a fixed-view bullet camera is limited and cannot cover a large target scene. A varifocal dome camera is therefore usually deployed at a traffic intersection or at a high position such as a rooftop, and large-scale scene information, far or near, is acquired by zooming and rotating the camera. However, continuous zooming and rotation directly change the camera parameters, so efficiently calibrating the parameters of a varifocal dome camera has practical application value.
Traditional camera calibration algorithms require a three-dimensional or planar calibration object of known real size: by establishing correspondences between points of known coordinates on the calibration object and their image points, the intrinsic and extrinsic parameters of the camera model are solved with an optimization algorithm. Calibration objects are difficult to manufacture and maintain, and placing them in a road monitoring scene is impractical, so such algorithms are not suitable for parameter calibration in a dome-camera monitoring scenario.
In particular, a zoom dome camera continuously rotates and changes focal length while capturing images. Rotation means that different view angles correspond to different camera coordinate systems, so the camera extrinsics relative to the world coordinate system change. Changes in focal length alter the camera intrinsics, so the camera parameters differ across view angles and must be re-determined for each view angle.
Embodiments of the present application provide a camera parameter calibration scheme and its use in specific applications such as three-dimensional data generation, prediction data set generation, and road analysis, described as follows.
Camera calibration determines the relative transformation between the three-dimensional geometric position of a point on an object's surface and its corresponding point in the image, so that a captured image can be projected into three-dimensional space according to this transformation, yielding three-dimensional data that describes the space. This relative transformation constitutes the camera parameters, which divide into camera intrinsics and camera extrinsics. The intrinsics mainly include the focal length, principal point position, skew coefficient, and distortion parameters; the distortion parameters can further include radial and tangential components. The extrinsics describe the camera's pose relative to the real world (the three-dimensional world coordinate system) and convert the world coordinate system into the camera coordinate system, typically in the form of a rotation matrix and a translation vector.
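As a concrete illustration of how the intrinsics and extrinsics work together, the following minimal sketch projects a world point to pixel coordinates. All numeric values (focal length, principal point, pose, and the world point) are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Intrinsic matrix K: focal lengths in pixels, principal point at the image center.
fx, fy = 1200.0, 1200.0
cx, cy = 960.0, 540.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsics: rotation R and translation t mapping world coordinates to camera coordinates.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

X_w = np.array([1.0, 0.5, 10.0])   # a point in the world coordinate system
X_c = R @ X_w + t                  # world -> camera coordinates
u, v, w = K @ X_c                  # camera -> homogeneous pixel coordinates
pixel = (u / w, v / w)             # perspective division
print(pixel)                       # -> (1040.0, 580.0)
```

Inverting this chain (pixel to ray, then ray to world) is exactly the back-projection the application uses to lift two-dimensional image content into three-dimensional space.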
First, images (denoted first images) are acquired by the camera at multiple view angles (denoted first view angles), where at least some of the first images correspond to different camera focal lengths: for example, all first images may have different focal lengths, or some may share a focal length while others differ. For each first view angle, the camera parameters at that view angle are determined; any existing applicable calibration scheme may be used, which is not limited here.
Further, the camera parameters at a second view angle are determined from the images captured at the first view angles and the camera parameters already determined there. First, target image feature points having a matching relationship are determined among the image feature points respectively corresponding to the second image, acquired at the second view angle, and the first images: image feature points are extracted from the first and second images, and for each pair of images to be matched, the target image feature points present in both images are determined. Such points indicate that the first and second images overlap in content at the positions of those feature points.
Feature-point matching with the image at the second view angle requires that the first and second view angles overlap in captured content. The plurality of first view angles may be chosen arbitrarily or preset. When selecting them, the combined coverage of the first view angles should be made as large as possible: for example, the 360-degree range can be divided into four regions with a first view angle in each, such as the four directions east, south, west, and north, so that the first view angles jointly cover the entire acquisition range of the dome camera. Of course, more first view angles can be selected to capture clearer and more complete content, while ensuring the image quality at each view angle. The second view angle is the view angle whose camera parameters are to be determined; it may be any view angle within the coverage of the first view angles, and when that coverage is large enough, it may be an arbitrary view angle.
After the target image feature points are determined, a prediction data set for predicting the camera parameters at the second view angle from those at the first view angles can be established based on the coordinate information of the target feature points in the first and second images, and the camera parameters at the second view angle obtained from it. Camera parameters at other view angles can thus be obtained without a calibration object, enabling parameter prediction at continuous view angles within the acquisition range.
If the content range covered by the captured images exceeds a certain proportion of the camera's acquisition range, camera parameters can be calibrated over a correspondingly larger range; for example, if the combined content acquired at the first view angles completely covers the dome camera's acquisition range, parameter prediction at any view angle within that range becomes possible.
In an optional embodiment, when determining the target image feature points having a matching relationship among the image feature points of the second image and at least one first image, the image feature points of the first and second images may be extracted, feature matching values between them computed, and the target image feature points determined according to those matching values.
Image feature points characterize the image along certain dimensions and may include, for example, at least one of corner feature points, SIFT (Scale-Invariant Feature Transform) feature points, and ORB (Oriented FAST and Rotated BRIEF) feature points. Corner feature points are points with specific coordinates and particular mathematical properties, such as a local maximum or minimum of gray level, or certain gradient features. SIFT feature points are local features that are invariant to rotation, scale, and brightness changes, and remain stable to some degree under view-angle changes, affine transformation, and noise. ORB feature points are locally invariant features selected by particular operations; for example, when the image is rotated and the coordinate system rotated accordingly, the points that remain unchanged are locally invariant. In practice, any other suitable feature points may be chosen as needed.
Feature matching values between feature points represent their similarity; feature points with a sufficiently high matching value can be regarded as having a matching relationship. Feature points can be represented as feature vectors, so computing matching values reduces to computing vector similarity, and pairs whose similarity satisfies a condition (for example, exceeds a threshold) are determined to be matched. This can be implemented with nearest-neighbor feature-vector matching.
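The nearest-neighbor matching just described can be sketched as follows. The toy descriptors, the helper name `match_features`, and the ratio threshold are illustrative assumptions; the ratio test (requiring the best match to be clearly better than the runner-up) is one common way to filter ambiguous matches.

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.8):
    """Nearest-neighbor descriptor matching with a ratio test (sketch).

    desc1, desc2: (N, D) arrays of feature descriptor vectors.
    Returns a list of (i, j) index pairs considered matches.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distances to all candidates
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the nearest neighbor is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# Toy data: rows 0 and 2 of desc1 have near-duplicates in desc2;
# row 1 is ambiguous and is filtered out by the ratio test.
desc1 = np.array([[1.0, 0.0], [-1.0, -1.0], [0.5, 0.5]])
desc2 = np.array([[0.98, 0.02], [0.51, 0.49], [5.0, 5.0]])
print(match_features(desc1, desc2))   # -> [(0, 0), (2, 1)]
```

For binary descriptors such as ORB, the Euclidean distance here would typically be replaced by Hamming distance, but the matching logic is the same.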
In an optional embodiment, when establishing the prediction data set for predicting the camera parameters at the second view angle from those at the first view angle, a mapping function may be constructed that converts the coordinate information of a target image feature point at the first view angle into coordinate information at the second view angle according to the camera parameters of the two view angles; that is, the function re-projects the feature point from the first view angle to the second view angle according to the camera parameters.
The camera parameters at the second view angle are then obtained by solving this mapping function. Since the function contains the unknown camera parameters at the second view angle and some unknown coefficients, it can be iteratively optimized, and the optimized mapping function is used as the prediction data set from which the camera parameters at the second view angle are obtained.
When optimizing the mapping function, the loss function can be defined from the difference between the second-view coordinates estimated by the mapping function and the actual coordinates of the target feature point in the second image; the optimization can use a minimum reprojection error algorithm.
An example of the solution process for the camera parameters is given as follows:
assume that the first set of images is Ii(i ═ 1, 2., N), where N is greater than or equal to 4, and the in-camera view angle at the first view angle is obtained by using a conventional calibration algorithmParameter matrix KiThe corresponding focal length is denoted as fiPrincipal point position piFixed in the center of the image with distortion parameter disti={ki1,ki2,pi1,pi2The rotation matrix and the translation vector of the camera external parameter are respectively expressed as RiAnd ti. The second image under the second visual angle is Ic(subscript of corresponding parameter;)cRepresentation) and represent the matched image feature point sets as F respectivelycAnd FiSuppose a second image feature point xc∈FcAnd a certain first view angle (preset bit) first image feature point xi∈FiAre matched with each other.
First according to the distortion parameter distiX is to beiDistortion removal (distortion internal reference) to obtain
Figure BDA0003556907710000091
Then, under a preset camera coordinate system, rotating the light direction corresponding to the pixel after distortion removal to the current visual angle, specifically to
Figure BDA0003556907710000092
D is to becNormalizing in the z direction to obtain the standard coordinates of the current visual angle camera coordinate system
Figure BDA0003556907710000093
(2) For the current position, the internal reference matrix K under the second visual anglecThe distortion parameter dist ═ k1, k2, p1, p2, and will be
Figure BDA0003556907710000094
Forward projected onto the imaging plane at the second viewing angle as follows:
Figure BDA0003556907710000095
wherein
Figure BDA0003556907710000096
k1And k2Representing the radial distortion parameter, p1And p2Representing the tangential distortion parameter. Projecting the distorted coordinates to an imaging plane to obtain a pixel coordinate estimated value:
Figure BDA0003556907710000097
from this a mapping function is constructed:
Figure BDA0003556907710000098
The objective is to minimize this reprojection error: the above function is optimized using the Levenberg-Marquardt algorithm to solve for the camera parameters at the second view angle.
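As a minimal sketch of this optimization (names, synthetic data, and the exact parameterization are assumptions for illustration, not the patented implementation), the mapping function can be minimized with SciPy's `least_squares`, whose `method="lm"` runs Levenberg-Marquardt. Here R_c is parameterized as a Rodrigues vector and the principal point is held fixed at the image center, as in the text:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def distort_project(xn, f, k1, k2, p1, p2, cx, cy):
    """Forward-project z-normalized coordinates with radial (k1, k2) and
    tangential (p1, p2) distortion, then map to pixels with focal length f."""
    x, y = xn[:, 0], xn[:, 1]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([f * xd + cx, f * yd + cy], axis=1)


def residuals(theta, rays_world, observed, cx, cy):
    """theta = [rx, ry, rz, f, k1, k2, p1, p2]: second-view rotation R_c
    (Rodrigues vector), focal length, and distortion parameters."""
    R_c = Rotation.from_rotvec(theta[:3]).as_matrix()
    d = rays_world @ R_c.T            # d_c = R_c * d_world for every ray
    xn = d[:, :2] / d[:, 2:3]         # z-normalization
    return (distort_project(xn, *theta[3:], cx, cy) - observed).ravel()


# Synthetic ground truth standing in for rays recovered from the preset views.
rng = np.random.default_rng(0)
cx, cy = 960.0, 540.0
theta_true = np.array([0.10, -0.05, 0.02, 1200.0, -0.10, 0.01, 0.001, -0.002])
rays_world = rng.normal(size=(200, 3))
rays_world[:, 2] = np.abs(rays_world[:, 2]) + 3.0    # keep rays in front
R_true = Rotation.from_rotvec(theta_true[:3]).as_matrix()
d_true = rays_world @ R_true.T
observed = distort_project(d_true[:, :2] / d_true[:, 2:3],
                           *theta_true[3:], cx, cy)

# Levenberg-Marquardt from a rough initial guess of rotation and focal length.
theta0 = np.array([0.05, 0.0, 0.0, 1100.0, 0.0, 0.0, 0.0, 0.0])
sol = least_squares(residuals, theta0, args=(rays_world, observed, cx, cy),
                    method="lm")
```

On this synthetic data the optimizer recovers the generating rotation, focal length, and distortion parameters to within numerical precision.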
In the above process of optimizing the prediction data set, an initial value of the second camera parameters may be determined and the optimization of the prediction data set performed from that initial value; the initial value of the camera parameters of the camera at the second view angle may be determined from the coordinate information of the target image feature points in the first image and the second image.

It is assumed that the imaging center of the dome camera remains fixed during rotation (there is only rotation in the camera coordinate system) and that the zoom center is the center point of the image pixel plane. The camera parameters of the first view angle and the second view angle can then be related through the world coordinate system: for example, according to the camera parameters of the camera at the first view angle, the coordinate information of the target image feature points in the first image is converted into coordinate information in the world coordinate system; a relation function is then constructed from the coordinates of the target image feature points in the world coordinate system, their coordinates in the second image, and the initial values of the camera parameters of the camera at the second view angle; and the initial values of the camera parameters at the second view angle are solved from this relation function.
An example of an implementation of the initial values is given as follows:
assume that the rotation of the second view angle with respect to the world coordinate system is R_avg (initialized to the identity matrix) and that the focal length is f_avg (initialized to the smaller dimension of the image resolution). Further assume that a second-view image feature point x_c ∈ F_c and a first-view image feature point x_i ∈ F_i match each other. In the camera coordinate system of the first view angle, x_i corresponds to the ray direction

d_i = norm(K_i^(-1) · x̂_i)

where norm(·) represents vector normalization. This direction is rotated to the reference position of the world coordinate system,

d_i = R_i^T · d_i

(the equal sign represents an assignment operation), and then rotated from the reference position to the second view angle according to the rotation of the second view angle relative to the world coordinate system: d_c = R_avg · d_i. Using the focal length f_avg, the second-view image feature point can likewise be converted to a ray direction:

d̂_c = norm(K_avg^(-1) · x_c)

where K_avg is the intrinsic matrix formed from f_avg with the principal point at the image center.
In theory, d_c and d̂_c have the same direction, so the following function is constructed:

E(R_avg, f_avg) = Σ_(i=1..N) Σ_(j=1..M_i) (1 − ⟨d_c^(j), d̂_c^(j)⟩)

where ⟨·,·⟩ denotes the vector inner product, N denotes the number of first view angles (preset positions), and M_i is the number of matched feature point pairs between the first image I_i and the second image. Optimizing this function solves for the initial values of R_avg and f_avg.
The translation vector t_c at the second view angle may then be estimated. Since the imaging center C is fixed, t_i = −R_i · C for every preset position, i.e. C = −R_i^T · t_i, so t_c may be estimated by constructing the following function:

E(t_c) = Σ_(i=1..N) ‖ t_c − R_avg · R_i^T · t_i ‖²

minimizing which solves for t_c.
When the camera parameters at the second view angle are subsequently solved in combination with these initial values, R_c can be initialized from R_avg and refined to a more accurate value by the calibration, and the intrinsic parameter matrix K_c and the distortion parameters described above can be constructed from the estimated focal length f_avg.
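A sketch of this initialization under the stated assumptions (all names are illustrative; the closed-form Kabsch alignment is used here in place of iteratively optimizing the inner-product objective, which it minimizes exactly over rotations for a fixed f_avg):

```python
import numpy as np
from scipy.spatial.transform import Rotation


def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)


def init_rotation(d_world, d_second):
    """Closed-form rotation maximizing sum_j <d_second_j, R d_world_j>,
    i.e. minimizing sum_j (1 - <.,.>) for unit rays (Kabsch algorithm)."""
    H = d_world.T @ d_second          # 3x3 correlation of matched rays
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T


# Synthetic matched rays: d_c = R_true d_i for a known second-view rotation.
R_true = Rotation.from_rotvec([0.2, -0.1, 0.3]).as_matrix()
rng = np.random.default_rng(1)
d_i = normalize(rng.normal(size=(50, 3)))
d_c = d_i @ R_true.T
R_avg = init_rotation(d_i, d_c)

# Translation initialization: with a fixed imaging center C, t_i = -R_i C for
# every preset, so t_c = R_avg R_i^T t_i; averaging over presets adds robustness.
C = np.array([1.0, 2.0, 3.0])
R_presets = [Rotation.from_rotvec(v).as_matrix()
             for v in ([0.0, 0.0, 0.0], [0.0, 1.2, 0.0], [0.0, 2.4, 0.0])]
t_presets = [-R @ C for R in R_presets]
t_c = np.mean([R_avg @ R.T @ t for R, t in zip(R_presets, t_presets)], axis=0)
```

On this noise-free data the Kabsch solution reproduces the generating rotation exactly, and every preset yields the same t_c, so the average equals −R_true · C.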
It should be noted that the above scheme may be implemented inside the camera, or in a server or the cloud. It may correspondingly be implemented as a functional module in the form of an application program, a service, an instance, or software, a virtual machine (VM), or a container, or as a hardware device (such as a server or terminal device) or a hardware chip (such as a CPU, GPU, or FPGA) with an image-processing function. A computing platform can use its computing resources to provide some or all of the processing in prediction data set optimization, camera parameter calculation, and three-dimensional data generation; a camera or a requesting party can invoke this processing through a client or a provided interface, submit the data acquired by the camera, and receive the processing result fed back by the platform. Alternatively, the camera or the requesting party may perform the above processing with its own computing resources. The specific application architecture is set according to use and is not limited in this application.
The above scheme can be further applied to a three-dimensional data generation process, a prediction data set for predicting camera parameters at a second view angle based on camera parameters at a first view angle is obtained, the prediction data set is established based on target image feature points with matching relation in a second image and first images acquired at a plurality of first view angles, and the camera parameters respectively corresponding to each first view angle are determined based on the first images acquired respectively. Camera parameters at the second view angle may then be obtained from the prediction data set, such that a second image acquired at the second view angle may be projected into the three-dimensional space based on the camera parameters at the second view angle, to obtain three-dimensional data at the second view angle.
The method may also be provided as a method for generating a prediction data set, where camera parameters corresponding to a camera at a first view angle are determined based on a first image acquired by the camera at a plurality of first view angles, target image feature points having a matching relationship are determined in image feature points corresponding to a second image acquired by a second view angle and the first image, and a prediction data set for predicting the camera parameters at the second view angle based on the camera parameters at the first view angle is established based on coordinate information corresponding to the target image feature points in the first image and the second image.
The method can also be applied to a road analysis scenario, where the scheme is deployed on a zoom dome camera installed near a road. A prediction data set for predicting camera parameters at a second view angle based on camera parameters at a first view angle is acquired, the prediction data set being established based on target image feature points having a matching relationship in a second image and first images collected at a plurality of first view angles, with the camera parameters corresponding to each first view angle determined based on the respective first images. The camera parameters at the second view angle can then be obtained from the prediction data set, and the second image collected at the second view angle projected into three-dimensional space based on those parameters, obtaining three-dimensional data at the second view angle. Road information analysis can then be performed based on the three-dimensional data, for example road profile analysis, safety situation analysis, congestion analysis, and the like.
An example of a camera parameter calibration scheme of the present application is given with reference to fig. 1.
Firstly, a dome camera arranged on a road collects images at four view angles (east, south, west, and north) as first images, and camera parameters are calibrated at these four view angles. Then an image at an arbitrary view angle is collected as the second image, the feature matching relationship between the second image and the first images is calculated, and the matched pixel points between the images are determined. A mapping function is constructed based on the coordinate information of the matched pixel points in the two images; it converts the coordinate information of the target image feature points at the first view angle into coordinate information at the second view angle according to the camera parameters respectively corresponding to the first and second view angles. By optimizing the mapping function, a prediction data set for predicting camera parameters at the second view angle based on the camera parameters at the first view angles is obtained. The camera parameters at the second view angle are obtained from this prediction data set, and the image collected at the second view angle can then be projected into three-dimensional space to obtain three-dimensional data for subsequent road analysis.
Referring to fig. 2, a flowchart of a calibration method for camera parameters according to an embodiment of the present application is shown, where the method specifically includes the following steps:
step 101, determining, based on first images acquired by a camera at a plurality of first view angles, camera parameters respectively corresponding to the camera at the first view angles, wherein at least some of the first images correspond to different camera focal lengths;
step 102, determining target image feature points having a matching relationship in image feature points respectively corresponding to a second image acquired at a second view angle and the first images;
step 103, establishing, based on coordinate information of the target image feature points respectively corresponding to the first images and the second image, a prediction data set for predicting camera parameters at the second view angle based on the camera parameters at the first view angles;
and step 104, obtaining the camera parameters at the second view angle according to the prediction data set.
In an alternative embodiment, the camera comprises a variable focus dome camera, and the camera parameters comprise camera internal parameters and camera external parameters.
In an alternative embodiment, the image of the target position covers a content range exceeding a preset proportion of the acquisition range of the camera.
In an optional embodiment, determining target image feature points having a matching relationship among the image feature points respectively corresponding to a second image acquired at the second view angle and the at least one first image includes:
extracting image characteristic points corresponding to the first image and the second image respectively;
and determining feature matching values among the image feature points, and determining target image feature points with matching relations according to the feature matching values.
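A minimal sketch of this matching step (purely illustrative; in practice the descriptors would come from a feature extractor such as SIFT or ORB): the feature matching values here are squared descriptor distances, and a pair is kept as target image feature points only if it passes a nearest-neighbor ratio test and a mutual-consistency check. All names and the synthetic descriptors are assumptions.

```python
import numpy as np


def match_features(desc_a, desc_b, ratio=0.8):
    """Return index pairs (i, j) of mutually-nearest descriptors whose best
    squared distance beats ratio^2 times the second-best (Lowe ratio test)."""
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=-1)
    order = np.argsort(d2, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    passes_ratio = d2[rows, best] < ratio ** 2 * d2[rows, second]
    mutual = np.argmin(d2, axis=0)[best] == rows   # b's best must point back
    keep = passes_ratio & mutual
    return np.stack([rows[keep], best[keep]], axis=1)


# Synthetic descriptors: image B re-observes image A's features, permuted and
# with slight noise, mimicking matched feature points across two view angles.
rng = np.random.default_rng(2)
desc_a = rng.normal(size=(30, 32))
perm = rng.permutation(30)
desc_b = desc_a[perm] + 0.01 * rng.normal(size=(30, 32))
pairs = match_features(desc_a, desc_b)
```

Since desc_b[j] is a noisy copy of desc_a[perm[j]], every returned pair (i, j) should satisfy perm[j] = i.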
In an optional embodiment, the establishing, based on the coordinate information of the target image feature point corresponding to the first image and the second image, a prediction data set for predicting the camera parameter at the second view based on the camera parameter at the first view includes:
constructing a mapping function, wherein the mapping function is used for converting the coordinate information of the target image feature point under the first visual angle into the coordinate information under the second visual angle according to the camera parameters respectively corresponding to the first visual angle and the second visual angle;
and iteratively optimizing the mapping function, and using the optimized mapping function as a prediction data set for predicting the camera parameters under the second view angle.
In an optional embodiment, the method further comprises:
and determining initial values of camera parameters of the camera at a second view angle according to the coordinate information of the target image feature points in the first image and the second image, and optimizing the prediction data set according to the initial values.
In an optional embodiment, the determining the initial value of the camera parameter of the camera at the second view angle according to the coordinate information of the target image feature point in the first image and the second image includes:
converting the coordinate information of the target image feature point in the first image into coordinate information under a world coordinate system according to the camera parameter of the camera at the first view angle;
constructing a relation function based on coordinate information of the target image feature point in a world coordinate system, coordinate information in a second image and initial values of camera parameters of the camera at a second visual angle;
and solving the initial value of the camera parameter of the camera at the second visual angle according to the relation function.
In an optional embodiment, the image feature points include at least one of corner feature points, SIFT feature points, and ORB feature points, and the prediction data set is optimized by using a minimum reprojection error algorithm.
According to the embodiment of the application, camera parameters respectively corresponding to a camera at a plurality of first view angles are determined based on first images acquired by the camera at those view angles; target image feature points having a matching relationship are then determined among the image feature points respectively corresponding to a second image acquired at a second view angle and the first images; and a prediction data set for predicting camera parameters at the second view angle based on the camera parameters at the first view angles is established based on the coordinate information of the target image feature points in the first images and the second image. Camera parameters at other view angles can thus be obtained from the prediction data set without any calibration object, parameter prediction at continuous view angles within the acquisition range can be realized, and the method is suitable for a dome camera whose focal length changes continuously. If the content range covered by the images of the target position exceeds a certain proportion of the acquisition range of the camera, for example completely covers the acquisition range of the dome camera, parameter prediction for any view angle within the range of the dome camera can be realized.
Referring to fig. 3, a flowchart of a three-dimensional data generation method according to a second embodiment of the present application is shown, where the method specifically includes the following steps:
step 201, acquiring a prediction data set for predicting camera parameters under a second view angle based on camera parameters under a first view angle; the prediction data set is established based on target image feature points with matching relations in a second image and a first image acquired from a plurality of first visual angles, camera parameters respectively corresponding to the first visual angles are determined based on the first image, and at least part of the first images correspond to different camera focal lengths;
step 202, obtaining camera parameters under a second view angle according to the prediction data set;
step 203, projecting a second image acquired under a second viewing angle to a three-dimensional space based on the camera parameters under the second viewing angle, and obtaining three-dimensional data under the second viewing angle.
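One way to realize the projection of step 203 for a road scene (an assumption for illustration; the patent does not fix the scene geometry) is to cast a ray through each pixel with the solved second-view parameters K_c, R_c, t_c and intersect it with the ground plane z = 0 in world coordinates:

```python
import numpy as np


def backproject_to_ground(pixels, K, R, t, plane_z=0.0):
    """Intersect the viewing ray of each pixel (u, v) with the world plane
    z = plane_z. The camera center is C = -R^T t; rays are R^T K^-1 (u, v, 1)."""
    C = -R.T @ t
    uv1 = np.concatenate([pixels, np.ones((len(pixels), 1))], axis=1)
    rays = (R.T @ np.linalg.inv(K) @ uv1.T).T      # world-frame ray directions
    s = (plane_z - C[2]) / rays[:, 2]              # ray parameter at the plane
    return C[None, :] + s[:, None] * rays


# Round-trip check: a camera 5 m above the ground looking straight down.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])                     # 180-degree turn about x
C_true = np.array([0.0, 0.0, 5.0])
t = -R @ C_true
X = np.array([[2.0, -1.0, 0.0]])                   # a point on the road surface
x_cam = (R @ X.T).T + t                            # camera-frame coordinates
pix = (K @ (x_cam / x_cam[:, 2:3]).T).T[:, :2]     # perspective projection
X_rec = backproject_to_ground(pix, K, R, t)
```

Projecting a ground point into the image and back-projecting the resulting pixel recovers the original world coordinates, confirming the geometry.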
According to the embodiment of the application, camera parameters respectively corresponding to a camera at a plurality of first view angles are determined based on first images acquired by the camera at those view angles; target image feature points having a matching relationship are then determined among the image feature points respectively corresponding to a second image acquired at a second view angle and the first images; and a prediction data set for predicting camera parameters at the second view angle based on the camera parameters at the first view angles is established based on the coordinate information of the target image feature points in the first images and the second image. Camera parameters at other view angles can thus be obtained from the prediction data set without any calibration object, parameter prediction at continuous view angles within the acquisition range can be realized, and the method is suitable for a dome camera whose focal length changes continuously. If the content range covered by the images of the target position exceeds a certain proportion of the acquisition range of the camera, for example completely covers the acquisition range of the dome camera, parameter prediction for any view angle within the range of the dome camera can be realized.
Referring to fig. 4, a flowchart of a method for generating a prediction data set according to a third embodiment of the present application is shown, where the method specifically includes the following steps:
step 301, determining, based on first images acquired by a camera at a plurality of first viewing angles, camera parameters respectively corresponding to the camera at the first viewing angle, where at least some of the first images correspond to different camera focal lengths;
step 302, determining target image feature points with a matching relationship in image feature points respectively corresponding to a second image and a first image acquired from a second view angle;
step 303, establishing a prediction data set for predicting the camera parameter at the second view angle based on the camera parameter at the first view angle based on the coordinate information of the target image feature point corresponding to the first image and the second image respectively.
According to the embodiment of the application, camera parameters respectively corresponding to a camera at a plurality of first view angles are determined based on first images acquired by the camera at those view angles; target image feature points having a matching relationship are then determined among the image feature points respectively corresponding to a second image acquired at a second view angle and the first images; and a prediction data set for predicting camera parameters at the second view angle based on the camera parameters at the first view angles is established based on the coordinate information of the target image feature points in the first images and the second image. Camera parameters at other view angles can thus be obtained from the prediction data set without any calibration object, parameter prediction at continuous view angles within the acquisition range can be realized, and the method is suitable for a dome camera whose focal length changes continuously. If the content range covered by the images of the target position exceeds a certain proportion of the acquisition range of the camera, for example completely covers the acquisition range of the dome camera, parameter prediction for any view angle within the range of the dome camera can be realized.
Referring to fig. 5, a flowchart of a road information analysis method according to a fourth embodiment of the present disclosure is shown, where the method is applied to a zoom type dome camera installed at a position near a road, and the method may specifically include the following steps:
step 401, acquiring a prediction data set for predicting a camera parameter at a second view angle based on a camera parameter at a first view angle; the prediction data set is established based on target image feature points with matching relations in a second image and a first image acquired from a plurality of first visual angles, camera parameters respectively corresponding to the first visual angles are determined based on the first image, and at least part of the first images correspond to different camera focal lengths;
step 402, obtaining camera parameters under a second view angle according to the prediction data set;
step 403, projecting a second image acquired at a second view angle to a three-dimensional space based on the camera parameters at the second view angle to obtain three-dimensional data at the second view angle;
and step 404, analyzing road information based on the three-dimensional data.
According to the embodiment of the application, camera parameters respectively corresponding to a camera at a plurality of first view angles are determined based on first images acquired by the camera at those view angles; target image feature points having a matching relationship are then determined among the image feature points respectively corresponding to a second image acquired at a second view angle and the first images; and a prediction data set for predicting camera parameters at the second view angle based on the camera parameters at the first view angles is established based on the coordinate information of the target image feature points in the first images and the second image. Camera parameters at other view angles can thus be obtained from the prediction data set without any calibration object, parameter prediction at continuous view angles within the acquisition range can be realized, and the method is suitable for a dome camera whose focal length changes continuously. If the content range covered by the images of the target position exceeds a certain proportion of the acquisition range of the camera, for example completely covers the acquisition range of the dome camera, parameter prediction for any view angle within the range of the dome camera can be realized.
Referring to fig. 6, a block diagram of a calibration apparatus for camera parameters according to a fifth embodiment of the present application is shown, where the apparatus may specifically include:
the camera parameter determining module 501 is configured to determine, based on first images acquired by a camera at a plurality of first viewing angles, camera parameters respectively corresponding to the camera at the first viewing angles, where at least some of the first images correspond to different camera focal lengths;
a feature point matching module 502, configured to determine target image feature points having a matching relationship in image feature points corresponding to a second image and a first image acquired at a second view;
a set creating module 503, configured to create a prediction data set for predicting camera parameters at a second view angle based on camera parameters at a first view angle based on coordinate information of the target image feature point corresponding to the first image and the second image, respectively;
a parameter determining module 504, configured to obtain a camera parameter at the second view angle according to the prediction data set.
In an alternative embodiment, the camera comprises a variable focus dome camera, and the camera parameters comprise camera internal parameters and camera external parameters.
In an alternative embodiment, the image of the target position covers a content range exceeding a preset proportion of the acquisition range of the camera.
In an optional embodiment, the feature point matching module is specifically configured to extract image feature points corresponding to the first image and the second image respectively; and determining feature matching values among the image feature points, and determining target image feature points with matching relations according to the feature matching values.
In an optional embodiment, the set creating module is specifically configured to construct a mapping function, where the mapping function is configured to convert, according to camera parameters corresponding to a first view and a second view, coordinate information of a target image feature point in the first view into coordinate information in the second view; and iteratively optimizing the mapping function, and taking the mapping function obtained through optimization as a prediction data set for predicting the camera parameters under the second view angle.
In an optional embodiment, the apparatus further comprises:
and the initial value determining module is used for determining the initial value of the camera parameter of the camera at the second view angle according to the coordinate information of the target image feature point in the first image and the second image, and optimizing the prediction data set according to the initial value.
In an optional embodiment, the initial value determining module is specifically configured to convert, according to a camera parameter of the camera at the first view angle, coordinate information of the target image feature point in the first image to coordinate information in a world coordinate system; constructing a relation function based on coordinate information of the target image feature point in a world coordinate system, coordinate information in a second image and initial values of camera parameters of the camera at a second visual angle; and solving the initial value of the camera parameter of the camera at the second visual angle according to the relation function.
In an optional embodiment, the image feature points include at least one of corner feature points, SIFT feature points, and ORB feature points, and the prediction data set is optimized by using a minimum reprojection error algorithm.
According to the embodiment of the application, camera parameters respectively corresponding to a camera at a plurality of first view angles are determined based on first images acquired by the camera at those view angles; target image feature points having a matching relationship are then determined among the image feature points respectively corresponding to a second image acquired at a second view angle and the first images; and a prediction data set for predicting camera parameters at the second view angle based on the camera parameters at the first view angles is established based on the coordinate information of the target image feature points in the first images and the second image. Camera parameters at other view angles can thus be obtained from the prediction data set without any calibration object, parameter prediction at continuous view angles within the acquisition range can be realized, and the method is suitable for a dome camera whose focal length changes continuously. If the content range covered by the images of the target position exceeds a certain proportion of the acquisition range of the camera, for example completely covers the acquisition range of the dome camera, parameter prediction for any view angle within the range of the dome camera can be realized.
Referring to fig. 7, a block diagram of a three-dimensional data generating apparatus according to a sixth embodiment of the present application is shown, where the apparatus may specifically include:
a set acquiring module 601, configured to acquire a prediction data set for predicting a camera parameter at a second view based on a camera parameter at a first view; the prediction data set is established based on target image feature points with matching relations in a second image and a first image acquired from a plurality of first visual angles, camera parameters respectively corresponding to the first visual angles are determined based on the first image, and at least part of the first images correspond to different camera focal lengths;
a parameter determining module 602, configured to obtain a camera parameter under a second view according to the prediction data set;
the three-dimensional data projection module 603 is configured to project a second image acquired at a second viewing angle to the three-dimensional space based on the camera parameter at the second viewing angle, so as to obtain three-dimensional data at the second viewing angle.
According to the embodiment of the application, camera parameters respectively corresponding to a camera at a plurality of first view angles are determined based on first images acquired by the camera at those view angles; target image feature points having a matching relationship are then determined among the image feature points respectively corresponding to a second image acquired at a second view angle and the first images; and a prediction data set for predicting camera parameters at the second view angle based on the camera parameters at the first view angles is established based on the coordinate information of the target image feature points in the first images and the second image. Camera parameters at other view angles can thus be obtained from the prediction data set without any calibration object, parameter prediction at continuous view angles within the acquisition range can be realized, and the method is suitable for a dome camera whose focal length changes continuously. If the content range covered by the images of the target position exceeds a certain proportion of the acquisition range of the camera, for example completely covers the acquisition range of the dome camera, parameter prediction for any view angle within the range of the dome camera can be realized.
Referring to fig. 8, a block diagram of a device for generating a prediction data set according to a seventh embodiment of the present application is shown, where the device specifically includes:
a parameter determining module 701, configured to determine, based on first images acquired by a camera at multiple first viewing angles, camera parameters corresponding to the camera at the first viewing angles, respectively, where at least some of the first images correspond to different camera focal lengths;
a matching point determining module 702, configured to determine target image feature points having a matching relationship in image feature points corresponding to a second image and a first image acquired from a second view, respectively;
a set creating module 703, configured to create a prediction data set for predicting the camera parameter at the second view angle based on the camera parameter at the first view angle, based on the coordinate information of the target image feature point corresponding to the first image and the second image, respectively.
According to the embodiment of the application, camera parameters respectively corresponding to a camera at a plurality of first view angles are determined based on first images acquired by the camera at those view angles; target image feature points having a matching relationship are then determined among the image feature points respectively corresponding to a second image acquired at a second view angle and the first images; and a prediction data set for predicting camera parameters at the second view angle based on the camera parameters at the first view angles is established based on the coordinate information of the target image feature points in the first images and the second image. Camera parameters at other view angles can thus be obtained from the prediction data set without any calibration object, parameter prediction at continuous view angles within the acquisition range can be realized, and the method is suitable for a dome camera whose focal length changes continuously. If the content range covered by the images of the target position exceeds a certain proportion of the acquisition range of the camera, for example completely covers the acquisition range of the dome camera, parameter prediction for any view angle within the range of the dome camera can be realized.
Referring to fig. 9, a block diagram of a road information analysis apparatus according to an eighth embodiment of the present application is shown. The apparatus is applied to a variable-focus dome camera installed near a road and may specifically include:
a prediction data set obtaining module 801, configured to obtain a prediction data set for predicting camera parameters at a second viewing angle from camera parameters at first viewing angles, where the prediction data set is established based on target image feature points having a matching relationship between a second image and first images acquired at a plurality of first viewing angles, the camera parameters corresponding to each first viewing angle are determined from the first images, and at least some of the first images correspond to different camera focal lengths;
a parameter determining module 802, configured to obtain the camera parameters at the second viewing angle from the prediction data set;
a three-dimensional data prediction module 803, configured to project a second image acquired at the second viewing angle into three-dimensional space based on the camera parameters at the second viewing angle, so as to obtain three-dimensional data at the second viewing angle;
and a road analysis module 804, configured to analyze road information based on the three-dimensional data.
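Module 803's projection of the second image into three-dimensional space can be illustrated by back-projecting a pixel through the pinhole camera model onto a ground plane, a common simplification for road scenes. This is an assumed, minimal model of the projection step, not the patent's exact procedure:

```python
import numpy as np

def pixel_to_ground(uv, K, R, t, plane_z=0.0):
    """Back-project a pixel to the Z = plane_z ground plane.

    K: 3x3 camera intrinsics.
    R, t: world-to-camera rotation and translation, i.e. x_cam = R @ X + t.
    uv: pixel coordinates (u, v) in the second image.
    Returns the 3D world point where the viewing ray meets the plane.
    """
    # Viewing-ray direction expressed in world coordinates
    d = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    # Camera centre in world coordinates
    c = -R.T @ t
    # Intersect the ray c + s * d with the plane Z = plane_z
    s = (plane_z - c[2]) / d[2]
    return c + s * d
```

Applying this to every pixel of the second image (or to detected objects such as vehicles) yields the three-dimensional data on which module 804 performs road information analysis.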
According to this embodiment of the application, camera parameters corresponding to the camera at each of a plurality of first viewing angles are determined from first images acquired at those viewing angles. Target image feature points having a matching relationship are then determined among the image feature points of a second image, acquired at a second viewing angle, and of the first images. Based on the coordinate information of these target image feature points in the first and second images, a prediction data set is established for predicting the camera parameters at the second viewing angle from those at the first viewing angles. Camera parameters at other viewing angles can thus be obtained from the prediction data set without a calibration object, enabling parameter prediction at continuous viewing angles within the acquisition range, which suits a dome camera whose focal length varies continuously. If the content range covered by the images of the target position exceeds a preset proportion of the camera's acquisition range (for example, completely covering the acquisition range of the dome camera), parameters can be predicted for any viewing angle within the dome camera's range.
For the apparatus and system embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
Embodiments of the disclosure may be implemented as a system using any suitable hardware, firmware, software, or any combination thereof, in a desired configuration. Fig. 10 schematically illustrates an exemplary system (or apparatus) 900 that can be used to implement various embodiments described in this disclosure.
For one embodiment, fig. 10 illustrates an exemplary system 900 having one or more processors 902, a system control module (chipset) 904 coupled to at least one of the processor(s) 902, system memory 909 coupled to the system control module 904, non-volatile memory (NVM)/storage 908 coupled to the system control module 904, one or more input/output devices 910 coupled to the system control module 904, and a network interface 912 coupled to the system control module 904.
The processor 902 may include one or more single-core or multi-core processors, and the processor 902 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the system 900 can function as a browser as described in embodiments herein.
In some embodiments, system 900 may include one or more computer-readable media (e.g., system memory 909 or NVM/storage 908) having instructions, and one or more processors 902 coupled to the one or more computer-readable media and configured to execute the instructions to implement modules that perform the actions described in this disclosure.
For one embodiment, the system control module 904 may include any suitable interface controllers to provide any suitable interface to at least one of the processor(s) 902 and/or any suitable device or component in communication with the system control module 904.
The system control module 904 may include a memory controller module to provide an interface to the system memory 909. The memory controller module may be a hardware module, a software module, and/or a firmware module.
System memory 909 may be used, for example, to load and store data and/or instructions for system 900. For one embodiment, system memory 909 may comprise any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 909 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the system control module 904 may include one or more input/output controllers to provide an interface to the NVM/storage 908 and input/output device(s) 910.
For example, NVM/storage 908 may be used to store data and/or instructions. NVM/storage 908 may include any suitable non-volatile memory (e.g., flash memory) and/or any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 908 may include storage resources that are physically part of a device on which system 900 is installed, or that may be accessed by the device without necessarily being part of it. For example, NVM/storage 908 may be accessed over a network via input/output device(s) 910.
Input/output device(s) 910 may provide an interface for system 900 to communicate with any other suitable device; input/output device(s) 910 may include communication components, audio components, sensor components, and so forth. Network interface 912 may provide an interface for system 900 to communicate over one or more networks. System 900 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example by accessing a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof.
For one embodiment, at least one of the processor(s) 902 may be packaged together with logic for one or more controller(s) (e.g., memory controller module) of the system control module 904. For one embodiment, at least one of the processor(s) 902 may be packaged together with logic for one or more controller(s) of the system control module 904 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 902 may be integrated on the same die with logic for one or more controller(s) of the system control module 904. For one embodiment, at least one of the processor(s) 902 may be integrated on the same die with logic of one or more controllers of the system control module 904 to form a system on a chip (SoC).
In various embodiments, system 900 may be, but is not limited to being: a browser, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 900 may have more or fewer components and/or different architectures. For example, in some embodiments, system 900 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
If the display includes a touch panel, the display screen may be implemented as a touch screen display to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. A touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The present application further provides a non-volatile readable storage medium storing one or more modules (programs) which, when applied to a terminal device, cause the terminal device to execute instructions for the method steps in the present application.
In one example, a computer device is provided, which comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to the embodiment of the present application when executing the computer program.
There is also provided in one example a computer readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements a method as in embodiments of the application.
There is also provided in one example a software product comprising computer programs/instructions which, when executed, implement methods of performing embodiments of the present application.
Although certain embodiments have been illustrated and described for purposes of description, a wide variety of alternative and/or equivalent implementations may be substituted to achieve the same objectives without departing from the scope of the present application. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that the embodiments described herein be limited only by the claims and the equivalents thereof.

Claims (14)

1. A calibration method of camera parameters is characterized by comprising the following steps:
determining camera parameters respectively corresponding to the cameras under a plurality of first visual angles based on first images acquired by the cameras at the first visual angles, wherein at least part of the first images correspond to different camera focal lengths;
determining target image feature points with a matching relation in image feature points corresponding to a second image and a first image acquired from a second visual angle respectively;
establishing a prediction data set for predicting camera parameters under a second view angle based on the camera parameters under the first view angle based on the coordinate information of the target image feature points respectively corresponding to the first image and the second image;
and obtaining camera parameters under a second view angle according to the prediction data set.
2. The method of claim 1, wherein the camera comprises a variable focus dome camera and the camera parameters comprise camera internal parameters and camera external parameters.
3. The method of claim 1, wherein the image of the target location covers a content range that exceeds a preset proportion of the acquisition range of the camera.
4. The method according to claim 1, wherein the determining of the target image feature points having the matching relationship from the image feature points corresponding to the second image and the at least one first image acquired at the second view angle comprises:
extracting image characteristic points corresponding to the first image and the second image respectively;
and determining feature matching values among the image feature points, and determining target image feature points with matching relations according to the feature matching values.
5. The method of claim 1, wherein establishing a prediction data set for predicting the camera parameter at the second view angle based on the camera parameter at the first view angle based on the coordinate information of the target image feature point corresponding to the first image and the second image respectively comprises:
constructing a mapping function, wherein the mapping function is used for converting the coordinate information of the target image feature point under the first visual angle into the coordinate information under the second visual angle according to the camera parameters respectively corresponding to the first visual angle and the second visual angle;
and iteratively optimizing the mapping function, and using the optimized mapping function as a prediction data set for predicting the camera parameters under the second view angle.
6. The method of claim 1, further comprising:
and determining initial values of camera parameters of the camera at a second view angle according to the coordinate information of the target image feature points in the first image and the second image, and optimizing the prediction data set according to the initial values.
7. The method of claim 6, wherein determining the initial value of the camera parameter of the camera at the second view angle according to the coordinate information of the target image feature point in the first image and the second image comprises:
converting the coordinate information of the target image feature point in the first image into coordinate information under a world coordinate system according to the camera parameter of the camera at the first view angle;
constructing a relation function based on coordinate information of the target image feature point in a world coordinate system, coordinate information in a second image and initial values of camera parameters of the camera at a second visual angle;
and solving the initial value of the camera parameter of the camera at the second visual angle according to the relation function.
8. The method of claim 1, wherein the image feature points comprise at least one of corner feature points, SIFT feature points, or ORB feature points, and wherein the prediction data set is optimized using a minimum reprojection error algorithm.
9. A three-dimensional data generation method, comprising:
acquiring a prediction data set for predicting camera parameters at a second view based on camera parameters at a first view; the prediction data set is established based on target image feature points with matching relations in a second image and a first image acquired from a plurality of first visual angles, camera parameters respectively corresponding to the first visual angles are determined based on the first image, and at least part of the first images correspond to different camera focal lengths;
obtaining camera parameters under a second visual angle according to the prediction data set;
and projecting a second image acquired under a second visual angle to the three-dimensional space based on the camera parameters under the second visual angle to obtain three-dimensional data under the second visual angle.
10. A method for generating a prediction data set, comprising:
determining camera parameters respectively corresponding to the cameras under a plurality of first visual angles based on first images acquired by the cameras at the first visual angles, wherein at least part of the first images correspond to different camera focal lengths;
determining target image feature points with a matching relation in image feature points corresponding to a second image and a first image acquired from a second visual angle respectively;
and establishing a prediction data set for predicting the camera parameters under the second view angle based on the camera parameters under the first view angle based on the coordinate information of the target image feature points respectively corresponding to the first image and the second image.
11. A road information analysis method, applied to a variable-focus dome camera installed near a road, characterized by comprising:
acquiring a prediction data set for predicting camera parameters at a second view based on camera parameters at a first view; the prediction data set is established based on target image feature points with matching relations in a second image and a first image acquired from a plurality of first visual angles, camera parameters respectively corresponding to the first visual angles are determined based on the first image, and at least part of the first images correspond to different camera focal lengths;
obtaining camera parameters under a second visual angle according to the prediction data set;
projecting a second image collected under a second visual angle to a three-dimensional space based on camera parameters under the second visual angle to obtain three-dimensional data under the second visual angle;
and analyzing the road information based on the three-dimensional data.
12. An electronic device, comprising: a processor; and
a memory having executable code stored thereon that, when executed, causes the processor to perform the method of any of claims 1-11.
13. One or more machine-readable media having executable code stored thereon which, when executed by a processor, implement the method of any of claims 1-11.
14. A software product comprising computer programs/instructions, wherein the computer programs/instructions, when executed, enable performing the method of any of claims 1-11.
CN202210278095.8A 2022-03-21 2022-03-21 Camera parameter calibration and three-dimensional data generation method and system Pending CN114612575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210278095.8A CN114612575A (en) 2022-03-21 2022-03-21 Camera parameter calibration and three-dimensional data generation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210278095.8A CN114612575A (en) 2022-03-21 2022-03-21 Camera parameter calibration and three-dimensional data generation method and system

Publications (1)

Publication Number Publication Date
CN114612575A true CN114612575A (en) 2022-06-10

Family

ID=81864278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210278095.8A Pending CN114612575A (en) 2022-03-21 2022-03-21 Camera parameter calibration and three-dimensional data generation method and system

Country Status (1)

Country Link
CN (1) CN114612575A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115942119A (en) * 2022-08-12 2023-04-07 北京小米移动软件有限公司 Linkage monitoring method and device, electronic equipment and readable storage medium
CN115942119B (en) * 2022-08-12 2023-11-21 北京小米移动软件有限公司 Linkage monitoring method and device, electronic equipment and readable storage medium
CN116519569A (en) * 2023-07-05 2023-08-01 广东省冶金建筑设计研究院有限公司 Municipal fill foundation seepage and settlement deformation simulation test and prediction method
CN116519569B (en) * 2023-07-05 2023-09-15 广东省冶金建筑设计研究院有限公司 Municipal fill foundation seepage and settlement deformation simulation test and prediction method
CN117857769A (en) * 2024-03-07 2024-04-09 长江龙新媒体有限公司 Self-adaptive multi-camera capturing and real-time free view video rendering method and system

Similar Documents

Publication Publication Date Title
CN107330439B (en) Method for determining posture of object in image, client and server
CN108537721B (en) Panoramic image processing method and device and electronic equipment
CN109887003B (en) Method and equipment for carrying out three-dimensional tracking initialization
CN109683699B (en) Method and device for realizing augmented reality based on deep learning and mobile terminal
CN114612575A (en) Camera parameter calibration and three-dimensional data generation method and system
CN107516322B (en) Image object size and rotation estimation calculation method based on log polar space
CN107851196B (en) Image pattern matching method and device
US20240029297A1 (en) Visual positioning method, storage medium and electronic device
CN110111364B (en) Motion detection method and device, electronic equipment and storage medium
CN113793370B (en) Three-dimensional point cloud registration method and device, electronic equipment and readable medium
CN113724135A (en) Image splicing method, device, equipment and storage medium
CN111598777A (en) Sky cloud image processing method, computer device and readable storage medium
US20230394833A1 (en) Method, system and computer readable media for object detection coverage estimation
Li et al. Panodepth: A two-stage approach for monocular omnidirectional depth estimation
CN114387346A (en) Image recognition and prediction model processing method, three-dimensional modeling method and device
CN112102404B (en) Object detection tracking method and device and head-mounted display equipment
CN116129228B (en) Training method of image matching model, image matching method and device thereof
CN112215036B (en) Cross-mirror tracking method, device, equipment and storage medium
CN115861891B (en) Video target detection method, device, equipment and medium
CN116934591A (en) Image stitching method, device and equipment for multi-scale feature extraction and storage medium
CN116402867A (en) Three-dimensional reconstruction image alignment method for fusing SIFT and RANSAC
WO2022257778A1 (en) Method and apparatus for state recognition of photographing device, computer device and storage medium
CN111259702A (en) User interest estimation method and device
Wong et al. A study of different unwarping methods for omnidirectional imaging
WO2018150086A2 (en) Methods and apparatuses for determining positions of multi-directional image capture apparatuses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240228

Address after: Room 553, 5th Floor, Building 3, No. 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province, 311121

Applicant after: Hangzhou Alibaba Cloud Feitian Information Technology Co.,Ltd.

Country or region after: China

Address before: 311121 Room 516, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant before: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.

Country or region before: China