CN115330886A - Camera external parameter calibration method and device, electronic equipment and computer readable storage medium - Google Patents

Camera external parameter calibration method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN115330886A
CN115330886A (application CN202211051513.6A)
Authority
CN
China
Prior art keywords
calibration sample
calibration
camera
image
inputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211051513.6A
Other languages
Chinese (zh)
Inventor
李耀萍
贾双成
朱磊
单国航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd filed Critical Zhidao Network Technology Beijing Co Ltd
Priority to CN202211051513.6A
Publication of CN115330886A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Abstract

The present application relates to a camera external parameter calibration method and apparatus, an electronic device, and a computer-readable storage medium. The method includes the following steps: acquiring at least three consecutive calibration sample images captured by a camera to be calibrated of the same calibration reference object; inputting the at least three calibration sample images into a convolutional layer for feature extraction to obtain calibration sample image features; inputting the calibration sample image features into a fully connected layer to obtain a feature integration result; inputting the feature integration result into a preset bidirectional gated recurrent unit to obtain a gated output result; and integrating the gated output result through a fully connected layer to obtain the external parameters of the camera to be calibrated. By processing the calibration sample images with multiple convolutions that do not share a fixed starting point, extracting feature maps of the calibration sample images, and processing the feature maps with a bidirectional GRU structure, the method effectively reduces the error of camera external parameter calibration, improves model accuracy, avoids manual errors, and increases camera calibration efficiency.

Description

Camera external parameter calibration method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of neural network technologies, and in particular, to a method and an apparatus for calibrating external parameters of a camera, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of image recognition and computer technology, automatic driving technology has entered a stage of steady development.
Image recognition is key to automatic driving: obstacles, lane lines, and other road elements are recognized through a vehicle-mounted camera. The vehicle-mounted camera must be calibrated before use, and the accuracy of the calibration result directly affects how well pictures captured by the vehicle-mounted camera are recognized, and in turn the safety of automatic driving. In existing camera calibration techniques, a single photograph is used for calibration: the photograph is labeled manually, and the external parameters of the camera are then computed. Manual labeling errors make the calibration result inaccurate, and calibration with a simple neural network produces large errors.
Disclosure of Invention
To solve, at least in part, the problems in the related art, the present application provides a camera external parameter calibration method and apparatus, an electronic device, and a computer-readable storage medium, which can calibrate a camera to be calibrated quickly and accurately, avoid manual errors, and reduce calibration errors.
A first aspect of the present application provides a camera external parameter calibration method, including:
acquiring at least three consecutive calibration sample images captured by a camera to be calibrated of the same calibration reference object;
inputting the at least three consecutive calibration sample images into a convolutional layer for feature extraction to obtain calibration sample image features;
inputting the calibration sample image features into a fully connected layer for feature integration to obtain a feature integration result;
inputting the feature integration result into a preset bidirectional gated recurrent unit to obtain a gated output result; and
integrating the gated output result through a fully connected layer to obtain the external parameters of the camera to be calibrated.
As a possible embodiment of the present application, inputting the at least three consecutive calibration sample images into the convolutional layer for feature extraction to obtain the calibration sample image features includes:
extracting features from one of the at least three calibration sample images using multiple convolution operations to obtain the image features of that calibration sample image.
As a possible embodiment of the present application, extracting features from one of the at least three calibration sample images using multiple convolution operations to obtain the image features of that calibration sample image includes:
for the i-th calibration sample image of the at least three calibration sample images, taking the i-th feature point of the calibration sample image as a starting point and performing feature extraction on the calibration sample image using a convolutional layer with a stride of n to obtain the image features of the i-th calibration sample image, where i is an integer greater than 0 and less than n.
As a possible embodiment of the present application, inputting the calibration sample image features into the fully connected layer and integrating the features to obtain a feature integration result includes:
inputting the image features of each calibration sample image into a corresponding fully connected layer to obtain the feature integration result corresponding to each calibration sample image.
As a possible embodiment of the present application, inputting the feature integration result into the preset bidirectional gated recurrent unit to obtain a gated output result includes:
performing feature fusion on the feature integration result corresponding to each calibration sample image with a bidirectional gated recurrent unit to obtain the gated output result corresponding to each calibration sample image.
As a possible embodiment of the present application, integrating the gated output results through the fully connected layer to obtain the external parameters of the camera to be calibrated includes:
integrating the gated output results corresponding to the calibration sample images through a fully connected layer to obtain the external parameters of the camera to be calibrated.
A second aspect of the present application provides a camera external parameter calibration apparatus, including:
an image acquisition module, configured to acquire at least three consecutive calibration sample images captured by a camera to be calibrated of the same calibration reference object;
a feature extraction module, configured to input the at least three consecutive calibration sample images into a convolutional layer for feature extraction to obtain calibration sample image features;
a feature integration module, configured to input the calibration sample image features into a fully connected layer for feature integration to obtain a feature integration result;
a gating module, configured to input the feature integration result into a preset bidirectional gated recurrent unit to obtain a gated output result; and
an external parameter determination module, configured to integrate the gated output result through a fully connected layer to obtain the external parameters of the camera to be calibrated.
As a possible embodiment of the present application, inputting the at least three consecutive calibration sample images into the convolutional layer for feature extraction to obtain the calibration sample image features includes:
extracting features from one of the at least three calibration sample images using multiple convolution operations to obtain the image features of that calibration sample image.
As a possible embodiment of the present application, the feature extraction module is configured to:
for the i-th calibration sample image of the at least three calibration sample images, take the i-th feature point of the calibration sample image as a starting point and perform feature extraction on the calibration sample image using a convolutional layer with a stride of n to obtain the image features of the i-th calibration sample image, where i is an integer greater than 0 and less than n.
As a possible embodiment of the present application, inputting the calibration sample image features into the fully connected layer and integrating the features to obtain a feature integration result includes:
inputting the image features of each calibration sample image into a corresponding fully connected layer to obtain the feature integration result corresponding to each calibration sample image.
As a possible embodiment of the present application, inputting the feature integration result into the preset bidirectional gated recurrent unit to obtain a gated output result includes:
performing feature fusion on the feature integration result corresponding to each calibration sample image with a bidirectional gated recurrent unit to obtain the gated output result corresponding to each calibration sample image.
As a possible embodiment of the present application, integrating the gated output results through the fully connected layer to obtain the external parameters of the camera to be calibrated includes:
integrating the gated output results corresponding to the calibration sample images through a fully connected layer to obtain the external parameters of the camera to be calibrated.
A third aspect of the present application provides an electronic device comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method as described above.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon executable code, which, when executed by a processor of an electronic device, causes the processor to perform the method as described above.
According to the embodiments of the present application, the calibration sample images are processed by multiple convolutions without a fixed starting point to extract feature maps of the calibration sample images, and the feature maps are processed with a bidirectional GRU structure. This effectively reduces the error of camera external parameter calibration, provides an automatic calibration model for camera external parameter calibration, improves model accuracy, avoids manual errors, and increases camera calibration efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the application.
Fig. 1 is a schematic flowchart of a camera external parameter calibration method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a neural network structure shown in an embodiment of the present application;
FIG. 3 is a schematic diagram of a convolutional layer shown in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a bidirectional GRU shown in the embodiment of the present application;
FIG. 5 is a schematic structural diagram of a camera external parameter calibration apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the accompanying drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
With the rapid development of image recognition and computer technology, automatic driving technology has entered a stage of steady development. Image recognition is key to automatic driving: obstacles, lane lines, and other road elements are recognized through a vehicle-mounted camera, which must be calibrated before use. Whether the calibration result is accurate directly affects how well pictures captured by the vehicle-mounted camera are recognized, and in turn the safety of automatic driving. In existing camera calibration techniques, a single photograph is used for calibration: the photograph is labeled manually, and the external parameters of the camera are then computed. Manual labeling errors make the calibration result inaccurate, and calibration with a simple neural network produces large errors.
In view of the above problems, the embodiments of the present application provide a camera external parameter calibration method that can calibrate a camera to be calibrated quickly and accurately, avoid manual errors, and reduce calibration errors.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a camera external parameter calibration method according to an embodiment of the present application.
Referring to fig. 1, the camera external parameter calibration method shown in the embodiment of the present application includes:
Step S101: acquire at least three consecutive calibration sample images captured by the camera to be calibrated of the same calibration reference object.
In the embodiment of the present application, external parameter calibration of the camera to be calibrated uses at least three consecutive calibration sample images, which should be captured continuously by the camera to be calibrated of the same calibration reference object at regular time intervals. Optionally, when the at least three images are captured, the angle to the calibration reference object, the distance from the calibration reference object, and the like should remain consistent. The number of calibration sample images is at least three; the specific number is not limited in the present application.
As a possible implementation of the present application, for convenience of illustration, consider a specific example: the calibration sample images captured by the camera to be calibrated are lane line images of a real road, the real road being straight. The camera to be calibrated is mounted on a survey vehicle, which drives straight along the real road at a constant speed while the camera captures a lane line image at a fixed time interval, such as 0.5 s. Of course, the camera to be calibrated in the embodiments of the present application may be another camera, and the calibration sample images may be other images; this is not limited in the present application.
Step S102: input the at least three consecutive calibration sample images into a convolutional layer for feature extraction to obtain calibration sample image features.
In the embodiment of the present application, feature extraction is performed on the calibration sample images by a convolutional layer of the neural network to obtain the image features of each calibration sample image.
As a possible implementation of the present application, each calibration sample image is first converted into an RGB image, and a preset convolutional layer then performs feature extraction on the converted RGB image. Each of the at least three calibration sample images is converted into an RGB image, and a convolutional layer extracts features from each converted RGB image to obtain the image features corresponding to each calibration sample image. For example, for 3 calibration sample images, 3 sets of image features are obtained.
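The patent does not specify the convolution parameters. As a minimal NumPy sketch of this step, under assumed sizes (an 8×8 single-channel slice of a converted RGB calibration image, a 3×3 kernel, and a stride of 2, all illustrative), a "valid" 2-D convolution producing a small feature map could look like:

```python
import numpy as np

def conv2d_valid(image, kernel, stride=1):
    """Plain 2-D 'valid' convolution (no padding) over one channel."""
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            patch = image[y * stride:y * stride + kh, x * stride:x * stride + kw]
            out[y, x] = np.sum(patch * kernel)
    return out

# Hypothetical 8x8 single-channel slice of a converted RGB calibration image.
img = np.arange(64, dtype=float).reshape(8, 8)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # simple vertical-edge filter
features = conv2d_valid(img, edge_kernel, stride=2)
print(features.shape)  # (3, 3)
```

In a trained network the kernel weights are learned rather than hand-picked; the edge filter here only illustrates the sliding-window mechanics.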
Step S103: input the calibration sample image features into a fully connected layer for feature integration to obtain a feature integration result.
In the embodiment of the present application, for each set of image features obtained in step S102, feature integration is performed by a corresponding preset fully connected layer to obtain a feature integration result.
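A fully connected layer is an affine map applied to the flattened convolution features. The sketch below uses illustrative sizes (a 3×3 feature map integrated into a 4-dimensional result) and random weights; neither is specified in the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def fully_connected(x, weight, bias):
    """Affine map y = W x + b applied to a flattened feature vector."""
    return weight @ x + bias

# Illustrative sizes: a 3x3 convolution feature map integrated into a
# 4-dimensional feature integration result.
feature_map = rng.standard_normal((3, 3))
W = rng.standard_normal((4, 9))  # weight matrix of the fully connected layer
b = rng.standard_normal(4)
integration_result = fully_connected(feature_map.ravel(), W, b)
print(integration_result.shape)  # (4,)
```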
Step S104: input the feature integration result into a preset bidirectional gated recurrent unit to obtain a gated output result.
In the embodiment of the present application, the bidirectional GRU (Gated Recurrent Unit) is a neural network structure whose characteristic is that, during image recognition, the image information of a given image at time T can be passed forward to time T+1, and the image information at time T+1 can also be passed back to time T.
In the embodiment of the present application, a bidirectional GRU structure processes the feature integration result of each calibration sample image to obtain the gated output result of each calibration sample image.
Step S105: integrate the gated output results through a fully connected layer to obtain the external parameters of the camera to be calibrated.
In the embodiment of the present application, a fully connected layer integrates all gated output results to obtain the corresponding external parameters of the camera to be calibrated.
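The patent does not state how the external parameters are parameterized. The sketch below assumes the common 6-value form (3 rotation plus 3 translation parameters) and illustrative layer sizes, and shows the final fully connected layer concatenating the three gated outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gated output vectors for three calibration sample images.
gated_outputs = [rng.standard_normal(4) for _ in range(3)]

# Final fully connected layer: concatenate all gated outputs and map them to
# 6 values, assumed here to be 3 rotation + 3 translation parameters (the
# patent does not fix the parameterization).
W = rng.standard_normal((6, 12))
b = rng.standard_normal(6)
extrinsics = W @ np.concatenate(gated_outputs) + b
print(extrinsics.shape)  # (6,)
```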
As a possible implementation of the present application, consider the specific example shown in fig. 2, a schematic structural diagram of the camera external parameter calibration neural network provided in the embodiment of the present application. Taking 3 calibration sample images as an example, each image is processed by multiple convolutions, a fully connected layer, a bidirectional GRU, and a further fully connected layer, and the results are finally combined to obtain the camera external parameters.
According to the embodiments of the present application, the calibration sample images are processed by multiple convolutions without a fixed starting point to extract feature maps of the calibration sample images, and the feature maps are processed with a bidirectional GRU structure. This effectively reduces the error of camera external parameter calibration, provides an automatic calibration model for camera external parameter calibration, improves model accuracy, avoids manual errors, and increases camera calibration efficiency.
As a possible embodiment of the present application, inputting the at least three consecutive calibration sample images into the convolutional layer for feature extraction to obtain the calibration sample image features includes:
for one of the at least three calibration sample images, extracting features using multiple convolution operations to obtain the image features of that calibration sample image, which includes:
for the i-th calibration sample image of the at least three calibration sample images, taking the i-th feature point of the calibration sample image as a starting point and performing feature extraction on the calibration sample image using a convolutional layer with a stride of n to obtain the image features of the i-th calibration sample image, where i is an integer greater than 0 and less than n.
In the embodiment of the present application, for convenience of description, consider the specific embodiment shown in fig. 3. For a calibration sample image, take one dimension with 8 feature points as an example. Feature extraction along this dimension uses a convolutional layer with a stride of 2. In the first pass, convolution starts from the 1st feature point with a stride of 2 and is performed three times, yielding the convolution results a1, a2, a3. Convolution then starts from the 2nd feature point with a stride of 2 and is again performed three times, yielding the convolution results b1, b2, b3. The results of the two passes, a1, a2, a3 and b1, b2, b3, are spliced to obtain the final output. The feature map of each calibration sample image may be two-dimensional or three-dimensional, so the convolution is correspondingly two-dimensional or three-dimensional; specifically, for a two-dimensional feature image, processing with a stride of 2 yields a 2×2 result, and processing with a stride of 3 yields a 3×3 result. Neither the dimensionality of the feature image nor the convolution stride is limited in the present application.
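The fig. 3 example above can be sketched in one dimension. The kernel width of 3 is an assumption (it is the width that yields exactly three convolutions per pass over 8 feature points with a stride of 2, as the text describes); the feature values are illustrative:

```python
import numpy as np

def strided_conv_1d(feats, kernel, stride, start):
    """1-D valid convolution over `feats`, beginning at index `start`."""
    k = len(kernel)
    out, pos = [], start
    while pos + k <= len(feats):
        out.append(float(np.dot(feats[pos:pos + k], kernel)))
        pos += stride
    return out

feats = np.arange(1.0, 9.0)         # the 8 feature points of fig. 3
kernel = np.array([1.0, 1.0, 1.0])  # kernel width 3 is an assumption here

a = strided_conv_1d(feats, kernel, stride=2, start=0)  # a1, a2, a3
b = strided_conv_1d(feats, kernel, stride=2, start=1)  # b1, b2, b3
output = a + b  # splice the two passes into the final output
print(a, b)  # [6.0, 12.0, 18.0] [9.0, 15.0, 21.0]
```

Shifting the starting point lets the second pass cover the windows the first pass skips, which is the "convolution without a fixed starting point" the patent relies on.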
According to the embodiment of the present application, performing feature extraction on the calibration sample image by multiple convolutions without a fixed starting point reduces the error of the final camera external parameter calibration model.
As a possible embodiment of the present application, in this embodiment, the inputting the feature of the calibration sample image into the full connection layer, and integrating the feature to obtain a feature integration result includes:
and inputting the image characteristics of each calibration sample image into the corresponding full-connection layer to obtain a characteristic integration result corresponding to each sample image.
In the embodiment of the present application, as shown in fig. 2, for the image feature of each calibration sample image, a full connection layer is correspondingly disposed, and the full connection layer may be used to integrate the image features extracted by performing convolution on each calibration sample image. All image features can be integrated through the full connection layer, and subsequent processing of double GRUs is facilitated.
According to the embodiment of the application, providing a full connection layer to integrate all convolved image features improves the accuracy of the model.
As a possible embodiment of the present application, in this embodiment, the inputting the feature integration result into a preset bidirectional gated recurrent unit to obtain a gated output result includes:
and respectively performing, with a bidirectional gated recurrent unit, feature fusion on the feature integration result corresponding to each calibration sample image to obtain a gated output result corresponding to each calibration sample image.
In the embodiment of the present application, when performing image recognition with a bidirectional GRU structure, image information at time T can be passed forward to time T+1, and image information at time T+1 can also be passed back to time T.
For convenience of description, a specific example is taken. Fig. 4 is a schematic diagram of the bidirectional GRU structure provided in the embodiment of the present application, where the input is three feature images: the image at time t-1, the image at time t, and the image at time t+1. The image features of the three images are respectively input into the bidirectional GRU for feature integration.
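A bidirectional GRU over the three time steps can be sketched from scratch as follows (standard GRU update equations; the feature size, hidden size, and random weights are assumptions for illustration). Each time step's output splices the forward-pass state with the backward-pass state, so information flows both from t-1 toward t+1 and back:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU cell update; W, U, b each stack the z, r, h~ parameters."""
    z = sigmoid(W[0] @ x + U[0] @ h + b[0])          # update gate
    r = sigmoid(W[1] @ x + U[1] @ h + b[1])          # reset gate
    h_tilde = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])
    return (1.0 - z) * h + z * h_tilde

def bidirectional_gru(xs, params_f, params_b, hidden):
    """Run one GRU forward and one backward over xs; splice the states."""
    h = np.zeros(hidden)
    forward = []
    for x in xs:                        # t-1 -> t -> t+1
        h = gru_step(x, h, *params_f)
        forward.append(h)
    h = np.zeros(hidden)
    backward = []
    for x in reversed(xs):              # t+1 -> t -> t-1
        h = gru_step(x, h, *params_b)
        backward.append(h)
    backward.reverse()
    return [np.concatenate([f, b]) for f, b in zip(forward, backward)]

rng = np.random.default_rng(0)
feat, hidden = 8, 4

def make_params():
    return (rng.standard_normal((3, hidden, feat)),
            rng.standard_normal((3, hidden, hidden)),
            np.zeros((3, hidden)))

xs = [rng.standard_normal(feat) for _ in range(3)]   # t-1, t, t+1
outputs = bidirectional_gru(xs, make_params(), make_params(), hidden)
```

Each element of `outputs` is the gated output result for one calibration sample image, of size 2 × hidden because the forward and backward states are concatenated.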
By adopting the bidirectional GRU structure, the embodiment of the application can reduce the error of the camera external parameter calibration model.
As a possible embodiment of the present application, in this embodiment, the obtaining an external parameter of a camera to be calibrated by integrating the gated output results through a full connection layer includes:
and integrating the gating output results corresponding to the images of the calibration samples through a full connection layer to obtain the external parameters of the camera to be calibrated.
In the embodiment of the application, all output nodes of all bidirectional GRUs are connected to a full connection layer, and the full connection layer is used to integrate the gated output results of all bidirectional GRUs, so that the final camera external parameter calibration result can be obtained.
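Continuing the sketch, the final full connection layer can map the concatenated gated outputs of all time steps to an external parameter vector. The 6-element output (three rotation angles plus three translations) is an assumed parameterization for illustration; the patent does not fix the output size:

```python
import numpy as np

rng = np.random.default_rng(0)

# gated outputs of the three bidirectional GRUs, each of size 8
gated_outputs = [rng.standard_normal(8) for _ in range(3)]

# final full connection layer: all gated outputs -> 6 external parameters
stacked = np.concatenate(gated_outputs)   # shape (24,)
W_out = rng.standard_normal((6, stacked.size))
b_out = np.zeros(6)
extrinsics = W_out @ stacked + b_out      # e.g. [rx, ry, rz, tx, ty, tz]
```

In a trained model, `W_out` and `b_out` would be learned so that `extrinsics` approximates the camera's ground-truth external parameters.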
According to the embodiment of the application, the calibration sample image is processed through multiple convolutions without a fixed starting point to extract the feature map of the calibration sample image, and the feature map is processed with a bidirectional GRU structure. This can effectively reduce the error of camera external parameter calibration, provides an automatic calibration model for camera external parameter calibration, effectively improves the accuracy of the model, avoids manual errors, and improves the camera calibration efficiency.
Corresponding to the embodiments of the method described above, the present application further provides a camera external parameter calibration apparatus, an electronic device, and corresponding embodiments.
Fig. 5 is a schematic structural diagram of a camera external parameter calibration apparatus according to an embodiment of the present application.
Referring to fig. 5, the camera external parameter calibration apparatus provided in the embodiment of the present application includes an image obtaining module 510, a feature extraction module 520, a feature integration module 530, a gating module 540, and an external parameter determining module 550, where:
an image obtaining module 510, configured to obtain at least three consecutive calibration sample images of a camera to be calibrated, which are shot for a same calibration reference object;
the feature extraction module 520 is configured to input the at least three continuous calibration sample images into the convolutional layer for feature extraction, so as to obtain calibration sample image features;
the feature integration module 530 is configured to input the image features of the calibration sample to a full connection layer, and integrate the features to obtain a feature integration result;
the gating module 540 is configured to input the feature integration result into a preset bidirectional gated recurrent unit to obtain a gated output result;
and an external parameter determining module 550, configured to integrate the gating output result through a full connection layer to obtain an external parameter of the camera to be calibrated.
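The five modules above can be strung together in a minimal end-to-end pass (a toy numpy sketch; all layer sizes and random weights are placeholders, and a simple fusion layer stands in for the bidirectional GRU — a trained model would learn all of these):

```python
import numpy as np

rng = np.random.default_rng(0)

def calibrate(images, conv_kernel, fc, gru_fuse, out_layer):
    """Toy pass: conv features -> per-image FC -> fusion -> extrinsics."""
    feats = []
    for img in images:                        # feature extraction module
        row = img.ravel()
        conv = np.array([row[i:i + 3] @ conv_kernel
                         for i in range(0, row.size - 3 + 1, 2)])
        feats.append(fc @ conv)               # feature integration module
    fused = np.tanh(gru_fuse @ np.concatenate(feats))  # gating stand-in
    return out_layer @ fused                  # external parameter module

images = [rng.standard_normal((4, 4)) for _ in range(3)]  # image module
conv_kernel = rng.standard_normal(3)
fc = rng.standard_normal((5, 7))          # 7 strided windows on 16 pixels
gru_fuse = rng.standard_normal((10, 15))  # fuses 3 x 5 integrated features
out_layer = rng.standard_normal((6, 10))  # assumed 6 external parameters
extrinsics = calibrate(images, conv_kernel, fc, gru_fuse, out_layer)
```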
As a possible embodiment of the present application, in the embodiment, the inputting the at least three continuous calibration sample images into the convolutional layer for feature extraction to obtain the calibration sample image features includes:
and extracting the characteristics of one of the at least three calibration sample images by adopting multiple convolution operations to obtain the image characteristics of the one calibration sample image.
As a possible embodiment of the present application, in this embodiment, the feature extraction module is configured to:
for the ith calibration sample image in the at least three calibration sample images, taking the ith characteristic point of the calibration sample image as a starting point, and performing characteristic extraction on the calibration sample image by adopting a convolution layer with the step length of n to obtain the image characteristics of the ith calibration sample image; wherein i is an integer greater than 0 and less than n.
As a possible embodiment of the present application, in this embodiment, the inputting the feature of the calibration sample image into the full connection layer, and integrating the feature to obtain a feature integration result includes:
and inputting the image features of each calibration sample image into the corresponding full connection layer to obtain a feature integration result corresponding to each sample image.
As a possible embodiment of the present application, in this embodiment, the inputting the feature integration result into a preset bidirectional gated recurrent unit to obtain a gated output result includes:
and respectively performing, with a bidirectional gated recurrent unit, feature fusion on the feature integration result corresponding to each calibration sample image to obtain a gated output result corresponding to each calibration sample image.
As a possible embodiment of the present application, in this embodiment, the obtaining an external parameter of a camera to be calibrated by integrating the gated output results through a full connection layer includes:
and integrating the gating output results corresponding to the calibration sample images through a full connection layer to obtain the external parameters of the camera to be calibrated.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
According to the above embodiments, the calibration sample image is processed through multiple convolutions without a fixed starting point to extract the feature map of the calibration sample image, and the feature map is processed with a bidirectional GRU structure, which can effectively reduce the error of camera external parameter calibration, provides an automatic calibration model for camera external parameter calibration, effectively improves the accuracy of the model, avoids manual errors, and improves the camera calibration efficiency.
Fig. 6 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Referring to fig. 6, the electronic device 60 includes a memory 610 and a processor 620.
Processor 620 may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 610 may include various types of storage units, such as system memory, read only memory (ROM), and permanent storage. The ROM may store static data or instructions required by the processor 620 or other modules of the computer. The permanent storage device may be a readable and writable storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the permanent storage. In other embodiments, the permanent storage may be a removable storage device (e.g., a floppy disk or an optical drive). The system memory may be a readable and writable memory device or a volatile readable and writable memory device, such as a dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. In addition, the memory 610 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (e.g., DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, the memory 610 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., an SD card, a mini SD card, or a micro SD card), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 610 has stored thereon executable code that, when processed by the processor 620, causes the processor 620 to perform some or all of the methods described above.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a computer-readable storage medium (or non-transitory machine-readable storage medium or machine-readable storage medium) having executable code (or a computer program or computer instruction code) stored thereon, which, when executed by a processor of an electronic device (or server, etc.), causes the processor to perform part or all of the various steps of the above-described method according to the present application.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A camera external parameter calibration method, characterized by comprising the following steps:
acquiring at least three continuous calibration sample images shot by a camera to be calibrated aiming at the same calibration reference object;
inputting the at least three continuous calibration sample images into a convolutional layer for feature extraction to obtain calibration sample image features;
inputting the calibration sample image features into a full connection layer, and integrating the features to obtain a feature integration result;
inputting the feature integration result into a preset bidirectional gated recurrent unit to obtain a gated output result;
and integrating the gated output result through a full connection layer to obtain external parameters of the camera to be calibrated.
2. The camera external parameter calibration method according to claim 1, wherein the inputting the at least three continuous calibration sample images into a convolutional layer for feature extraction to obtain the calibration sample image features comprises:
and extracting the characteristics of one of the at least three calibration sample images by adopting multiple convolution operations to obtain the image characteristics of the one calibration sample image.
3. The camera external parameter calibration method according to claim 2, wherein the extracting features of one of the at least three calibration sample images by using a plurality of convolution operations to obtain the image features of the one calibration sample image comprises:
for the ith calibration sample image in the at least three calibration sample images, taking the ith characteristic point of the calibration sample image as a starting point, and performing characteristic extraction on the calibration sample image by adopting a convolution layer with the step length of n to obtain the image characteristics of the ith calibration sample image; wherein i is an integer greater than 0 and less than n.
4. The camera external parameter calibration method according to claim 3, wherein the inputting the image features of the calibration sample into a full connection layer, and integrating the features to obtain a feature integration result comprises:
and inputting the image characteristics of each calibration sample image into the corresponding full-connection layer to obtain a characteristic integration result corresponding to each sample image.
5. The camera external parameter calibration method according to claim 4, wherein the inputting the feature integration result into a preset bidirectional gated recurrent unit to obtain a gated output result comprises:
and respectively performing, with a bidirectional gated recurrent unit, feature fusion on the feature integration result corresponding to each calibration sample image to obtain a gated output result corresponding to each calibration sample image.
6. The camera external parameter calibration method according to claim 5, wherein the obtaining the external parameters of the camera to be calibrated by integrating the gated output results through a full connection layer comprises:
and integrating the gating output results corresponding to the calibration sample images through a full connection layer to obtain the external parameters of the camera to be calibrated.
7. A camera external parameter calibration apparatus, characterized by comprising:
the image acquisition module is used for acquiring at least three continuous calibration sample images shot by the camera to be calibrated aiming at the same calibration reference object;
the characteristic extraction module is used for inputting the at least three continuous calibration sample images into the convolutional layer for characteristic extraction to obtain the characteristics of the calibration sample images;
the characteristic integration module is used for inputting the image characteristics of the calibration sample into the full connection layer, integrating the characteristics and obtaining a characteristic integration result;
the gating module is configured to input the feature integration result into a preset bidirectional gated recurrent unit to obtain a gated output result;
and the external parameter determining module is used for integrating the gating output result through a full connection layer to obtain the external parameter of the camera to be calibrated.
8. The camera external parameter calibration apparatus according to claim 7, wherein the feature extraction module is configured to:
for the ith calibration sample image in the at least three calibration sample images, taking the ith characteristic point of the calibration sample image as a starting point, and performing characteristic extraction on the calibration sample image by adopting a convolution layer with the step length of n to obtain the image characteristics of the ith calibration sample image; wherein i is an integer greater than 0 and less than n.
9. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-6.
10. A computer-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any of claims 1-6.
CN202211051513.6A 2022-08-31 2022-08-31 Camera external parameter calibration method and device, electronic equipment and computer readable storage medium Pending CN115330886A (en)

Publications (1)

Publication Number Publication Date
CN115330886A true CN115330886A (en) 2022-11-11



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination