CN114332248A - Automatic calibration method and device for external parameters of vision sensor - Google Patents
- Publication number
- CN114332248A (application CN202210218766.1A)
- Authority
- CN
- China
- Prior art keywords
- lane line
- vanishing point
- image frame
- lane
- parameters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention provides a method and a device for automatically calibrating the external parameters of a vision sensor. The method comprises the following steps: acquiring images through a vision sensor and extracting image frame data containing lane lines; detecting the lane lines in each image frame and calculating a road vanishing point from the lane lines; calculating an optimal road vanishing point from all road vanishing points of the image frames in which lane lines were detected; and calculating the external parameters of the vision sensor from the obtained optimal road vanishing point. The device comprises an image acquisition module, a lane line detection module, an optimal vanishing point calculation module and a vision sensor external parameter correction module. The scheme calculates the vanishing point by detecting the road lane lines and solves the external parameters from the calculated road vanishing point, realizing automatic calibration of the external parameters and correcting the calibrated external parameters when the initially calibrated parameters fail in a complex scene; the calibration is reliable and efficient, with higher accuracy and precision.
Description
Technical Field
The invention relates to the technical field of unmanned driving, in particular to a method and a device for automatically calibrating external parameters of a vision sensor.
Background
In unmanned technology, systems such as autonomous vehicles or unmanned aerial vehicles mainly comprise several large modules: perception, cognition and control. This order is also the order of operation: the system must first accurately perceive the environment, then process the information, and finally send instructions to the vehicle's control system to realize specific functions.
Among the perception modules, sensors are the most important hardware. Many kinds of sensors exist at present; besides the various radars, such as lidar and millimeter-wave radar, vision sensors such as cameras are also indispensable. A vehicle-mounted vision sensor based on computer vision technology can perceive many objects in the environment, and some even believe that, with the development of computer vision, lidar is not a necessary component.
The premise of a camera-based forward-looking ADAS system is accurate calibration of the camera; functions such as ranging, PCW (pedestrian collision warning) and LDW (lane departure warning) place high requirements on calibration accuracy. At present, most schemes adopt manual calibration, i.e., the internal and external parameters of the camera are calibrated when the camera is installed and fixed. Manual calibration of the external parameters is inefficient and introduces error instability. Moreover, most manual calibration is a one-time initialization setting; in complex road scenes (such as uphill and downhill grades, bumpy roads, and hard acceleration or braking), the initially calibrated external parameters can fail due to large errors.
Lane line detection is the basis of LDW and LKA (lane centering assist) and a key technology in the ADAS field; its accuracy and stability directly affect the LDW and LKA functions. The existing mainstream two-stage lane line detection method, which extracts features first and then fits them in post-processing, is inefficient, cannot effectively extract the global features of an image, lacks stability and accuracy, and adapts poorly to complex scenes. Common end-to-end lane line detection methods based on deep learning neural networks such as CNNs and RNNs generally have large models and high computing-power requirements, making them difficult to deploy on a vehicle-mounted chip for real-time detection.
Disclosure of Invention
In view of this, a method and a device for automatically calibrating the external parameters of a vision sensor are provided, to solve the low efficiency of manual calibration of the external parameters and the error instability it introduces. The failure of the initially calibrated parameters in complex scenes is also effectively addressed: real-time automatic calibration, and calibration correction in failure scenarios, can be performed.
An automatic calibration method for external parameters of a vision sensor comprises the following steps:
acquiring an image through a visual sensor, and extracting image frame data with a lane line;
detecting a lane line in each image frame, and calculating a road vanishing point according to the lane line;
calculating an optimal road vanishing point according to all road vanishing points in the image frame detected and extracted by the lane line;
and calculating external parameters of the visual sensor according to the obtained optimal road vanishing point.
In some embodiments, detecting the lane lines in each image frame includes performing parameter prediction with an end-to-end lane line detection neural network based on a Transformer structure: the lane line equation parameters in the image coordinate system are obtained through the end-to-end lane line detection neural network, from which the lane lines in the image frame are detected. In the lane line equation, (u, v) denotes the pixel coordinates in the image, and the equation coefficients are determined by the internal and external parameters of the vision sensor.
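For reference only: the lane line equation itself appears as an image in the original publication and is absent from this text. Assuming the network follows the LSTR lane shape model (Liu et al., "End-to-End Lane Shape Prediction with Transformers"), which matches the Transformer structure, the shared-to-unshared curvature change, and the Hungarian loss described below, the six-parameter image-plane curve would be

$$u = \frac{k''}{(v - f'')^{2}} + \frac{m''}{v - f''} + n'' + b''\,v - b'''$$

where k'', f'', m'', n'', b'' and b''' are combinations of the camera internal and external parameters and the ground-plane lane geometry. This form is an assumption drawn from the cited literature, not quoted from the patent.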
In some preferred embodiments, when the vanishing point is calculated from the detected lane line equation after the lane lines in an image frame are detected, whether the lane line is a straight line is first determined through an a priori constraint condition. If so, the vanishing point is calculated from the lane line equation; if not, the vanishing point calculation is skipped and the next image frame is processed.
In a specific embodiment, when the end-to-end lane line detection neural network based on the Transformer structure is applied in the step of detecting the lane lines in each image frame, the network adopts a pure attention mechanism structure and unshared curvature parameters, and performs the following improved training:
(1) changing a sparse sampling mode of a lane line sampling point into sampling with dense far ends and sparse near ends for training;
(2) adjusting the loss weight coefficients of all lane lines to be consistent.
In a specific embodiment, the optimal vanishing point calculation is performed by a clustering algorithm model and specifically includes the following steps: the road vanishing points of a plurality of consecutive image frames are input into the clustering algorithm model, and the optimal road vanishing point is estimated through the DBSCAN clustering algorithm. Before the clustering algorithm is performed, whether the road vanishing point of an image frame, calculated from the detected lane line equation, meets the condition for joining the vanishing point clustering pool is judged according to an a priori constraint condition; if so, it is added to the clustering pool for calculation, and if not, it is determined to be an interference noise point, its addition to the clustering pool is skipped, and the next image frame is processed.
In some preferred embodiments, the end-to-end lane line detection model is based on a reduced Resnet18 structure, in which the number of channels is reduced and the parameters of the feature extraction layer are reduced to avoid overfitting; channels are increased in the shallow features and reduced in the deep features to enhance the network's extraction of the spatial structure features of the lane lines.
the ResNet18 output channel is cut into 16, 32, 64 and 128 blocks, and the down sampling coefficient is set to 8, so as to enhance the capability of the neural network backbone in extracting low-resolution features from the input image; wherein the high-level spatial representation of the lane lines is encoded as H x W x C; this feature is flattened in the spatial dimension when constructing the sequence for the encoder input, resulting in a sequence S of size HW × C, where HW represents the length of the sequence and C represents the number of channels, which is taken as the encoder input.
Correspondingly, an automatic calibration device for the external parameters of a vision sensor comprises:
the image acquisition module acquires images through the visual sensor and extracts image frame data with lane lines;
the lane line detection module is used for detecting lane lines in each image frame and detecting road vanishing points;
the optimal vanishing point calculation module is used for calculating the optimal road vanishing point according to all road vanishing points in the image frame detected and extracted by the lane line;
and the vision sensor external parameter correcting module is used for calculating the vision sensor external parameters according to the obtained optimal road vanishing point.
In some preferred embodiments, the lane line detection module includes an end-to-end lane line detection neural network module based on a Transformer structure, which obtains the lane line equation parameters in the image coordinate system through the end-to-end lane line detection neural network and calculates the road vanishing point in each image frame from the lane line equation. When the lane lines in an image frame are detected, whether each lane line is a straight line is further judged in advance through an a priori constraint condition: if it is a straight line, the road vanishing point is calculated from the lane line equation; if not, the vanishing point calculation is skipped and the next image frame is processed. In the lane line equation, (u, v) denotes the pixel coordinates in the image, and the equation coefficients are determined by the internal and external parameters of the vision sensor.
In some preferred embodiments, the optimal vanishing point calculation module adopts a clustering algorithm model: it inputs the road vanishing points of a plurality of consecutive image frames into the clustering algorithm model and calculates the optimal road vanishing point through the DBSCAN clustering algorithm. Before the clustering algorithm is performed, whether the road vanishing point of an image frame meets the condition for joining the vanishing point clustering pool is judged according to an a priori constraint condition; if so, it is added to the clustering pool for calculation, and if not, the next image frame is processed.
In some preferred embodiments, when the end-to-end lane line detection neural network based on the Transformer structure is applied in the step of detecting the lane lines in each image frame, the network adopts a pure attention mechanism structure and unshared curvature parameters, and performs the following improved training:
(1) changing a sparse sampling mode of a lane line sampling point into sampling with dense far ends and sparse near ends for training;
(2) adjusting the loss weight coefficients of all lane lines to be consistent;
The end-to-end lane line detection model is based on a reduced Resnet18 structure, in which the number of channels is reduced and the parameters of the feature extraction layer are reduced to avoid overfitting, and channels are increased in the shallow features and reduced in the deep features;
the ResNet18 output channel is cut into 16, 32, 64 and 128 blocks, and the down sampling coefficient is set to 8, so as to enhance the capability of the neural network backbone in extracting low-resolution features from the input image; wherein the encoding size of the high-level spatial representation of the lane is H multiplied by W multiplied by C; this feature is flattened in the spatial dimension when constructing the sequence for the encoder input, resulting in a sequence S of size HW × C, where HW represents the length of the sequence and C represents the number of channels, which is taken as the encoder input.
The automatic calibration method and device for the external parameters of the vision sensor at least have the following advantages:
1. Under the same computing power, the accuracy and precision of lane line detection are higher; multiple lane lines can be detected simultaneously, realizing full end-to-end detection of the lane lines on the road surface; the model directly outputs the lane line equation parameters, the detection effect is good, and the real-time performance of detection is improved;
2. A scheme of estimating the vanishing point by detecting the road lane lines is provided: the camera external parameters are solved from the estimated road vanishing point, so that automatic calibration of the camera external parameters replaces manual calibration, and the calibrated external parameters are corrected when the initially calibrated parameters fail in a complex scene; the calibration is reliable and efficient.
Drawings
Fig. 1 is a schematic flow chart of a method for automatically calibrating an external parameter of a visual sensor according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a model architecture used in a method for automatically calibrating external parameters of a vision sensor according to an embodiment of the present invention.
Fig. 3 is a specific flowchart of a method for automatically calibrating an external parameter of a visual sensor according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a frame of the automatic calibration device for external parameters of a vision sensor according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1-3, a method for automatically calibrating an external parameter of a vision sensor according to an embodiment of the present invention is shown, the method including the following steps:
s10, collecting images through a vision sensor, and extracting image frame data with lane lines;
s20, detecting the lane lines in each image frame, and calculating the road vanishing point according to the lane lines;
s30, calculating the optimal road vanishing point according to all the road vanishing points in the image frame detected and extracted by the lane line;
And S40, calculating the external parameters of the vision sensor according to the obtained optimal road vanishing point.
In step S10, in some embodiments of the present invention, the vision sensor is a video camera or a camera, and the road image is captured by the camera to generate a lane line image, preferably a continuous multi-frame image.
In step S20, detecting the lane lines in each image frame includes performing parameter prediction with the end-to-end lane line detection neural network based on the Transformer structure: the lane line equation parameters in the image coordinate system are obtained through the end-to-end lane line detection neural network, and the lane lines in the image frame are then detected according to the lane line equation.
In some specific embodiments, step S20 includes two substeps:
and S21, detecting the straight line of the lane line in the single-frame image. Specifically, the method comprises the steps that an end-to-end lane line detection neural network is directly output lane line equation parameters under an image coordinate system to detect lane lines in a single-frame image by adopting a Transformer structural design neural network; judging whether the lane line is a straight line or not through the prior constraint condition, if so, calculating a vanishing point according to a lane line equation, namely, entering the next step S24, and if not, skipping the calculation of the vanishing point to detect the lane line in the next frame of image.
Specifically, the lane line equation is the one introduced above: (u, v) denotes the pixel coordinates in the image, and the equation coefficients are determined by the internal and external parameters of the vision sensor.
Neural network Backbone: the neural network model is improved mainly on the basis of Resnet18, avoiding overfitting by reducing the number of channels and the parameters of the feature extraction layer. By increasing channels in the shallow features and reducing channels in the deep features, lane line structure details such as texture, color and the slender lane structure are extracted more efficiently, improving the detection rate of the model.
Specifically, ResNet18 has four blocks and reduces the feature resolution by a factor of 16, with output channel counts of 64, 128, 256 and 512 per block, respectively. In this embodiment, the inventors reduce the ResNet18 output channels to 16, 32, 64 and 128 to avoid overfitting, and set the downsampling coefficient to 8 to reduce the loss of lane structure details. For example, the first block typically contains 4 layers of 3 × 3 convolution; this embodiment preferably reduces it to 2 layers of 3 × 3 convolution. When an image is input to the neural network Backbone, its ability to extract low-resolution features from the input image is thus strengthened, and the high-level spatial representation of the lane is encoded with size H × W × C. To construct the sequence for the encoder input, this feature is preferably flattened in the spatial dimension, yielding a sequence S of size HW × C, where HW represents the length of the sequence and C represents the number of channels. In the inventors' tests, this can improve the detection rate of the neural network model by more than 5%.
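For illustration only (this code is not part of the original patent text): a minimal PyTorch sketch of the reduced backbone just described, with block channels cut from (64, 128, 256, 512) to (16, 32, 64, 128), an overall downsampling factor of 8, and the output flattened into the sequence S of size HW × C. The framework, layer layout and the use of one residual block per stage are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Standard two-convolution residual block (matches the shortened first block)."""
    def __init__(self, cin, cout, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(cin, cout, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(cout)
        self.conv2 = nn.Conv2d(cout, cout, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(cout)
        self.relu = nn.ReLU(inplace=True)
        self.down = None
        if stride != 1 or cin != cout:  # projection shortcut when shape changes
            self.down = nn.Sequential(
                nn.Conv2d(cin, cout, 1, stride, bias=False), nn.BatchNorm2d(cout))

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)

class ReducedResNet18(nn.Module):
    """Reduced backbone: channels (16, 32, 64, 128), total downsampling factor 8."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(  # stride 2
            nn.Conv2d(3, 16, 7, 2, 3, bias=False), nn.BatchNorm2d(16), nn.ReLU(inplace=True))
        self.layer1 = BasicBlock(16, 16, stride=1)   # kept at 2 conv layers
        self.layer2 = BasicBlock(16, 32, stride=2)   # total downsampling 4
        self.layer3 = BasicBlock(32, 64, stride=2)   # total downsampling 8
        self.layer4 = BasicBlock(64, 128, stride=1)  # resolution preserved at 8

    def forward(self, x):
        f = self.layer4(self.layer3(self.layer2(self.layer1(self.stem(x)))))
        # flatten the H x W x C representation into the sequence S of size HW x C
        return f.flatten(2).permute(0, 2, 1)

seq = ReducedResNet18()(torch.randn(1, 3, 288, 512))
print(seq.shape)  # torch.Size([1, 2304, 128]) since (288/8) * (512/8) = 2304
```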
As shown in fig. 2, the end-to-end neural network model architecture based on a Transformer-structure neural network in this embodiment comprises a neural network Backbone 21, a simplified Transformer network 22, several feed-forward networks (FFNs) 23 for parameter prediction, and a Hungarian loss module 24. Given an input image I, the neural network backbone extracts low-resolution features and compresses the spatial dimensions to form a sequence S. S and the position embedding E_p are fed to the Transformer encoder, which outputs a representation sequence S_e. The decoder then processes an initial query sequence S_q together with an implicitly learned lane embedding E_LL to generate an output sequence S_d, attending to S_e and E_p to process the relevant features. Finally, the output parameters are directly predicted by the several FFNs. For the fitting loss, a bipartite matching between the predicted parameters and the ground-truth lanes is performed through the Hungarian fitting loss; the Hungarian algorithm efficiently solves the matching problem, and the matching results are then used to optimize the regression loss of each particular lane. In fig. 2, S, S_e and E_p denote the flattened feature sequence, the encoded sequence and the sinusoidal position embedding, all of which are tensors of shape HW × C; S_q, E_LL and S_d denote the query sequence, the learned lane embedding and the decoded sequence. Different colors in fig. 2 represent different output timings.
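For illustration only: a hedged PyTorch sketch of the fig. 2 arrangement. The flattened backbone sequence S plus the position embedding E_p feeds a small Transformer; learned lane queries (playing the role of S_q with E_LL) decode into per-lane outputs, and FFN heads predict lane equation parameters and an existence score. The depths, widths, query count and parameter count are assumptions, and the Hungarian matching loss of module 24 is omitted.

```python
import torch
import torch.nn as nn

class LaneParameterHead(nn.Module):
    def __init__(self, c=128, n_queries=7, n_params=6):
        super().__init__()
        self.transformer = nn.Transformer(
            d_model=c, nhead=8, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        self.lane_queries = nn.Parameter(torch.randn(n_queries, c))  # S_q with E_LL
        self.ffn_params = nn.Linear(c, n_params)  # lane line equation parameters
        self.ffn_score = nn.Linear(c, 1)          # lane existence logit

    def forward(self, seq, pos_embed):            # seq, pos_embed: (B, HW, C)
        q = self.lane_queries.unsqueeze(0).expand(seq.size(0), -1, -1)
        dec = self.transformer(seq + pos_embed, q)  # decoded sequence S_d
        return self.ffn_params(dec), self.ffn_score(dec)

head = LaneParameterHead()
params, score = head(torch.randn(2, 2304, 128), torch.randn(2, 2304, 128))
print(params.shape, score.shape)  # torch.Size([2, 7, 6]) torch.Size([2, 7, 1])
```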
Improvement of the neural network model output: the shared curvature parameters are changed to unshared curvature parameters, so that different lane lines are fitted and detected separately, which improves the generalization and accuracy of the model.
In addition, unlike CNN- and RNN-based models, the neural network model of this embodiment adopts a pure attention mechanism structure, which makes the model lighter and its computation smaller, so it can meet the computing-power requirements of real-time detection on a vehicle-mounted chip and be deployed in practice.
The training mode is also improved, specifically with the following training (a sketch of the sampling scheme is given after this list):
(1) changing the sparse sampling of lane line sample points to sampling that is dense at the far end and sparse at the near end;
(2) adjusting the loss weight coefficients of all lane lines to be consistent.
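For illustration only: a hedged sketch of improvement (1) above. Image rows for lane sample points are spaced densely toward the far end (top of the image) and sparsely toward the near end (bottom). The quadratic warp used here is an assumption; the patent only states "dense far, sparse near".

```python
import numpy as np

def sample_rows(v_near, v_far, n):
    """Return n row coordinates from v_near (bottom) to v_far (top),
    clustered toward the far end of the lane."""
    t = np.linspace(0.0, 1.0, n)
    # quadratic warp: step size shrinks as t -> 1, so samples cluster at the far end
    return v_near + (v_far - v_near) * (1.0 - (1.0 - t) ** 2)

rows = sample_rows(v_near=720.0, v_far=360.0, n=10)
print(np.round(np.diff(rows), 1))  # row steps shrink toward the far end
```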
And S24, calculating the single-frame road vanishing point from the detected straight lane lines.
The road vanishing point is calculated mainly on the assumption that the road surface lane lines are parallel, so that in a perspective image captured by a camera whose optical axis is parallel to the lanes, the lane lines intersect at the vanishing point. The calculation is therefore performed under set constraint conditions; specifically, the a priori constraint condition in this step means that the yaw angle of the lane line is smaller than a set threshold and the curvature radius is larger than a set threshold, for example a curvature radius larger than 2000 m and a yaw angle smaller than 5 degrees. The specific ranges of the thresholds may of course be adjusted according to actual conditions and needs. The image coordinates of the vanishing point are then calculated from the lane line parameters detected by the model, and the vanishing point image coordinates are added to the clustering pool in the subsequent step.
The first constraint condition in step S21, which checks whether the lane line satisfies the curvature radius threshold and the yaw angle threshold, ensures that the vanishing point calculated from the lane lines is more accurate: it guarantees that the direction of the lane lines is parallel or nearly parallel to the optical axis of the camera when the vanishing point is calculated, and it rejects lane lines detected during curves or lane changes.
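For illustration only: a hedged sketch of the single-frame vanishing point computation, representing each straight lane line in homogeneous coordinates and intersecting two of them. The example endpoints are invented; in the patent the lines come from the detected lane line equation parameters after the straightness check.

```python
import numpy as np

def line_through(p1, p2):
    """Homogeneous line l = p1 x p2 through two image points (u, v)."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(line_a, line_b):
    """Intersection of two homogeneous lines; None if parallel in the image plane."""
    p = np.cross(line_a, line_b)
    if abs(p[2]) < 1e-9:
        return None
    return p[0] / p[2], p[1] / p[2]

left = line_through((620, 720), (880, 420))     # example left lane line
right = line_through((1300, 720), (1000, 420))  # example right lane line
print(vanishing_point(left, right))             # approximately (935.7, 355.7)
```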
In step S30, in some embodiments, the optimal vanishing point calculation is performed by a clustering algorithm model, which specifically includes the following steps: the road vanishing points of a plurality of consecutive image frames are input into the clustering algorithm model, and the optimal road vanishing point is calculated through the DBSCAN clustering algorithm.
The optimal vanishing point calculation is based on the observation that the true road vanishing point lies within a certain image region (hereinafter the "optimal area") in which most of the vanishing point detection results are densely distributed, and the vanishing point is estimated from the detection results distributed in that region. To obtain the calculated value of the optimal road vanishing point, in this embodiment all vanishing point detections are first clustered in a clustering pool and noise points are removed; the cluster with the most elements in the clustering result (hereinafter the "optimal cluster") is selected as the vanishing points distributed in the optimal area; and finally the optimal vanishing point is calculated from all vanishing points in the optimal cluster.
In some preferred embodiments, before the clustering algorithm is performed, whether the road vanishing point of an image frame meets the condition for joining the vanishing point clustering pool is judged in advance according to the a priori constraint condition; if so, it is added to the clustering pool for calculation, and if not, the next image frame is processed. The second condition checks whether the lateral coordinate of the vanishing point is within a threshold range; this threshold is set as a hyperparameter and can be adjusted. The purpose is to preliminarily screen reliable vanishing points into the clustering pool and to reject interference noise points with large deviations; for example, the lateral coordinate of the vanishing point preferably deviates from the image center by less than 20 pixels.
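For illustration only: a hedged sketch of the optimal vanishing point estimation, combining the lateral-coordinate screening, the DBSCAN clustering pool, the optimal cluster selection, and the final averaging. The eps and min_samples values, the averaging of the optimal cluster, and the interpretation of the 20-pixel deviation as measured from the image center column are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def optimal_vanishing_point(candidates, image_center_u,
                            max_lateral_dev=20.0, eps=5.0, min_samples=5):
    # a priori screening: reject noise points with large lateral deviation
    pool = np.array([(u, v) for (u, v) in candidates
                     if abs(u - image_center_u) < max_lateral_dev])
    if len(pool) < min_samples:
        return None
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pool)
    valid = labels[labels >= 0]               # label -1 marks DBSCAN noise
    if valid.size == 0:
        return None
    best = np.bincount(valid).argmax()        # "optimal cluster": most members
    return pool[labels == best].mean(axis=0)  # estimated optimal vanishing point
```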
In addition, after the camera external parameters are solved through the optimal vanishing point, the external parameters are corrected in real time and the camera external parameters are updated in time, realizing continuous update iteration.
As shown in fig. 3, in an embodiment of the present invention, the flow of the method for automatically calibrating the external parameter of the visual sensor is as follows:
s11, a camera captures a lane line image, and image frame data having a lane line is extracted.
The road image can be collected in real time through the camera, so that detection is performed in real time and the external parameters of the vision sensor are corrected in real time. The method is not limited to cameras: it can be widely applied to the automatic calibration of camera external parameters in various CV (computer vision) schemes, with the camera correcting and calibrating its external parameters in real time.
And S22, the end-to-end neural network predicts the lane line equation parameters. The lane lines in each frame image are detected to determine whether they are straight lines, as described above. Specifically, end-to-end lane line detection is realized with a neural network designed on a Transformer structure; the network directly outputs the lane line equation parameters in the image coordinate system and detects the lane lines in the single-frame image. Whether a lane line is a straight line is judged through the a priori constraint condition; if so, the road vanishing point is calculated from the lane line equation.
S23, judging whether the detected lane line meets the vanishing point condition. If so, the flow proceeds to the next step, i.e., calculating the vanishing point; if not, the frame is skipped, the next image frame is acquired and detected, and the steps repeat in sequence. In practical applications, multiple image frames can be acquired and detected simultaneously; when the vanishing point condition is not met, the frame is skipped and whether the lane line of the next image frame is a straight line is judged directly.
S25, calculating the vanishing point from the detected lane line equation parameters, i.e., a concrete instance of step S24 above.
And S32, adding to the clustering pool. In this embodiment, before the clustering algorithm is performed, whether the road vanishing point of an image frame meets the condition for joining the vanishing point clustering pool is judged in advance according to the a priori constraint condition; if so, it is added to the clustering pool for calculation, and if not, the next image frame is processed, i.e., the acquisition and detection of the next image frame cycles in sequence.
And S33, calculating the optimal vanishing point by DBSCAN clustering. As mentioned above, after the road vanishing points meeting the a priori constraint condition are added to the clustering pool, the road vanishing points of a plurality of consecutive image frames are input into the clustering algorithm model, and the optimal road vanishing point is calculated through the DBSCAN clustering algorithm.
And S41, calculating the camera external parameters from the optimal vanishing point, i.e., calculating the external parameters of the vision sensor from the optimal road vanishing point obtained by the clustering algorithm. After the camera external parameters are calculated, the external parameters are corrected in real time and the camera external parameters are updated in time, realizing continuous update iteration.
Referring to fig. 4, in another aspect of the embodiment of the present invention, an automatic calibration apparatus for external parameters of a visual sensor is provided, which includes an image acquisition module 11, a lane line detection module 12, an optimal vanishing point calculation module 13, and a module 15 for correcting external parameters of a visual sensor.
Specifically, the image acquisition module 11 acquires an image by a vision sensor, and extracts image frame data having a lane line. In the embodiment of the present invention, the vision sensor is a video camera or a camera, and the road image is captured by the camera to generate the lane line image, preferably a continuous multi-frame image.
The lane line detection module 12 is configured to detect the lane lines in each image frame and detect the road vanishing point. In some embodiments, the lane line detection module 12 includes an end-to-end lane line detection neural network module based on a Transformer structure, which obtains the lane line equation parameters in the image coordinate system through the end-to-end lane line detection neural network and calculates the road vanishing point in each image frame from the lane line equation.
Detecting the lane lines in each image frame includes performing parameter prediction with the end-to-end lane line detection neural network based on the Transformer structure: the lane line equation parameters in the image coordinate system are obtained through the end-to-end lane line detection neural network, and the lane lines in the image frames are then detected according to the lane line equation.
In some preferred embodiments, the lane line detection module 12 further judges in advance through the a priori constraint condition whether a lane line is a straight line; it calculates the road vanishing point from the lane line equation if the lane line is a straight line, and processes the next image frame if not. To this end, the lane line detection module 12 further includes a straight line detection module and a vanishing point calculation module. The straight line detection module detects straight lane lines in the single-frame image. Specifically, a neural network designed with a Transformer structure is adopted, and the end-to-end lane line detection neural network directly outputs the lane line equation parameters in the image coordinate system to detect the lane lines in the single-frame image; whether a lane line is a straight line is judged through the a priori constraint condition, and if so, the flow proceeds to the vanishing point calculation, otherwise the lane lines in the next frame image are detected.
Neural network Backbone: the neural network model is improved mainly on the basis of Resnet18, avoiding overfitting by reducing the number of channels and the parameters of the feature extraction layer. Channels are increased in the shallow features and reduced in the deep features, so that the detail information of the lane line structure is extracted more efficiently and the detection rate of the model is improved;
Improvement of the neural network model output: the shared curvature parameters are changed to unshared curvature parameters, so that different lane lines are fitted and detected separately, which improves the generalization and accuracy of the model;
The training mode is improved, specifically with the following training:
(1) changing the sparse sampling of lane line sample points to sampling that is dense at the far end and sparse at the near end, used as the targets for training;
(2) adjusting the loss weight coefficients of all lane lines to be consistent, which improves the detection rate of short lane lines at the road edges;
The vanishing point calculation module calculates the single-frame road vanishing point from the detected straight lane lines.
The road vanishing point is calculated mainly on the assumption that the road surface lane lines are parallel, so that the lane lines in a perspective image captured by a camera whose optical axis is parallel to the lanes intersect at the vanishing point. Therefore, under the set constraint condition (the a priori constraint condition in this step means that the lane line yaw angle is smaller than the set threshold and the curvature radius is larger than the set threshold, as described above), the vanishing point image coordinates are calculated from the lane line parameters detected by the model and then added to the clustering pool in the subsequent step.
Specifically, the optimal vanishing point calculating module 13 calculates the optimal road vanishing point according to all the road vanishing points obtained by the lane line detecting module 12.
In some specific embodiments, the optimal vanishing point calculation module 13 adopts a clustering algorithm model: it inputs the road vanishing points of a plurality of consecutive image frames into the clustering algorithm model and calculates the optimal road vanishing point through the DBSCAN clustering algorithm.
The optimal vanishing point calculation is based on the observation that the true road vanishing point lies within a certain image region (hereinafter the "optimal area") in which most of the vanishing point detection results are densely distributed, and the vanishing point is estimated from the detection results distributed in that region. To obtain the calculated value of the optimal road vanishing point, in this embodiment all vanishing point detections are first clustered in a clustering pool and noise points are removed; the cluster with the most elements in the clustering result (hereinafter the "optimal cluster") is selected as the vanishing points distributed in the optimal area; and finally the optimal vanishing point is calculated from all vanishing points in the optimal cluster.
In some preferred embodiments, before performing the clustering algorithm, the optimal vanishing point calculation module 13 judges in advance according to the a priori constraint condition whether the road vanishing point of an image frame meets the condition for joining the vanishing point clustering pool; if so, it adds the point to the clustering pool for calculation, and if not, it proceeds to process the next image frame.
The vision sensor external parameter correction module 15 calculates the external parameters of the vision sensor from the obtained optimal road vanishing point. Specifically, this step mainly calculates the rotation matrix R and the translation vector T of the camera external parameters from the camera intrinsic parameter matrix (a known quantity in the calculation), the optimal road vanishing point obtained by the optimal vanishing point calculation module 13, and the world coordinates of the origin of the camera coordinate system. The camera intrinsic parameter matrix and the world coordinates of the camera coordinate system origin are known quantities determined when the camera is installed. The specific derivation of the rotation matrix R and the translation vector T follows well-known mathematical methods and is not described here.
Therefore, when the vanishing point corresponding to the road-direction axis in the image and the camera intrinsic parameters are known, the rotation matrix R and the translation vector T can be calculated directly from the above relations. In this way, the external parameters of the vision sensor are calculated from the optimal road vanishing point and used for external parameter correction.
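For illustration only: a hedged sketch of one standard way to recover orientation angles from the road vanishing point and the intrinsic matrix K: back-project the vanishing point to get the road direction in camera coordinates, then read off pitch and yaw. The sign conventions and the example K are assumptions, and the patent's full derivation of R and T (including the translation from the known mounting position) is not reproduced here.

```python
import numpy as np

def extrinsic_angles_from_vp(K, vp):
    """Recover camera pitch and yaw (radians) from the road vanishing point,
    assuming the (straight, parallel) lanes define the forward road direction."""
    u, v = vp
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-projected road direction
    d /= np.linalg.norm(d)
    pitch = np.arcsin(-d[1])      # sign convention is an assumption
    yaw = np.arctan2(d[0], d[2])  # lateral offset of the vp -> heading misalignment
    return pitch, yaw

K = np.array([[1000.0, 0.0, 960.0],   # fx, cx: example intrinsics
              [0.0, 1000.0, 540.0],   # fy, cy
              [0.0, 0.0, 1.0]])
pitch, yaw = extrinsic_angles_from_vp(K, (935.7, 355.7))
print(np.degrees(pitch), np.degrees(yaw))  # approximately 10.4, -1.4
```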
Therefore, the method and the device for automatically calibrating the external parameters of the visual sensor at least have the following advantages:
1. Under the same computing power, the accuracy and precision of lane line detection are higher; multiple lane lines can be detected simultaneously, realizing full end-to-end detection of the lane lines on the road surface; the model directly outputs the lane line equation parameters, the detection effect is good, and the real-time performance of detection is improved;
2. A scheme of estimating the vanishing point by detecting the road lane lines is provided: the camera external parameters are solved from the estimated road vanishing point, so that automatic calibration of the camera external parameters replaces manual calibration, and the calibrated external parameters are corrected when the initially calibrated parameters fail in a complex scene; the calibration is reliable and efficient.
It should be noted that the above-mentioned embodiments are only preferred embodiments of the present invention, and are not intended to limit the present invention, and those skilled in the art can make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. An automatic calibration method for external parameters of a visual sensor is characterized by comprising the following steps:
acquiring an image through a visual sensor, and extracting image frame data with a lane line;
detecting a lane line in each image frame, and calculating a road vanishing point according to the lane line;
calculating an optimal road vanishing point according to all road vanishing points in the image frame detected and extracted by the lane line;
and calculating external parameters of the visual sensor according to the obtained optimal road vanishing point.
2. The automatic calibration method for the external parameters of a vision sensor according to claim 1, wherein said detecting the lane lines in each image frame includes performing parameter prediction with an end-to-end lane line detection neural network based on a Transformer structure, the lane line equation parameters in the image coordinate system being obtained through the end-to-end lane line detection neural network to detect the lane lines in the image frame, wherein the lane line equation is expressed in the pixel coordinates (u, v) of the image, with coefficients determined by the internal and external parameters of the vision sensor.
3. The automatic calibration method for the external parameters of a vision sensor according to claim 2, wherein, when the vanishing point is calculated according to the detected lane line equation after the lane lines in each image frame are obtained by detection, whether the lane line is a straight line is determined in advance through an a priori constraint condition; if so, the vanishing point is calculated according to the lane line equation, and if not, the vanishing point calculation is skipped and the next image frame is processed.
4. The automatic calibration method for the external parameters of a vision sensor according to claim 2, wherein, when said end-to-end lane line detection neural network based on a Transformer structure is applied in the step of detecting the lane lines in each image frame, said end-to-end lane line detection neural network adopts a pure attention mechanism structure and unshared curvature parameters, and performs the following improved training:
(1) changing a sparse sampling mode of a lane line sampling point into sampling with dense far ends and sparse near ends for training;
(2) adjusting the loss weight coefficients of all lane lines to be consistent.
5. The automatic calibration method for the external parameters of a vision sensor according to claim 2, wherein said optimal vanishing point calculation is performed by a clustering algorithm model, comprising the following steps: inputting the road vanishing points of a plurality of consecutive image frames into the clustering algorithm model and estimating the optimal road vanishing point through the DBSCAN clustering algorithm; and, before the clustering algorithm is performed, judging according to an a priori constraint condition whether the road vanishing point of an image frame, calculated according to the detected lane line equation, meets the condition for joining the vanishing point clustering pool; if so, adding it to the clustering pool for calculation, and if not, determining it to be an interference noise point, skipping its addition to the clustering pool, and processing the next image frame.
6. The automatic calibration method for the external parameters of a vision sensor according to claim 5, wherein said end-to-end lane line detection model is based on a reduced Resnet18 structure, in which the number of channels is reduced and the parameters of the feature extraction layer are reduced to avoid overfitting, and channels are increased in the shallow features and reduced in the deep features to enhance the network's extraction of the spatial structure features of the lane lines;
the ResNet18 output channel is cut into 16, 32, 64 and 128 blocks, and the down sampling coefficient is set to 8, so as to enhance the capability of the neural network backbone in extracting low-resolution features from the input image; wherein the high-level spatial representation of the lane lines is encoded as H x W x C; this feature is flattened in the spatial dimension when constructing the sequence for the encoder input, resulting in a sequence S of size HW × C, where HW represents the length of the sequence and C represents the number of channels, which is taken as the encoder input.
7. The automatic calibration device for the external parameters of the vision sensor is characterized by comprising:
the image acquisition module acquires images through the visual sensor and extracts image frame data with lane lines;
the lane line detection module is used for detecting lane lines in each image frame and detecting road vanishing points;
the optimal vanishing point calculation module is used for calculating the optimal road vanishing point according to all road vanishing points in the image frame detected and extracted by the lane line;
and the vision sensor external parameter correcting module is used for calculating the vision sensor external parameters according to the obtained optimal road vanishing point.
8. The automatic calibration device for the external parameters of a vision sensor according to claim 7, wherein said lane line detection module comprises an end-to-end lane line detection neural network module based on a Transformer structure, which obtains the lane line equation parameters in the image coordinate system through the end-to-end lane line detection neural network and calculates the road vanishing point in each image frame according to the lane line equation; when the lane lines in each image frame are detected, whether each lane line is a straight line is further judged in advance through an a priori constraint condition; if it is a straight line, the road vanishing point is calculated according to the lane line equation, and if not, the vanishing point calculation is skipped and the next image frame is processed; the lane line equation is expressed in the pixel coordinates (u, v) of the image, with coefficients determined by the internal and external parameters of the vision sensor.
9. The automatic calibration device for the external parameters of a vision sensor according to claim 7, wherein said optimal vanishing point calculation module adopts a clustering algorithm model: it inputs the road vanishing points of a plurality of consecutive image frames into the clustering algorithm model and calculates the optimal road vanishing point through the DBSCAN clustering algorithm; before the clustering algorithm is performed, whether the road vanishing point of an image frame meets the condition for joining the vanishing point clustering pool is judged according to an a priori constraint condition; if so, it is added to the clustering pool for calculation, and if not, the next image frame is processed.
10. The automatic calibration device for the external parameters of a vision sensor according to claim 8, wherein, when said end-to-end lane line detection neural network based on a Transformer structure is applied in the step of detecting the lane lines in each image frame, said end-to-end lane line detection neural network adopts a pure attention mechanism structure and unshared curvature parameters, and performs the following improved training:
(1) changing a sparse sampling mode of a lane line sampling point into sampling with dense far ends and sparse near ends for training;
(2) adjusting the loss weight coefficients of all lane lines to be consistent;
the end-to-end lane line detection model is based on a reduced Resnet18 structure, in which the number of channels is reduced and the parameters of the feature extraction layer are reduced to avoid overfitting, and channels are increased in the shallow features and reduced in the deep features;
the ResNet18 output channel is cut into 16, 32, 64 and 128 blocks, and the down sampling coefficient is set to 8, so as to enhance the capability of the neural network backbone in extracting low-resolution features from the input image; wherein the encoding size of the high-level spatial representation of the lane is H multiplied by W multiplied by C; this feature is flattened in the spatial dimension when constructing the sequence for the encoder input, resulting in a sequence S of size HW × C, where HW represents the length of the sequence and C represents the number of channels, which is taken as the encoder input.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210218766.1A | 2022-03-08 | 2022-03-08 | Automatic calibration method and device for external parameters of vision sensor
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210218766.1A | 2022-03-08 | 2022-03-08 | Automatic calibration method and device for external parameters of vision sensor
Publications (1)
Publication Number | Publication Date |
---|---|
CN114332248A | 2022-04-12
Family
ID=81033699
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210218766.1A | Automatic calibration method and device for external parameters of vision sensor | 2022-03-08 | 2022-03-08
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114332248A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160088986A (en) * | 2015-01-16 | 2016-07-27 | 경북대학교 산학협력단 | Lane detection method using disparity based on vanishing point |
CN109141347A (en) * | 2017-06-28 | 2019-01-04 | 京东方科技集团股份有限公司 | Vehicle-mounted vidicon distance measuring method and device, storage medium and electronic equipment |
CN111223150A (en) * | 2020-01-15 | 2020-06-02 | 电子科技大学 | Vehicle-mounted camera external parameter calibration method based on double vanishing points |
CN114022865A (en) * | 2021-10-29 | 2022-02-08 | 北京百度网讯科技有限公司 | Image processing method, apparatus, device and medium based on lane line recognition model |
- 2022-03-08: Application CN202210218766.1A filed in China (CN); published as CN114332248A; status: Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160088986A (en) * | 2015-01-16 | 2016-07-27 | 경북대학교 산학협력단 | Lane detection method using disparity based on vanishing point |
CN109141347A (en) * | 2017-06-28 | 2019-01-04 | 京东方科技集团股份有限公司 | Vehicle-mounted vidicon distance measuring method and device, storage medium and electronic equipment |
CN111223150A (en) * | 2020-01-15 | 2020-06-02 | 电子科技大学 | Vehicle-mounted camera external parameter calibration method based on double vanishing points |
CN114022865A (en) * | 2021-10-29 | 2022-02-08 | 北京百度网讯科技有限公司 | Image processing method, apparatus, device and medium based on lane line recognition model |
Non-Patent Citations (2)
Title |
---|
LIU Changheng: "Multi-Lane Line Detection and Research Based on Convolutional Neural Networks", China Master's Theses Full-text Database (Electronic Journal), Engineering Science and Technology II *
ZHANG Hao: "Research on Lane Keeping Control Algorithm Based on Machine Vision", China Master's Theses Full-text Database (Electronic Journal), Engineering Science and Technology II *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111274976B (en) | Lane detection method and system based on multi-level fusion of vision and laser radar | |
US11488308B2 (en) | Three-dimensional object detection method and system based on weighted channel features of a point cloud | |
CN106845478B (en) | A kind of secondary licence plate recognition method and device of character confidence level | |
EP4152204A1 (en) | Lane line detection method, and related apparatus | |
CN107577996A (en) | A kind of recognition methods of vehicle drive path offset and system | |
CN109753913B (en) | Multi-mode video semantic segmentation method with high calculation efficiency | |
CN107590438A (en) | A kind of intelligent auxiliary driving method and system | |
CN114266891B (en) | Railway operation environment abnormality identification method based on image and laser data fusion | |
CN111401150B (en) | Multi-lane line detection method based on example segmentation and self-adaptive transformation algorithm | |
CN113673444B (en) | Intersection multi-view target detection method and system based on angular point pooling | |
CN106682586A (en) | Method for real-time lane line detection based on vision under complex lighting conditions | |
CN110210350A (en) | A kind of quick parking space detection method based on deep learning | |
CN112731436B (en) | Multi-mode data fusion travelable region detection method based on point cloud up-sampling | |
CN115049700A (en) | Target detection method and device | |
CN113011338B (en) | Lane line detection method and system | |
CN116434088A (en) | Lane line detection and lane auxiliary keeping method based on unmanned aerial vehicle aerial image | |
Getahun et al. | A deep learning approach for lane detection | |
CN115457780B (en) | Vehicle flow and velocity automatic measuring and calculating method and system based on priori knowledge set | |
CN112801021B (en) | Method and system for detecting lane line based on multi-level semantic information | |
CN118038396A (en) | Three-dimensional perception method based on millimeter wave radar and camera aerial view fusion | |
CN112446885B (en) | SLAM method based on improved semantic optical flow method in dynamic environment | |
CN116630917A (en) | Lane line detection method | |
CN116630904A (en) | Small target vehicle detection method integrating non-adjacent jump connection and multi-scale residual error structure | |
CN114332248A (en) | Automatic calibration method and device for external parameters of vision sensor | |
CN115690711A (en) | Target detection method and device and intelligent vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20220412 |