CN112819895A - Camera calibration method and device

Camera calibration method and device

Info

Publication number
CN112819895A
CN112819895A (application number CN201911125464.4A)
Authority
CN
China
Prior art keywords
calibration
image
model
feature
points
Prior art date
Legal status
Pending
Application number
CN201911125464.4A
Other languages
Chinese (zh)
Inventor
Xin Hua (辛华)
Current Assignee
Xian Huawei Technologies Co Ltd
Original Assignee
Xian Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Xian Huawei Technologies Co Ltd filed Critical Xian Huawei Technologies Co Ltd
Priority to CN201911125464.4A
Publication of CN112819895A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/464 - Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/582 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 - Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a camera calibration method and apparatus. The method includes: acquiring a calibration image, where the calibration image is obtained by a camera performing image acquisition on N feature subjects that together include K feature points; finding the K feature points in the calibration image and acquiring the K pixel coordinates of the K feature points; receiving a calibration model, where the calibration model is obtained from a three-dimensional model of a high-precision map according to the geographic position and shooting angle of the camera; finding the K feature points in the calibration model and acquiring the K world coordinates of the K feature points, where the K world coordinates correspond one-to-one with the K pixel coordinates; and calibrating the camera according to the K pixel coordinates and the K world coordinates. By implementing this application, automatic calibration of the camera can be realized and camera calibration efficiency improved.

Description

Camera calibration method and device
Technical Field
The present application relates to the field of intelligent transportation, and in particular to a camera calibration method and apparatus.
Background
In an Intelligent Transportation System (ITS), surveillance cameras are deployed at urban traffic intersections for real-time monitoring, and these cameras must be calibrated in order to monitor and analyze the traffic state of an intersection, for example the running speed of vehicles or the queuing length of vehicles. With the development of computer vision technology, new services based on surveillance cameras have developed very rapidly, for example restoration of static digital-intersection elements based on high-precision maps and acquisition of dynamic vehicle trajectories in vehicle-road coordination. These new services place higher demands on camera calibration in order to digitize both the static traffic elements and the dynamic targets at an intersection.
In existing camera calibration methods, calibration objects must be manually placed in the geographic area, and the geographic coordinates of easily identified mark points on the calibration objects, together with the corresponding pixel coordinates of those mark points in the image, must be acquired; collecting this calibration data is labor-intensive, and for a camera with a wide field of view it is difficult to manually mark the points in the image. It has also been proposed in the industry to calibrate the camera using known camera parameters, or using several non-overlapping parallel straight line segments on the road surface, but these methods all require additional a priori knowledge, such as the height or depression angle of the camera, the lane width, or the actual length of a line segment parallel to the lane. Still other methods use geometric models in the scene, such as a rectangular frame formed by the end points of traffic sign lines, to achieve camera self-calibration, but they require markers to be placed manually in the scene and require that such rectangular frames exist in the practical application scene.
Disclosure of Invention
The embodiments of the present application disclose a camera calibration method and apparatus, which can realize automatic calibration of a camera and automatic conversion of camera image coordinates into high-precision-map world coordinates, require no manual participation in the whole process, and improve the efficiency and accuracy of camera calibration.
In a first aspect, the present application provides a camera calibration method, including: acquiring a calibration image, where the calibration image is obtained by a camera performing image acquisition on N feature subjects, the N feature subjects include K feature points in total, the first feature subject includes M1 feature points, the second feature subject includes M2 feature points, …, the Nth feature subject includes MN feature points, and K is the sum of M1, M2, …, MN; finding the K feature points in the calibration image and acquiring K pixel coordinates of the K feature points in the calibration image, where there is a one-to-one correspondence between the K feature points and the K pixel coordinates; receiving a calibration model, where the calibration model is obtained from a three-dimensional model of a high-precision map according to the geographic position and shooting angle of the camera, and the three-dimensional model is obtained by omnidirectional modeling of the feature subjects; finding the K feature points in the calibration model and acquiring K world coordinates of the K feature points in the calibration model, where there is a one-to-one correspondence between the K feature points and the K world coordinates; and calibrating the camera according to the K pixel coordinates and the K world coordinates.
In the above scheme, the method first obtains a calibration image, captured by a camera, that contains N feature subjects, where the i-th feature subject includes Mi feature points and the N feature subjects have K feature points in total; it then obtains a calibration model containing the same N feature subjects, the calibration model being taken from the three-dimensional model of the high-precision map according to the geographic position and shooting angle of the camera. The K feature points are detected in the calibration image and their pixel coordinates in the calibration image are acquired; the K feature points are then determined in the calibration model and their world coordinates in the calibration model are acquired; finally, the camera is calibrated from the K pairs of pixel and world coordinates. Because the world coordinates of the feature points of the actual geographic area corresponding to the feature points in the calibration image are found in the calibration model, rather than measured manually in the field, automatic calibration of the camera is realized and camera calibration efficiency is improved.
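By way of illustration only (this sketch is not part of the claimed method, and the application does not prescribe a particular solver): once the K pixel/world correspondences are available, the final calibration step can be performed with a standard perspective-n-point solver. The sketch below assumes OpenCV and a known intrinsic matrix K_int; all names are illustrative.

```python
# Illustrative sketch, assuming OpenCV and known intrinsics; the application
# itself does not prescribe a specific solver.
import numpy as np
import cv2

def calibrate_extrinsics(world_pts, pixel_pts, K_int, dist_coeffs=None):
    """Recover camera rotation/translation from K matched point pairs."""
    world_pts = np.asarray(world_pts, dtype=np.float64).reshape(-1, 3)
    pixel_pts = np.asarray(pixel_pts, dtype=np.float64).reshape(-1, 2)
    ok, rvec, tvec = cv2.solvePnP(world_pts, pixel_pts, K_int, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed; check the point correspondences")
    # Mean reprojection error as a sanity check on calibration quality.
    proj, _ = cv2.projectPoints(world_pts, rvec, tvec, K_int, dist_coeffs)
    err = np.linalg.norm(proj.reshape(-1, 2) - pixel_pts, axis=1).mean()
    return rvec, tvec, err
```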
In a possible implementation, finding the K feature points in the calibration image includes: acquiring N image calibration frames of the N feature subjects from the calibration image, where the first image calibration frame is the calibration frame of the first feature subject in the calibration image, the second image calibration frame is the calibration frame of the second feature subject in the calibration image, …, and the Nth image calibration frame is the calibration frame of the Nth feature subject in the calibration image; and searching for the K feature points in the N image calibration frames, where the first image calibration frame includes M1 feature points, the second image calibration frame includes M2 feature points, …, and the Nth image calibration frame includes MN feature points.
In a possible implementation, finding the K feature points in the calibration model includes: obtaining N model calibration frames of the N feature subjects from the calibration model; matching the N image calibration frames with the N model calibration frames, thereby determining that the first model calibration frame is the calibration frame of the first feature subject in the calibration model, the second model calibration frame is the calibration frame of the second feature subject in the calibration model, …, and the Nth model calibration frame is the calibration frame of the Nth feature subject in the calibration model; and searching for the K feature points in the N model calibration frames, where the first model calibration frame includes M1 feature points, the second model calibration frame includes M2 feature points, …, and the Nth model calibration frame includes MN feature points.
It can be seen that in this scheme the calibration frames of the feature subjects in the calibration image and in the calibration model are first matched, so that the feature subjects in the calibration image correspond one-to-one with those in the calibration model; the feature points of the same feature subject are then matched between the calibration image and the calibration model, so that the feature points also correspond one-to-one. This greatly reduces the amount of computation, improves the efficiency of matching feature points between the calibration image and the calibration model, and thus improves camera calibration efficiency.
In a possible embodiment, matching the N image calibration frames with the N model calibration frames includes: calculating the center points of the N image calibration frames to obtain N center-point pixel coordinates, where the N center-point pixel coordinates correspond one-to-one with the N image calibration frames; calculating the center points of the N model calibration frames to obtain N center-point world coordinates, where the N center-point world coordinates correspond one-to-one with the N model calibration frames; traversing each of the N center points based on the N center-point pixel coordinates, and establishing the up/down/left/right relationship between each center point and its peripheral center points, to obtain a first position result; traversing each of the N center points based on the X and Y coordinates of the N center-point world coordinates, and establishing the up/down/left/right relationship between each center point and its peripheral center points, to obtain a second position result; and determining, according to the first position result and the second position result, that the N image calibration frames correspond one-to-one with the N model calibration frames.
The position association relationships between the feature subjects are established in the calibration image and in the calibration model according to the same rule, and the feature subjects in the calibration image and the calibration model are matched based on these relationships, which improves the matching efficiency of the feature subjects and thus the calibration efficiency of the camera.
In a possible embodiment, matching the N image calibration frames with the N model calibration frames includes: calculating the center points of the N image calibration frames to obtain N center-point pixel coordinates, where the N center-point pixel coordinates correspond one-to-one with the N image calibration frames; calculating the center points of the N model calibration frames to obtain N center-point world coordinates, where the N center-point world coordinates correspond one-to-one with the N model calibration frames; determining, based on the N center-point pixel coordinates, the number of center points contained in each of the four directions (up, down, left, right) of each of the N center points, to obtain a third position result, where the numbers of center points in the four directions of each center point sum to N-1; determining, based on the X and Y coordinates of the N center-point world coordinates, the number of center points contained in each of the four directions of each of the N center points, to obtain a fourth position result, where the numbers of center points in the four directions of each center point likewise sum to N-1; and determining, according to the third position result and the fourth position result, that the N image calibration frames correspond one-to-one with the N model calibration frames.
The position association relationships between the feature subjects are established in the calibration image and in the calibration model according to the same rule, and the feature subjects in the calibration image and the calibration model are matched based on these relationships, which improves the matching efficiency of the feature subjects and thus the calibration efficiency of the camera.
In a possible embodiment, matching the N image calibration frames with the N model calibration frames includes: calculating the center points of the N image calibration frames to obtain N center-point pixel coordinates, where the N center-point pixel coordinates correspond one-to-one with the N image calibration frames; calculating the center points of the N model calibration frames to obtain N center-point world coordinates, where the N center-point world coordinates correspond one-to-one with the N model calibration frames; sorting the N center-point pixel coordinates according to a first order to obtain a first sorting result, where the first order is one of left-to-right, right-to-left, top-to-bottom, and bottom-to-top; sorting the N center-point world coordinates according to the first order, based on their X and Y coordinates, to obtain a second sorting result, where the second sorting result corresponds one-to-one with the first sorting result; and determining, according to the first sorting result and the second sorting result, that the N image calibration frames correspond one-to-one with the N model calibration frames.
The position association relationships between the feature subjects are established in the calibration image and in the calibration model according to the same rule, and the feature subjects in the calibration image and the calibration model are matched based on these relationships, which improves the matching efficiency of the feature subjects and thus the calibration efficiency of the camera.
In a possible embodiment, searching for the K feature points in the N model calibration frames (where the first model calibration frame includes M1 feature points, the second model calibration frame includes M2 feature points, …, and the Nth model calibration frame includes MN feature points) includes: sorting the Mi pixel coordinates of the feature subject in the i-th image calibration frame according to a second order to obtain a third sorting result, where the second order is one of left-to-right, right-to-left, top-to-bottom, and bottom-to-top, and i is an integer greater than or equal to 1 and less than or equal to N; sorting the Mi world coordinates of the feature subject in the i-th model calibration frame according to the second order to obtain a fourth sorting result, where the i-th image calibration frame corresponds one-to-one with the i-th model calibration frame; and determining, based on the third sorting result and the fourth sorting result, that the K pixel coordinates correspond one-to-one with the K world coordinates.
The feature points of the same feature subject in the calibration image and the calibration model are matched by establishing position association relationships according to the same rule and matching the feature points based on those relationships, which improves the efficiency of matching feature points between the calibration image and the calibration model and thus improves camera calibration efficiency.
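A minimal sketch of this order-based pairing within one matched pair of calibration frames (function and variable names are illustrative, not from the application); it assumes a left-to-right second order and that the image column axis and the world X axis increase in the same direction for this camera view:

```python
def match_corners_by_order(pixel_corners, world_corners):
    """Pair the Mi corners of one feature subject by sorting both sets
    left-to-right; assumes consistent ordering between the two views."""
    px_sorted = sorted(pixel_corners, key=lambda p: p[0])  # image column
    w_sorted = sorted(world_corners, key=lambda w: w[0])   # world X
    return list(zip(px_sorted, w_sorted))
```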
In one possible embodiment, the feature subject includes a road traffic sign line, and the road traffic sign line includes one or more of a turn line, a lane line, a stop line, and a pedestrian crossing line.
In a possible embodiment, after calibrating the camera according to the K pixel coordinates and the K world coordinates, the method further includes: acquiring two calibration images of adjacent frames and acquiring the two pixel coordinates of each identical feature point in the two calibration images, where there are a plurality of such identical feature points; calculating the average of the distances between the two pixel coordinates of each identical feature point; and recalibrating the camera when the average is greater than a preset distance threshold.
After the initial calibration of the camera is completed, whether the shooting angle of the camera has changed is determined by detecting how the pixel coordinates of the same feature points change between adjacent calibration frames; when the shooting angle has changed, the camera is recalibrated, realizing self-correction of the camera calibration parameters and improving calibration accuracy.
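A minimal sketch of this self-check (names are illustrative; the distance threshold is deployment-specific):

```python
import numpy as np

def camera_moved(pts_prev, pts_curr, dist_threshold):
    """True when the mean displacement of the same feature points between
    two adjacent calibration images exceeds the preset threshold."""
    a = np.asarray(pts_prev, dtype=np.float64)
    b = np.asarray(pts_curr, dtype=np.float64)
    return np.linalg.norm(a - b, axis=1).mean() > dist_threshold
```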
In a second aspect, an embodiment of the present application provides an apparatus for camera calibration, including:
a receiving unit, configured to acquire a calibration image, where the calibration image is obtained by a camera performing image acquisition on N feature subjects, the N feature subjects include K feature points in total, the first feature subject includes M1 feature points, the second feature subject includes M2 feature points, …, the Nth feature subject includes MN feature points, and K is the sum of M1, M2, …, MN;
the receiving unit is further configured to receive a calibration model, where the calibration model is obtained from a three-dimensional model of a high-precision map according to the geographic position and shooting angle of the camera, and the three-dimensional model is obtained by omnidirectional modeling of the feature subjects;
a processing unit, configured to find the K feature points in the calibration image and acquire K pixel coordinates of the K feature points in the calibration image, where there is a one-to-one correspondence between the K feature points and the K pixel coordinates;
the processing unit is further configured to find the K feature points in the calibration model, and acquire K world coordinates of the K feature points in the calibration model, where there is a one-to-one correspondence between the K feature points and the K world coordinates;
and a computing unit, configured to calibrate the camera according to the K pixel coordinates and the K world coordinates.
In a possible embodiment, the processing unit is specifically configured to: acquire N image calibration frames of the N feature subjects from the calibration image, where the first image calibration frame is the calibration frame of the first feature subject in the calibration image, the second image calibration frame is the calibration frame of the second feature subject in the calibration image, …, and the Nth image calibration frame is the calibration frame of the Nth feature subject in the calibration image; and search for the K feature points in the N image calibration frames, where the first image calibration frame includes M1 feature points, the second image calibration frame includes M2 feature points, …, and the Nth image calibration frame includes MN feature points.
In a possible embodiment, the processing unit is specifically configured to: obtain N model calibration frames of the N feature subjects from the calibration model; match the N image calibration frames with the N model calibration frames, thereby determining that the first model calibration frame is the calibration frame of the first feature subject in the calibration model, the second model calibration frame is the calibration frame of the second feature subject in the calibration model, …, and the Nth model calibration frame is the calibration frame of the Nth feature subject in the calibration model; and search for the K feature points in the N model calibration frames, where the first model calibration frame includes M1 feature points, the second model calibration frame includes M2 feature points, …, and the Nth model calibration frame includes MN feature points.
In a possible embodiment, the processing unit is specifically configured to: calculating the central points of the N image calibration frames to obtain N central point pixel coordinates, wherein the N central point pixel coordinates correspond to the N image calibration frames one by one; calculating central points of the N model calibration frames to obtain N central point world coordinates, wherein the N central point world coordinates correspond to the N model calibration frames one to one; traversing each central point in the N central points based on the N central point pixel coordinates, and establishing the up-down left-right relation between each central point and the peripheral central point of each central point to obtain a first position result; traversing each central point in the N central points based on X, Y coordinates of the world coordinates of the N central points, establishing the up-down and left-right relation between each central point and the peripheral central point of each central point, and obtaining a second position result; and determining that the N image calibration frames correspond to the N model calibration frames one by one according to the first position result and the second position result.
In a possible embodiment, the processing unit is specifically configured to: calculating the central points of the N image calibration frames to obtain N central point pixel coordinates, wherein the N central point pixel coordinates correspond to the N image calibration frames one by one; calculating central points of the N model calibration frames to obtain N central point world coordinates, wherein the N central point world coordinates correspond to the N model calibration frames one to one; determining the number of central points respectively contained in the upper direction, the lower direction, the left direction and the right direction of each central point in the N central points based on the N central point pixel coordinates, and obtaining a third position result, wherein the sum of the number of the central points respectively contained in the upper direction, the lower direction, the left direction and the right direction of each central point is N-1; determining the number of central points respectively contained in the four directions of the top, the bottom, the left and the right of each central point in the N central points based on X, Y coordinates of the world coordinates of the N central points to obtain a fourth position result, wherein the sum of the number of the central points respectively contained in the four directions of the top, the bottom, the left and the right of each central point is N-1; and determining that the N image calibration frames correspond to the N model calibration frames one by one according to the third position result and the fourth position result.
In a possible embodiment, the processing unit is specifically configured to: calculating the central points of the N image calibration frames to obtain N central point pixel coordinates, wherein the N central point pixel coordinates correspond to the N image calibration frames one by one; calculating central points of the N model calibration frames to obtain N central point world coordinates, wherein the N central point world coordinates correspond to the N model calibration frames one to one; based on the N central point pixel coordinates, sorting the N central point pixel coordinates according to a first sequence to obtain a first sorting result, wherein the first sequence comprises one of left to right, right to left, top to bottom and bottom to top; sorting the N central point world coordinates according to the first sequence based on X, Y coordinates of the N central point world coordinates to obtain a second sorting result, wherein the second sorting result is in one-to-one correspondence with the first sorting result; and determining that the N image calibration frames correspond to the N model calibration frames one by one according to the first sequencing result and the second sequencing result.
In a possible embodiment, the processing unit is specifically configured to: sort the Mi pixel coordinates of the feature subject in the i-th image calibration frame according to a second order to obtain a third sorting result, where the second order is one of left-to-right, right-to-left, top-to-bottom, and bottom-to-top, and i is an integer greater than or equal to 1 and less than or equal to N; sort the Mi world coordinates of the feature subject in the i-th model calibration frame according to the second order to obtain a fourth sorting result, where the i-th image calibration frame corresponds one-to-one with the i-th model calibration frame; and determine, based on the third sorting result and the fourth sorting result, that the K pixel coordinates correspond one-to-one with the K world coordinates.
In one possible embodiment, the feature subject includes a road traffic sign line, and the road traffic sign line includes one or more of a turn line, a lane line, a stop line, and a pedestrian crossing line.
In a possible embodiment, after the computing unit completes the initial calibration computation, the processing unit is further specifically configured to: acquire two calibration images of adjacent frames and acquire the two pixel coordinates of each identical feature point in the two calibration images, where there are a plurality of such identical feature points; calculate the average of the distances between the two pixel coordinates of each identical feature point; and recalibrate the camera when the average is greater than a preset distance threshold.
In a third aspect, an embodiment of the present application provides a computing device, including a processor, a communication interface, and a memory, where the memory is configured to store instructions, the processor is configured to execute the instructions, and the communication interface is configured to receive or transmit data; wherein the processor executes the instructions to perform the method of the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the processor executes the method described in the first aspect or any possible implementation manner of the first aspect.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes instructions, and when the computer program product is executed by a computer, the computer may execute the method described in the first aspect or any possible implementation manner of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a diagram of a system architecture provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
fig. 3 is a schematic view of a traffic intersection photographed by a camera according to an embodiment of the present application;
fig. 4 is a flowchart of a method for matching corner points of a calibration image and a calibration model of a target area according to an embodiment of the present disclosure;
fig. 5 is a result diagram of some traffic sign lines provided in the embodiment of the present application after establishing a position association relationship in a calibration image;
fig. 6 is a result diagram of a traffic sign line after a position association relationship is established in a calibration model according to an embodiment of the present application;
fig. 7 is a result diagram of some calibration images and calibration models provided by the embodiment of the present application after matching corner points of the same traffic sign line;
fig. 8 is a schematic flowchart of a method for obtaining pixel coordinates of a corner point of a traffic sign line according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart of a method for obtaining world coordinates of a corner point of a traffic sign line according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a camera provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of an apparatus for camera calibration according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the Intelligent Transportation System (ITS), with the development of computer vision technology, emerging services based on surveillance cameras are developing very rapidly, for example restoration of static digital-intersection elements combined with high-precision maps and acquisition of dynamic vehicle trajectories in unmanned driving. The development of these new services places higher requirements on the calibration of traffic-intersection cameras: on the one hand, the camera should be calibrated quickly and online without manual participation; on the other hand, the two-dimensional image coordinates acquired by the camera should be convertible into the three-dimensional world coordinates of a high-precision map.
Among existing camera calibration methods, some require a calibration object to be placed on site manually and the three-dimensional spatial coordinates of feature points on the calibration object to be acquired; the feature points must be easy to identify in the image, and the calibration workload is large. Other methods require no human involvement but need additional known a priori knowledge, such as the height or elevation angle of the camera, or the existence of a fixed geometric model or a known lane width in the scene, so the constraint conditions are numerous. In short, existing camera calibration methods do not perform world-coordinate conversion in combination with a high-precision map, and they either need manual participation or impose many constraints, so they cannot meet the high requirements that emerging services place on camera calibration. To solve these problems, the present application provides a method that identifies the traffic sign lines on the ground of a traffic intersection from an image and establishes their position association relationships, establishes the corresponding position association relationships of the traffic sign lines in a point cloud map, matches the corner points of the traffic sign lines according to these position association relationships, and establishes the correspondence between image pixel coordinates and world coordinates in the point cloud, thereby completing automatic calibration of the camera.
The purpose of camera calibration is to restore an object in a captured region space by using an image captured by a camera, and then a conversion relationship between a three-dimensional geometric position of a certain point on the surface of the object in the captured region and a corresponding pixel point in the image needs to be determined. It is understood that the area being photographed is a specific geographical area in the physical world, for example: in areas such as traffic intersections and traffic roads, the image of the photographed area may be an image of the geographical area photographed from a fixed angle, or may be any one of video frames in a video of the geographical area recorded by a camera installed at a fixed position. The essence of camera calibration is to calculate the conversion relationship between the world coordinates of points in the geographic area and the pixel coordinates of corresponding points in the image of the geographic area, thereby obtaining calibration parameters of the camera. According to the obtained calibration parameters, not only can the world coordinates of any object in the geographic area be converted into the pixel coordinates in the image, but also the pixel coordinates of any object in the geographic area in the image can be converted into the world coordinates of the object in the geographic area, namely, the specific position of the object in the geographic space is obtained.
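As one concrete, non-authoritative illustration: when the calibration points all lie on a locally planar road surface, the two-way conversion described above reduces to a 3x3 homography between pixel coordinates and the world X/Y plane. A sketch assuming OpenCV (the application itself works with full three-dimensional world coordinates):

```python
import numpy as np
import cv2

def fit_ground_homography(pixel_pts, world_xy):
    """Homography mapping pixel (u, v) to world (X, Y) on the road plane;
    needs at least four point correspondences."""
    H, _ = cv2.findHomography(np.float32(pixel_pts), np.float32(world_xy),
                              cv2.RANSAC)
    return H

def pixel_to_world(H, u, v):
    X, Y, w = H @ np.array([u, v, 1.0])
    return X / w, Y / w

def world_to_pixel(H, X, Y):
    u, v, w = np.linalg.inv(H) @ np.array([X, Y, 1.0])
    return u / w, v / w
```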
The pixel coordinates of the image in the application are the coordinates of the pixel points of the position of the target in the image, the coordinates of the pixel points are two-dimensional, the world coordinates in the application are the three-dimensional coordinates of each point in the geographic region, and it can be understood that in a physical world, the same point can have different coordinate values under different coordinate systems. The world coordinate of the target in the present application may be a coordinate value in any coordinate system, for example, the world coordinate of the target in the present application may be a three-dimensional coordinate composed of longitude, latitude and altitude corresponding to the target, may also be a three-dimensional coordinate composed of X coordinate, Y coordinate and Z coordinate in a natural coordinate system corresponding to the target, and may also be a coordinate in other forms, as long as the coordinate can uniquely determine the position of the target in the geographic area, and the present application does not limit which form of coordinate is specifically selected. The world coordinates employed in the present application are world coordinates provided by high precision map point cloud data, for example, WGS84 coordinates.
As shown in fig. 1, the camera calibration system may be deployed on one or more computing devices (e.g., a central server) in a cloud environment. The system may also be deployed in an edge environment, specifically on one or more computing devices (edge computing devices) in the edge environment, which may be servers. The cloud environment denotes a central cluster of computing devices owned by a cloud service provider that provides computing, storage, and communication resources; the edge environment denotes a cluster of edge computing devices, geographically close to the raw data collection devices, that provides computing, storage, and communication resources. The raw data collection devices are the devices that collect the raw data required by the camera calibration system, including but not limited to cameras, radars, infrared cameras, magnetic induction coils, etc.; they include devices (e.g., cameras) deployed at fixed positions on a traffic road to collect real-time raw data (e.g., video data, radar data, infrared data) of the traffic road from their own viewing angle, as well as point cloud collection devices (e.g., the PrimeSense PrimeSensor, the Microsoft Kinect, the ASUS Xtion PRO), etc.
To avoid relying on specific on-site markers to manually determine feature points or the pixel coordinates of mark points in corresponding images, as existing calibration methods do, and to reduce the workload of manually acquiring the world coordinates of mark points in the field and manually labeling the pixel coordinates of the corresponding mark points in the image, the present application provides a camera calibration method and system.
It should be noted that the road traffic sign line is mainly used for traffic control and guidance, and generally includes various lines (solid lines or dotted lines) marked on the road surface, arrow lines (turning left, turning right, turning left and going straight, turning right and going straight, etc.), characters, contour marks, etc., and is often disposed on major traffic lanes such as expressways, primary roads, secondary roads, urban expressways, main roads, and special-purpose roads for automobiles.
The high-precision-map point cloud of a traffic intersection can be provided by map vendors. A high-precision map not only contains a large amount of driving-assistance information, such as road-surface semantic information, for example the positions and characteristics of traffic sign lines (such as sign-line types) and lane characteristics (such as lane width, slope regions, and curvature), but also represents the road network accurately in three dimensions (centimeter-level precision). The point cloud data of a high-precision map is usually collected on site by a point-cloud collection vehicle whose core device is a lidar; the environmental point cloud is formed from laser reflections and used for subsequent recognition of each object in the environment. Of course, the point cloud of a traffic intersection can also be obtained with an RGB-D sensor, a three-dimensional scanner, or the like.
Referring to fig. 2, fig. 2 is a schematic flow chart of a camera calibration method according to an embodiment of the present disclosure. As shown in fig. 2, the method includes, but is not limited to, the following steps:
and S101, acquiring a calibration image.
Specifically, the camera calibration system receives a video of the target area shot by the camera and extracts one frame from the received video as the calibration image. The calibration image is obtained by a camera installed at a fixed position at a traffic intersection performing image acquisition on the N feature subjects of the target area, where the i-th feature subject includes Mi feature points, the N feature subjects include K feature points in total, and i is an integer greater than or equal to 1 and less than or equal to N; that is, the N feature subjects in the calibration image have K feature points in total.
It should be noted that, in the embodiments of the present application, the target area shot by the camera is an area containing feature subjects, such as a traffic intersection or a road surface; the feature subjects may be traffic sign lines on the road surface, and the feature points of a feature subject are the corner points of the road traffic sign line. The road traffic sign lines may be turn lines (left-turn, right-turn, left-turn-and-straight, right-turn-and-straight, etc.), stop lines, pedestrian crossing lines, lane-dividing dashed lines, headlights, etc.
Taking fig. 3 as an example, fig. 3 is a calibration image of a traffic intersection acquired by a camera, and as shown in the figure, the road traffic sign line included in the calibration image at least includes: crosswalk line 301, left turn waiting zone 302, left turn arrow 303, lane line 304, straight arrow 305, and stop line 306. The pedestrian crossing line 301 is a walking range of a specified person marked by a marking line such as a zebra crossing on a lane and crossing the lane; the left-turning waiting area 302 is a white dotted line frame which is added on a left-turning lane and is in the middle of a road and is several meters long, and is used for indicating that when a green light of a straight lane in the same direction is on, a vehicle on the left-turning lane must move forward to the waiting signal light of the waiting area; left turn arrow 303 is used to indicate that the vehicle may turn left in the lane; the lane lines 304 are used for lane demarcation on the road surface; the straight arrow 305 is used to indicate that the vehicle may travel straight in the lane; the stop line 306 is a solid horizontal line near the crosswalk, and is used to indicate that the vehicle should stop after the stop line when the stop signal lights.
It can be understood that, for the calibration image extracted from the video data by the camera calibration system, the feature bodies in the calibration image are clearly visible and are not occluded by other objects (such as vehicles, pedestrians, etc.) as far as possible.
S102, finding K characteristic points in the calibration image, and obtaining pixel coordinates of the K characteristic points in the calibration image.
Specifically, for the acquired calibration image, the feature points of the feature subjects in the calibration image are detected using an image processing algorithm. In this embodiment, taking the feature subjects to be traffic sign lines as an example, the K corner points of the N traffic sign lines in the calibration image are detected with an image processing algorithm (where the i-th traffic sign line includes Mi corner points, the N traffic sign lines include K corner points in total, and i is an integer greater than or equal to 1 and less than or equal to N), and the pixel coordinates of the K corner points in the calibration image are acquired, yielding K pixel coordinates. It can be understood that the K feature points in the calibration image correspond one-to-one with the K pixel coordinates.
In an embodiment, the corner points of the traffic sign lines may be detected by extracting the traffic sign lines in the calibration image with a target detection algorithm and then detecting corner points on the extracted traffic sign lines with a corner detection algorithm, thereby obtaining the pixel coordinates of the corner points in the calibration image. The specific method for acquiring the pixel coordinates of the corner points of the traffic sign lines in the calibration image is described in detail later.
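A hedged sketch of such a pipeline using a generic OpenCV corner detector (the application does not name specific detection algorithms; the detector and its parameters here are illustrative):

```python
import cv2

def corners_in_box(gray, box, max_corners=32):
    """Detect corner points inside one detected sign-line bounding box,
    given as (x, y, w, h) in pixel coordinates."""
    x, y, w, h = box
    roi = gray[y:y + h, x:x + w]
    pts = cv2.goodFeaturesToTrack(roi, max_corners,
                                  qualityLevel=0.01, minDistance=5)
    if pts is None:
        return []
    # Shift ROI-local coordinates back into full-image pixel coordinates.
    return [(float(u) + x, float(v) + y) for [[u, v]] in pts]
```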
S103, receiving a calibration model.
Specifically, the camera calibration system acquires a calibration model of the target area, where the calibration model is obtained from the three-dimensional model of the high-precision map according to the geographic position at which the camera was installed and the shooting angle of its lens. It will be appreciated that the calibration model of the target area is obtained by omnidirectional modeling of the feature subjects described above.
It should be noted that the high-precision map is adjustable through 360 degrees in all directions. Because the calibration model of the target area is obtained according to the geographic position and shooting angle of the camera, the calibration model and the calibration image of the target area share the same viewing angle; that is, the number of traffic sign lines and their positional relationships are the same in the calibration image and the calibration model, so the target area likewise has N feature subjects and K feature points in the model. It can be understood that the calibration model of the target area may be the point cloud data of the target area in the high-precision map.
S104, finding K characteristic points in the calibration model, and obtaining world coordinates of the K characteristic points in the calibration model, wherein the K characteristic points in the calibration model correspond to the K characteristic points in the calibration image one by one.
Specifically, to achieve the one-to-one correspondence between the K feature points in the calibration model and the K feature points in the calibration image, the feature points in the calibration model must first be detected. The feature points of the feature subjects in the calibration model can be detected using a point-cloud-based processing algorithm; that is, the K corner points of the N traffic sign lines in the calibration model are detected (where the i-th traffic sign line includes Mi corner points, the N traffic sign lines include K corner points in total, and i is an integer greater than or equal to 1 and less than or equal to N), and the world coordinates of the K corner points in the calibration model are acquired, yielding K world coordinates.
In an embodiment, the corner points of the traffic sign lines may be detected by extracting the traffic sign lines in the calibration model with a point-cloud-based target detection algorithm and then detecting corner points on the extracted traffic sign lines with a point-cloud-based corner detection algorithm, thereby obtaining the world coordinates of the corner points in the calibration model. The specific method for obtaining the world coordinates of the corner points of the traffic sign lines in the calibration model is described in detail later.
After acquiring the K corner points of the N traffic sign lines in the calibration model, the camera calibration system further matches the K corner points in the calibration model with the K corner points in the calibration image. Referring to fig. 4, fig. 4 shows a method, provided in an embodiment of the present application, for matching the corner points of the calibration image and the calibration model of the target area; the specific steps are as follows.
and S1041, respectively establishing position association relations between the N traffic sign lines in the calibration image and the calibration model according to the same rule.
Specifically, for the traffic sign lines detected in the calibration image and in the calibration model, the position association relationships among the N detected traffic sign lines are established in the calibration image and in the calibration model respectively, according to the same rule and based on the positions of the traffic sign lines in each. It should be noted that, after a traffic sign line is detected, it is labeled with a calibration frame, and the pixel coordinates or world coordinates of the center point of each calibration frame are computed as the position of the corresponding traffic sign line in the calibration image or the calibration model. For example, if the calibration frame is rectangular, the pixel coordinates of the center point of a rectangular calibration frame in the calibration image can be calculated from the pixel coordinates of its four vertices.
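For instance, a minimal sketch (illustrative names) of the center computation for a rectangular calibration frame, applicable to both pixel coordinates and world X/Y coordinates:

```python
def box_center(vertices):
    """Center of a rectangular calibration frame given its four vertices,
    each an (x, y) pair; works for pixel or world X/Y coordinates."""
    xs = [p[0] for p in vertices]
    ys = [p[1] for p in vertices]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```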
For example, for the calibration image, the method for establishing the position association relationship of the N traffic sign lines in the calibration image may be: calculating the central points of N image calibration frames in the calibration image to obtain N central point pixel coordinates, wherein the N central point pixel coordinates correspond to the N image calibration frames one by one, traversing each central point in the N central points based on the pixel coordinates of each central point in the calibration image, and establishing the up-down and left-right relationship between each central point and the peripheral central point of the central point to obtain a first position result.
The peripheral center points of a center point are defined as follows: in the calibration image, for a given center point, first determine in which of the four directions (up, down, left, right) each of the remaining N-1 center points lies; then compute the distance between the given center point and each center point in each direction, taking the point at minimum distance in each direction as the nearest center point in that direction. The nearest center points in the four directions up, down, left, and right are the peripheral center points of the given center point.
For example, as shown in fig. 5, (1) of fig. 5 is a calibration image, showing the image coordinate system: the upper left corner is the coordinate origin, the vertical direction is the Y axis, representing the row coordinate of the image, and the horizontal direction is the X axis, representing the column coordinate of the image. (2) of fig. 5 shows the result of traffic sign line detection in the calibration image; the detected traffic sign lines are marked with rectangular frames (A1, A2, A3, A4, A5, and A6), the center pixel coordinates of the six rectangular frames are calculated from the four vertices of each frame, and the up/down/left/right relationships between each rectangular frame and the remaining frames are determined from the center pixel coordinates. The up/down/left/right relationship of a rectangular frame can be determined as follows: compare the pixel coordinates of the target center point with those of each remaining center point in turn, and calculate the difference between the row coordinate of the center point being compared and the row coordinate of the target center point. If the absolute value of this difference is greater than or equal to a first threshold, the image calibration frame corresponding to the compared center point is below the target calibration frame when the difference is positive, and above it otherwise. If the absolute value of the difference is smaller than the first threshold, calculate the difference between the column coordinate of the compared center point and the column coordinate of the target center point; the calibration frame corresponding to the compared center point is to the right of the target calibration frame when this difference is positive, and to its left otherwise. By this method, for rectangular frame A1, A3 is above it, A2, A4, A5, and A6 are to its right, and nothing is to its left or below it. Next, on the right side, by calculating the Euclidean distances between the center point of A1 and the center points of A2, A4, A5, and A6, or by comparing the column coordinates of these center points, the calibration frame nearest to A1 in the right direction is found to be A2. Therefore, the center point of rectangular frame A1 has two peripheral center points: the center point of A3 above it and the center point of A2 to its right. The center points of all the rectangular frames are traversed and analyzed on the same principle, and table 1 lists the position association relationships of all the traffic sign lines in (2) of fig. 5.
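A hedged sketch of the decision rule just described (the threshold value and all names are illustrative):

```python
def direction_of(target, other, row_thresh):
    """Classify 'other' as above/below/left/right of 'target'; points are
    (column, row) pixel coordinates, with the row increasing downward."""
    d_row = other[1] - target[1]
    if abs(d_row) >= row_thresh:
        return "below" if d_row > 0 else "above"
    return "right" if other[0] - target[0] > 0 else "left"

def peripheral_centers(target, others, row_thresh):
    """Nearest center point in each of the four directions around 'target'."""
    nearest = {}
    for c in others:
        d = direction_of(target, c, row_thresh)
        dist = ((c[0] - target[0]) ** 2 + (c[1] - target[1]) ** 2) ** 0.5
        if d not in nearest or dist < nearest[d][1]:
            nearest[d] = (c, dist)
    return {d: c for d, (c, _) in nearest.items()}
```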
For example, for the calibration image, the method for establishing the position association relationships of the N traffic sign lines in the calibration image may alternatively be: calculate the center points of the N image calibration frames in the calibration image to obtain N center-point pixel coordinates (each pixel coordinate consists of a row coordinate and a column coordinate), the N center-point pixel coordinates corresponding one-to-one to the N image calibration frames; traverse each of the N center points based on its pixel coordinates in the calibration image, and determine the number of center points contained in each of the four directions (up, down, left, and right) of each center point, obtaining a third position result. It can be understood that the sum of the numbers of center points contained in the four directions of each center point is N-1.
Specifically, taking one of the N center points as an example, this center point is the target center point and the remaining N-1 center points are the center points to be compared. The number of center points in the four directions (up, down, left, and right) of the target center point may be determined as follows. First, judge in which of the four directions each of the N-1 center points to be compared lies, which may be done in this way: subtract the row coordinate of the target center point from the row coordinate of the center point to be compared; if the absolute value of the difference is greater than or equal to a second threshold and the difference is greater than 0, the center point to be compared is located below the target center point; if the difference is less than or equal to 0, it is located above the target center point. If the absolute value of the difference is smaller than the second threshold, subtract the column coordinate of the target center point from the column coordinate of the center point to be compared; if the difference is greater than 0, the center point to be compared is located to the right of the target center point; if the difference is less than or equal to 0, it is located to the left of the target center point. Finally, count the number of center points in each of the four directions of the target center point.
Specifically, referring to fig. 5, (2) shows the result of traffic sign line detection in the calibration image; the detected traffic sign lines are marked with rectangular frames, and A1 to A6 are the ID numbers of the respective rectangular frames. The detected traffic sign lines include lane lines (A1, A4, A6), a stop line (A3), and arrow indication lines (A2, A5). The pixel coordinates of the center point of each rectangular frame are calculated from the pixel coordinates of its four vertices. According to the above method for establishing the position association relationships of the traffic sign lines, for the rectangular frame A1 there are 4 rectangular frames to its right, namely A2, A4, A5 and A6, and 1 rectangular frame, A3, above it. For the rectangular frame A2, there is 1 rectangular frame, A1, to its left, 1 rectangular frame, A3, above it, and 3 rectangular frames, A4, A5 and A6, to its right. All the traffic sign lines in fig. 5(2) are traversed in the same way, and table 2 lists the position association relationships of the traffic sign lines in the calibration image, that is, the number of traffic sign lines in each of the four directions (up, down, left, and right) of each traffic sign line.
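The counting variant can be sketched the same way. Under the same assumptions as the previous sketch (illustrative (row, col) coordinates and an assumed threshold), the counts for A1 reproduce the "1 above, 4 to the right" of the example:

```python
from collections import Counter

def direction_counts(target, others, row_threshold=20.0):
    """Count the other center points lying in each of the four directions
    of `target`; points are (row, col), threshold value assumed."""
    counts = Counter()
    for r, c in others:
        d_row = r - target[0]
        if abs(d_row) >= row_threshold:
            counts["below" if d_row > 0 else "above"] += 1
        else:
            counts["right" if c - target[1] > 0 else "left"] += 1
    return counts  # values always sum to N - 1

centers = {"A1": (400, 100), "A2": (390, 300), "A3": (200, 120),
           "A4": (395, 500), "A5": (392, 700), "A6": (398, 900)}
others = [v for k, v in centers.items() if k != "A1"]
print(direction_counts(centers["A1"], others))
# Counter({'right': 4, 'above': 1})
```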
For example, for the calibration image, the method for establishing the position association relationships of the N traffic sign lines in the calibration image may alternatively be: calculate the center points of the N image calibration frames in the calibration image to obtain N center-point pixel coordinates, the N center-point pixel coordinates corresponding one-to-one to the N image calibration frames; then, based on the pixel coordinates of each center point in the calibration image, sort the N center-point pixel coordinates in a first order to obtain a first sorting result, i.e., an explicit positional ordering of the traffic sign lines of the calibration image. The first order is one of left-to-right, right-to-left, top-to-bottom, and bottom-to-top.
Specifically, the first order may be left-to-right or right-to-left based on the column coordinates of the center-point pixel coordinates in the calibration image; in some possible embodiments, when the column coordinate values are the same, the tie is broken by sorting the row coordinates of the center points top-to-bottom or bottom-to-top. It is to be understood that the first order may also be top-to-bottom or bottom-to-top based on the row coordinates of the center-point pixel coordinates in the calibration image; in some possible embodiments, when the row coordinates are the same, the tie is broken by sorting the column coordinates of the center points left-to-right or right-to-left. It should be noted that the sorting rule for the traffic sign lines is not specifically limited in this application.
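A left-to-right first order with a top-to-bottom tie-break can be sketched as follows; the ID-to-coordinate mapping is assumed for illustration, with coordinates chosen so the result matches the numbering described for fig. 5(3) below.

```python
def sort_left_to_right(centers):
    """First order: ascending column coordinate; ties broken by ascending
    row coordinate (top to bottom). `centers` maps an ID to (row, col);
    returns {ID: ordinal label starting from 1}."""
    ranked = sorted(centers, key=lambda cid: (centers[cid][1], centers[cid][0]))
    return {cid: rank + 1 for rank, cid in enumerate(ranked)}

# Illustrative coordinates in which A4's center column lies just left of
# A3's, reproducing the numbering 1, 2, 4, 3, 5, 6 of fig. 5(3).
centers = {"A1": (400, 100), "A2": (390, 300), "A3": (200, 460),
           "A4": (395, 440), "A5": (392, 700), "A6": (398, 900)}
print(sort_left_to_right(centers))
# {'A1': 1, 'A2': 2, 'A4': 3, 'A3': 4, 'A5': 5, 'A6': 6}
```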
Taking fig. 5(2) as an example, the traffic sign lines in fig. 5(2) are numbered in the first order from left to right; the sorting result is shown in fig. 5(3), and the rectangular frames A1, A2, A3, A4, A5 and A6 in (2) receive the numbers 1, 2, 4, 3, 5 and 6, respectively.
TABLE 1
[Table image not reproduced: the peripheral center points, in the four directions, of each traffic sign line in fig. 5(2).]
TABLE 2
[Table image not reproduced: the number of traffic sign lines in each of the four directions of each traffic sign line in fig. 5(2).]
Then, the same method is used to establish the position association relationships of the traffic sign lines in the calibration model for the N traffic sign lines detected there. Specifically, the N traffic sign lines detected in the calibration model correspond one-to-one to N model calibration frames. The center points of the N model calibration frames are calculated to obtain N center-point world coordinates, and the position association relationships of the N traffic sign lines in the calibration model are established, based on the X and Y coordinates of the N center-point world coordinates, using the same method as was used to establish the position association relationships of the N traffic sign lines in the calibration image.
It should be noted that, in the calibration model, the traffic sign lines of the road surface all lie in the same plane, so their spatial position relationships can be simplified to planar position relationships. Therefore, when establishing the position association relationships of the traffic sign lines in the calibration model, only the X and Y coordinates of the world coordinates of the center points of the corresponding model calibration frames need to be considered.
For example, for the calibration model, the method for establishing the position association relationships of the N traffic sign lines in the calibration model may be: traverse each of the N center points based on the X and Y coordinates of the N center-point world coordinates, and establish the up-down and left-right relationships between each center point and its peripheral center points, obtaining a second position result.
For example, for the calibration model, the method for establishing the position association relationships of the N traffic sign lines in the calibration model may alternatively be: based on the X and Y coordinates of the N center-point world coordinates, determine the number of center points contained in each of the four directions (up, down, left, and right) of each of the N center points, obtaining a fourth position result, where the sum of the numbers of center points contained in the four directions of each center point is N-1.
For example, for the calibration model, the method for establishing the position association relationships of the N traffic sign lines in the calibration model may alternatively be: based on the X and Y coordinates of the N center-point world coordinates, sort the N center-point world coordinates in the same first order as was adopted in the calibration image, obtaining a second sorting result.
It can be understood that the method used to establish the position association relationships of the traffic sign lines in the calibration model is consistent with that used in the calibration image, so the principle of the adopted method is the same; reference may be made to the related description for the calibration image, and for brevity the details are not repeated here.
S1042, matching the traffic sign lines in the calibration image and the calibration model based on the position association relationships of the traffic sign lines in the calibration image and the calibration model.
Finally, according to the established position association relationships of the traffic sign lines in the calibration image and the calibration model of the target area, the traffic sign lines in the calibration image are matched with those in the calibration model, i.e., the N image calibration frames in the calibration image are placed in one-to-one correspondence with the N model calibration frames in the calibration model.
It can be understood that, because the calibration image and the calibration model of the target area are at the same view angle, the positions and the number of the traffic sign lines in the calibration image and in the calibration model are the same, and the same method is used to establish the position association relationships in both. Matching the traffic sign lines of the calibration image and the calibration model therefore amounts to matching their position association relationships: when a traffic sign line in the calibration image and one in the calibration model have the same position association relationship, the two are successfully matched, i.e., they are the same traffic sign line in the target area.
For example, if the position association relationships of the traffic sign lines are established by determining the up-down and left-right relationships between each traffic sign line and its peripheral traffic sign lines in both the calibration image and the calibration model, the traffic sign lines in the calibration image and in the calibration model may each be traversed from the same starting direction (e.g., from the left) in the same order (e.g., left to right). Taking the traffic sign lines in the calibration image as the reference, it is judged whether the distribution of traffic sign lines in the up, down, left, and right directions of the traffic sign line at the corresponding position in the calibration model is the same as the distribution in the calibration image; when the distributions agree for all the traffic sign lines, the matching of the traffic sign lines in the calibration image and the calibration model is achieved.
For example, if both the calibration image and the calibration model establish the position association relationships of the traffic sign lines by analyzing the distribution of the remaining N-1 traffic sign lines in the up, down, left, and right directions of each traffic sign line, it is only necessary to compare the numbers of traffic sign lines in the four directions of each traffic sign line between the calibration image and the calibration model. When the numbers in the four directions of a traffic sign line in the calibration image equal those of a traffic sign line in the calibration model, a corresponding pair of traffic sign lines has been found, thereby achieving the matching of the traffic sign lines in the calibration image and the calibration model.
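A sketch of this count-based matching follows. It assumes each sign line's (up, down, left, right) count tuple is unique within the scene, which holds when no two lines share the same distribution; the IDs and counts below are illustrative.

```python
def match_by_counts(image_counts, model_counts):
    """Match sign lines whose (n_up, n_down, n_left, n_right) tuples agree.

    Both arguments map a calibration-frame ID to its count tuple; count
    signatures are assumed unique, otherwise the inversion would collide.
    """
    by_signature = {sig: mid for mid, sig in model_counts.items()}
    return {iid: by_signature[sig] for iid, sig in image_counts.items()}

# Counts as in the fig. 5 example: A1 has 1 line above and 4 on its right.
image_counts = {"A1": (1, 0, 0, 4), "A2": (1, 0, 1, 3)}
model_counts = {"B2": (1, 0, 1, 3), "B1": (1, 0, 0, 4)}
print(match_by_counts(image_counts, model_counts))
# {'A1': 'B1', 'A2': 'B2'}
```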
For example, if the same sorting rule is used in both the calibration image and the calibration model to sort the traffic sign lines of the target area, then, since the calibration image and the calibration model of the target area are at the same view angle and use the same sorting rule, the sorting result of the traffic sign lines in the calibration image corresponds one-to-one to that in the calibration model; in other words, the sorted labels of the N image calibration frames in the calibration image correspond one-to-one to the sorted labels of the N model calibration frames in the calibration model, and the corresponding traffic sign lines in the calibration image and the calibration model can be determined from the sorting labels. For example, as shown in fig. 5(3) and fig. 6, fig. 5(3) is a schematic diagram of the calibration image after the traffic sign lines are sorted and labeled, and fig. 6 is a schematic diagram of the calibration model after the traffic sign lines are sorted and labeled using the same sorting rule as in the calibration image. It is easy to see that 1 in fig. 5(3) and 1 in fig. 6 correspond to each other and represent the same traffic sign line; 2 in fig. 5(3) and 2 in fig. 6 correspond to each other; and similarly, 3, 4, 5 and 6 in fig. 5(3) correspond to 3, 4, 5 and 6 in fig. 6, respectively.
S1043, matching the K feature points of the calibration image with the K feature points in the calibration model based on the matched traffic sign lines.
Specifically, according to the matched traffic sign lines in the calibration image and the calibration model of the target area, the camera calibration system extracts each pair of corresponding traffic sign lines in turn, and then matches the corner points of the traffic sign line in the image with the corner points of the traffic sign line in the corresponding point cloud to obtain at least one group of feature point matching pairs, where each feature point matching pair consists of the pixel coordinates of a corner point in the calibration image and the world coordinates of the corresponding corner point in the calibration model.
In a specific implementation, through step S1042, the N image calibration frames in the calibration image correspond one-to-one to the N model calibration frames in the calibration model; the N image calibration frames correspond to N traffic sign lines and the N model calibration frames correspond to N traffic sign lines, that is, the N traffic sign lines in the calibration image correspond one-to-one to the N traffic sign lines in the calibration model. To match the corner points of the calibration image with those of the calibration model, the Mi pixel coordinates of the corner points of the traffic sign line in the ith image calibration frame are sorted in a second order to obtain a third sorting result, where the second order is one of left-to-right, right-to-left, top-to-bottom, and bottom-to-top; similarly, the Mi world coordinates of the corner points of the traffic sign line in the ith model calibration frame are sorted in the same second order to obtain a fourth sorting result, where i is an integer greater than or equal to 1 and less than or equal to N. It can be understood that the ith image calibration frame and the ith model calibration frame correspond to each other. For the same traffic sign line, the labels in the third sorting result and the labels in the fourth sorting result correspond one-to-one, that is, the same corner point of the same traffic sign line has the same sorting label in the calibration image and in the calibration model, thereby completing the matching of the corresponding corner points in the calibration image and the calibration model.
For example, referring to fig. 7, (1) and (2) of fig. 7 are a left-turn indication line extracted from the calibration image and from the calibration model, respectively; the solid circular points are the corner points (or feature points) of the left-turn indication line detected in the calibration image and the calibration model. The pixel coordinates of the corner points of the left-turn indication line in (1) are sorted in left-to-right priority order, that is, in ascending order of the column coordinate of each corner point; the sorting result is shown as serial numbers 1 to 9 in (1). For the corner points of the left-turn indication line in (2), only the X and Y coordinates of the world coordinates are considered, and they are sorted by the same rule; the sorting result is shown as the numbers in (2). It is easy to see that the corner sorting results of the left-turn indication line in the calibration image and in the calibration model are the same: the pixel coordinate of the corner point with serial number 1 in (1) and the world coordinate of the corner point with serial number 1 in (2) form one group of feature point matching pairs, and so on, so that nine groups of feature point matching pairs can be obtained in fig. 7.
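For one matched pair of calibration frames, this corner matching by identical sorting can be sketched as below. The sketch assumes the world X axis increases in the same direction as the image column coordinate (the two views being aligned), and the coordinate values are invented.

```python
def match_corners(image_corners, model_corners):
    """Pair corner pixel coordinates with corner world coordinates for one
    matched calibration-frame pair, using the same second order on both:
    left to right, i.e., ascending column / ascending world X (ties broken
    by row / by world Y).

    image_corners: list of (row, col); model_corners: list of (X, Y, Z).
    """
    img = sorted(image_corners, key=lambda p: (p[1], p[0]))
    mdl = sorted(model_corners, key=lambda p: (p[0], p[1]))  # only X, Y used
    assert len(img) == len(mdl), "matched frames must hold the same corners"
    return list(zip(img, mdl))

pairs = match_corners([(620, 110), (600, 130)],
                      [(12.0, 3.5, 0.0), (14.1, 3.1, 0.0)])
print(pairs)
# [((620, 110), (12.0, 3.5, 0.0)), ((600, 130), (14.1, 3.1, 0.0))]
```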
It should be noted that, after the feature point matching pairs are obtained, a collinear distribution of the feature point matching pairs should be avoided as much as possible, so that the camera calibration relationship obtained by the subsequent calculation is more accurate. Specifically, after the feature point matching pairs are obtained, a straight line is fitted to either the pixel coordinates or the world coordinates of the feature points. Taking the pixel coordinates of the corner points as an example, a straight line is fitted to the pixel coordinates of all the corner points to obtain a line equation, and the distance from each pixel coordinate to the fitted line is calculated using the Euclidean point-to-line distance. If the number of pixel coordinates whose distance to the line is smaller than a third threshold is larger than a fourth threshold, all the corner points are considered to lie on the same straight line, and the feature point matching pairs must be reselected; otherwise, the corner points are not collinear, and the calibration relationship can be calculated from the obtained groups of feature point matching pairs. It should be noted that the third threshold and the fourth threshold may be set according to actual needs, which is not limited in this application.
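One way to implement this collinearity check is sketched below. It fits the line by total least squares rather than any specific method named in the text, and both threshold values are placeholders, not from the patent.

```python
import numpy as np

def corners_collinear(pixels, third_threshold=3.0, fourth_threshold=None):
    """Return True if the corner pixel coordinates are (nearly) collinear
    and the feature point matching pairs should be reselected.

    A line is fitted through the points (total least squares via SVD);
    points closer to the line than `third_threshold` are counted, and the
    set is declared collinear when the count exceeds `fourth_threshold`.
    """
    pts = np.asarray(pixels, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)       # vt[-1] = unit normal of line
    distances = np.abs(centered @ vt[-1])    # point-to-line distances
    if fourth_threshold is None:
        fourth_threshold = len(pts) - 1      # assumed default
    return int((distances < third_threshold).sum()) > fourth_threshold

print(corners_collinear([(0, 0), (10, 10.2), (20, 19.9), (30, 30.1)]))  # True
print(corners_collinear([(0, 0), (10, 10), (20, 0), (30, 40)]))         # False
```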
It can be understood that the calibration calculation of the camera computes a calibration relationship between two planes, and a single straight line cannot determine a plane, which would make the computed calibration parameter matrix non-unique. Therefore, to improve the calibration accuracy, when the feature point matching pairs are selected, they can be formed from one or more corner points of several traffic sign lines in the calibration image and the corresponding corner points in the calibration model, so that the obtained feature point matching pairs are not collinear.
And S105, calibrating the camera according to the K pixel coordinates and the K world coordinates.
Specifically, the camera calibration system may establish a calibration relationship from the image under each camera view angle to the physical world according to the obtained feature point pairs (i.e., pixel coordinates of the feature points in the calibration image and world coordinates of the feature points in the calibration model corresponding thereto).
It should be noted that the calibration relationship between the image under each camera view angle and the physical world may be established by a variety of methods to complete the calibration of the camera. For example, a direct linear transformation matrix L that converts pixel coordinates into world coordinates may be calculated according to the principle of direct linear transformation (DLT). The matrix L may be calculated using the following formula (1), where (u, v) are the pixel coordinates of a feature point in the image, (xw, yw, zw) are the world coordinates of the corresponding feature point in the calibration model, i.e., the physical-world coordinates corresponding to the feature point in the image, and li (i = 1, 2, …, 11) are the parameters of the matrix L to be determined; these 11 parameters characterize the relationship between the image coordinate system and the world coordinate system. Therefore, by obtaining at least six groups of feature point matching pairs in step S104, the matrix L can be calculated and the calibration of the camera completed.
u = (l1·xw + l2·yw + l3·zw + l4) / (l9·xw + l10·yw + l11·zw + 1)
v = (l5·xw + l6·yw + l7·zw + l8) / (l9·xw + l10·yw + l11·zw + 1)    (1)
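A least-squares solution of formula (1) can be sketched as follows; the arrangement of the equations follows the standard DLT derivation, and the function and variable names are illustrative.

```python
import numpy as np

def solve_dlt(pixels, world):
    """Solve the parameters l1..l11 of formula (1) by linear least squares.

    pixels: (K, 2) array of (u, v); world: (K, 3) array of (xw, yw, zw).
    Each matching pair contributes two linear equations, so K >= 6 pairs
    are needed for the 11 unknowns, as noted in the text.
    """
    rows, rhs = [], []
    for (u, v), (x, y, z) in zip(np.asarray(pixels, float),
                                 np.asarray(world, float)):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z])
        rhs.append(v)
    l, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return l  # l[0] .. l[10] are l1 .. l11
```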
Illustratively, a homography transformation method can be adopted to establish the calibration relationship from the image under each camera view angle to the physical world, so as to complete the calibration of the camera. That is, a homography transformation matrix H for transforming pixel coordinates into world coordinates is calculated using the homography transformation principle; the homography transformation formula is (xw, yw, zw) = H * (u, v), where (xw, yw, zw) are the three-dimensional coordinates of a feature point and (u, v) are its pixel coordinates. By obtaining at least four groups of feature point matching pairs in step S104, the H matrix corresponding to the image captured by each camera can be calculated; the H matrices corresponding to images captured by different cameras are different. It should be noted that the homography transformation algorithm is basic content in the field of computer vision and has been widely integrated into software such as the open source computer vision library (OpenCV) and MATLAB; the related routines can be called directly to perform the homography computation, and for brevity the details are not described here.
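A usage sketch with OpenCV follows, under the assumption that the road markings are coplanar, so H maps pixel coordinates to road-plane coordinates (X, Y) with the plane's Z treated as constant; all coordinate values below are invented.

```python
import numpy as np
import cv2

# At least four matched pairs: pixel (u, v) -> road-plane (X, Y) in meters.
pixel_pts = np.array([[100, 800], [1800, 780], [1700, 300], [250, 320]],
                     dtype=np.float32)
plane_pts = np.array([[0, 0], [30, 0], [30, 20], [0, 20]], dtype=np.float32)

H, _ = cv2.findHomography(pixel_pts, plane_pts)

# Map a newly detected pixel into the road plane.
uv = np.array([[[960, 540]]], dtype=np.float32)
print(cv2.perspectiveTransform(uv, H))
```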
It is easy to understand that, after the direct linear transformation matrix L or the homography transformation matrix H of the camera corresponding to each image is obtained, the pixel coordinates of a target in each camera image can be converted according to the obtained matrix parameters to obtain the world coordinates corresponding to the target.
It should be understood that steps S101 to S105 in the above method embodiment are merely a schematic outline and should not be construed as specifically limiting; steps may be added, removed, or combined as needed.
In addition, it is to be understood that the calibration relationship obtained by performing camera calibration according to the above steps S101 to S105 is the correspondence between the space of a geographic area (for example, a traffic intersection) and a calibration image obtained by capturing that geographic area from a fixed angle. After the initial calibration of a camera in the geographic area is completed, the calibration parameters may subsequently become inaccurate because the shooting angle of the camera has changed. In this case, the calibration parameters of the camera need to be self-corrected, as follows: the camera calibration system extracts two images of adjacent frames at the current moment from the video shot by the camera, performs step S102 on each of them to extract the traffic sign lines and the corner points of each traffic sign line, randomly selects one or more pairs of corresponding calibration frames in the two calibration images, and detects the change in the pixel coordinates of the corner points of the traffic sign lines in these calibration frames to judge whether the shooting angle of the camera has changed. For example, for a selected calibration frame, the Euclidean distance between the two pixel coordinates of the same corner point of the corresponding traffic sign line in the two calibration images is calculated, and then the average of the Euclidean distances over all corner points of the traffic sign line corresponding to that calibration frame is calculated. If the average is less than or equal to a preset distance threshold, the shooting angle of the camera has not changed, and the calibration parameters obtained by the latest calibration of the camera can continue to be used; if the average is greater than the preset distance threshold, the shooting angle of the camera has changed, and steps S1041 to S1043 and step S105 are performed on the frame image closest to the current moment.
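This angle-change test reduces to comparing the mean corner displacement against the preset distance threshold; a minimal sketch (the threshold value is assumed):

```python
import numpy as np

def camera_moved(corners_prev, corners_curr, dist_threshold=2.0):
    """Judge whether the shooting angle changed between adjacent frames.

    corners_prev / corners_curr: (M, 2) pixel coordinates of the same
    corner points of one traffic sign line in the two frames, in the same
    order; `dist_threshold` is the preset threshold (value assumed).
    """
    shifts = np.linalg.norm(np.asarray(corners_curr, float)
                            - np.asarray(corners_prev, float), axis=1)
    return shifts.mean() > dist_threshold  # True -> redo S1041-S1043, S105

print(camera_moved([(100, 200), (150, 260)], [(101, 200), (150, 261)]))
# False: mean shift of 1 pixel is below the threshold
```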
It should be noted that, when the shooting angle of the camera changes, the self-correction of the calibration parameters, i.e., recalibration, does not require reprocessing the calibration model of the target area; only the traffic sign line detection and corner point detection need to be performed again on the calibration image of the target area, the position association relationships of the traffic sign lines in the calibration image are then established using the same rule as was used for the calibration model, and finally steps S1041 to S1043 and step S105 are performed to recalculate the calibration parameters of the camera.
The above steps S101 to S105 are executed by the camera calibration system. The method by which the camera calibration system acquires the pixel coordinates of the corner points of the traffic sign lines in the calibration image in step S102 is shown in fig. 8; the specific method is as follows:
S401, detecting N traffic sign lines in the calibration image, and acquiring the corresponding N image calibration frames.
Specifically, after acquiring a calibration image that is shot by a camera arranged at a fixed position in the target area and contains N traffic sign lines, the camera calibration system processes and analyzes the calibration image to detect the traffic sign lines in it, and marks each detected traffic sign line in the calibration image with an image calibration frame to determine its specific position in the calibration image. The N traffic sign lines in the calibration image thus correspond to the N image calibration frames, and the process of acquiring the image calibration frames is the detection of the traffic sign lines in the calibration image.
It should be noted that, if the camera calibration system acquires video data captured by a camera arranged at a fixed position, the video data consists of video frames at different times in temporal order; each video frame can serve as a calibration image, and each video frame contains road traffic sign lines.
It can be understood that the traffic sign line detection can be implemented based on a deep neural network. After the camera calibration system acquires the calibration image containing the road traffic sign lines, the calibration image is input into a trained traffic sign line detection model (for example, the region-based convolutional neural network R-CNN, the YOLO target detection algorithm, the SSD target detection algorithm, Fast R-CNN, etc.). The trained traffic sign line detection model has the capability of detecting traffic sign lines in any input calibration image containing them, and labels each identified traffic sign line with an image calibration frame (for example, a rectangular, circular, or elliptical frame) to determine its specific position. In some possible embodiments, the type of the identified traffic sign line may be distinguished by the color of the image calibration frame, or the type to which the traffic sign line belongs may be distinguished by a text label.
It should be noted that the traffic sign line detection models described above are all deep learning models and need to be trained: the various traffic sign lines in pre-prepared images are manually marked with calibration frames; in some embodiments, the categories to which the traffic sign lines belong may also be marked; and the manually marked images, together with the information of the calibration frames in them, are used as the input for training the model.
Of course, besides the deep learning models, traditional machine-vision image processing methods can be used to detect the traffic sign lines in the calibration image. A traditional target detection method mainly comprises three steps: region selection (sliding window), feature extraction (for example, SIFT, HOG, or DPM feature descriptors), and classifier classification (for example, machine learning algorithms such as the support vector machine SVM or Adaboost).
In a specific embodiment, the traffic sign lines in the calibration image may be segmented using their color features (traffic sign lines are usually white or yellow, clearly different from the dark gray ground color) and/or shape features (also called contour features); histogram of oriented gradients (HOG) features of the segmented traffic sign lines are extracted; finally, a trained MR_Adaboost classifier is used to classify and identify the various traffic sign lines, and each identified traffic sign line is labeled with a bounding box to determine its specific position. The MR_Adaboost classifier is trained with HOG features of various traffic sign lines extracted in advance.
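A rough stand-in for this pipeline is sketched below, using scikit-image HOG features and scikit-learn's standard AdaBoost in place of the patent's MR_Adaboost variant; the patch data is random filler, only to make the snippet runnable.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier

def hog_features(patches):
    """One HOG descriptor per segmented sign-line patch (grayscale 64x64)."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

rng = np.random.default_rng(0)
patches = [rng.random((64, 64)) for _ in range(20)]   # stand-in patches
labels = rng.integers(0, 3, size=20)                  # sign-line categories

clf = AdaBoostClassifier(n_estimators=50).fit(hog_features(patches), labels)
print(clf.predict(hog_features(patches[:2])))
```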
It should be noted that this application does not limit the specific algorithm for detecting the traffic sign lines; the deep-learning-based and traditional machine-vision-based target detection algorithms mentioned above are mature and widely applied in the field of computer vision, and are only briefly described herein.
S402, detecting the K corner points of the traffic sign lines corresponding to the N image calibration frames, and obtaining the pixel coordinates of the K corner points in the calibration image.
Specifically, after detecting the traffic sign lines in the calibration image, the camera calibration system also detects the corner points of each identified traffic sign line, thereby obtaining the pixel coordinates of each corner point of the traffic sign lines in the calibration image. A corner point generally refers to a point in the image where the brightness changes sharply, or a point of maximal curvature on an image edge; a corner point of a traffic sign line may be the tip of an arrow, the intersection of lines, a vertex of a rectangular frame, an inflection point of a line, or the like.
Illustratively, a corner detection algorithm based on gray-scale images may be employed to detect the corner points of the traffic sign lines, for example, Harris corner detection. The basic idea of the Harris operator is to slide a fixed window over the calibration image in every direction and compare the degree of gray-scale change of the pixels in the window before and after sliding; if sliding in any direction produces a large gray-scale change, a corner point exists in the window, and the position of the large gray-scale change is determined as the position of the corner point. In addition, gray-scale-image-based corner detection algorithms also include SUSAN corner detection, corner detection based on template matching, and the like.
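A typical OpenCV call for this detector is sketched below; the file name and the threshold values are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("calibration_image.png")            # illustrative path
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# blockSize=2 (window), ksize=3 (Sobel aperture), k=0.04 (Harris constant)
response = cv2.cornerHarris(gray, 2, 3, 0.04)

# Keep strong responses; 0.01 * max is a common heuristic threshold.
corners = np.argwhere(response > 0.01 * response.max())  # (row, col) pairs
print(len(corners), "corner candidates")
```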
Illustratively, a corner detection algorithm based on binary images may be employed to detect the corner points of the traffic sign lines, for example, the corner detection algorithm based on skeleton extraction: the image is first skeleton-refined; corner points are obtained by detecting points in the skeleton whose maximum-disc radius is zero; concave corner points are obtained from the complement image; an exclusive-or operation between the corner points obtained from the original image and those from the complement image yields all the corner points, and the grid corner points generated by discretization are removed.
For example, a corner detection algorithm based on edge features may be employed to detect the corner points of the traffic sign lines, for example, corner detection based on wavelet transformation: the target edges are first extracted with an edge detection operator; a wavelet with the strongest detection capability is selected and wavelet transformation is performed at multiple scales; the modulus-maximum points of the wavelet coefficients yield a candidate set of corner points; and finally a screening rule is set to select the correct set of corner points. In addition, edge-feature-based corner detection algorithms also include corner detection based on edge chain codes or on curvature scale space, and the like.
It should be noted that each image calibration frame in step S401 corresponds to one traffic sign line; therefore, after the corner points of the traffic sign line in each frame are detected, all the corner points of the corresponding traffic sign line are stored according to the image calibration frame to which they belong.
The method for the camera calibration system to obtain the world coordinates of the corner points of the traffic sign line in the calibration model in step S104 can be seen in fig. 9, and the specific method is as follows:
S501, detecting N traffic sign lines in the calibration model, and acquiring the corresponding N model calibration frames.
Specifically, after obtaining the calibration model corresponding to the target area, the camera calibration system processes and analyzes the calibration model to detect the road traffic sign lines in it, and marks each detected traffic sign line in the calibration model with a model calibration frame to determine its specific position in the calibration model. The N traffic sign lines in the calibration model thus correspond to the N model calibration frames, and the process of obtaining the model calibration frames is the detection of the traffic sign lines in the calibration model.
It should be noted that the calibration model of the target area and the calibration image are at the same view angle, that is, the number and positions of the traffic sign lines in the calibration model of the target area are the same as those in the calibration image. It is to be understood that the calibration model of the target area may be the point cloud data of the target area in the high-precision map.
Illustratively, after the camera calibration system acquires the point cloud corresponding to the target area, the original point cloud is segmented using the reflection intensity and elevation information of the point cloud data to extract the road sign line point cloud, thereby removing a large amount of non-ground information. The segmented point cloud data is then projected onto a two-dimensional plane, and a point cloud intensity feature image is generated from the reflection intensity information and the spatial distribution of the points, so that the subsequent analysis can use image processing methods. The contour features of all connected regions in the image are extracted, and finally the contour features of the connected regions are matched against the template features of the traffic sign lines, so that the traffic sign lines are detected; a calibration frame algorithm is used to label each detected traffic sign line, yielding the model calibration frame of the traffic sign line.
For example, the embodiment of this application may also adopt a target point cloud detection method based on deep learning. Because the traffic sign lines lie in the ground plane, the road point cloud must be extracted first: ground points are extracted from the point cloud data with a moving-window method combined with the topological relations between adjacent scan lines; after the ground point cloud is obtained, an intensity image is generated; a semantic segmentation model based on the DeepLab V3+ network is adopted to extract and classify the road traffic sign lines; finally, a KD-tree point cloud clustering segmentation algorithm, combined with a corresponding vectorization scheme, realizes the vectorization of the road traffic sign lines, and the minimum circumscribed rectangle of each extracted traffic sign line, i.e., the model calibration frame, is labeled to determine the specific position of the traffic sign line in the point cloud.
It should be noted that the semantic segmentation model mentioned above needs to be trained: a ground point cloud is extracted in advance from the acquired original point cloud to generate a corresponding intensity feature map, and the road traffic sign lines in the intensity feature map are labeled; of course, in some possible embodiments, labels of different types of traffic sign lines may be distinguished by colors or characters; finally the semantic segmentation model is trained with the labeled intensity feature maps.
It should be noted that target object detection algorithms based on point cloud data are quite mature and are only briefly described in this application; in addition, other methods may also be adopted to detect the road traffic sign lines in the point cloud.
S502, detecting the K corner points of the traffic sign lines corresponding to the N model calibration frames, and obtaining the world coordinates of the K corner points in the calibration model.
Specifically, after detecting the traffic sign lines in the calibration model, the camera calibration system further detects the corner points of each identified traffic sign line, thereby obtaining the world coordinates of each corner point of the traffic sign lines in the calibration model. The corner points of a traffic sign line may be the tips of arrows, the intersections of lines, the vertices of rectangular frames, the inflection points of lines, and the like.
For example, if step S501 detects the traffic sign lines by projecting the point cloud data onto a two-dimensional plane and extracting the contour features of the traffic sign lines, then, for the extracted traffic sign line point cloud, an edge-based corner detection algorithm (e.g., corner detection based on edge chain codes, on boundary curvature, or on wavelet transformation) may be used to identify the corner points on the boundary curves of the traffic sign line point cloud. Taking wavelet-transformation-based corner detection as an example, the direction angle function of every point on the point cloud edges of the traffic sign lines is calculated, wavelet transformation is performed on this function at different scales, the points whose function values have maximal modulus across multiple scales are screened out as candidate corner points, and finally the final corner points can be determined by thresholding.
For example, for a traffic sign line detected in the calibration model, a corner point extraction method based on local reconstruction may be adopted to extract its corner points. Specifically, the feature measure of each data point is calculated from the covariance of its local neighborhood, and an initial feature point set is obtained by threshold filtering; in the local neighborhood of each initial feature point, a non-crossing feature region is constructed as a set of triangles reflecting the local feature information of the point; a shared-nearest-neighbor algorithm is used to cluster the normals of the constructed triangles to obtain a classification of the data points of the corresponding local region; and finally, whether a point lies on several fitted planes simultaneously is judged in order to extract the corner points.
It should be noted that other methods may also be used to detect the corner point of the traffic sign line in the calibration model, and the embodiments of the present application are not particularly limited.
It should be noted that, after detecting the corner points of the corresponding traffic sign line in each model calibration frame, all the corner points of the corresponding traffic sign line in the model calibration frame are respectively stored according to the model calibration frame to which the corner points belong.
It should be noted that, whether for the detection of the traffic sign line corner points in the calibration image or in the calibration model, it is only required that the detection be completed before the corner points of the calibration image and those of the calibration model are matched in step S1043.
After the cameras at the traffic intersection are calibrated, the cameras can be applied to various specific application scenes. For example, in a traffic intersection, the positioning of vehicles in the traffic intersection, the obtaining of the position relationship between the vehicles, traffic sign lines and pedestrians, and the like can be realized through videos shot by a camera.
In one application scenario, the video data of a traffic intersection shot by the cameras is combined with a high-precision map to restore the static elements of the intersection and the motion trajectories of moving vehicles. Specifically, first, the cameras at the traffic intersection are calibrated using the method described in fig. 2, establishing the calibration relationship between the pixel points in the calibration image at each camera's view angle and the points in the high-precision map of the physical world. Then the data shot by the cameras is processed: the targets in the video (such as vehicles, buildings, flower beds, etc.) can be identified and located with a target detection algorithm (such as a trained neural network algorithm like SSD or RCNN). After target detection, the pixel coordinates of the static target objects (such as buildings, flower beds, etc.) are converted into world coordinates in the physical world using the calibration parameters of the cameras, realizing the restoration of the static elements of the intersection; for vehicles, the geographic coordinates of all vehicles can likewise be obtained from the calibration parameters. Furthermore, target tracking can be performed on all vehicles: the motion trajectory of a target vehicle can be fitted from its pixel coordinates at the current moment and at historical moments and converted into the physical world, yielding the motion trajectories of all vehicles in the corresponding geographic area and thus restoring the motion of the moving vehicles. Finally, according to the obtained motion trajectory of a vehicle, combined with the indication of the traffic signal lights, it can be judged whether the vehicle violates traffic rules at the intersection. In addition, whether the motion trajectories of vehicles overlap at the same moment can be judged from the trajectories, so as to determine whether a collision accident has occurred.
In another application scenario of this application, the method can be applied to future vehicle-road coordination: by calibrating the cameras at each intersection, the motion trajectories of vehicles at the intersection, obtained in real time, are fed back to unmanned vehicles to assist them in autonomous route planning. Specifically, the method shown in fig. 2 is used to calibrate the cameras at the traffic intersections to obtain the calibration parameters of the cameras at each intersection; the data captured by the cameras is then processed, and a target object detection algorithm (such as a trained neural network algorithm like SSD or RCNN) can be used to detect the vehicles in the video; the pixel coordinates of each vehicle are converted into world coordinates of the corresponding physical-world geographic area; the motion trajectory of a target vehicle can be fitted from its world coordinates at the current moment and at historical moments; and the motion trajectories of all vehicles are analyzed. If a serious accident occurs on a certain road within a certain period, or vehicles move slowly because the road is blocked, the camera calibration system can feed the detected road information back to the unmanned vehicles, informing the following vehicles that the road ahead is not clear and assisting the unmanned vehicles in re-planning their routes.
The present embodiment relates to a camera 200. The camera 200 may be any of various cameras, such as an analog camera, a network camera, or an intelligent camera; this application takes a network camera as an example. As shown in fig. 10, fig. 10 is a block diagram illustrating a partial structure of a possible camera 200 according to this application. The camera 200, also called an IP camera (IPC), adopts an embedded architecture, integrates multiple functions such as video and audio acquisition, signal processing, encoding and compression, front-end storage, and network transmission, and, combined with a network video storage and recording system and management platform software, forms a large-scale, distributed network video monitoring system. Referring to fig. 10, the camera 200 includes a lens and sensor 211, an encoding processor 212, a camera main control board 213, and the like. Those skilled in the art will appreciate that the camera structure shown in fig. 10 does not constitute a limitation of the camera, and the camera may include more or fewer components than shown, or combine some components, split some components, or arrange the components differently.
The following specifically describes the respective constituent components of the camera 200 with reference to fig. 10:
The lens in the lens and sensor 211 is a key device of the video monitoring system, and its quality directly affects the quality of the whole camera 200. The lens images the external scene onto the sensor. At present, the lens of the camera 200 is threaded and generally consists of a group of lens elements and an iris. Lenses are divided into manual iris (MI) and automatic iris (AI) types: a manual iris suits occasions where the brightness does not change, while an automatic iris adjusts itself when the brightness changes and therefore suits occasions with varying brightness. Alternatively, the lens may be a standard lens, a telephoto lens, a zoom lens, a variable-focus lens, or the like, and the lens material may be glass or plastic.
The sensor in the lens and sensor 211 may be an image sensor, such as a charge coupled device (CCD) sensor or a complementary metal oxide semiconductor (CMOS) sensor, which converts the optical signal it receives (the image of the scene) into an electrical signal and outputs it to the encoding processor 212 through a driving circuit. The encoding processor 212 performs optimization processing, such as color, sharpness, and white balance adjustments, on the digital image signal collected by the lens and sensor 211 and inputs it into the camera main control board 213 in the form of a network video signal. The camera main control board 213 provides functions such as Bayonet Nut Connector (BNC) video output, a network communication interface, audio input, audio output, alarm input, and a serial communication interface. The encoding processor 212 may include an image signal processor (ISP) or an image decoder, which is not limited here.
Although not shown, the camera 200 may further include a power source (such as a battery), a filter, or a bluetooth module, etc. for supplying power to various components, which will not be described herein.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a device for calibrating a camera according to an embodiment of this application. As shown in the figure, the device is used in a camera calibration system comprising a camera and a point cloud acquisition device, where the camera is arranged at a fixed position at a traffic intersection, and the device calibrates the camera according to the calibration image acquired by the camera and the calibration model provided by the point cloud acquisition device. Specifically, the apparatus 200 includes: a receiving unit 201, a processing unit 202, and a calculating unit 203, wherein,
a receiving unit 201 is configured to obtain a calibration image, where the calibration image is obtained by the camera capturing images of N feature bodies, the ith feature body comprises Mi feature points, the N feature bodies comprise K feature points in total, and i is an integer greater than or equal to 1 and less than or equal to N; the receiving unit 201 is further configured to receive a calibration model, where the calibration model is obtained from a three-dimensional model of the high-precision map according to the geographic position and shooting angle of the camera, and the three-dimensional model is obtained by performing all-around modeling on the feature bodies.
The processing unit 202 is configured to find the K feature points in the calibration image and obtain pixel coordinates of the K feature points in the calibration image; the processing unit 202 is further configured to search K feature points in the calibration model, obtain world coordinates of the K feature points in the calibration model, and match the K feature points of the calibration image with the K feature points in the calibration model through an algorithm, so that pixel coordinates of the K feature points correspond to the world coordinates of the K feature points one to one. Specifically, the method for obtaining pixel coordinates of K feature points in the calibration image may refer to S401 and S402 in the above embodiment, the method for obtaining world coordinates of K feature points in the calibration model may refer to S501 and S502 in the above embodiment, and the method for matching K feature points in the calibration image with K feature points in the calibration model may refer to S104 in the above embodiment, which is not described herein again.
The calculating unit 203 is configured to calibrate the camera according to the K pixel coordinates and the K world coordinates, where the K pixel coordinates and the K world coordinates are in one-to-one correspondence. Specifically, the method for calibrating the camera may refer to step S105, which is not described herein again.
It should be noted that the above-mentioned apparatus may be a single device, for example a computer, or may be a module in the camera that has receiving, processing, and computing capabilities, which is not specifically limited in the embodiments of this application.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a computing device according to an embodiment of the present disclosure. As shown in fig. 12, the computing device 800 includes at least: processor 810, communication interface 820, and memory 830, the processor 810, communication interface 820, and memory 830 shown are interconnected by an internal bus 840. It is understood that the computing device 800 may be a computing device in a cloud environment, or a computing device in an edge environment.
The processor 810 may be formed of one or more general-purpose processors, such as a Central Processing Unit (CPU), or a combination of a CPU and a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The bus 840 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 840 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 12, but this does not mean that there is only one bus or one type of bus.
Memory 830 may include volatile memory (volatile memory), such as Random Access Memory (RAM); the memory 830 may also include a non-volatile memory (non-volatile memory), such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD), or a solid-state drive (SSD); the memory 830 may also include combinations of the above. The memory 830 may store program codes and program data, wherein the program codes include: extracting codes of traffic sign lines in the images, extracting codes of traffic sign lines in the point clouds, detecting codes of corner points in the images, detecting codes of corner points in the point clouds, establishing codes of position relations of the traffic sign lines and the like; the program data includes: traffic sign lines in images, traffic sign lines in point clouds, corner points in images, etc.
It should be noted that the processor 810 calls and executes the program code and program data in the memory 830 to implement the operation steps of the methods shown in fig. 2, 4, 8 and 9.
The processor 810 is configured to read the relevant instructions in the memory 830 to perform the following operations:
acquiring a calibration image, wherein the calibration image is obtained by the camera capturing images of N feature bodies, the N feature bodies comprise K feature points in total, the first feature body comprises M1 feature points, the second feature body comprises M2 feature points, …, the Nth feature body comprises MN feature points, and K is equal to the sum of M1, M2, …, MN;
finding the K feature points in the calibration image, and acquiring K pixel coordinates of the K feature points in the calibration image, wherein the K feature points correspond one to one with the K pixel coordinates;
receiving a calibration model, wherein the calibration model is obtained from a three-dimensional model of a high-precision map according to the geographic position and shooting angle of the camera, and the three-dimensional model is obtained by omnidirectional modeling of the feature bodies;
finding the K feature points in the calibration model, and acquiring K world coordinates of the K feature points in the calibration model, wherein the K feature points correspond one to one with the K world coordinates;
and calibrating the camera according to the K pixel coordinates and the K world coordinates.
Specifically, for the implementation of the one-to-one correspondence between the K pixel coordinates of the calibration image and the K world coordinates of the calibration model, and for the specific operations executed by the computing device 800, refer to the above method embodiments, which are not described herein again.
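Tying the five operations above together, a hypothetical driver could read as follows. Every helper name is an assumption standing in for a step of the method (the sketches after the claims below show possible matching helpers); this is not an actual API of the embodiment.

```python
# Hypothetical end-to-end driver for the five operations above;
# detect_image_boxes, load_calibration_model, match_boxes, and
# collect_point_pairs are assumed names, and calibrate_extrinsics
# is the solvePnP sketch given earlier in this document.
def calibrate_camera(calibration_image, camera_pose, camera_matrix, dist_coeffs):
    image_boxes = detect_image_boxes(calibration_image)    # N image calibration frames
    model = load_calibration_model(camera_pose)            # from the high-precision map
    box_pairs = match_boxes(image_boxes, model.boxes)      # frame-level matching
    pixel_pts, world_pts = collect_point_pairs(box_pairs)  # point-level ordering
    return calibrate_extrinsics(pixel_pts, world_pts,
                                camera_matrix, dist_coeffs)
```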

Claims (18)

1. A camera calibration method, the method comprising:
acquiring a calibration image, wherein the calibration image is obtained by a camera capturing images of N feature bodies, the N feature bodies include K feature points in total, the first feature body includes M1 feature points, the second feature body includes M2 feature points, …, the Nth feature body includes MN feature points, and K is equal to the sum of M1, M2, …, MN;
finding the K feature points in the calibration image, and acquiring K pixel coordinates of the K feature points in the calibration image, wherein the K feature points correspond one to one with the K pixel coordinates;
receiving a calibration model, wherein the calibration model is obtained from a three-dimensional model of a high-precision map according to the geographic position and shooting angle of the camera, and the three-dimensional model is obtained by omnidirectional modeling of the feature bodies;
finding the K feature points in the calibration model, and acquiring K world coordinates of the K feature points in the calibration model, wherein the K feature points correspond one to one with the K world coordinates;
and calibrating the camera according to the K pixel coordinates and the K world coordinates.
2. The method according to claim 1, wherein finding the K feature points in the calibration image comprises:
acquiring N image calibration frames of the N feature bodies from the calibration image, wherein a first image calibration frame is a calibration frame of the first feature body in the calibration image, a second image calibration frame is a calibration frame of the second feature body in the calibration image, …, and an Nth image calibration frame is a calibration frame of the Nth feature body in the calibration image;
searching for the K feature points in the N image calibration frames, wherein the first image calibration frame includes M1 feature points, the second image calibration frame includes M2 feature points, …, and the Nth image calibration frame includes MN feature points.
3. The method according to claim 2, wherein finding the K feature points in the calibration model comprises:
obtaining N model calibration frames of the N feature bodies from the calibration model;
matching the N image calibration frames with the N model calibration frames, thereby determining that a first model calibration frame is the calibration frame of the first feature body in the calibration model, a second model calibration frame is the calibration frame of the second feature body in the calibration model, …, and an Nth model calibration frame is the calibration frame of the Nth feature body in the calibration model;
searching for the K feature points in the N model calibration frames, wherein the first model calibration frame includes M1 feature points, the second model calibration frame includes M2 feature points, …, and the Nth model calibration frame includes MN feature points.
4. The method of claim 3, wherein matching the N image calibration frames and the N model calibration frames comprises:
calculating the central points of the N image calibration frames to obtain N central point pixel coordinates, wherein the N central point pixel coordinates correspond one to one with the N image calibration frames;
calculating the central points of the N model calibration frames to obtain N central point world coordinates, wherein the N central point world coordinates correspond one to one with the N model calibration frames;
traversing each of the N central points based on the N central point pixel coordinates, and establishing the up-down and left-right relations between each central point and its surrounding central points to obtain a first position result;
traversing each of the N central points based on the X and Y coordinates of the N central point world coordinates, and establishing the up-down and left-right relations between each central point and its surrounding central points to obtain a second position result;
and determining, according to the first position result and the second position result, that the N image calibration frames correspond one to one with the N model calibration frames.
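[Editorial illustration] A minimal sketch of the neighbor-relation matching in claim 4 above, assuming the image y-axis has been flipped so that "up" means the same thing in pixel and world coordinates, and assuming each frame's relation pattern is unique in the scene:

```python
def relative_direction(a, b):
    """Label where point b lies relative to point a by its dominant axis;
    both coordinate systems must use x rightward and y upward."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    if abs(dx) >= abs(dy):
        return 'right' if dx > 0 else 'left'
    return 'up' if dy > 0 else 'down'

def relation_signature(centers):
    """For each central point, the sorted relations to all other centers."""
    return [tuple(sorted(relative_direction(a, b)
                         for j, b in enumerate(centers) if j != i))
            for i, a in enumerate(centers)]

def match_by_relations(image_centers, model_centers):
    """Pair frames whose relation signatures agree; returns (i_img, i_mdl)."""
    model_index = {s: i for i, s in enumerate(relation_signature(model_centers))}
    return [(i, model_index[s])
            for i, s in enumerate(relation_signature(image_centers))
            if s in model_index]
```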
5. The method of claim 3, wherein matching the N image calibration frames and the N model calibration frames comprises:
calculating the central points of the N image calibration frames to obtain N central point pixel coordinates, wherein the N central point pixel coordinates correspond one to one with the N image calibration frames;
calculating the central points of the N model calibration frames to obtain N central point world coordinates, wherein the N central point world coordinates correspond one to one with the N model calibration frames;
determining, based on the N central point pixel coordinates, the number of central points contained in each of the four directions above, below, to the left of, and to the right of each of the N central points to obtain a third position result, wherein the numbers of central points in the four directions of each central point sum to N-1;
determining, based on the X and Y coordinates of the N central point world coordinates, the number of central points contained in each of the four directions above, below, to the left of, and to the right of each of the N central points to obtain a fourth position result, wherein the numbers of central points in the four directions of each central point sum to N-1;
and determining, according to the third position result and the fourth position result, that the N image calibration frames correspond one to one with the N model calibration frames.
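[Editorial illustration] Likewise, a sketch of the direction-count variant in claim 5: each center's (up, down, left, right) tuple always sums to N-1, and frames are paired when the tuples coincide (assumed unique in the scene; axes normalized as in the previous sketch):

```python
def direction_counts(centers):
    """For each central point, count the other centers lying above, below,
    left of, and right of it (dominant-axis test); counts sum to N-1."""
    counts = []
    for i, a in enumerate(centers):
        up = down = left = right = 0
        for j, b in enumerate(centers):
            if i == j:
                continue
            dx, dy = b[0] - a[0], b[1] - a[1]
            if abs(dx) >= abs(dy):
                if dx > 0:
                    right += 1
                else:
                    left += 1
            elif dy > 0:
                up += 1
            else:
                down += 1
        counts.append((up, down, left, right))
    return counts

def match_by_counts(image_centers, model_centers):
    """Pair frames whose direction-count tuples coincide."""
    model_index = {c: i for i, c in enumerate(direction_counts(model_centers))}
    return [(i, model_index[c])
            for i, c in enumerate(direction_counts(image_centers))
            if c in model_index]
```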
6. The method of claim 3, wherein matching the N image calibration frames and the N model calibration frames comprises:
calculating the central points of the N image calibration frames to obtain N central point pixel coordinates, wherein the N central point pixel coordinates correspond one to one with the N image calibration frames;
calculating the central points of the N model calibration frames to obtain N central point world coordinates, wherein the N central point world coordinates correspond one to one with the N model calibration frames;
sorting the N central point pixel coordinates according to a first order to obtain a first sorting result, wherein the first order is one of left to right, right to left, top to bottom, and bottom to top;
sorting the N central point world coordinates according to the first order, based on their X and Y coordinates, to obtain a second sorting result, wherein the second sorting result corresponds one to one with the first sorting result;
and determining, according to the first sorting result and the second sorting result, that the N image calibration frames correspond one to one with the N model calibration frames.
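[Editorial illustration] The sorting variant of claim 6 is simpler still; a sketch assuming a left-to-right first order and no two frames sharing the same horizontal position:

```python
def match_by_sorting(image_centers, model_centers):
    """Sort both center lists by x (left to right) and pair by rank.
    Returns (image_index, model_index) pairs."""
    image_rank = sorted(range(len(image_centers)),
                        key=lambda i: image_centers[i][0])
    model_rank = sorted(range(len(model_centers)),
                        key=lambda i: model_centers[i][0])
    return list(zip(image_rank, model_rank))
```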
7. The method according to any one of claims 3-6, wherein searching for the K feature points in the N model calibration frames, wherein the first model calibration frame includes M1 feature points, the second model calibration frame includes M2 feature points, …, and the Nth model calibration frame includes MN feature points, comprises:
sorting the Mi pixel coordinates of the feature body in the ith image calibration frame according to a second order to obtain a third sorting result, wherein the second order is one of left to right, right to left, top to bottom, and bottom to top, and i is an integer greater than or equal to 1 and less than or equal to N;
sorting the Mi world coordinates of the feature body in the ith model calibration frame according to the second order to obtain a fourth sorting result, wherein the ith image calibration frame corresponds one to one with the ith model calibration frame, and i is an integer greater than or equal to 1 and less than or equal to N;
and determining, based on the third sorting result and the fourth sorting result, that the K pixel coordinates correspond one to one with the K world coordinates.
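[Editorial illustration] Once the frames are matched, claim 7 orders the points inside each pair of frames by the same second order so that pixel and world coordinates line up index by index. A sketch assuming a left-to-right, then bottom-to-top order, with image axes normalized to match the world X and Y directions:

```python
def pair_feature_points(matched_image_boxes, matched_model_boxes):
    """matched_image_boxes[i] holds the (Mi, 2) pixel points of the ith
    frame and matched_model_boxes[i] the (Mi, 3) world points of its
    counterpart; sorting both by (x, y) aligns them one to one."""
    pixel_pts, world_pts = [], []
    for img_pts, mdl_pts in zip(matched_image_boxes, matched_model_boxes):
        pixel_pts.extend(sorted(map(tuple, img_pts),
                                key=lambda p: (p[0], p[1])))
        world_pts.extend(sorted(map(tuple, mdl_pts),
                                key=lambda p: (p[0], p[1])))
    return pixel_pts, world_pts
```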
8. The method of claim 1, wherein the feature body comprises a road traffic sign line, and the road traffic sign line includes one or more of a turn line, a lane line, a stop line, and a pedestrian crossing line.
9. An apparatus for camera calibration, the apparatus comprising:
a receiving unit, configured to acquire a calibration image, wherein the calibration image is obtained by a camera capturing images of N feature bodies, the N feature bodies include K feature points in total, the first feature body includes M1 feature points, the second feature body includes M2 feature points, …, the Nth feature body includes MN feature points, and K is equal to the sum of M1, M2, …, MN;
the receiving unit is further configured to receive a calibration model, wherein the calibration model is obtained from a three-dimensional model of a high-precision map according to the geographic position and shooting angle of the camera, and the three-dimensional model is obtained by omnidirectional modeling of the feature bodies;
a processing unit, configured to find the K feature points in the calibration image and acquire K pixel coordinates of the K feature points in the calibration image, wherein the K feature points correspond one to one with the K pixel coordinates;
the processing unit is further configured to find the K feature points in the calibration model, and acquire K world coordinates of the K feature points in the calibration model, where there is a one-to-one correspondence between the K feature points and the K world coordinates;
and a computing unit, configured to calibrate the camera according to the K pixel coordinates and the K world coordinates.
10. The apparatus according to claim 9, wherein the processing unit is specifically configured to:
acquiring N image calibration frames of the N feature bodies from the calibration image, wherein a first image calibration frame is a calibration frame of the first feature body in the calibration image, a second image calibration frame is a calibration frame of the second feature body in the calibration image, …, and an Nth image calibration frame is a calibration frame of the Nth feature body in the calibration image;
searching for the K feature points in the N image calibration frames, wherein the first image calibration frame includes M1 feature points, the second image calibration frame includes M2 feature points, …, and the Nth image calibration frame includes MN feature points.
11. The apparatus according to claim 10, wherein the processing unit is specifically configured to:
obtaining N model calibration frames of the N feature bodies from the calibration model;
matching the N image calibration frames with the N model calibration frames, thereby determining that a first model calibration frame is the calibration frame of the first feature body in the calibration model, a second model calibration frame is the calibration frame of the second feature body in the calibration model, …, and an Nth model calibration frame is the calibration frame of the Nth feature body in the calibration model;
searching for the K feature points in the N model calibration frames, wherein the first model calibration frame includes M1 feature points, the second model calibration frame includes M2 feature points, …, and the Nth model calibration frame includes MN feature points.
12. The apparatus according to claim 11, wherein the processing unit is specifically configured to:
calculating the central points of the N image calibration frames to obtain N central point pixel coordinates, wherein the N central point pixel coordinates correspond one to one with the N image calibration frames;
calculating the central points of the N model calibration frames to obtain N central point world coordinates, wherein the N central point world coordinates correspond one to one with the N model calibration frames;
traversing each of the N central points based on the N central point pixel coordinates, and establishing the up-down and left-right relations between each central point and its surrounding central points to obtain a first position result;
traversing each of the N central points based on the X and Y coordinates of the N central point world coordinates, and establishing the up-down and left-right relations between each central point and its surrounding central points to obtain a second position result;
and determining, according to the first position result and the second position result, that the N image calibration frames correspond one to one with the N model calibration frames.
13. The apparatus according to claim 11, wherein the processing unit is specifically configured to:
calculating the central points of the N image calibration frames to obtain N central point pixel coordinates, wherein the N central point pixel coordinates correspond one to one with the N image calibration frames;
calculating the central points of the N model calibration frames to obtain N central point world coordinates, wherein the N central point world coordinates correspond one to one with the N model calibration frames;
determining, based on the N central point pixel coordinates, the number of central points contained in each of the four directions above, below, to the left of, and to the right of each of the N central points to obtain a third position result, wherein the numbers of central points in the four directions of each central point sum to N-1;
determining, based on the X and Y coordinates of the N central point world coordinates, the number of central points contained in each of the four directions above, below, to the left of, and to the right of each of the N central points to obtain a fourth position result, wherein the numbers of central points in the four directions of each central point sum to N-1;
and determining, according to the third position result and the fourth position result, that the N image calibration frames correspond one to one with the N model calibration frames.
14. The apparatus according to claim 11, wherein the processing unit is specifically configured to:
calculating the central points of the N image calibration frames to obtain N central point pixel coordinates, wherein the N central point pixel coordinates correspond one to one with the N image calibration frames;
calculating the central points of the N model calibration frames to obtain N central point world coordinates, wherein the N central point world coordinates correspond one to one with the N model calibration frames;
sorting the N central point pixel coordinates according to a first order to obtain a first sorting result, wherein the first order is one of left to right, right to left, top to bottom, and bottom to top;
sorting the N central point world coordinates according to the first order, based on their X and Y coordinates, to obtain a second sorting result, wherein the second sorting result corresponds one to one with the first sorting result;
and determining, according to the first sorting result and the second sorting result, that the N image calibration frames correspond one to one with the N model calibration frames.
15. The apparatus according to any one of claims 11-14, wherein the processing unit is specifically configured to:
sorting the Mi pixel coordinates of the feature body in the ith image calibration frame according to a second order to obtain a third sorting result, wherein the second order is one of left to right, right to left, top to bottom, and bottom to top, and i is an integer greater than or equal to 1 and less than or equal to N;
sorting the Mi world coordinates of the feature body in the ith model calibration frame according to the second order to obtain a fourth sorting result, wherein the ith image calibration frame corresponds one to one with the ith model calibration frame, and i is an integer greater than or equal to 1 and less than or equal to N;
and determining, based on the third sorting result and the fourth sorting result, that the K pixel coordinates correspond one to one with the K world coordinates.
16. The apparatus of claim 9, wherein the feature body comprises a road traffic sign line, and the road traffic sign line includes one or more of a turn line, a lane line, a stop line, and a pedestrian crossing line.
17. A computing device, comprising a processor and a memory, the processor to execute computer instructions stored by the memory to cause the computing device to perform the method of any of claims 1-8.
18. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
CN201911125464.4A 2019-11-15 2019-11-15 Camera calibration method and device Pending CN112819895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911125464.4A CN112819895A (en) 2019-11-15 2019-11-15 Camera calibration method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911125464.4A CN112819895A (en) 2019-11-15 2019-11-15 Camera calibration method and device

Publications (1)

Publication Number Publication Date
CN112819895A true CN112819895A (en) 2021-05-18

Family

ID=75852326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911125464.4A Pending CN112819895A (en) 2019-11-15 2019-11-15 Camera calibration method and device

Country Status (1)

Country Link
CN (1) CN112819895A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313770A (en) * 2021-06-29 2021-08-27 智道网联科技(北京)有限公司 Calibration method and device of automobile data recorder
CN113674358A (en) * 2021-08-09 2021-11-19 浙江大华技术股份有限公司 Method and device for calibrating radar vision equipment, computing equipment and storage medium
CN113674358B (en) * 2021-08-09 2024-06-04 浙江大华技术股份有限公司 Calibration method and device of radar equipment, computing equipment and storage medium
WO2023200501A1 (en) * 2022-04-15 2023-10-19 Cavnue Technology, LLC System calibration using remote sensor data
CN114863695A (en) * 2022-05-30 2022-08-05 中邮建技术有限公司 Overproof vehicle detection system and method based on vehicle-mounted laser and camera
CN114863695B (en) * 2022-05-30 2023-04-18 中邮建技术有限公司 Overproof vehicle detection system and method based on vehicle-mounted laser and camera
CN116030418A (en) * 2023-02-14 2023-04-28 北京建工集团有限责任公司 Automobile lifting line state monitoring system and method
CN116030418B (en) * 2023-02-14 2023-09-12 北京建工集团有限责任公司 Automobile lifting line state monitoring system and method
CN117274402A (en) * 2023-11-24 2023-12-22 魔视智能科技(武汉)有限公司 Calibration method and device for camera external parameters, computer equipment and storage medium
CN117274402B (en) * 2023-11-24 2024-04-19 魔视智能科技(武汉)有限公司 Calibration method and device for camera external parameters, computer equipment and storage medium
CN117994993A (en) * 2024-04-02 2024-05-07 中国电建集团昆明勘测设计研究院有限公司 Road intersection traffic light control method, system, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110175576B (en) Driving vehicle visual detection method combining laser point cloud data
CN112819895A (en) Camera calibration method and device
CN110148196B (en) Image processing method and device and related equipment
CN109059954B (en) Method and system for supporting high-precision map lane line real-time fusion update
CN107516077B (en) Traffic sign information extraction method based on fusion of laser point cloud and image data
US10078790B2 (en) Systems for generating parking maps and methods thereof
CN110619750B (en) Intelligent aerial photography identification method and system for illegal parking vehicle
CN110796714B (en) Map construction method, device, terminal and computer readable storage medium
CN112740225B (en) Method and device for determining road surface elements
CN110197173B (en) Road edge detection method based on binocular vision
CN112308913B (en) Vehicle positioning method and device based on vision and vehicle-mounted terminal
Goga et al. Fusing semantic labeled camera images and 3D LiDAR data for the detection of urban curbs
CN116452852A (en) Automatic generation method of high-precision vector map
CN112562005A (en) Space calibration method and system
Rezaei et al. Traffic-net: 3d traffic monitoring using a single camera
You et al. Joint 2-D–3-D traffic sign landmark data set for geo-localization using mobile laser scanning data
CN113219472B (en) Ranging system and method
CN111383286A (en) Positioning method, positioning device, electronic equipment and readable storage medium
Hernández et al. Lane marking detection using image features and line fitting model
CN116901089B (en) Multi-angle vision distance robot control method and system
CN116912517B (en) Method and device for detecting camera view field boundary
CN112115737B (en) Vehicle orientation determining method and device and vehicle-mounted terminal
Barua et al. An Efficient Method of Lane Detection and Tracking for Highway Safety
CN111754388A (en) Picture construction method and vehicle-mounted terminal
CN114782496A (en) Object tracking method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination