CN114862964A - Automatic calibration method for sensor, electronic device and storage medium - Google Patents

Automatic calibration method for sensor, electronic device and storage medium

Info

Publication number
CN114862964A
Authority
CN
China
Prior art keywords
coordinate
point cloud
image
coordinate system
driving state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210455494.7A
Other languages
Chinese (zh)
Inventor
李旭兴
张蓉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Zhixing Technology Co ltd
Original Assignee
Wuhan Zhixing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Zhixing Technology Co ltd filed Critical Wuhan Zhixing Technology Co ltd
Priority to CN202210455494.7A
Publication of CN114862964A
Legal status: Pending

Classifications

    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 3/08: Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/10044: Radar image
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the invention provides an automatic calibration method for a sensor, an electronic device, and a storage medium. The method comprises the following steps: acquiring laser point cloud data collected by a laser device and image data collected by a camera; according to the driving state of the mobile device, identifying a scene target related to the driving state in the laser point cloud data to obtain a first coordinate of the scene target in the laser point cloud coordinate system; according to the driving state, identifying the scene target in the image data to obtain a second coordinate of the scene target in the image coordinate system; re-projecting the first coordinate into a third coordinate in the image coordinate system; matching the second coordinate with the third coordinate in the image coordinate system to obtain a matching result; and calibrating the external parameters between the laser device and the camera based on the matching result. Embodiments of the invention do not depend on a traditional calibration scene and achieve more accurate automatic calibration of the external parameters between the laser device and the camera.

Description

Automatic calibration method for sensor, electronic device and storage medium
Technical Field
The invention relates to the field of automatic driving, and in particular to an automatic calibration method for a sensor, an electronic device, and a storage medium.
Background
An autonomous driving system generally consists of a perception system, a decision system, an execution system, and a communication system: the vehicle acquires data, processes it, produces outputs, and finally performs decision control. Because of the high safety requirements of autonomous driving, sensors must collect sufficiently rich environmental information to support reliable inference.
However, no single sensor can both measure distance accurately and capture comprehensive object information. To obtain more complete features, a reliable sensor combination is therefore lidar plus camera. Lidar perceives the positions of surrounding objects with high accuracy, but its data are sparse and cannot assign definite category features to some of the obstacles encountered in autonomous driving. It may also fail in rain, snow, or light dust; multi-echo schemes can mitigate such cases, but the back end still has to filter out anomalous targets. A camera recognizes semantic information in a scene and identifies traffic obstacles well, but segmentation is prone to failure in some scenes; objects with indistinct boundaries, for example, may cause detection to fail entirely.
Combining the strengths of lidar and camera and fusing their information therefore supports autonomous-driving perception better than either sensor alone. Before fusion, the parameters must be calibrated so that corresponding target positions in the lidar data and in the camera data show no obvious deviation, allowing the fused perception of targets to be completed reliably.
In implementing the invention, the inventors found at least the following problems in the related art:
The camera and the lidar are rigidly mounted together, but jolting, sudden braking, and similar events while the autonomous vehicle is driving can change their relative position. The calibration parameters determined at installation then no longer apply, and the external parameters between the camera and the lidar must be re-calibrated. How to automatically calibrate the external parameters of the camera and the lidar has therefore become an urgent technical problem.
Disclosure of Invention
The technical solution of the invention addresses the problem in the prior art that the external parameters of the camera and the lidar cannot be automatically calibrated when their relative position changes while the autonomous vehicle is driving. In a first aspect, an embodiment of the present invention provides an automatic calibration method for a sensor, applied to a mobile device, comprising:
acquiring laser point cloud data acquired by laser equipment;
acquiring image data acquired by a camera;
according to the driving state of the mobile device, identifying a scene target related to the driving state in the laser point cloud data to obtain a first coordinate of the scene target under a laser point cloud coordinate system;
according to the driving state of the mobile device, identifying the scene target related to the driving state in the image data to obtain a second coordinate of the scene target in an image coordinate system;
re-projecting the first coordinate into a third coordinate under an image coordinate system;
matching the second coordinate with the third coordinate under the image coordinate system to obtain a matching result;
and calibrating external parameters between the laser equipment and the camera based on the matching result.
In a second aspect, an embodiment of the present invention provides an automatic calibration device for a sensor, including:
the point cloud acquisition module is used for acquiring laser point cloud data acquired by laser equipment;
the image acquisition module is used for acquiring image data acquired by the camera;
the first coordinate determination module is used for identifying a scene target related to the driving state in the laser point cloud data according to the driving state of the mobile device to obtain a first coordinate of the scene target under a laser point cloud coordinate system;
the second coordinate determination module is used for identifying the scene target related to the driving state in the image data according to the driving state of the mobile device to obtain a second coordinate of the scene target in an image coordinate system;
the third coordinate determination module is used for re-projecting the first coordinate into a third coordinate under an image coordinate system;
the matching module is used for matching the second coordinate with the third coordinate in the image coordinate system to obtain a matching result;
and the calibration module is used for calibrating external parameters between the laser equipment and the camera based on the matching result.
In a third aspect, an electronic device is provided, comprising: at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the steps of the automatic sensor calibration method according to any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention provides a mobile device, including a body and the electronic device according to any embodiment of the present invention mounted on the body.
In a fifth aspect, an embodiment of the present invention provides a storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the automatic calibration method for a sensor according to any embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention further provides a computer program product, which when running on a computer, causes the computer to execute the automatic calibration method for a sensor according to any one of the embodiments of the present invention.
The beneficial effects of the embodiments of the invention are as follows: the technical solution selects a corresponding scene target based on the driving state of the mobile device, matches the laser point cloud coordinates and the image coordinates of the scene target, and automatically calibrates the external parameters between the camera and the lidar according to the matching result.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a flowchart of an automatic calibration method for a sensor according to an embodiment of the present invention;
fig. 2 is a schematic diagram of signboard point cloud features with filtered height and reflectivity in point cloud data of an automatic sensor calibration method according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a detection effect of a road signboard in image data of an automatic sensor calibration method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a fusion display of a point cloud and an image of a traffic signboard in an automatic sensor calibration method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a pedestrian-vehicle projection matching of an image and a point cloud of an automatic sensor calibration method according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a laser point cloud and image fusion calibration process of an automatic sensor calibration method according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an automatic calibration device for a sensor according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an embodiment of an electronic device for automatically calibrating a sensor according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present application may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
For convenience of understanding, technical terms related to the present application are explained as follows:
the "moving device" referred to in the present application may be any equipment with moving capability, including, but not limited to, automobiles, ships, submarines, airplanes, aircrafts, etc., wherein the automobiles include vehicles with six automatic Driving technical grades, i.e., L0-L5, formulated by Society of automatic Engineers (SAE International) or national standard "automobile Driving automation classification", hereinafter referred to as automatic-Driving Vehicle ADV (automatic-Driving Vehicle).
An "autonomous vehicle ADV" as referred to herein may be a vehicle device or a robotic device having various functions as follows:
(1) manned functions, such as home cars, buses, and the like;
(2) cargo-carrying functions, such as ordinary trucks, van trucks, dump trailers, enclosed trucks, tank trucks, flatbed trucks, container trucks, dump trucks, special-structure trucks, and the like;
(3) tool functions such as logistics distribution vehicles, Automated Guided Vehicles (AGV), patrol vehicles, cranes, excavators, bulldozers, forklifts, road rollers, loaders, off-road vehicles, armored vehicles, sewage treatment vehicles, sanitation vehicles, dust suction vehicles, ground cleaning vehicles, watering vehicles, sweeping robots, food delivery robots, shopping guide robots, lawn mowers, golf carts, etc.;
(4) entertainment functions such as recreational vehicles, amusement park autopilots, balance cars, etc.;
(5) special rescue functions, such as fire trucks, ambulances, electrical power rush-repair trucks, engineering rescue vehicles and the like.
Fig. 1 is a flowchart of an automatic calibration method for a sensor according to an embodiment of the present invention, which includes the following steps:
S11: acquiring laser point cloud data acquired by laser equipment;
S12: acquiring image data acquired by a camera;
S13: according to the driving state of the mobile device, identifying a scene target related to the driving state in the laser point cloud data to obtain a first coordinate of the scene target under a laser point cloud coordinate system;
S14: according to the driving state of the mobile device, identifying the scene target related to the driving state in the image data to obtain a second coordinate of the scene target in an image coordinate system;
S15: re-projecting the first coordinate into a third coordinate under an image coordinate system;
S16: matching the second coordinate with the third coordinate under the image coordinate system to obtain a matching result;
S17: calibrating external parameters between the laser equipment and the camera based on the matching result.
The laser device of the present application may be, for example, a lidar.
To improve the generality of the technical solution, the scene targets in embodiments of the invention can be set to targets readily found around the vehicle: dynamic targets include pedestrians, vehicles, and the like, and static targets include lane lines, signboards, and the like.
In steps S11 and S12, the laser point cloud data collected by a laser device mounted on the vehicle and the image data collected by a camera are acquired.
In embodiments of the invention, the correspondence between driving states and scene targets is preset. For example, the driving states of the vehicle include driving, starting up, and stopping; the scene targets corresponding to driving include dynamic targets such as pedestrians and/or vehicles, while the scene targets corresponding to starting and stopping include static targets such as lane lines and/or signboards.
In the foregoing step S13, the scene target related to the driving state is identified in the laser point cloud data. In one embodiment, the scene target may be identified with a target detection model, for example a cnn-seg segmentation network: it performs target detection by semantic segmentation, runs a regression layer alongside the segmentation, and aggregates the center offsets with the segmentation result to obtain a detection for each individual target, thereby yielding the laser point cloud data corresponding to each scene target. The point cloud coordinates of a scene target can be obtained with a point-cloud-based segmentation detection network: the ground is removed from the laser point cloud data, the foreground point cloud is extracted, specific cluster features are computed from the foreground points, and target information corresponding to the scene target is detected from these cluster features, including the target's bounding box, center point, and height. Grid-cluster information obtained from the cluster features gives the target category, from which the target's height, length, width, orientation, center point, and other attributes are determined. Mapping each scene target into the laser point cloud coordinate system gives the first coordinate of the scene target.
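For illustration only, the following minimal Python sketch mirrors this flow (ground removal, foreground clustering, per-cluster bounding box and center). It is not the cnn-seg network itself; the (N, 4) x-y-z-intensity array layout, the thresholds, and the use of scikit-learn's DBSCAN are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_targets(points, ground_z=-1.5, eps=0.6, min_points=15):
    """Cluster one lidar frame into candidate scene targets.

    points: (N, 4) array of x, y, z, intensity in the lidar frame (assumed layout).
    Returns a list of (center, bbox_min, bbox_max) per detected cluster.
    """
    # Crude ground removal: keep only points above an assumed ground height.
    foreground = points[points[:, 2] > ground_z]
    if len(foreground) == 0:
        return []
    # Euclidean clustering of the foreground points (a stand-in for cnn-seg).
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(foreground[:, :3])
    targets = []
    for label in set(labels) - {-1}:            # label -1 marks DBSCAN noise
        cluster = foreground[labels == label, :3]
        targets.append((cluster.mean(axis=0),   # center point (first coordinate)
                        cluster.min(axis=0),    # bounding-box minimum corner
                        cluster.max(axis=0)))   # bounding-box maximum corner
    return targets
```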
For step S14, the scene target is identified in the image captured by the camera using a preset target detection model; optionally, a yolov5 visual detection model may be used to obtain the second coordinate of the scene target in the image coordinate system. The target's image coordinates can be obtained from the image with an image-based detection and segmentation model: an end-to-end learning network whose first N layers extract features and whose last K layers classify the extracted features, producing target information for the scene target including the detection box, orientation, and center point.
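As a hedged sketch of this step, the snippet below obtains detection-box centers with a pretrained yolov5 model loaded through torch.hub (the ultralytics/yolov5 hub entry; the model choice and network access for the weight download are assumptions):

```python
import torch

# Pretrained YOLOv5 detector loaded via torch.hub (downloads weights on first use).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

def detect_centers(image):
    """Return (center_x, center_y, class_id) for each detection, in image coordinates."""
    results = model(image)                  # accepts an ndarray, PIL image, or file path
    boxes = results.xyxy[0].cpu().numpy()   # columns: x1, y1, x2, y2, confidence, class
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0, int(cls))
            for x1, y1, x2, y2, conf, cls in boxes]
```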
For step S15: the point cloud coordinates are three-dimensional coordinates in the lidar coordinate system, while the image coordinates are two-dimensional coordinates in the camera coordinate system, so the two cannot be compared directly. The point cloud coordinates must be projected into the camera coordinate system and converted into image coordinates; that is, the first coordinate from step S13 is re-projected into the third coordinate in the image coordinate system, after which the third coordinate can be compared with the second coordinate in the image coordinate system.
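A minimal sketch of this re-projection follows, assuming a pinhole camera with intrinsic matrix K and a current extrinsic estimate (R, t) from the lidar frame to the camera frame; points behind the camera are discarded:

```python
import numpy as np

def reproject(points_xyz, K, R, t):
    """Re-project 3D lidar points into the image plane (step S15).

    points_xyz: (N, 3) first coordinates in the lidar frame.
    K: (3, 3) camera intrinsic matrix.
    R: (3, 3) rotation and t: (3,) translation, lidar frame to camera frame.
    Returns (M, 2) pixel coordinates for the points in front of the camera.
    """
    cam = points_xyz @ R.T + t      # transform into the camera frame
    cam = cam[cam[:, 2] > 0]        # drop points behind the image plane
    uv = cam @ K.T                  # pinhole perspective projection
    return uv[:, :2] / uv[:, 2:3]   # homogeneous divide -> pixel coordinates
```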
For step S16, the third coordinate obtained by converting the point cloud data is matched against the second coordinate from the image data to obtain a matching result. The coordinate-matching procedure differs according to the scene targets associated with different driving states.
In one embodiment, when the driving state is a static state after starting, the scene target is a predetermined static target. Static targets include lane lines and/or signboards.
When the driving state is a static state after starting, matching the second coordinate and the third coordinate under the image coordinate system, and obtaining a matching result comprises:
after the first coordinate is re-projected to be a third coordinate under an image coordinate system, determining a point cloud detection frame of the static target based on the third coordinate;
determining a visual detection frame of the static target based on the second coordinates;
and when the intersection ratio of the point cloud detection frame and the visual detection frame is larger than a set ratio, determining the relative pose between the laser equipment and the camera as a matching result based on the reprojection relation between the laser point cloud coordinate system and the image coordinate system.
This embodiment takes the application range and real road scenes into account. When the vehicle starts up, an initial parameter estimate is carried out first, calibrating against the lidar reflectivity in the scene whose driving state is the static state after startup.
In this case, the selection of scene targets first favors the ground region and the overhead region; in a real environment, for example, a lane line and a signboard may be selected. The laser reflectivity at a road sign is normally near the maximum reflectivity, so the lidar can screen out the sign by its reflectivity signature, as shown in FIG. 2. Likewise, the reflectivity of ground lane lines is clearly distinguished from the other, non-lane-line areas of the road. Lane lines and overhead signs can therefore be separated from the point cloud by these characteristics.
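By way of example, the reflectivity/height screening can be sketched as below; the intensity scale and all thresholds are assumptions that vary with the lidar model and mounting height:

```python
import numpy as np

def split_static_targets(points, sign_min_intensity=200.0, sign_min_height=2.5,
                         lane_min_intensity=80.0, lane_max_height=0.2):
    """Separate signboard and lane-line candidates from an (N, 4) x,y,z,intensity cloud.

    Signboards: high-reflectivity returns in the overhead region.
    Lane lines: high-reflectivity returns near the ground plane.
    """
    z, intensity = points[:, 2], points[:, 3]
    signs = points[(intensity >= sign_min_intensity) & (z >= sign_min_height)]
    lanes = points[(intensity >= lane_min_intensity) & (np.abs(z) <= lane_max_height)]
    return signs, lanes
```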
In the visual image, a trained signboard detection model easily produces a detection box for an unoccluded signboard, as shown in FIG. 3. A yolov3 traffic-signboard detection network can be applied to the upper half of the image to separate the signboard in depth and obtain the position of the signboard in the image. (FIGS. 2 and 3 are illustrative: the point cloud features in FIG. 2 come from the differing laser reflectivities, and the real images are distinguished by differences in gray scale.)
Using the re-projection relation between the laser point cloud and the signboard in the image, the point cloud is transformed in space to roughly obtain the rotational relation that projects the point cloud signboard onto the image; a matching-projection example is shown in FIG. 4. The signboard and the lane line are extracted from both the image and the point cloud and preliminarily matched, achieving the goal of the parameter-initialization iteration.
Specifically, the signboard segmented from the point cloud is transformed with a projection matrix to obtain its projection on the image. The point cloud detection box and the visual detection box are compared by computing the IoU (Intersection over Union) to find the position with the maximum overlap ratio. If the accumulated overlap ratio cannot reach a preset threshold P0 (set to 75% as an example; those skilled in the art can set it flexibly according to actual requirements, and this application does not strictly limit it), the preliminary registration is considered not to meet requirements, the parameters are not updated, and no fine optimization of the parameters is performed.
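A minimal sketch of this IoU gate follows; the (x1, y1, x2, y2) box format is an assumption, while the 75% example threshold comes from the text:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes in pixels."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

P0 = 0.75  # example threshold from the text; adjustable per actual requirements

def registration_acceptable(cloud_box, vision_box, p0=P0):
    """Accept the preliminary registration only if the boxes overlap enough."""
    return iou(cloud_box, vision_box) > p0
```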
Given the sparsity of the point cloud, rotational ambiguity may remain in the detection-box matching, so ground information such as lane lines is added for the preliminary parameter optimization. From the ground lane-line point cloud, the background points, i.e., the ground information, are extracted; the lane-line portion is then extracted from them using the reflectivity features, completing the initial matching between the laser device and vision and achieving the preliminary calibration of the laser device's calibration parameters. For the static state after vehicle startup, this embodiment thus realizes a calibration method initialized from laser point cloud reflectivity; it applies to the various multi-line automotive lidars in use today and therefore has wide applicability.
As another embodiment, when the driving state is motion, the scene object is a predetermined dynamic object. The dynamic targets include: pedestrians and/or vehicles.
When the driving state is motion, matching the second coordinate and the third coordinate in the image coordinate system, and obtaining a matching result comprises:
after the first coordinate is re-projected to a third coordinate in an image coordinate system, matching a multi-target model for a first number of dynamic targets in the third coordinate and a second number of dynamic targets in the second coordinate;
and performing multi-point perspective imaging projection on the matching result, and determining the relative pose between the laser equipment and the camera as the matching result.
In this embodiment, as noted among the defects of the prior art, after the autonomous vehicle has driven for some time, unavoidable shaking and similar effects can slightly change the relative position of the camera and the lidar. Once the relative position of the camera and the lidar has changed and the driving state is motion, the laser device and the camera need high-precision calibration in this scene.
In this case, the selection of dynamic targets first favors pedestrians and vehicles, since these are the most readily available while driving. As before, the point cloud coordinates are three-dimensional coordinates in the lidar coordinate system and the image coordinates are two-dimensional coordinates in the camera coordinate system, so the two cannot be compared directly; the first coordinate in the point cloud coordinate system is again re-projected into the third coordinate in the image coordinate system. When the difference between the converted third coordinate (the converted point cloud coordinate) and the second coordinate (the image coordinate) reaches or exceeds a set threshold, the relative position of the lidar and the camera has changed.
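This deviation test can be sketched as follows, assuming matched (N, 2) arrays of second and third coordinates and an illustrative pixel threshold:

```python
import numpy as np

def extrinsics_drifted(image_centers, projected_centers, pixel_threshold=10.0):
    """Flag extrinsic drift from matched image-space target centers.

    image_centers: (N, 2) second coordinates from the camera detections.
    projected_centers: (N, 2) third coordinates re-projected from the point cloud.
    pixel_threshold is an assumed value; tune it to the camera resolution.
    """
    deviation = np.linalg.norm(image_centers - projected_centers, axis=1).mean()
    return deviation >= pixel_threshold  # True means re-calibration is needed
```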
Matching is performed on the targets obtained from image detection and from point cloud segmentation detection; for example, a gradient-iteration method may be used to obtain matched projection images, such as the pedestrians and vehicles shown in FIG. 5. The overall matching optimization covers the second coordinate (x_i, y_i) of the i-th scene target and the matched third coordinate (m_k, n_k). For this multi-target model, the following objective function is obtained from the detected scene targets:
min Σ ((x_i − m_k)^2 + (y_i − n_k)^2), i = 0, 1, 2, 3, …, s; k = 0, 1, 2, 3, …, s,
where k indexes the point cloud target matched to the i-th image target.
This objective function can be solved by PnP (Perspective-n-Point), yielding the external parameters between the camera and the lidar (i.e., the rotation-translation matrix coupling the rotation matrix and the translation matrix).
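For illustration, the PnP solution can be sketched with OpenCV's solvePnP on the matched target centers; treating the lidar-frame centers as object points follows the description above, while the zero distortion and the iterative flag are assumptions:

```python
import cv2
import numpy as np

def solve_extrinsics(object_points, image_points, K, dist_coeffs=None):
    """Recover the lidar-to-camera rotation and translation by PnP.

    object_points: (N, 3) matched target centers in the lidar frame
    (N >= 6 is safest for the iterative solver on non-coplanar points).
    image_points: (N, 2) matched target centers in image coordinates.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume an already-undistorted image
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float64),
        np.asarray(image_points, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return ok, R, tvec          # external parameters: rotation and translation
```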
For step S17, the external parameters between the laser device and the camera are calibrated by optimizing the objective function above with the matching result of the preceding steps.
In some application scenarios the vehicle is initially stationary and then moves after startup. To further improve the accuracy of the external-parameter calibration between the laser device and the camera, as shown in FIG. 6, the scene targets at startup are set to static targets (e.g., lane lines and/or signboards): matched static targets are identified in the image collected by the camera and in the laser point cloud data collected by the laser device, and whether the external parameters between the camera and the laser device need calibration is decided from the match between the corresponding second and third coordinates, realizing the preliminary parameter estimation (the first calibration). When the vehicle is moving, the scene targets are set to dynamic targets (e.g., pedestrians and/or vehicles): matched dynamic targets are identified in the camera image and in the laser point cloud data, and the external parameters between the camera and the laser device are calibrated from the match between the corresponding second and third coordinates, realizing the second calibration. In a specific implementation, as shown in FIG. 6, preliminary image/point cloud matching is performed in scene 1 (static driving state) using the point cloud and the scene targets (an overhead signboard and lane lines), and the point cloud is rotated and forward-projected to complete the parameter search and determine the initial position parameters. On the basis of these initialization parameters, the data of scene 2 (moving driving state) are modeled, relying on the pedestrian and vehicle information ubiquitous on roads. Pedestrians and vehicles are extracted with a deep network to obtain center-point features, multi-target PnP matching is performed, and iteration finally yields suitable external parameters, i.e., a rotation matrix and an offset matrix. In FIG. 6, Min F is the matching result between the image detections and the laser point cloud (converted into the image coordinate system) obtained with the objective function above, and T0 is a preset comparison threshold analogous to P0 above, not repeated here.
As this embodiment shows, a calibration-parameter verification process is started at each vehicle startup. When the verification result meets the expected threshold, that is, when the deviation between the camera target (x_i, y_i) and the point cloud target (m_i, n_i) is smaller than the set threshold, the laser device and the camera can be considered free of large disturbances such as a changed external inclination angle. When the parameters deviate too much or the positions do not meet requirements, parameter self-inspection is achieved by judging the difference between the target's point cloud coordinates and its image coordinates. Against the parameter invalidation caused by shaking during autonomous driving, the system automatically acquires data and runs a closed loop of parameter updates, which improves the calibration algorithm, widens usability, removes the dependence on traditional calibration scenes, and achieves more accurate automatic calibration of the external parameters between the laser device and the camera.
Beyond the laser point cloud calibration optimization, the method also considers general applicability and special situations on real roads, and can be adjusted in a targeted manner. For example:
In visual use, road targets are not always cleanly segmented, so for some scene targets no clear features (such as ground patterns) can be obtained to decide whether they are foreground targets; whether such a scene target is real can be judged decisively with the help of the laser point cloud data.
In actual operation, some obstacles are hard to assign to a specific labeled category by the target detection model, yet in autonomous driving they must still be detected and avoided. Extracting point clouds of indeterminate type from the laser data therefore confirms that such scene targets really exist, ensuring safety.
In vision-weak scenes such as nighttime, the laser point cloud is more reliable, so the weight of the visual targets from the image is reduced, which improves nighttime driving performance to some extent.
Similarly, after the image-calibration optimization, the laser point cloud produces more outliers in abnormal weather such as rain; noise points in the lane can then be judged and filtered out with the help of the visual images, guaranteeing that abnormal obstacles are rejected, avoiding false actions such as emergency braking, and improving the driving experience.
Screening different abnormal targets in laser and vision according to the situation thus provides a removal-and-screening approach that facilitates obstacle extraction and improves the optimization effect.
Fig. 7 is a schematic structural diagram of an automatic sensor calibration device according to an embodiment of the present invention, where the system can execute the automatic sensor calibration method according to any of the above embodiments, and is configured in a terminal.
The automatic sensor calibration device 10 provided by the embodiment includes: the system comprises a point cloud acquisition module 11, an image acquisition module 12, a first coordinate determination module 13, a second coordinate determination module 14, a third coordinate determination module 15, a matching module 16 and a calibration module 17.
The point cloud obtaining module 11 is configured to obtain laser point cloud data collected by a laser device; the image acquisition module 12 is used for acquiring image data acquired by a camera; the first coordinate determination module 13 is configured to identify a scene target related to a driving state in the laser point cloud data according to the driving state of the mobile device, so as to obtain a first coordinate of the scene target in a laser point cloud coordinate system; the second coordinate determination module 14 is configured to identify the scene object related to the driving state in the image data according to the driving state of the mobile device, so as to obtain a second coordinate of the scene object in an image coordinate system; the third coordinate determination module 15 is configured to re-project the first coordinate into a third coordinate in an image coordinate system; the matching module 16 is configured to match the second coordinate with the third coordinate in the image coordinate system to obtain a matching result; the calibration module 17 is configured to calibrate external parameters between the laser device and the camera based on the matching result.
Preferably, when the driving state is a stationary state after starting, the scene object is a predetermined stationary object.
Preferably, when the driving state is a static state after starting, the matching module 16 matches the second coordinate and the third coordinate in the image coordinate system, and obtaining a matching result includes: after the first coordinate is re-projected to be a third coordinate under an image coordinate system, determining a point cloud detection frame of the static target based on the third coordinate; determining a visual detection frame of the static target based on the second coordinates; and when the intersection ratio of the point cloud detection frame and the visual detection frame is larger than a set ratio, determining the relative pose between the laser equipment and the camera as a matching result based on the reprojection relation between the laser point cloud coordinate system and the image coordinate system.
Preferably, the static object comprises a lane line and/or a signboard.
Preferably, when the driving state is motion, the scene object is a predetermined dynamic object.
Preferably, when the driving state is a motion, the matching module 16 matches the second coordinate and the third coordinate in the image coordinate system, and obtaining a matching result includes: after the first coordinate is re-projected to a third coordinate in an image coordinate system, matching a multi-target model for a first number of dynamic targets in the third coordinate and a second number of dynamic targets in the second coordinate; and performing multi-point perspective imaging projection on the matching result, and determining the relative pose between the laser equipment and the camera as the matching result.
Preferably, the dynamic target comprises: pedestrians and/or vehicles.
Preferably, the laser apparatus and the camera trigger acquisition of laser point cloud data and image data in response to activation of the mobile device.
The embodiment of the invention also provides a nonvolatile computer storage medium, wherein the computer storage medium stores computer executable instructions which can execute the automatic calibration method of the sensor in any method embodiment;
as one embodiment, a non-volatile computer storage medium of the present invention stores computer-executable instructions configured to:
acquiring laser point cloud data acquired by laser equipment;
acquiring image data acquired by a camera;
according to the driving state of the mobile device, identifying a scene target related to the driving state in the laser point cloud data to obtain a first coordinate of the scene target under a laser point cloud coordinate system;
according to the driving state of the mobile device, identifying the scene target related to the driving state in the image data to obtain a second coordinate of the scene target in an image coordinate system;
re-projecting the first coordinate into a third coordinate under an image coordinate system;
matching the second coordinate with the third coordinate under the image coordinate system to obtain a matching result;
and calibrating external parameters between the laser equipment and the camera based on the matching result.
The non-volatile computer-readable storage medium may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods in embodiments of the present invention. One or more program instructions are stored in the non-volatile computer-readable storage medium and, when executed by a processor, perform the automatic sensor calibration method of any of the method embodiments described above.
An embodiment of the present invention further provides an electronic device, comprising: at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the automatic sensor calibration method.
In some embodiments, the present invention further provides a mobile device, including a body and the electronic device according to any one of the foregoing embodiments mounted on the body. The mobile device may be an unmanned vehicle such as an unmanned sweeper, an unmanned floor-washing vehicle, an unmanned logistics vehicle, an unmanned passenger vehicle, an unmanned sanitation vehicle, an unmanned minibus or bus, a truck, or a mining vehicle, or it may be a robot or the like.
In some embodiments, the present invention further provides a computer program product, which when run on a computer, causes the computer to execute the method for automatically calibrating a sensor according to any one of the embodiments of the present invention.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an automatic sensor calibration method provided in another embodiment of the present application, and as shown in fig. 8, the electronic device includes:
one or more processors 810 and a memory 820, with one processor 810 being an example in FIG. 8. The device for the automatic calibration method of the sensor can also comprise: an input device 830 and an output device 840.
The processor 810, the memory 820, the input device 830, and the output device 840 may be connected by a bus or other means, such as the bus connection in fig. 8.
The memory 820 is a non-volatile computer-readable storage medium and can be used for storing non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the automatic sensor calibration method in the embodiments of the present application. The processor 810 executes various functional applications and data processing of the server by executing nonvolatile software programs, instructions and modules stored in the memory 820, so as to implement the automatic calibration method of the sensor according to the above-mentioned method embodiment.
The memory 820 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data and the like. Further, the memory 820 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 820 may optionally include memory located remotely from processor 810, which may be connected to a mobile device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 830 may receive input numeric or character information. The output device 840 may include a display device such as a display screen.
The one or more modules are stored in the memory 820 and, when executed by the one or more processors 810, perform a method for automatic calibration of a sensor in any of the method embodiments described above.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
The non-volatile computer-readable storage medium may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the device, and the like. Further, the non-volatile computer-readable storage medium may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the non-transitory computer readable storage medium optionally includes memory located remotely from the processor, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
An embodiment of the present invention further provides an electronic device, comprising: at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the steps of the automatic sensor calibration method according to any embodiment of the invention.
The electronic device of the embodiments of the present application exists in various forms, including but not limited to:
(1) mobile communication devices, which are characterized by mobile communication capabilities and are primarily targeted at providing voice and data communications. Such terminals include smart phones, multimedia phones, functional phones, and low-end phones, among others.
(2) The ultra-mobile personal computer equipment belongs to the category of personal computers, has calculation and processing functions and generally has the characteristic of mobile internet access. Such terminals include PDA, MID, and UMPC devices, such as tablet computers.
(3) Portable entertainment devices such devices may display and play multimedia content. The devices comprise audio and video players, handheld game consoles, electronic books, intelligent toys and portable vehicle-mounted navigation devices.
(4) Other mobile devices with data processing capabilities.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (13)

1. An automatic calibration method of a sensor, which is applied to a mobile device, wherein the sensor comprises laser equipment and a camera, and the method comprises the following steps:
acquiring laser point cloud data acquired by laser equipment;
acquiring image data acquired by a camera;
according to the driving state of the mobile device, identifying a scene target related to the driving state in the laser point cloud data to obtain a first coordinate of the scene target under a laser point cloud coordinate system;
according to the driving state of the mobile device, identifying the scene target related to the driving state in the image data to obtain a second coordinate of the scene target in an image coordinate system;
re-projecting the first coordinate into a third coordinate under an image coordinate system;
matching the second coordinate with the third coordinate under the image coordinate system to obtain a matching result;
and calibrating external parameters between the laser equipment and the camera based on the matching result.
2. The method according to claim 1, wherein the scene object is a predetermined static object when the driving state is a stationary state after startup.
3. The method according to claim 2, wherein when the driving state is a static state after starting, matching the second coordinate and the third coordinate in the image coordinate system, and obtaining a matching result comprises:
after the first coordinate is re-projected to be a third coordinate under an image coordinate system, determining a point cloud detection frame of the static target based on the third coordinate;
determining a visual detection frame of the static target based on the second coordinates;
and when the intersection ratio of the point cloud detection frame and the visual detection frame is larger than a set ratio, determining the relative pose between the laser equipment and the camera as a matching result based on the reprojection relation between the laser point cloud coordinate system and the image coordinate system.
4. The method of claim 2, wherein the static objects comprise lane lines and/or signboards.
5. The method according to any one of claims 1-4, characterized in that the scene object is a predetermined dynamic object when the driving state is motion.
6. The method according to claim 5, wherein when the driving state is motion, the matching of the second coordinate and the third coordinate in the image coordinate system, and obtaining the matching result comprises:
after the first coordinate is re-projected to a third coordinate in an image coordinate system, matching a multi-target model for a first number of dynamic targets in the third coordinate and a second number of dynamic targets in the second coordinate;
and performing multi-point perspective imaging projection on the matching result, and determining the relative pose between the laser equipment and the camera as the matching result.
7. The method of claim 5, wherein the dynamic target comprises: pedestrians and/or vehicles.
8. The method of claim 1, wherein the laser apparatus and the camera trigger acquisition of laser point cloud data and image data in response to activation of the mobile device.
9. An automatic calibration device for a sensor is characterized by comprising:
the point cloud acquisition module is used for acquiring laser point cloud data acquired by laser equipment;
the image acquisition module is used for acquiring image data acquired by the camera;
the first coordinate determination module is used for identifying a scene target related to the driving state in the laser point cloud data according to the driving state of the mobile device to obtain a first coordinate of the scene target under a laser point cloud coordinate system;
the second coordinate determination module is used for identifying the scene target related to the driving state in the image data according to the driving state of the mobile device to obtain a second coordinate of the scene target in an image coordinate system;
the third coordinate determination module is used for re-projecting the first coordinate into a third coordinate under an image coordinate system;
the matching module is used for matching the second coordinate with the third coordinate in the image coordinate system to obtain a matching result;
and the calibration module is used for calibrating external parameters between the laser equipment and the camera based on the matching result.
10. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of automatic sensor calibration of any of claims 1-8.
11. A mobile device comprising a body and the electronic apparatus of claim 10 mounted on the body.
12. A storage medium on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method for automatic calibration of a sensor according to any one of claims 1 to 8.
13. A computer program product, characterized in that it causes a computer to carry out the method for automatic calibration of a sensor according to any one of claims 1 to 8, when said computer program product is run on said computer.
CN202210455494.7A 2022-04-27 2022-04-27 Automatic calibration method for sensor, electronic device and storage medium Pending CN114862964A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210455494.7A CN114862964A (en) 2022-04-27 2022-04-27 Automatic calibration method for sensor, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210455494.7A CN114862964A (en) 2022-04-27 2022-04-27 Automatic calibration method for sensor, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN114862964A, published 2022-08-05

Family

ID=82632993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210455494.7A Pending CN114862964A (en) 2022-04-27 2022-04-27 Automatic calibration method for sensor, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114862964A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457088A (en) * 2022-10-31 2022-12-09 成都盛锴科技有限公司 Method and system for fixing axle of train


Similar Documents

Publication Publication Date Title
US20210122364A1 (en) Vehicle collision avoidance apparatus and method
US11126868B1 (en) Detecting and responding to parking behaviors in autonomous vehicles
US11663917B2 (en) Vehicular control system using influence mapping for conflict avoidance path determination
CN109814520B (en) System and method for determining safety events for autonomous vehicles
US20180074506A1 (en) Systems and methods for mapping roadway-interfering objects in autonomous vehicles
CN107972662A (en) To anti-collision warning method before a kind of vehicle based on deep learning
US11551458B1 (en) Plane estimation for contextual awareness
CN113631452B (en) Lane change area acquisition method and device
CN114945952A (en) Generating depth from camera images and known depth data using neural networks
US20220366175A1 (en) Long-range object detection, localization, tracking and classification for autonomous vehicles
US11702044B2 (en) Vehicle sensor cleaning and cooling
US20200377092A1 (en) Tracking vanished objects for autonomous vehicles
CN115187963A (en) Vehicle obstacle detection method, system, device, medium, and program
CN114572193A (en) Remote automatic parking control method and device and vehicle
CN114862964A (en) Automatic calibration method for sensor, electronic device and storage medium
CN113870246A (en) Obstacle detection and identification method based on deep learning
US20230281871A1 (en) Fusion of imaging data and lidar data for improved object recognition
RU2767838C1 (en) Methods and systems for generating training data for detecting horizon and road plane
CN114913329A (en) Image processing method, semantic segmentation network training method and device
CN114494444A (en) Obstacle dynamic and static state estimation method, electronic device and storage medium
US20240027213A1 (en) Systems and methods for determining and providing parking facility entrance characteristics
WO2023057261A1 (en) Removing non-relevant points of a point cloud
CN115797903A (en) Blind area memory method, equipment, mobile device and storage medium
CN117809145A (en) Fusion method, device, mobile device and storage medium of sensor semantic information
CN117456498A (en) Method, apparatus, mobile device and storage medium for dynamic and static estimation of object

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination