CN117372536A - Laser radar and camera calibration method, system, equipment and storage medium


Info

Publication number: CN117372536A
Application number: CN202311224509.XA
Authority: CN (China)
Prior art keywords: image data, point cloud data, camera, point
Legal status: Pending
Other languages: Chinese (zh)
Inventors: Yu Qiankun (于乾坤), Wang Hai (汪海), Chen Minhe (陈敏鹤), Qi Zhongzheng (祁忠正)
Assignee: Secco Intelligent Technology Shanghai Co ltd
Application filed by Secco Intelligent Technology Shanghai Co ltd; priority to CN202311224509.XA

Classifications

    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/13: Edge detection (segmentation; edge detection)
    • G06N 3/092: Reinforcement learning (neural-network learning methods)
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/10044: Radar image
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • Y02T 10/40: Engine management systems


Abstract

The application relates to the field of computer technology and provides a laser radar and camera calibration method, system, device, and storage medium. Point cloud data of the laser radar and image data of the camera on a vehicle are acquired; the point cloud data are preprocessed to obtain processed point cloud data, and the image data are preprocessed to obtain processed image data. The processed image data and point cloud data are input into an adaptive calibration model, and deep reinforcement learning training is performed with a probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model; the adaptive calibration model corrects the calibration parameters and outputs the corrected calibration parameters, which are then optimized to obtain the optimized calibration parameters. The method corrects the calibration parameters of the laser radar and the camera, improving the accuracy and stability of parameter calibration and effectively addressing inaccurate calibration caused by vibration.

Description

Laser radar and camera calibration method, system, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a laser radar and camera calibration method, system, device, and storage medium.
Background
Autonomous driving technology enables an automobile to run autonomously, without a human driver, by using advanced technologies such as sensors, artificial intelligence, computer vision, and machine learning. Its aims are to improve driving safety, reduce traffic accidents, and improve traffic efficiency. An autonomous vehicle acquires and analyzes environmental information such as roads, vehicles, and pedestrians in real time in order to plan an optimal route, judge obstacles, and observe traffic rules.
While an autonomous vehicle is running, vibration may make the sensor calibration inaccurate. The vibration may come from uneven road surfaces or from other factors during driving, such as speed changes and friction; it degrades the accuracy of the vehicle's sensors and thus leads to inaccurate sensor calibration in the autonomous driving system.
Disclosure of Invention
In order to solve, or at least partially solve, the above technical problems, the application provides a laser radar and camera calibration method, system, device, and storage medium that can improve the accuracy and stability of laser radar and camera calibration.
In a first aspect, the present application provides a laser radar and camera calibration method, including:
Acquiring point cloud data of a laser radar and image data of a camera on a vehicle;
preprocessing the point cloud data to obtain processed point cloud data; preprocessing the image data to obtain processed image data;
inputting the processed image data and point cloud data into an adaptive calibration model and performing deep reinforcement learning training with a probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model, wherein the adaptive calibration model corrects calibration parameters and outputs the corrected calibration parameters;
and optimizing the corrected calibration parameters to obtain the optimized calibration parameters.
In a second aspect, the present application provides a lidar and camera calibration system, comprising:
the data acquisition module is used for acquiring point cloud data of the laser radar on the vehicle and image data of the camera;
the data preprocessing module is used for preprocessing the point cloud data to obtain processed point cloud data; preprocessing the image data to obtain processed image data;
the training operation module is used for inputting the processed image data and point cloud data into the adaptive calibration model and performing deep reinforcement learning training with the probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model, the adaptive calibration model correcting calibration parameters and outputting the corrected calibration parameters;
and the optimization module is used for optimizing the corrected calibration parameters to obtain the optimized calibration parameters.
In a third aspect, embodiments of the present application further provide a laser radar and camera calibration device, including a processor and a memory, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the laser radar and camera calibration method of any one of claims 1 to 7.
In a fourth aspect, embodiments of the present application further provide a computer readable storage medium storing program instructions that, when executed by a computer, cause the computer to perform the aforementioned laser radar and camera calibration method.
According to the embodiments of the application, the point cloud data of the laser radar and the image data of the camera on the vehicle are acquired; the point cloud data are preprocessed to obtain processed point cloud data, and the image data are preprocessed to obtain processed image data; the processed image data and point cloud data are input into an adaptive calibration model, and deep reinforcement learning training is performed with the probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model, which corrects the calibration parameters and outputs the corrected calibration parameters; the corrected calibration parameters are then optimized to obtain the optimized calibration parameters. The method corrects the calibration parameters of the laser radar and the camera, improving the accuracy and stability of parameter calibration and effectively addressing inaccurate calibration caused by vibration.
Drawings
In order to more clearly illustrate the embodiments of the present application, a brief description of the associated drawings is provided below. It is understood that the drawings in the following description illustrate only some embodiments of the present application, and that one of ordinary skill in the art can derive from them other technical features and connection relationships not explicitly mentioned herein.
Fig. 1 is a schematic flow chart of a laser radar and camera calibration method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of another laser radar and camera calibration method according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a laser radar and camera calibration system according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a laser radar and camera calibration device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following describes the technical solutions in the embodiments of the present application in detail with reference to the drawings in the embodiments of the present application.
Example 1
As shown in fig. 1, an embodiment of the present application proposes a method for calibrating a laser radar and a camera, and fig. 1 is a schematic flow chart of the method for calibrating the laser radar and the camera according to the embodiment of the present application, where the method includes:
101. and acquiring point cloud data of a laser radar and image data of a camera on the vehicle.
In implementation, an unmanned-driving simulation platform can simulate the autonomous driving scene of the vehicle, and the positions of the sensors, which may include the laser radar and the camera, can be modified within the platform so that the sensors exhibit a certain random movement, improving the diversity and realism of the autonomous driving scenarios. The point cloud data of the laser radar and the image data of the camera can then be collected, providing data for the subsequent calibration of the laser radar and the camera.
102. Preprocessing the point cloud data to obtain processed point cloud data; and preprocessing the image data to obtain processed image data.
Preprocessing the point cloud data and the image data reduces the high-dimensional data perceived by the sensors to low-dimensional data, which facilitates subsequent deep reinforcement learning training and data processing. Specifically, the image data is converted into a depth map containing the contents of the field of view, and the point cloud data of the laser radar is converted into three-dimensional vector data giving the position of each object in the field of view.
103. Inputting the processed image data and point cloud data into an adaptive calibration model and performing deep reinforcement learning training with a probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model, where the adaptive calibration model corrects calibration parameters and outputs the corrected calibration parameters.
The probability-weighted random policy search PPO (Proximal Policy Optimization) algorithm computes and trains on the processed image data and point cloud data to obtain an adaptive calibration model of the sensor calibration parameters; by learning and adjusting the internal parameters of the sensors, this model can improve the accuracy and stability of sensor calibration, which is why the processed image data and point cloud data are input into the adaptive calibration model. The PPO algorithm is a deep reinforcement learning method that typically relies on generalized advantage estimation (Generalized Advantage Estimation, GAE); it aims to avoid destructively large early policy shifts and to improve sampling efficiency.
104. Optimizing the corrected calibration parameters to obtain the optimized calibration parameters.
Here an extrinsic registration matrix can be introduced to transform point cloud coordinates from the point cloud coordinate system to the image coordinate system, realizing the projection of the laser point cloud onto the image. By adjusting the parameters of the rotation matrix and the translation vector, the projection position and pose of the point cloud on the image can be controlled, achieving accurate matching between the point cloud and the image. Adjusting the rotation matrix and translation vector parameters is precisely the process of optimizing the calibration parameters: by finding the parameter combination that minimizes the error between the point cloud projection and the image features, the optimized calibration parameters are obtained.
According to the embodiments of the application, the point cloud data of the laser radar and the image data of the camera on the vehicle are acquired; the point cloud data are preprocessed to obtain processed point cloud data, and the image data are preprocessed to obtain processed image data; the processed image data and point cloud data are input into the adaptive calibration model, and deep reinforcement learning training is performed with the probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model, which corrects the calibration parameters and outputs the corrected calibration parameters; the corrected calibration parameters are then optimized to obtain the optimized calibration parameters. The method corrects the calibration parameters of the laser radar and the camera, improving the accuracy and stability of parameter calibration and effectively addressing inaccurate calibration caused by vibration.
Optionally, in step 101, the acquiring the point cloud data of the lidar and the image data of the camera on the vehicle includes:
11. in a simulation environment provided by a simulation platform, simulating a driving scene of a vehicle;
12. controlling the running of the vehicle in the simulation environment, so that the vehicle in the simulation environment can navigate according to a set path;
13. Collecting point cloud data of the laser radar and image data of the camera on the vehicle during autonomous driving.
An open-source autonomous driving simulator together with the Robot Operating System (ROS) can be used: ROS controls the vehicle within the simulation environment of the simulation platform, and the simulator emulates the outputs of the vehicle's sensors, the sensor information including perception data such as the camera's image data and the laser radar's point cloud data. The point cloud data of the laser radar and the image data of the camera on the vehicle can be obtained through the application programming interface of the autonomous driving simulator.
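As a concrete illustration of this data collection, the following minimal Python sketch subscribes to lidar and camera topics with ROS; the topic names, node name, and message handling are assumptions for illustration, not details given in the patent.

```python
# Hedged sketch: collect raw lidar/camera messages from a simulated drive.
# Topic names below are assumed placeholders, not taken from the patent.
import rospy
from sensor_msgs.msg import PointCloud2, Image

clouds, images = [], []

def on_cloud(msg):
    clouds.append(msg)   # keep raw PointCloud2 messages; decode later

def on_image(msg):
    images.append(msg)   # keep raw Image messages; decode later

rospy.init_node("calib_data_collector")
rospy.Subscriber("/lidar/points", PointCloud2, on_cloud)
rospy.Subscriber("/camera/image_raw", Image, on_image)
rospy.spin()             # collect until the simulated drive is stopped
```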
Optionally, in step 102, preprocessing the point cloud data to obtain processed point cloud data includes:
21. taking each point in the point cloud data as the current point, determining the coordinate differences between the current point and its two neighboring points respectively, taking the maximum of these coordinate differences and 0, and assigning that maximum value to the current point as its edge reference point;
22. and screening edge points from edge reference points corresponding to all points in the point cloud data according to preset filtering conditions to obtain point cloud data of the edge points.
Wherein, for the point cloud data of the laser radar, each point can be taken as the current point with range $R_i$, where $R_{i-1}$ and $R_{i+1}$ are the ranges of its neighboring points. Because of parallax and occlusion, points farther away than their neighbors are unlikely to coincide with an image edge, so for each point cloud $P_i$ an edge-distance point cloud $X_i$ can be computed, in which each point $p$ is assigned an edge distance value derived from the following formula:

$$X_i(P_i) = \max(R_{i-1} - R_i,\; R_{i+1} - R_i,\; 0)$$

The preset filtering condition can be that the edge distance value of each point $p$ in $X_i$ is greater than a second preset value.
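A minimal numpy sketch of this edge-distance computation and filtering is given below; it assumes the ranges R are ordered by scan position, and the threshold merely stands in for the unspecified "second preset value".

```python
import numpy as np

def edge_distance_values(ranges: np.ndarray) -> np.ndarray:
    """X_i = max(R_{i-1} - R_i, R_{i+1} - R_i, 0) for each interior point."""
    x = np.zeros_like(ranges, dtype=float)
    x[1:-1] = np.maximum.reduce([
        ranges[:-2] - ranges[1:-1],   # R_{i-1} - R_i
        ranges[2:] - ranges[1:-1],    # R_{i+1} - R_i
        np.zeros(len(ranges) - 2),
    ])
    return x

def filter_edge_points(points: np.ndarray, ranges: np.ndarray, threshold: float = 0.3):
    """Keep points whose edge distance exceeds the preset value (0.3 is assumed)."""
    x = edge_distance_values(ranges)
    mask = x > threshold
    return points[mask], x[mask]
```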
Optionally, in step 102, the preprocessing of the image data to obtain processed image data includes:
processing each image in the image data to obtain edge information of each pixel point and obtaining an edge image containing the edge information;
the method for performing the distance inverse transformation on the edge image specifically comprises the following steps: determining the transformation distance between each pixel point of the edge image and the point to be transformed, and performing inverse distance transformation according to the correlation values of all the pixel points of the edge image and the transformation distances corresponding to the pixel points to obtain transformed image data; and generating a depth map according to the transformed image data.
Wherein, an edge filtering method can be used to process each image frame $I_i$ of the camera to obtain the edge information of each pixel, giving the edge information $E$ of the whole image; the image containing the edge information $E$ is the edge image. Then, an inverse distance transform (Inverse Distance Transform, IDT) is applied to each edge image $E_i$. To compute the inverse distance transform, assume a set of points $\{P_1, P_2, \ldots, P_n\}$ and a query point $Q$, where each point has an associated value $\{V_1, V_2, \ldots, V_n\}$ indicating its attribute or strength. The formula of the inverse distance transform is as follows:

$$f(Q) = \frac{\sum_{i=1}^{n} V_i \,/\, d(Q, P_i)}{\sum_{i=1}^{n} 1 \,/\, d(Q, P_i)}$$

where $d(Q, P_i)$ is the distance from $Q$ to $P_i$.
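A minimal numpy sketch of this inverse-distance weighting follows; the distance power p and the small epsilon guard against zero distances are assumptions the text does not specify.

```python
import numpy as np

def inverse_distance_value(q: np.ndarray, pts: np.ndarray, vals: np.ndarray,
                           p: float = 1.0, eps: float = 1e-9) -> float:
    """f(Q) = (sum V_i / d(Q,P_i)^p) / (sum 1 / d(Q,P_i)^p)."""
    d = np.linalg.norm(pts - q, axis=1) ** p + eps   # distances d(Q, P_i)
    w = 1.0 / d                                      # inverse-distance weights
    return float(np.sum(w * vals) / np.sum(w))
```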
optionally, in step 102, the processing each image in the image data to obtain edge information of each pixel includes:
21. converting each image in the image data into a grayscale image;
22. and setting each pixel point in the gray image as the maximum absolute value of the pixel difference value between the pixel point and 8 adjacent pixels, and obtaining the edge information.
Through the steps, the edge information of the gray image can be obtained.
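A minimal numpy sketch of this 8-neighbour computation is shown below; replicating border pixels via edge padding is an assumption, since the text does not say how the image border is handled.

```python
import numpy as np

def edge_info(gray: np.ndarray) -> np.ndarray:
    """Each pixel becomes the max absolute difference to its 8 neighbours."""
    gray = gray.astype(np.float32)
    g = np.pad(gray, 1, mode="edge")          # replicate border pixels
    h, w = gray.shape
    e = np.zeros((h, w), dtype=np.float32)
    for du in (-1, 0, 1):
        for dv in (-1, 0, 1):
            if du == 0 and dv == 0:
                continue                       # skip the pixel itself
            shifted = g[1 + du:1 + du + h, 1 + dv:1 + dv + w]
            e = np.maximum(e, np.abs(gray - shifted))
    return e
```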
Optionally, the adaptive calibration model includes a strategy network and a value network, and performing deep reinforcement learning training with the probability-weighted random policy search PPO algorithm on the processed image data and point cloud data to obtain a trained adaptive calibration model includes:
31. Initializing parameters of a strategy network and a value network;
32. inputting the processed image data and point cloud data into a strategy network and a value network respectively, wherein the value network outputs a value estimation, the strategy network outputs a current driving strategy and a strategy advantage estimation, and the strategy function estimation and the value function estimation are used for evaluating strategy performance and decision value;
33. and calculating a loss function of the strategy by using the parameters of the current strategy network, and updating the strategy network according to the loss function to obtain an updated strategy network.
The strategy network selects appropriate actions according to the current state and the value estimate; the value network evaluates the quality of the current strategy according to the reward signal and the next state vector of the reward function. The parameters of the strategy network and the value network are the defined parameters of the neural networks. First, the parameters of the strategy network and the value network are initialized in preparation for subsequent training. Then, using the parameters of the current strategy network, a set of vehicle driving trajectories, together with the image data and point cloud data recorded during navigation, is collected as training data, capturing the interaction between environment and actions. The advantage estimate and the value estimate are computed from the collected trajectories and used to evaluate strategy performance and decision value; specifically, the advantage estimate can be obtained through an advantage function based on the information entropy of the current strategy and the target strategy parameters of the PPO algorithm, and the value function is output by the value neural network.
Wherein, the loss function can be expressed as follows:

$$L^{CPI}(\theta) = \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}\, \hat{A}_t\right] = \hat{\mathbb{E}}_t\left[r_t(\theta)\, \hat{A}_t\right]$$

In the above formula, $\hat{A}$ is the advantage function, $L^{CPI}$ represents the loss value, $\theta$ represents the strategy network parameters, $r_t(\theta)$ is the probability ratio between the new and old strategies, $a$ is an action, and $s$ is a state.
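As the text notes, the advantage estimate $\hat{A}_t$ is typically obtained with generalized advantage estimation; a minimal numpy sketch is given below, where gamma and lam are common default values rather than figures from the patent.

```python
import numpy as np

def gae_advantages(rewards: np.ndarray, values: np.ndarray,
                   gamma: float = 0.99, lam: float = 0.95) -> np.ndarray:
    """GAE over one trajectory; `values` holds V(s_0)..V(s_T), one more than rewards."""
    deltas = rewards + gamma * values[1:] - values[:-1]   # TD residuals
    adv = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running       # discounted sum of residuals
        adv[t] = running
    return adv
```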
Optionally, the calculating the loss function of the policy by using the parameters of the current policy network, updating the policy network according to the loss function, and obtaining an updated policy network includes:
a1, calculating a gradient of strategy network update by using parameters of the current strategy network;
a2, applying the target strategy parameters of the strategy network to generate a new driving strategy;
a3, calculating an importance sampling ratio between the current running strategy and the new running strategy;
a4, calculating a loss function of the strategy based on the importance sampling ratio and a clipping threshold value;
and A5, updating the strategy network by using the loss function, and iteratively executing an updating operation step according to the updated strategy network until the preset iteration times or the convergence of the loss function are reached, so as to obtain the updated strategy network.
Wherein, a plurality of strategy iteration steps A1-A5 can be executed. First, the gradient of the strategy network update is computed using the neural network parameters under the current strategy, in order to optimize strategy performance. Then the neural network parameters under the target strategy are applied to the strategy network. Further, the importance sampling ratio between the current strategy and the old strategy is computed to evaluate the effect of the strategy update. The loss function of the strategy is computed based on the importance sampling ratio and the clipping threshold, guiding the update direction of the strategy. The strategy network is then updated using the loss function to gradually improve strategy performance and learning effect. Step A1 is repeated until a preset number of iterations is reached or the loss function converges. The trained strategy network is returned for subsequent decision making and autonomous navigation; it outputs the deep reinforcement learning actions, i.e., the calibration parameters, which may include the camera's intrinsic and extrinsic parameters and the laser radar's point cloud density and accuracy, thereby realizing synchronized calibration between the laser radar and the camera.
Wherein the importance sampling ratio and the clipped loss are calculated by the following formulas, respectively:

$$r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}$$

$$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\big(r_t(\theta)\,\hat{A}_t,\; \mathrm{clip}(r_t(\theta),\, 1-\varepsilon,\, 1+\varepsilon)\,\hat{A}_t\big)\right]$$

In the above formulas, $L^{CLIP}$ represents the loss value, $\theta$ represents the strategy network parameters, $\mathrm{clip}$ is the clipping function, $r_t(\theta)$ is the importance sampling ratio, and $\varepsilon$ may be 0.2.
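A minimal numpy sketch of steps A3 and A4 (ratio, clipping, loss) follows, using $\varepsilon = 0.2$ as above; taking log-probabilities as inputs and negating the objective so it can be minimized are implementation assumptions.

```python
import numpy as np

def ppo_clip_loss(logp_new: np.ndarray, logp_old: np.ndarray,
                  adv: np.ndarray, eps: float = 0.2) -> float:
    """Clipped PPO surrogate: -E[min(r*A, clip(r, 1-eps, 1+eps)*A)]."""
    ratio = np.exp(logp_new - logp_old)                   # importance sampling ratio (A3)
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv  # clipped term (A4)
    return float(-np.mean(np.minimum(unclipped, clipped)))  # loss used for the update in A5
```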
Optionally, in step 104, the corrected calibration parameters include a rotation matrix and a translation vector, and the optimizing of the corrected calibration parameters to obtain the optimized calibration parameters includes:
41. constructing an extrinsic registration matrix according to the rotation matrix and the translation vector;
42. projecting the point cloud data onto image data of the camera according to the external reference registration matrix to form a depth map, wherein the depth map comprises projection of the point cloud data onto the image data;
43. searching a parameter space, adjusting the corrected calibration parameters according to the searched parameters until errors between the projection of the point cloud data onto the image data and the characteristics of the image data meet preset conditions, and outputting the adjusted calibration parameters.
The extrinsic registration matrix is generally represented by a 4x4 homogeneous transformation matrix, in which the rotation matrix and the translation component determine the transformation between the coordinate systems. The extrinsic registration matrix is represented as follows:

$$T_r = \begin{bmatrix} R & t \\ \mathbf{0}^{\top} & 1 \end{bmatrix}$$

where $T_r$ is the extrinsic registration matrix, a 4x4 homogeneous transformation matrix; $R$ is a 3x3 rotation matrix describing the rotation from the point cloud coordinate system to the image coordinate system; and $t$ is a 3x1 translation vector representing the translation from the point cloud coordinate system to the image coordinate system.
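For illustration, a numpy sketch of building $T_r$ and projecting lidar points into the image is given below; the 3x3 camera intrinsic matrix K is an assumed input that the surrounding text implies but does not write out.

```python
import numpy as np

def project_points(points_lidar: np.ndarray, R: np.ndarray,
                   t: np.ndarray, K: np.ndarray):
    """Project Nx3 lidar points into pixel coordinates via T_r = [R | t]."""
    T_r = np.eye(4)
    T_r[:3, :3] = R                      # rotation: point cloud frame -> camera frame
    T_r[:3, 3] = t                       # translation between the two frames
    pts_h = np.c_[points_lidar, np.ones(len(points_lidar))]  # homogeneous Nx4
    cam = (T_r @ pts_h.T)[:3]            # 3xN points in the camera frame
    in_front = cam[2] > 0                # keep points in front of the camera
    uvw = K @ cam[:, in_front]
    return (uvw[:2] / uvw[2]).T, cam[2, in_front]   # Nx2 pixel coords, depths
```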
By introducing the extrinsic registration matrix, point cloud coordinates are transformed from the point cloud coordinate system to the image coordinate system, realizing the projection of the laser point cloud onto the image. By adjusting the parameters of the rotation matrix and the translation vector, the projection position and pose of the point cloud on the image can be controlled, achieving accurate matching between the point cloud and the image. Adjusting the rotation matrix and translation vector parameters is precisely the process of optimizing the calibration parameters: by finding the parameter combination that minimizes the error between the point cloud projection and the image features, the optimized calibration parameters are obtained.
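The patent does not fix a concrete search procedure; as one hedged possibility, a greedy random local search over small rotation and translation perturbations could look like the sketch below, where the step size and iteration count are assumptions.

```python
import numpy as np

def random_search_extrinsics(R0, t0, error_fn, iters: int = 500, sigma: float = 0.01):
    """Greedy random search: keep a perturbation whenever it lowers error_fn(R, t)."""
    best_R, best_t = R0.copy(), t0.copy()
    best_err = error_fn(best_R, best_t)
    for _ in range(iters):
        w = np.random.normal(scale=sigma, size=3)    # random axis-angle vector
        W = np.array([[0, -w[2], w[1]],
                      [w[2], 0, -w[0]],
                      [-w[1], w[0], 0]])             # skew-symmetric matrix of w
        R_try = best_R @ (np.eye(3) + W)             # first-order rotation update
        t_try = best_t + np.random.normal(scale=sigma, size=3)
        err = error_fn(R_try, t_try)
        if err < best_err:
            best_R, best_t, best_err = R_try, t_try, err
    return best_R, best_t, best_err
```

In practice the perturbed rotation should be re-orthonormalized (for example via SVD); this is omitted here for brevity.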
Optionally, in step 43, the method further includes:
44. performing laser radar and camera calibration according to the adjusted rotation matrix and translation vector;
45. acquiring the calibrated point cloud data of the edge points corresponding to the laser radar and the image data subjected to the distance inverse transformation corresponding to the camera;
46. Calculating the matching degree value of the point cloud data and the image data edge by adopting an objective function according to the point cloud data of the edge point and the image data subjected to the inverse distance transformation;
47. and when the matching degree value is smaller than a first preset value, determining that the error between the projection of the point cloud data onto the image data and the characteristic of the image data meets a preset condition.
Wherein, the objective function is of the following form:

$$J_C = \sum_{f}\sum_{p \in X_f} X_f(p)\cdot E_f(u_p, v_p)$$

where $(u_p, v_p)$ are the pixel coordinates to which point $p$ projects under the calibration $C$, $X_f(p)$ is the edge distance value of point $p$ in frame $f$, and $E_f$ is the inverse-distance-transformed edge image of frame $f$. By traversing all 3D points in all image frames and point cloud data, the objective function is calculated; it describes how well the depth discontinuities of the laser returns match the image edges under the calibration $C$. It can then be determined whether the error between the projection of the point cloud data onto the image data and the features of the image data meets the preset condition.
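A numpy sketch of accumulating such a matching-degree score from the edge-point values and the inverse-distance-transformed edge image follows; it reuses pixel coordinates from a projection step like the project_points sketch above, and all names are illustrative.

```python
import numpy as np

def edge_match_score(edge_values: np.ndarray, pixel_coords: np.ndarray,
                     edge_image: np.ndarray) -> float:
    """Sum of (point edge distance) * (image edge strength at its projection)."""
    h, w = edge_image.shape
    u = np.round(pixel_coords[:, 0]).astype(int)     # column index
    v = np.round(pixel_coords[:, 1]).astype(int)     # row index
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # drop out-of-image points
    return float(np.sum(edge_values[valid] * edge_image[v[valid], u[valid]]))
```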
Example 2
Consistent with the foregoing embodiment, as shown in fig. 2, an embodiment of the present application provides a laser radar and camera calibration method, and fig. 2 is a flowchart of the steps of the method, where the method includes:
201. in a simulation environment provided by the simulation platform, a driving scene of the vehicle is simulated.
202. And controlling the running of the vehicle in the simulation environment, so that the vehicle in the simulation environment can navigate according to the set path.
203. And collecting point cloud data of a laser radar and image data of a camera on the vehicle in the automatic driving and driving process of the vehicle.
204. Preprocessing the point cloud data to obtain processed point cloud data; preprocessing the image data to obtain processed image data;
205. Inputting the processed image data and point cloud data into an adaptive calibration model and performing deep reinforcement learning training with the probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model; the adaptive calibration model corrects calibration parameters and outputs the corrected calibration parameters.
206. And constructing an external reference registration matrix according to the rotation matrix and the translation vector.
207. And projecting the point cloud data onto the image data of the camera according to the external reference registration matrix to form a depth map, wherein the depth map comprises projection of the point cloud data onto the image data.
208. Searching a parameter space, adjusting the corrected calibration parameters according to the searched parameters until errors between the projection of the point cloud data onto the image data and the characteristics of the image data meet preset conditions, and outputting the adjusted calibration parameters.
209. And calibrating the laser radar and the camera according to the adjusted rotation matrix and translation vector.
210. And acquiring the calibrated point cloud data of the edge point corresponding to the laser radar and the image data subjected to distance inverse transformation corresponding to the camera.
211. And calculating the matching degree value of the point cloud data and the image data edge by adopting an objective function according to the point cloud data of the edge point and the image data subjected to the inverse distance transformation.
212. And when the matching degree value is smaller than a first preset value, determining that the error between the projection of the point cloud data onto the image data and the characteristic of the image data meets a preset condition.
By introducing the extrinsic registration matrix, point cloud coordinates are transformed from the point cloud coordinate system to the image coordinate system, realizing the projection of the laser point cloud onto the image. By adjusting the parameters of the rotation matrix and the translation vector, the projection position and pose of the point cloud on the image can be controlled, achieving accurate matching between the point cloud and the image. Adjusting the rotation matrix and translation vector parameters is precisely the process of optimizing the calibration parameters: by finding the parameter combination that minimizes the error between the point cloud projection and the image features, the optimized calibration parameters are obtained.
Example 3
As shown in fig. 3, which is a schematic structural diagram of a laser radar and camera calibration system provided in an embodiment of the present application, an embodiment of the present application provides a laser radar and camera calibration system including a data acquisition module 100, a data warehouse 200, and an operation platform 300, where the operation platform includes a data preprocessing module 310, a training operation module 320, and an optimization module 330:
the data acquisition module 100 is used for acquiring point cloud data of a laser radar on a vehicle and image data of a camera;
a data warehouse 200 for storing point cloud data of the lidar and image data of the camera;
the data preprocessing module 310 is configured to preprocess the point cloud data to obtain processed point cloud data; preprocessing the image data to obtain processed image data;
the training operation module 320 is configured to input the processed image data and point cloud data into the adaptive calibration model and perform deep reinforcement learning training with the probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model; the adaptive calibration model corrects calibration parameters and outputs the corrected calibration parameters;
The optimization module 330 is configured to optimize the corrected calibration parameters to obtain the optimized calibration parameters.
Optionally, the data acquisition module includes a simulation platform 110, a simulation automobile control module 120, and a data collection module 130. In acquiring the point cloud data of the laser radar and the image data of the camera on the vehicle, the simulation platform is used to provide a simulation environment that simulates the driving scene of the vehicle; the simulation automobile control module is used to control the running of the vehicle in the simulation environment so that the vehicle can navigate according to a set path; and the data collection module is used to collect the point cloud data of the vehicle's laser radar and the image data of its camera.
Optionally, in the aspect of preprocessing the point cloud data to obtain processed point cloud data, the data preprocessing module 310 is configured to: taking each point in the point cloud data as a current point, respectively determining coordinate differences between the current point and two adjacent points, determining a maximum value between the coordinate differences between the current point and the two adjacent points and 0, and taking a point corresponding to the maximum value as an edge reference point corresponding to the current point;
And screening edge points from edge reference points corresponding to all points in the point cloud data according to preset filtering conditions to obtain point cloud data of the edge points.
Optionally, in the aspect of preprocessing the image data to obtain processed image data, the data preprocessing module 310 is configured to: processing each image in the image data to obtain edge information of each pixel point and obtaining an edge image containing the edge information;
the method for performing the distance inverse transformation on the edge image specifically comprises the following steps: determining the transformation distance between each pixel point of the edge image and the point to be transformed, and performing inverse distance transformation according to the correlation values of all the pixel points of the edge image and the transformation distances corresponding to the pixel points to obtain transformed image data; and generating a depth map according to the transformed image data.
Optionally, in the aspect of processing each image in the image data to obtain edge information of each pixel, the data preprocessing module 310 is configured to: converting each of the images in the image data into a grayscale image;
and setting each pixel point in the gray image as the maximum absolute value of the pixel difference value between the pixel point and 8 adjacent pixels, and obtaining the edge information.
Optionally, the corrected calibration parameters include a rotation matrix and a translation vector, and the optimization module 330 is configured to:
constructing an extrinsic registration matrix according to the rotation matrix and the translation vector;
projecting the point cloud data onto image data of the camera according to the external reference registration matrix to form a depth map, wherein the depth map comprises projection of the point cloud data onto the image data;
searching a parameter space, adjusting the corrected calibration parameters according to the searched parameters until errors between the projection of the point cloud data onto the image data and the characteristics of the image data meet preset conditions, and outputting the adjusted calibration parameters.
Optionally, the optimization module 330 is further configured to:
performing laser radar and camera calibration according to the adjusted rotation matrix and translation vector;
acquiring the calibrated point cloud data of the edge points corresponding to the laser radar and the image data subjected to the distance inverse transformation corresponding to the camera;
calculating the matching degree value of the point cloud data and the image data edge by adopting an objective function according to the point cloud data of the edge point and the image data subjected to the inverse distance transformation;
And when the matching degree value is smaller than a first preset value, determining that the error between the projection of the point cloud data onto the image data and the characteristic of the image data meets a preset condition.
In this way, the point cloud data of the laser radar and the image data of the camera on the vehicle are acquired; the point cloud data are preprocessed to obtain processed point cloud data, and the image data are preprocessed to obtain processed image data; the processed image data and point cloud data are input into the adaptive calibration model, and deep reinforcement learning training is performed with the probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model, which corrects the calibration parameters and outputs the corrected calibration parameters; the corrected calibration parameters are then optimized to obtain the optimized calibration parameters. The system corrects the calibration parameters of the laser radar and the camera, improving the accuracy and stability of parameter calibration and effectively addressing inaccurate calibration caused by vibration.
Example 4
As shown in fig. 4, which is a schematic diagram of a laser radar and camera calibration device according to an embodiment of the present application, the device includes a processor 410 and a memory 420, and one or more programs, where the one or more programs are stored in the memory. The memory 420 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory. The memory 420 is used to store a set of program codes, and the processor 410 is used to call the program codes stored in the memory 420 to perform the following operations:
Acquiring point cloud data of a laser radar and image data of a camera on a vehicle;
preprocessing the point cloud data to obtain processed point cloud data; preprocessing the image data to obtain processed image data;
inputting the processed image data and point cloud data into an adaptive calibration model and performing deep reinforcement learning training with the probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model, where the adaptive calibration model corrects calibration parameters and outputs the corrected calibration parameters;
and optimizing the corrected calibration parameters to obtain the optimized calibration parameters.
In one possible example, the processor 410 is specifically configured to, in the acquiring of the point cloud data of the lidar and the image data of the camera on the vehicle:
in a simulation environment provided by a simulation platform, simulating a driving scene of a vehicle;
controlling the running of the vehicle in the simulation environment, so that the vehicle in the simulation environment can navigate according to a set path;
and collecting point cloud data of a laser radar and image data of a camera on the vehicle in the automatic driving and driving process of the vehicle.
In one possible example, in the aspect of preprocessing the point cloud data to obtain processed point cloud data, the processor 410 is specifically configured to:
taking each point in the point cloud data as a current point, respectively determining coordinate differences between the current point and two adjacent points, determining a maximum value between the coordinate differences between the current point and the two adjacent points and 0, and taking a point corresponding to the maximum value as an edge reference point corresponding to the current point;
and screening edge points from edge reference points corresponding to all points in the point cloud data according to preset filtering conditions to obtain point cloud data of the edge points.
In one possible example, the processor 410 is specifically configured to:
processing each image in the image data to obtain edge information of each pixel point and obtaining an edge image containing the edge information;
the method for performing the distance inverse transformation on the edge image specifically comprises the following steps: determining the transformation distance between each pixel point of the edge image and the point to be transformed, and performing inverse distance transformation according to the correlation values of all the pixel points of the edge image and the transformation distances corresponding to the pixel points to obtain transformed image data; and generating a depth map according to the transformed image data.
In one possible example, the processor 410 is specifically configured to, in terms of processing each image in the image data to obtain edge information of each pixel:
converting each of the images in the image data into a grayscale image;
and setting each pixel point in the gray image as the maximum absolute value of the pixel difference value between the pixel point and 8 adjacent pixels, and obtaining the edge information.
In one possible example, where the corrected calibration parameters include a rotation matrix and a translation vector, the processor 410 is specifically configured to:
constructing an extrinsic registration matrix according to the rotation matrix and the translation vector;
projecting the point cloud data onto image data of the camera according to the external reference registration matrix to form a depth map, wherein the depth map comprises projection of the point cloud data onto the image data;
searching a parameter space, adjusting the corrected calibration parameters according to the searched parameters until errors between the projection of the point cloud data onto the image data and the characteristics of the image data meet preset conditions, and outputting the adjusted calibration parameters.
In one possible example, the processor 410 is further configured to:
performing laser radar and camera calibration according to the adjusted rotation matrix and translation vector;
acquiring the calibrated point cloud data of the edge points corresponding to the laser radar and the image data subjected to the distance inverse transformation corresponding to the camera;
calculating the matching degree value of the point cloud data and the image data edge by adopting an objective function according to the point cloud data of the edge point and the image data subjected to the inverse distance transformation;
and when the matching degree value is smaller than a first preset value, determining that the error between the projection of the point cloud data onto the image data and the characteristic of the image data meets a preset condition.
In this way, the point cloud data of the laser radar and the image data of the camera on the vehicle are acquired; the point cloud data are preprocessed to obtain processed point cloud data, and the image data are preprocessed to obtain processed image data; the processed image data and point cloud data are input into the adaptive calibration model, and deep reinforcement learning training is performed with the probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model, which corrects the calibration parameters and outputs the corrected calibration parameters; the corrected calibration parameters are then optimized to obtain the optimized calibration parameters. The device corrects the calibration parameters of the laser radar and the camera, improving the accuracy and stability of parameter calibration and effectively addressing inaccurate calibration caused by vibration.
The embodiment of the application also provides a computer readable storage medium, which stores program instructions that when executed by a computer cause the computer to execute the laser radar and camera calibration method.
Embodiments of the present application also provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in any of the lidar and camera calibration methods described in the embodiments of the present application. The computer program product may be a software installation package.
Although the present application has been described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the figures, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus (device), or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program may be stored/distributed on a suitable medium supplied together with or as part of other hardware, but may also take other forms, such as via the Internet or other wired or wireless telecommunication systems.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer implemented process, such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the present application as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the present application. It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (15)

1. A laser radar and camera calibration method is characterized by comprising the following steps:
acquiring point cloud data of a laser radar and image data of a camera on a vehicle;
preprocessing the point cloud data to obtain processed point cloud data; preprocessing the image data to obtain processed image data;
inputting the processed image data and the point cloud data into an adaptive calibration model and performing deep reinforcement learning training with a probability-weighted random policy search PPO algorithm to obtain a trained adaptive calibration model, wherein the adaptive calibration model corrects calibration parameters and outputs corrected calibration parameters;
and optimizing the corrected calibration parameters to obtain optimized calibration parameters.
2. The method for calibrating a laser radar and a camera according to claim 1, wherein the step of acquiring the point cloud data of the laser radar and the image data of the camera on the vehicle comprises the steps of:
in a simulation environment provided by a simulation platform, simulating a driving scene of a vehicle;
controlling the running of the vehicle in the simulation environment, so that the vehicle in the simulation environment can navigate according to a set path;
and collecting point cloud data of the laser radar and image data of the camera on the vehicle during autonomous driving.
3. The method for calibrating a laser radar and a camera according to claim 1, wherein the preprocessing the point cloud data to obtain processed point cloud data includes:
taking each point in the point cloud data as a current point, determining the coordinate differences between the current point and each of its two adjacent points, determining the maximum among those coordinate differences and 0, and taking the point corresponding to that maximum as the edge reference point of the current point;
and screening edge points from edge reference points corresponding to all points in the point cloud data according to preset filtering conditions to obtain point cloud data of the edge points.
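A minimal sketch of the edge-reference-point rule of claim 3, assuming the compared coordinate is the per-point range along one scan ring and assuming an illustrative score threshold as the preset filtering condition:

```python
# A sketch under stated assumptions: per-point range values along one ring,
# and a simple threshold as the "preset filtering condition" of claim 3.
import numpy as np

def edge_points(ranges, threshold=0.5):
    """ranges: 1-D array of per-point range values along one scan ring."""
    prev_diff = ranges[:-2] - ranges[1:-1]   # neighbor before minus current
    next_diff = ranges[2:] - ranges[1:-1]    # neighbor after minus current
    # Larger of the two neighbor differences, clamped at zero.
    score = np.maximum(np.maximum(prev_diff, next_diff), 0.0)
    # Keep points whose edge score passes the filtering condition.
    return np.where(score > threshold)[0] + 1  # indices into the ring
```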
4. The method for calibrating a laser radar and a camera according to claim 3, wherein the preprocessing the image data to obtain processed image data includes:
processing each image in the image data to obtain edge information of each pixel point and obtaining an edge image containing the edge information;
performing an inverse distance transform on the edge image, which specifically comprises: determining the transform distance between each pixel point of the edge image and the point to be transformed, performing the inverse distance transform according to the values of all pixel points of the edge image and the transform distances corresponding to those pixel points to obtain transformed image data, and generating a depth map from the transformed image data.
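A minimal sketch of the inverse distance transform of claim 4, assuming an exponential decay with distance to the nearest strong edge pixel; the decay factor and the edge-strength threshold are illustrative choices, not values from the claim:

```python
# A sketch: pixels near an edge keep a high value that decays with
# distance to the nearest edge; gamma and the 0.5 threshold are assumed.
import numpy as np
from scipy.ndimage import distance_transform_edt

def inverse_distance_transform(edge_map, gamma=0.98):
    """edge_map: 2-D float array of per-pixel edge strength."""
    # Distance from every pixel to the nearest strong edge pixel
    # (foreground = non-edge pixels, zeros at edge pixels).
    dist = distance_transform_edt(edge_map < edge_map.max() * 0.5)
    # Smooth, edge-anchored response: maximal at edges, decaying away.
    return edge_map.max() * np.power(gamma, dist)
```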
5. The method for calibrating a laser radar and a camera according to claim 4, wherein the processing each image in the image data to obtain edge information of each pixel includes:
converting each of the images in the image data into a grayscale image;
and setting each pixel point in the grayscale image to the maximum absolute difference between that pixel and its 8 neighboring pixels, thereby obtaining the edge information.
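A minimal sketch of the per-pixel edge measure of claim 5, with grayscale conversion assumed done; edge-replicated padding at the image borders is an assumption the claim does not specify:

```python
# A sketch: each pixel becomes the largest absolute grayscale difference
# to its 8 neighbors; border pixels use edge-replicated padding (assumed).
import numpy as np

def edge_info(gray):
    """gray: 2-D float array (grayscale image)."""
    g = np.pad(gray, 1, mode='edge')
    edges = np.zeros_like(gray)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = g[1 + dy : 1 + dy + gray.shape[0],
                        1 + dx : 1 + dx + gray.shape[1]]
            edges = np.maximum(edges, np.abs(shifted - gray))
    return edges
```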
6. The method for calibrating a laser radar and a camera according to claim 5, wherein the corrected calibration parameters include a rotation matrix and a translation vector, and the optimizing the corrected calibration parameters to obtain the optimized calibration parameters includes:
constructing an extrinsic registration matrix according to the rotation matrix and the translation vector;
projecting the point cloud data onto the image data of the camera according to the extrinsic registration matrix to form a depth map, wherein the depth map comprises the projection of the point cloud data onto the image data;
searching the parameter space and adjusting the corrected calibration parameters according to the searched parameters until the error between the projection of the point cloud data onto the image data and the features of the image data meets a preset condition, and outputting the adjusted calibration parameters.
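A minimal sketch of claim 6's extrinsic registration and projection, assuming a known pinhole intrinsic matrix K (not part of the claim's corrected parameters); the parameter-space search around the corrected R and t would evaluate this projection under the error test of claim 9:

```python
# A sketch: build the 4x4 extrinsic registration matrix from R and t,
# then project lidar points into pixel coordinates with intrinsics K.
import numpy as np

def project_points(points, R, t, K):
    """points: (N, 3) lidar points; R: (3, 3); t: (3,); K: (3, 3)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t                   # extrinsic registration matrix
    cam = (T[:3, :3] @ points.T).T + T[:3, 3]    # transform into camera frame
    cam = cam[cam[:, 2] > 0]                     # keep points in front of camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division -> pixels
    return uv, cam[:, 2]                         # pixel coords and depths
```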
7. The method for calibrating the laser radar and the camera according to claim 2, wherein the adaptive calibration model comprises a strategy network and a value network, and the inputting the processed image data and point cloud data into the adaptive calibration model and performing deep reinforcement learning training using the PPO algorithm with probability-weighted random strategy search to obtain the trained adaptive calibration model comprises the following steps:
initializing parameters of a strategy network and a value network;
inputting the processed image data and point cloud data into the strategy network and the value network respectively, wherein the value network outputs a value estimate and the strategy network outputs the current driving strategy and a strategy advantage estimate; the strategy and value estimates are used to evaluate strategy performance and decision value;
and calculating a loss function of the strategy by using the parameters of the current strategy network, and updating the strategy network according to the loss function to obtain an updated strategy network.
8. The method for calibrating a laser radar and a camera according to claim 7, wherein the calculating a loss function of the strategy using the parameters of the current strategy network and updating the strategy network according to the loss function to obtain an updated strategy network includes:
calculating a gradient of the strategy network update using the parameters of the current strategy network;
applying the target strategy parameters of the strategy network to generate a new driving strategy;
calculating an importance sampling ratio between the current driving strategy and a new driving strategy;
calculating a loss function of the strategy based on the importance sampling ratio and a clipping threshold;
and updating the strategy network using the loss function, and iteratively executing the updating operation with the updated strategy network until a preset number of iterations is reached or the loss function converges, to obtain the updated strategy network.
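A minimal sketch of the clipped PPO update described in claims 7 and 8, assuming log-probabilities from the new and current strategy networks; the clip threshold of 0.2 is a standard PPO default, not a value fixed by the claims:

```python
# A sketch of the PPO clipped surrogate loss of claim 8.
import torch

def ppo_strategy_loss(logp_new, logp_old, advantage, clip_eps=0.2):
    # Importance sampling ratio between the new and the current strategy.
    ratio = torch.exp(logp_new - logp_old)
    # Clipped surrogate objective; clip_eps is the clipping threshold.
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # Negated because optimizers minimize; PPO maximizes the surrogate.
    return -torch.min(unclipped, clipped).mean()
```

The strategy network would be updated by minimizing this loss batch by batch, stopping after a preset iteration count or once the loss converges, as claim 8 specifies.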
9. The lidar and camera calibration method of claim 6, further comprising:
performing laser radar and camera calibration according to the adjusted rotation matrix and translation vector;
acquiring the calibrated point cloud data of the edge points corresponding to the laser radar and the inverse-distance-transformed image data corresponding to the camera;
calculating a matching degree value between the edges of the point cloud data and the image data using an objective function, according to the point cloud data of the edge points and the inverse-distance-transformed image data;
and when the matching degree value is smaller than a first preset value, determining that the error between the projection of the point cloud data onto the image data and the features of the image data meets the preset condition.
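A minimal sketch of the matching-degree test of claim 9, reusing project_points from the claim 6 sketch above; treating one minus the normalized sum of inverse-distance-transform values at the projected pixels as the matching degree value (so that smaller means better alignment, consistent with the claim's threshold comparison) is an assumption:

```python
# A sketch: project calibrated edge points and score edge alignment
# against the inverse-distance-transformed image (idt_image).
import numpy as np

def matching_degree(edge_points_3d, R, t, K, idt_image):
    uv, _ = project_points(edge_points_3d, R, t, K)  # claim 6 sketch above
    h, w = idt_image.shape
    px = uv.astype(int)
    inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    hits = idt_image[px[inside, 1], px[inside, 0]]
    # Normalized mismatch: 0 when every projected point lands on a strong edge.
    return 1.0 - hits.sum() / (idt_image.max() * max(len(hits), 1))
```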
10. A lidar and camera calibration system, comprising:
the data acquisition module is used for acquiring point cloud data of the laser radar on the vehicle and image data of the camera;
the data preprocessing module is used for preprocessing the point cloud data to obtain processed point cloud data; preprocessing the image data to obtain processed image data;
the training operation module is used for inputting the processed image data and point cloud data into the adaptive calibration model and performing deep reinforcement learning training using a PPO algorithm with probability-weighted random strategy search to obtain a trained adaptive calibration model, wherein the adaptive calibration model corrects calibration parameters and outputs the corrected calibration parameters;
and the optimization module is used for carrying out optimization treatment on the corrected calibration parameters to obtain the optimized calibration parameters.
11. The lidar and camera calibration system of claim 10, wherein the data acquisition module comprises a simulation platform, a simulated-vehicle control module, and a data collection module; the simulation platform is configured to provide a simulation environment that simulates the driving scenario of the vehicle; the simulated-vehicle control module is configured to control the running of the vehicle in the simulation environment so that the vehicle in the simulation environment navigates along a set path; and the data collection module is configured to collect the point cloud data of the lidar and the image data of the camera on the vehicle.
12. The lidar and camera calibration system of claim 11, wherein the data preprocessing module is configured to: take each point in the point cloud data as a current point, determine the coordinate differences between the current point and each of its two adjacent points, determine the maximum among those coordinate differences and 0, and take the point corresponding to that maximum as the edge reference point of the current point;
and screen edge points from the edge reference points corresponding to all points in the point cloud data according to preset filtering conditions, to obtain point cloud data of the edge points.
13. The lidar and camera calibration system according to any of claims 10 to 12, wherein the data preprocessing module is configured to: process each image in the image data to obtain edge information of each pixel point and obtain an edge image containing the edge information;
and perform an inverse distance transform on the edge image, which specifically comprises: determining the transform distance between each pixel point of the edge image and the point to be transformed, performing the inverse distance transform according to the values of all pixel points of the edge image and the transform distances corresponding to those pixel points to obtain transformed image data, and generating a depth map from the transformed image data.
14. A laser radar and camera calibration device, comprising a processor and a memory, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps of the laser radar and camera calibration method of any one of claims 1 to 9.
15. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the laser radar and camera calibration method of any one of claims 1 to 9.
CN202311224509.XA 2023-09-20 2023-09-20 Laser radar and camera calibration method, system, equipment and storage medium Pending CN117372536A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311224509.XA CN117372536A (en) 2023-09-20 2023-09-20 Laser radar and camera calibration method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311224509.XA CN117372536A (en) 2023-09-20 2023-09-20 Laser radar and camera calibration method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117372536A true CN117372536A (en) 2024-01-09

Family

ID=89399248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311224509.XA Pending CN117372536A (en) 2023-09-20 2023-09-20 Laser radar and camera calibration method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117372536A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118135243A (en) * 2024-05-07 2024-06-04 陕西优耐特机械有限公司 Light path stability detection method of laser cutting machine
CN118135243B (en) * 2024-05-07 2024-07-16 陕西优耐特机械有限公司 Light path stability detection method of laser cutting machine

Similar Documents

Publication Publication Date Title
JP6869562B2 A method of tracking an object using a CNN including a tracking network, and a device using it
CN112348846A (en) Artificial intelligence driven ground truth generation for object detection and tracking on image sequences
CN108805016B (en) Head and shoulder area detection method and device
Drews et al. Aggressive deep driving: Model predictive control with a CNN cost model
CN110197106A (en) Object designation system and method
EP3671555A1 (en) Object shape regression using wasserstein distance
KR102695522B1 (en) Method and device to train image recognition model and to recognize image
CN110986945B (en) Local navigation method and system based on semantic altitude map
CN112329645B (en) Image detection method, device, electronic equipment and storage medium
US11703596B2 (en) Method and system for automatically processing point cloud based on reinforcement learning
CN117372536A (en) Laser radar and camera calibration method, system, equipment and storage medium
CN115063447A (en) Target animal motion tracking method based on video sequence and related equipment
CN116258744A (en) Target tracking method based on visible light, infrared and laser radar data fusion
Ostankovich et al. Application of cyclegan-based augmentation for autonomous driving at night
US20230350418A1 (en) Position determination by means of neural networks
US20240051557A1 (en) Perception fields for autonomous driving
CN111008622B (en) Image object detection method and device and computer readable storage medium
Bhaggiaraj et al. Deep Learning Based Self Driving Cars Using Computer Vision
CN117232545A (en) Path planning method based on deep learning road environment perception
US11657506B2 (en) Systems and methods for autonomous robot navigation
CN113954836A (en) Segmented navigation lane changing method and system, computer equipment and storage medium
JP6800901B2 (en) Object area identification device, object area identification method and program
US10373004B1 (en) Method and device for detecting lane elements to plan the drive path of autonomous vehicle by using a horizontal filter mask, wherein the lane elements are unit regions including pixels of lanes in an input image
JP2021012446A (en) Learning device and program
CN117611788B (en) Dynamic truth value data correction method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination