WO2022179549A1 - Calibration method and apparatus, computer device, and storage medium - Google Patents

Calibration method and apparatus, computer device, and storage medium

Info

Publication number
WO2022179549A1
WO2022179549A1 PCT/CN2022/077622
Authority
WO
WIPO (PCT)
Prior art keywords
target
point cloud
point
category
information
Prior art date
Application number
PCT/CN2022/077622
Other languages
English (en)
French (fr)
Inventor
马涛
刘知正
闫国行
李怡康
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2022179549A1 publication Critical patent/WO2022179549A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds

Definitions

  • the present disclosure relates to the technical field of automatic driving, and in particular, to a calibration method, apparatus, computer equipment and storage medium.
  • in automatic driving, a lidar and a camera are often used jointly to identify and detect objects.
  • in the related art, extrinsic parameter calibration is mostly performed by using the lidar and the camera to extract and match features on specific markers placed in advance, such as a checkerboard calibration board.
  • Embodiments of the present disclosure provide at least one calibration method, apparatus, computer device, and storage medium.
  • an embodiment of the present disclosure provides a calibration method, including: acquiring a target image and point cloud data in the same scene; determining target pixel points of a target object with a straight-line shape in the target image, and acquiring a target point cloud of the target object with the straight-line shape in the point cloud data, where the target object is an element associated with a road; and, based on the target pixel points and the target point cloud, determining extrinsic parameter calibration information between the photographing device that captures the target image and the radar that collects the point cloud data.
  • the target objects include multiple target objects located in different planes or in the same plane.
  • determining the target pixel points of the target object having a straight-line shape in the target image includes: acquiring a target category of the target object having the straight-line shape; determining, based on the target image, the category of each pixel point in the target image; and, based on the category of each pixel point, selecting the target pixel points corresponding to the target category from the pixel points.
  • the target category includes a ground straight line category; selecting the target pixel points corresponding to the target category based on the category of each pixel point includes: determining, based on the category of each pixel point, the pixel points belonging to the ground category; and determining, based on the pixel brightness information of each of the ground-category pixel points, the pixel points belonging to the ground straight line category, and taking those pixel points as the target pixel points.
  • the target category includes a road pole category; selecting the target pixel points corresponding to the target category based on the category of each pixel point includes: determining, based on the category of each pixel point, the pixel points belonging to the road pole category, and taking those pixel points as the target pixel points.
  • acquiring the target point cloud of the target object with the straight-line shape in the point cloud data includes: determining, based on the point cloud data, the ground feature information of each point in the point cloud data; and determining, based on the ground feature information of each point, the target point cloud of the target object with the straight-line shape from the point cloud data.
  • determining the target point cloud of the target object with the straight-line shape from the point cloud data includes: determining, based on the ground feature information of each point, the points belonging to the ground; and determining, based on the reflection intensity information of each of the points on the ground, the points belonging to the ground straight line category, and taking those points as the target point cloud of the target object with the straight-line shape.
  • determining the target point cloud of the target object with the straight-line shape from the point cloud data includes: based on the height of the first object to which each point not belonging to the ground belongs, screening out, from the points not belonging to the ground, the points included in second objects whose height is greater than a first preset value; and screening out, from the points included in the second objects, the points included in the target object with the straight-line shape, and taking the screened-out points as the target point cloud.
  • screening out the points included in the target object having the straight-line shape from the points included in the second objects includes: determining the distance from each second object to the road surface boundary; and taking the points included in the second objects whose distance is less than a second preset value as the target point cloud of the target object with the straight-line shape, where the category of the target object with the straight-line shape is the road pole category.
  • screening out the points included in the target object having the straight-line shape from the points included in the second objects includes: determining the number of points included in each second object; and taking the points included in the second objects whose number is smaller than a third preset value as the target point cloud of the target object having the straight-line shape.
  • determining the extrinsic parameter calibration information between the photographing device that captures the target image and the radar that collects the point cloud data includes: determining, based on the target pixel points, a first equation of the target object corresponding to the target pixel points on the two-dimensional plane; determining, based on the target point cloud, a second equation of the target object corresponding to the target point cloud in three-dimensional space; and determining, based on the first equation and the second equation, the extrinsic parameter calibration information between the photographing device and the radar.
  • determining the extrinsic parameter calibration information between the photographing device that captures the target image and the radar that collects the point cloud data includes: selecting, from the first equations, the first equations corresponding to a preset number of target objects; determining the second equations matching the first equations corresponding to each selected target object; determining, based on the preset number of first equations and the second equations matching them, initial extrinsic parameter information between the photographing device and the radar; and determining the extrinsic parameter calibration information based on the initial extrinsic parameter information, the target pixel points, and the target point cloud.
  • selecting the first equations corresponding to a preset number of target objects from the first equations includes: selecting the first equations corresponding to a first number of target objects of the road pole category and the first equations corresponding to a second number of target objects of the ground straight line category; determining the second equations matching the first equations corresponding to each selected target object includes: determining the second equations matching the first equations corresponding to each selected target object of the ground straight line category, and the second equations matching the first equations corresponding to each selected target object of the road pole category.
  • determining the extrinsic parameter calibration information based on the initial extrinsic parameter information, the target pixel points, and the target point cloud includes: converting the target image into a binarized image; determining, based on the binarized image, the target pixel points, the initial extrinsic parameter information, and the target point cloud, matching degree information between the target point cloud and the target pixel points; and adjusting the initial extrinsic parameter information based on the matching degree information to obtain the extrinsic parameter calibration information.
  • determining the matching degree information between the target point cloud and the target pixel points includes: determining, based on the binarized image, the distance information between each pixel in the binarized image and the target object to which the target pixel points belong; determining, based on the initial extrinsic parameter information, from the pixels in the binarized image, the matching pixel that matches each point in the target point cloud; taking the distance information between the matching pixel and the target object to which the target pixel points belong as the distance information of the point in the target point cloud corresponding to that matching pixel; and determining the matching degree information based on the distance information of each point in the target point cloud and the initial extrinsic parameter information.
  • adjusting the initial extrinsic parameter information based on the matching degree information to obtain the extrinsic parameter calibration information includes: adjusting the initial extrinsic parameter information based on the matching degree information until the matching degree corresponding to the matching degree information is maximized, and taking the adjusted initial extrinsic parameter information at the maximum matching degree as the extrinsic parameter calibration information.
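The matching-degree computation described above is similar in spirit to chamfer matching: a distance transform of the binarized line image assigns each projected lidar point a cost equal to its distance to the nearest line pixel, and the extrinsics are adjusted to minimize that cost. A minimal Python sketch, where the function name, intrinsic matrix `K`, and all variable names are illustrative assumptions rather than the disclosure's implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def matching_cost(binary_img, points_3d, K, R, t):
    """Chamfer-style matching cost: mean distance from each projected
    lidar point to the nearest 'line' pixel in the binarized image."""
    # Distance transform: for each pixel, distance to the nearest line
    # pixel. EDT measures distance to zeros, so invert the line mask.
    dist_map = distance_transform_edt(binary_img == 0)

    # Project 3-D points into the image with the candidate extrinsics (R, t).
    cam = (R @ points_3d.T + t.reshape(3, 1)).T        # N x 3, camera frame
    cam = cam[cam[:, 2] > 0]                           # keep points in front
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                        # perspective divide

    h, w = binary_img.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    if not inside.any():
        return np.inf
    return dist_map[v[inside], u[inside]].mean()
```

Maximizing the matching degree then amounts to searching over perturbations of (R, t) for the candidate with the lowest cost.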
  • an embodiment of the present disclosure further provides a calibration device, including: a first acquisition module, configured to acquire a target image and point cloud data in the same scene; a second acquisition module, used to determine that the target image has target pixel points of a target object in a straight line shape, and obtain a target point cloud of the target object with the straight line shape in the point cloud data; the target object is an element associated with a road; a determination module is used for The target pixel point and the target point cloud determine the external parameter calibration information between the shooting device that captures the target image and the radar that captures the point cloud data.
  • an optional implementation of the present disclosure further provides a computer device including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the steps in the method of the above first aspect, or of any possible implementation of the first aspect, are executed.
  • an optional implementation of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run, the processor executes the steps in the method of the above first aspect, or of any possible implementation of the first aspect.
  • an optional implementation of the present disclosure further provides a computer program stored in a storage medium; when the computer program is run, the processor executes the steps in the method of the above first aspect, or of any possible implementation of the first aspect.
  • FIG. 1 shows a flowchart of a calibration method provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of a binarized image provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of projecting a target point cloud into a binarized image provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of a calibration device provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
  • references herein to "a plurality" or "several" mean two or more.
  • "And/or" which describes the association relationship of the associated objects, means that there can be three kinds of relationships, for example, A and/or B, which can mean that A exists alone, A and B exist at the same time, and B exists alone.
  • the character "/" generally indicates that the associated objects are an "or" relationship.
  • the present disclosure provides a calibration method, device, computer equipment, and storage medium. Since the target objects are all elements associated with the road and are features of the road itself, the target pixels and target point clouds corresponding to the target objects are used to determine the extrinsic parameter calibration information; the entire calibration process can be completed automatically without additional calibration objects, which not only saves the time for setting up calibration objects but also improves the efficiency and speed of extrinsic calibration.
  • the external parameter calibration is performed on the target object with a straight shape on the road itself, which can effectively reduce the amount of calculation and improve the calibration speed and the accuracy of the external parameter calibration.
  • CRF (Conditional Random Fields): a discriminative undirected probabilistic graphical model that, given one set of input random variables, outputs the conditional probability distribution of another set of random variables;
  • RANSAC (Random Sample Consensus): an algorithm whose input is a set of observed data (often containing heavy noise or invalid points), a parametric model used to explain the observed data, and some confidence parameters; it achieves its goal by repeatedly selecting random subsets of the data;
  • Line Hough Transform: a feature-detection technique that can be used to identify straight-line features in objects.
  • the execution subject of the calibration method provided by the embodiments of the present disclosure is generally a computer device with a certain computing capability, such as a terminal device.
  • the terminal device can be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
  • the calibration method may be implemented by the processor invoking computer-readable instructions stored in the memory.
  • the calibration method provided by the embodiment of the present disclosure will be described below by taking the execution subject as a computer device as an example.
  • as shown in the flowchart of FIG. 1, the calibration method provided in an embodiment of the present disclosure may include the following steps:
  • S101 Acquire target image and point cloud data in the same scene.
  • the scene may be a road scene
  • the target image may be a road image captured by a photographing device mounted on a vehicle
  • the point cloud data may be road point cloud data acquired by a radar (eg, lidar) mounted on the same vehicle.
  • the target image corresponds to the camera coordinate system and includes multiple objects, such as road poles, roads, lane lines, vehicles, and trees. Each object in the target image can be composed of a number of pixels, and each pixel corresponds to a camera coordinate in the camera coordinate system.
  • the point cloud data corresponds to the radar coordinate system and includes a number of points that form different objects; each point corresponds to a radar coordinate in the radar coordinate system.
  • the same scene can ensure that the objects corresponding to all pixels in the target image and the objects corresponding to all point clouds included in the point cloud data are the same.
  • for example, the target image and the point cloud data can be acquired at the same time and at the same location; target pixel points and a target point cloud representing the same object can then be used to determine the extrinsic parameter calibration information.
  • S102 Determine the target pixel points of the target object with a straight line shape in the target image, and acquire the target point cloud of the target object with a straight line shape in the point cloud data, where the target object is an element associated with a road.
  • the target object is selected both from the multiple objects included in the target image and from the multiple objects corresponding to the point cloud data, and the selected target object is the same object in the real world.
  • a target object with a straight-line shape has a clear, easily extracted contour, and there is a straight-line equation that can express the line on which the target object lies. Therefore, the embodiments of the present disclosure use objects with a straight-line shape as target objects.
  • the target object may be an element associated with the road, which belongs to the characteristics of the road itself.
  • the target object with a straight-line shape may be road poles arranged on both sides of the road and/or lane lines marked on the road.
  • all pixels in the target image can be classified by semantic segmentation to determine the category of each pixel; then, according to the category of each pixel, the target pixels composing the target object can be determined among all the pixels.
  • the target pixel consistent with the target category can be accurately obtained, which improves the accuracy of the obtained target pixel.
  • the RANSAC algorithm can be used to fit the points on the ground in the point cloud data to obtain a fitted ground plane. Using the fitted ground plane, the entire point cloud data can be divided into two parts: points belonging to the ground and points not belonging to the ground. The target point cloud of the target object with a straight-line shape can then be determined from these two parts.
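A minimal RANSAC plane-fit sketch in NumPy; the thresholds, iteration count, and function name are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, inlier_thresh=0.1, seed=0):
    """Fit a plane n.x + d = 0 with RANSAC and split the cloud into
    ground / non-ground points."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and compute the plane through them.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        # Inliers: points within inlier_thresh of the candidate plane.
        dist = np.abs(points @ n + d)
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, points[best_inliers], points[~best_inliers]
```

On a road scene the ground plane dominates the cloud, so the largest consensus set found this way is the ground; everything else becomes the non-ground part used later for pole extraction.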
  • S103 Based on the target pixel point and the target point cloud, determine the external parameter calibration information between the shooting device that shoots the target image and the radar that collects the point cloud data.
  • the calibration method provided by the embodiments of the present disclosure can be applied in the technical field of automatic driving; in specific implementations, it can be applied to extrinsic parameter calibration between the photographing device corresponding to the camera coordinate system and the radar corresponding to the radar coordinate system.
  • the radar coordinate system where the target point cloud is located belongs to the coordinate system in the three-dimensional space.
  • the dimensions corresponding to the target pixel point and the target point cloud are different. Therefore, the corresponding relationship between the target pixel point and the target point cloud in different dimensions can be calibrated, and the external parameter calibration information expressing the corresponding relationship can be determined.
  • the calibration information can also reflect the correspondence between the shooting device that captures the target image and the radar that collects the point cloud data, that is, the external parameter calibration information between the shooting device that captures the target image and the radar that collects the point cloud data can be determined.
  • the camera coordinates of the target pixels can be used to determine the first equation, on the two-dimensional plane, of the straight line on which the target object corresponding to the target pixels lies; the radar coordinates of the target point cloud can then be used to determine the second equation, in three-dimensional space, of the straight line on which the target object corresponding to the target point cloud lies; further, the extrinsic parameter calibration information can be determined using the first equation and the second equation.
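As an illustration, both the first equation (a 2-D line through the target pixels) and the second equation (a 3-D line through the target point cloud) can be obtained from a principal-component fit, which returns the line as a centroid plus a unit direction. This sketch assumes the pixels or points of one target object have already been grouped together; the function name is an assumption:

```python
import numpy as np

def fit_line(points):
    """Least-squares line fit via PCA: returns (centroid, unit direction).
    Works for 2-D pixel coordinates and 3-D lidar points alike."""
    c = points.mean(axis=0)
    # The line direction is the principal right-singular vector of the
    # centered coordinates.
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[0]
```

The same routine applied to pixel coordinates yields the first equation and applied to lidar coordinates yields the second, giving the 2-D/3-D line pairs from which initial extrinsics can be solved.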
  • since the target objects are all elements associated with the road and are features of the road itself, the target pixel points and target point cloud corresponding to the target objects are used to determine the extrinsic parameter calibration information; the entire calibration process can be completed automatically, which not only saves the time for setting up calibration objects but also improves the efficiency and speed of extrinsic calibration, effectively reduces the amount of calculation, and improves the calibration speed and accuracy.
  • the following steps can be used to determine the target pixel point:
  • Step 1: Obtain the target category of the target object with a straight-line shape;
  • Step 2: Based on the target image, determine the category of each pixel in the target image;
  • Step 3: Based on the category of each pixel, screen out the target pixels corresponding to the target category.
  • the category of each pixel can be determined, and then the target pixels corresponding to the target category can be screened out according to the category of each pixel and the target category.
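The screening in Steps 1-3 reduces to a mask lookup once a segmentation model has produced a per-pixel class map; the class ids and function name below are illustrative assumptions:

```python
import numpy as np

# Illustrative class ids for a semantic segmentation output.
GROUND, LANE_LINE, ROAD_POLE, OTHER = 0, 1, 2, 3

def select_target_pixels(class_map, target_category):
    """Return the (row, col) coordinates of pixels whose predicted
    category equals the target category."""
    rows, cols = np.nonzero(class_map == target_category)
    return np.stack([rows, cols], axis=1)
```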
  • the target objects include multiple target objects located in different planes or in the same plane.
  • the different planes may be road planes or planes perpendicular to the road plane, and the like. Performing external parameter calibration based on multiple target objects in different planes or in the same plane can effectively improve the calibration accuracy.
  • the target category includes road pole category and ground straight line category.
  • the ground straight line category can include lane line category, and the target objects belonging to the ground straight line category can be parallel to each other, and the target objects belonging to the road pole category can be perpendicular to the plane where the road is located.
  • Different target categories have different ways of determining target pixels. The following describes the two different target categories respectively.
  • when the target category is the ground straight line category, the target image includes road information. All pixels in the target image have corresponding pixel brightness information, and the pixel brightness information of pixels belonging to the ground straight line category differs from that of pixels at other positions on the road. Therefore, after the pixels belonging to the ground category are determined, the pixels belonging to the ground straight line category can be determined according to the pixel brightness information of each of the ground-category pixels, and those pixels can be used as the target pixels.
  • each pixel has corresponding pixel brightness information, and pixels belonging to the ground straight line category, such as the lane lines on a road, are noticeably brighter. Using the pixel brightness information of each pixel therefore allows accurate extraction of the target pixels belonging to the ground straight line category, which helps improve the accuracy of extrinsic calibration.
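A minimal sketch of this brightness-based selection within the ground mask; the threshold value and function name are illustrative assumptions:

```python
import numpy as np

def lane_pixels_from_brightness(gray, ground_mask, brightness_thresh=200):
    """Among ground-category pixels, keep only those bright enough to be
    lane-line paint; returns a boolean mask of ground-straight-line pixels."""
    return ground_mask & (gray >= brightness_thresh)
```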
  • when the target category is the road pole category, the pixels belonging to the road pole category can be directly determined and then used as the target pixels.
  • road poles are objects with a straight-line shape among the elements associated with the road; extracting the target pixel points belonging to the road pole category can improve the accuracy and speed of extrinsic calibration.
  • the pixel points belonging to the road pole category are extracted based on the category of each pixel point, which can improve the accuracy of target pixel point extraction and further improve the accuracy of external parameter calibration.
  • using two types of target pixel points to perform external parameter calibration can realize external parameter calibration in the dimensions of different planes, and further improve the accuracy of the obtained external parameter calibration results.
  • CRF technology can be used to densify the pixels of each target object, yielding smoother and more complete pixels; this increases the density of the obtained target pixels, which helps improve the accuracy of the subsequently determined first equation corresponding to the target pixels.
  • the extraction of target pixels is thus completed; the whole process takes about 5 s, which in turn improves the speed of subsequently determining the extrinsic parameter calibration information.
  • the external parameter calibration information can be determined based on the target point cloud and the target pixel point.
  • the ground feature information of each point can be determined by analyzing each point in the acquired point cloud data; the ground feature information indicates whether a point in the point cloud data belongs to the ground category. The process of obtaining the target point cloud is then determined according to the ground feature information of each point and the target category of the target object.
  • each point in the point cloud data has corresponding reflection intensity information, and the reflection intensity of points belonging to the ground straight line category differs from that of points at other positions on the ground; for ground-straight-line points it is relatively high. Therefore, the points belonging to the ground straight line category can be accurately extracted, which helps improve the accuracy of extrinsic calibration.
  • specifically, the RANSAC algorithm can be used to analyze each point in the point cloud data to determine its ground feature information, which indicates whether the point belongs to the ground. The ground feature information can then be used to fit the points belonging to the ground category and obtain a fitted ground plane. Using the fitted ground plane, the entire point cloud data can be divided into two parts: points on the ground and points not on the ground. The specific steps for obtaining the target point cloud can then be determined according to the target category of the target object.
  • the points on the ground may be determined based on the ground feature information of each point; then, according to the reflection intensity information of each of the points on the ground, the points with higher reflection intensity are taken as points belonging to the ground straight line category, and those points are taken as the target point cloud of the target object with the straight-line shape; for example, the target object corresponding to such points can be a lane line with a straight shape on the road.
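A sketch of this intensity-based selection over the ground points; the normalized intensity range, threshold, and function name are illustrative assumptions:

```python
import numpy as np

def ground_line_points(ground_points, intensities, intensity_thresh=0.7):
    """From ground points with per-point reflection intensities in [0, 1],
    keep the highly reflective ones (e.g. lane-line paint) as the target
    point cloud of the ground-straight-line category."""
    keep = intensities >= intensity_thresh
    return ground_points[keep]
```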
  • when the target category is the road pole category, the target point cloud can be determined according to the height of the object to which each point not on the ground belongs (this object is also referred to as the first object).
  • in addition to the point clouds of objects such as road poles and lane lines, the point cloud data may also include point clouds of other objects with linear shapes, such as office buildings, houses, and stone obstacles.
  • based on the height of the first object to which each point not on the ground belongs, a preset height threshold (the first preset value) can be used to screen out, from the points not belonging to the ground, the second objects whose height is greater than the first preset value, together with the points they include.
  • a second preset value can then be used to keep the points included in the second objects whose distance to the road boundary is smaller than the second preset value; the distance between a second object and the road boundary can be represented by the horizontal distance from the center point of the second object to the road boundary.
  • in this way, the points included in objects far from the road boundary, such as office buildings and houses, can be filtered out: when roads are built, locations far from buildings such as office buildings and houses are preferred, so those objects are far from the road boundary, whereas road poles are installed close to the road boundary so that they can indicate driving. Therefore, the second preset value can be used to keep the points included in objects close to the road boundary (such as road poles) as the target point cloud of the target object with the straight-line shape.
  • each point in the point cloud data belongs to an object with a certain height. Therefore, the first preset value can be used to screen out objects of sufficient height, and the secondary screening can then accurately obtain the points of the target object with a straight shape, such as the points of a target object of the road pole category.
  • a third preset value on the number of points can be used to filter them further.
  • road poles are mostly slender, while objects such as houses or office buildings are wider and longer, so the number of points included in each road pole is much smaller than the number of points included in a house or an office building. The third preset value can therefore be used to retain the points of objects whose point count is less than the third preset value.
  • in this way, among the objects whose height is greater than the first preset value and whose distance to the road boundary is small, the objects that include a larger number of points are filtered out; the objects whose point count is smaller than the third preset value are regarded as target objects with a straight-line shape, and their points are taken as the target point cloud.
  • the first preset value, the second preset value and the third preset value are used to screen the objects that do not belong to the ground three times, which can improve the accuracy of the obtained target point cloud.
  • the filtered target point cloud belonging to the ground straight line category and the target point cloud belonging to the road pole category can be combined to obtain point cloud data that only includes the points of the target objects.
  • the process of screening out the target point cloud can be completed within 1 s.
  • alternatively, only one or more of the first preset value, the second preset value and the third preset value may be used to filter the points that do not belong to the ground when determining the target point cloud, which is not limited here.
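The three-stage screening with the first, second and third preset values might look like the following sketch; the per-object dictionary representation and the concrete threshold values are assumptions made for illustration:

```python
import numpy as np

def filter_pole_points(objects, h_min=2.0, d_max=3.0, n_max=500):
    """Three-stage screening of non-ground objects for road-pole candidates.

    `objects` is a list of dicts with keys 'points' (N x 3 array),
    'height', and 'dist_to_boundary'.  The three thresholds correspond to
    the first, second and third preset values in the text; the numeric
    defaults are illustrative, not values from the disclosure.
    """
    selected = []
    for obj in objects:
        pts = np.asarray(obj['points'], dtype=float)
        if obj['height'] <= h_min:            # first preset value: keep tall objects
            continue
        if obj['dist_to_boundary'] >= d_max:  # second: keep objects near the road boundary
            continue
        if len(pts) >= n_max:                 # third: poles are slender, so few points
            continue
        selected.append(pts)
    return np.vstack(selected) if selected else np.empty((0, 3))
```

A building fails the point-count test even when it is tall and near the road, while a pole passes all three stages.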
  • if the target objects where the determined target pixels are located lie in different planes, the determined target point cloud also needs to lie in different planes and have the same target categories as the target pixels; if the target objects where the determined target pixels are located lie in the same plane, the determined target point cloud likewise needs to lie in the same plane and have the same target categories as the target pixels.
  • the external parameter calibration information between the shooting device that shoots the target image and the radar that collects the point cloud data can be determined according to the following steps:
  • Step 1: based on the target pixels, determine the first equation of the target object corresponding to the target pixels on the two-dimensional plane;
  • Step 2: based on the target point cloud, determine the second equation of the target object corresponding to the target point cloud in three-dimensional space;
  • Step 3: based on the first equation and the second equation, determine the external parameter calibration information between the shooting device that captures the target image and the radar that collects the point cloud data.
  • the first equation and the second equation can accurately represent the position information of the target object in different coordinate systems; using the first equation in the two-dimensional plane and the second equation in the three-dimensional space can simplify the steps of determining the external parameter calibration information, reduce the amount of calculation, and improve the calculation speed of the external parameter calibration information.
  • the straight-line Hough transform can be used to determine the first equation of the line where the target pixels included in each target object are located in the camera coordinate system.
  • for each target object, the RANSAC algorithm can be used to fit the target point cloud included in that target object and determine the second equation of the line where the target point cloud is located in the radar coordinate system.
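A RANSAC line fit in three-dimensional space, given here as a hedged sketch (the iteration count and inlier threshold are illustrative), shows how a "second equation", parameterized as a point `p0` and a unit direction `d`, can be obtained from a target point cloud:

```python
import numpy as np

def ransac_line_3d(points, n_iters=200, thresh=0.05, rng=None):
    """Fit a 3-D line (point p0 + direction d) to a point cloud with RANSAC.

    Two random points propose a candidate line; points within `thresh` of
    it vote as inliers; the best candidate is refined by a least-squares
    fit (SVD/PCA of the inliers).
    """
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        diff = pts - pts[i]
        # distance of each point to the candidate line
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    sel = pts[best_inliers]
    p0 = sel.mean(axis=0)
    # principal direction of the inliers = least-squares line direction
    _, _, vt = np.linalg.svd(sel - p0)
    return p0, vt[0]
```

The direction is only defined up to sign, so consumers should compare `|d|` component-wise or fix an orientation convention.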
  • the process of external parameter calibration can be simplified to the process of solving the external parameter calibration information based on the first equation on the given two-dimensional plane and the second equation in the three-dimensional space.
  • the external parameter calibration information may be an external parameter matrix representing the pose relationship between the photographing device and the radar, and the external parameter matrix includes a rotation matrix and a translation matrix.
  • the external parameter calibration information can be determined according to the following process.
  • the target image can be converted into a binarized image by using the target pixels, wherein the value of the target pixels corresponding to each target object in the binarized image is 1, and the value of the other pixels in the target image is 0.
  • the binarized image can accurately reflect the position of the target object in the target image. Figure 2 is a schematic diagram of a binarized image provided by an embodiment of the present disclosure, in which the white part represents the target objects corresponding to the target pixels and the black part represents the other objects corresponding to the other pixels.
  • the target pixels may also be directly extracted, and then a binarized image of a preset size may be determined based on the target pixels, which is not limited here.
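Constructing the binarized image from the target pixels can be as simple as the following sketch; the (row, column) pixel format is an assumption of this example:

```python
import numpy as np

def binarize(image_shape, target_pixels):
    """Build the binarized image: target pixels get value 1, all others 0.

    `target_pixels` is an iterable of (row, col) coordinates of the pixels
    belonging to the target objects (lane lines, road poles).
    """
    img = np.zeros(image_shape, dtype=np.uint8)
    rows, cols = zip(*target_pixels)
    img[list(rows), list(cols)] = 1
    return img
```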
  • a preset number of target objects can be selected, and the first equations corresponding to these target objects can be used as the target first equations; then, based on the selected target objects, a preset number of matching target objects can be selected from the target objects corresponding to the point cloud data, and the second equations corresponding to them can be used as the matching second equations; the matched first and second equations can then be used to determine the initial external parameter information.
  • the initial external parameter information can roughly reflect the pose relationship between the shooting device and the radar and has a certain accuracy; however, in order to obtain more accurate external parameter calibration information, the binarized image needs to be processed further.
  • an inverse distance transform is performed on the binarized image, and each pixel included in the binarized image is assigned a value that characterizes the distance information between that pixel and the target object to which the target pixels belong.
  • the value of the distance information of each target pixel is 1; then, according to the distance between each of the other pixels and the target object to which the target pixels belong, each other pixel is given different distance information.
  • the distance to the target object can be taken as the distance between the other pixel and the target pixel at the same height on the line where the first equation corresponding to the target object is located.
  • the value of each other pixel can be determined according to its distance to the closest target pixel. For example, the value of other pixels at a distance of 1 from a target pixel is 0.999, and the value of other pixels at a distance of 2 is 0.998, and so on: the farther from the target pixel, the smaller the corresponding value.
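The inverse distance assignment described above might be sketched as follows; a brute-force nearest-target search is used for clarity (production code would use a dedicated distance-transform routine), and the 0.001 step is chosen to reproduce the 0.999 / 0.998 example values:

```python
import numpy as np

def inverse_distance_image(binary_img, step=0.001):
    """Inverse distance transform of the binarized image.

    Target pixels keep the value 1; every other pixel is assigned
    1 - step * (Euclidean distance to the nearest target pixel), clamped
    at 0.  With step = 0.001 this matches the 0.999 / 0.998 example in
    the text for distances of 1 and 2 pixels.
    """
    ys, xs = np.nonzero(binary_img)
    targets = np.stack([ys, xs], axis=1).astype(float)
    h, w = binary_img.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing='ij'),
                    axis=-1).reshape(-1, 2).astype(float)
    # brute-force nearest-target distance; fine for small images
    dists = np.min(np.linalg.norm(grid[:, None, :] - targets[None, :, :],
                                  axis=2), axis=1)
    return np.maximum(1.0 - step * dists, 0.0).reshape(h, w)
```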
  • the initial extrinsic parameter information can be used to project the target point cloud included in the target object corresponding to the successfully matched second equation into the binarized image that has undergone the inverse distance transform, and the matching pixel that matches each point in the target point cloud is determined from the pixels in the binarized image.
  • the distance information between the matching pixel and the target object to which the target pixels belong is used as the distance information of the corresponding point in the target point cloud.
  • the distance information of each point in the target point cloud can reflect the matching degree between the target point cloud and the target pixels; a schematic diagram of the target point cloud projected into the transformed image is provided, in which the gray dots represent the positions of the projected target point cloud.
  • using the binarized image can accurately display the difference between the target pixels and the other pixels; the matching degree information determined based on the binarized image can accurately reflect the degree of overlap between the target point cloud projected onto the binarized image and the target pixels, and this matching degree information can be used to improve the accuracy of the external parameter calibration.
  • the matching degree information between the target point cloud and the target pixels can then be determined: the larger the value corresponding to the matching degree, the more accurate the determined initial external parameter information.
  • by projecting the target point cloud into the binarized image through the initial external parameter information and then combining the distance information between each pixel and the target object corresponding to the target pixels, the degree of overlap between the projected point cloud and the target pixels in the binarized image, and hence the matching degree information, can be accurately determined.
  • the matching degree information may be determined according to the sum of the values corresponding to the distance information of each point in the target point cloud and the initial external parameter information, and the matching degree information may be represented by a specific function.
  • the matching degree information can be expressed by formula (1):
  • J = Σ_{line ∈ {pole, lane}} Σ_p H_line(K(R·p + t))  (1)
  • where J represents the matching degree information; Σ represents the summation of the values corresponding to the distance information of each point in the target point cloud; R represents the rotation matrix in the initial external parameter information; p represents the coordinates of the target point cloud in the radar coordinate system; t represents the translation vector in the initial external parameter information; pole represents the road pole category and lane represents the lane line category; K represents the camera internal parameter matrix; and H_line represents the binarized image after the inverse distance transform.
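Formula (1) can be evaluated directly by projecting each target point with the intrinsics K and the candidate extrinsics (R, t) and summing the inverse-distance values at the projections. This sketch assumes (row, column) image indexing and simply ignores points that project outside the image:

```python
import numpy as np

def matching_degree(H_by_cat, clouds_by_cat, K, R, t):
    """Evaluate formula (1): J = sum over categories (pole, lane) of the
    inverse-distance values H_line at the projections K(R p + t) of the
    target point cloud of that category.
    """
    J = 0.0
    for cat, H in H_by_cat.items():
        pts = np.asarray(clouds_by_cat[cat], dtype=float)
        cam = (R @ pts.T).T + t           # radar -> camera coordinates
        uvw = (K @ cam.T).T               # pinhole projection
        uv = uvw[:, :2] / uvw[:, 2:3]     # perspective division
        h, w = H.shape
        for u, v in uv:
            r, c = int(round(v)), int(round(u))
            if 0 <= r < h and 0 <= c < w:
                J += H[r, c]
    return J
```

Because H decays smoothly away from the target pixels, J is largest when the projected cloud overlaps the target pixels, which is exactly the quantity the optimization maximizes.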
  • a first number of first equations corresponding to target objects of the road pole category and a second number of first equations corresponding to target objects of the ground straight line category can be selected from the target image as the target first equations; here, when the value of the second number is greater than 2, the selected target objects of the ground straight line category can be parallel.
  • then, for the target first equations, the second equations corresponding to the first number of target objects belonging to the road pole category, and the second equations corresponding to the second number of target objects belonging to the ground straight line category, are selected in turn from the target objects corresponding to the target point cloud.
  • the target first equations and the second equations selected each time are then used to determine the corresponding initial external parameter information, and the initial external parameter information is used to project the target point cloud corresponding to the selected second equations into the binarized image that has undergone the inverse distance transform, so as to determine the matching degree information corresponding to that initial external parameter information.
  • the initial external parameter information can be determined by using three first equations and their corresponding second equations.
  • the value of the first quantity can be 1, and the value of the second quantity can be 2.
  • all the first equations and the second equations can also be used to determine the initial external parameter information, which is not limited here.
  • the selected target objects can be parallel; target objects that are parallel to each other reduce mutual interference when matching the target pixels and the target point cloud, and there are many parallel elements in the ground straight line category, which makes such objects easy to select; the same applies when selecting objects of the road pole category.
  • the target first equations can be matched in turn with the second equations of the target objects of the corresponding categories in the target point cloud to obtain several pieces of initial external parameter information and their corresponding matching degree information; the values of the matching degree corresponding to each piece of matching degree information are then compared, the matching degree information with the largest value is used as the final matching degree information, and the initial external parameter information corresponding to it is used as the final initial external parameter information.
  • the first equations and the second equations used in this initial external parameter information are regarded as correctly matched equations.
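The exhaustive matching over candidate assignments can be sketched as below; `solve_and_score` stands in for the solver that computes initial extrinsics from a set of matched equations plus the matching-degree evaluation, and is an assumed interface, not part of the disclosure:

```python
from itertools import permutations

def best_correspondence(eqs_2d, eqs_3d, solve_and_score):
    """Try each assignment of 3-D line equations to the selected 2-D line
    equations, solve the initial extrinsics for that assignment, and keep
    the one with the largest matching degree.

    `solve_and_score` maps a list of (first equation, second equation)
    pairs to (initial_extrinsics, matching_degree).
    """
    best = (None, float('-inf'))
    for cand in permutations(eqs_3d, len(eqs_2d)):
        extr, score = solve_and_score(list(zip(eqs_2d, cand)))
        if score > best[1]:
            best = (extr, score)
    return best
```

The factorial cost is acceptable because only a handful of equations (e.g. one pole and two lane lines) are selected per category.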
  • the matching accuracy of the first equation and the second equation can be improved, and the initial external parameter information with the highest accuracy can be determined, and further, the accuracy of the obtained external parameter calibration information can be improved.
  • when the matching degree is the largest, the matching degree between the target pixels and the point cloud projected into the binarized image is the highest.
  • accordingly, the initial external parameter information used to project the target point cloud into the binarized image is the most accurate.
  • using the most accurate initial external parameter information as the basis for the external parameter calibration information can ensure the accuracy of the external parameter calibration.
  • a preset number of target objects may be selected in the target image, and then the first equations corresponding to these target objects may be determined, so that it is not necessary to determine the equations of all target objects; only the equations corresponding to the selected target objects need to be determined, which reduces the amount of calculation and improves the speed of external parameter calibration.
  • alternatively, the second equations of all the target objects to which the target point cloud belongs can be determined, a preset number of these second equations can be used as the second equations matching the first equations, different initial external parameter information and matching degree information can be determined respectively, and the final initial external parameter information and matching degree information can then be determined; the method of determining the external parameter information and the matching degree information is not limited here.
  • a preset number of second equations can also be selected first, then the first equations matching the second equations can be determined, and finally the initial external parameter information and matching degree information can be determined, which is likewise not limited here.
  • the value of the matching degree corresponding to the matching degree information can be changed by adjusting the initial external parameter information until the value of the matching degree is the largest.
  • the initial extrinsic parameter information obtained by this adjustment is then used as the extrinsic calibration information: the higher the matching degree, the greater its value, and the largest value means the highest degree of matching.
  • since the obtained initial external parameter information already has high precision, adjusting it within a preset range around the coordinates corresponding to the initial external parameter information is sufficient to obtain the external parameter calibration information.
  • the initial external parameter information can be expressed by formula (2):
  • T = [ R  t ; 0  1 ],  t = (x, y, z)^T  (2)
  • where T represents the initial extrinsic parameter information or the adjusted extrinsic calibration information; R represents the rotation matrix in the initial extrinsic parameter information; t represents the translation vector in the initial extrinsic parameter information; and x, y, z are the components of the translation vector in three-dimensional space.
  • the extremum of the matching degree information within the preset range can be taken as the value when the matching degree is the largest: different R and t within the range are used to determine different values of the matching degree.
  • when the value of the matching degree reaches an extremum, the initial external parameter information at this time is determined and used as the final external parameter calibration information T; in this way, external parameter calibration information T with higher accuracy can be obtained and used to accurately calibrate the pose relationship between the photographing device and the radar.
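The extremum search within the preset range can be sketched as a simple grid perturbation of the translation (the rotation can be treated analogously with small angle offsets); the radius and step values are illustrative assumptions:

```python
import numpy as np

def refine_translation(score, t0, radius=0.2, step=0.05):
    """Local search for the extremum of the matching degree.

    Perturbs the translation vector inside a preset range around the
    initial estimate `t0` and keeps the perturbation with the largest
    score.  `score` is any callable mapping a translation vector to the
    matching degree J (an assumed interface for this sketch).
    """
    t0 = np.asarray(t0, dtype=float)
    offsets = np.arange(-radius, radius + 1e-9, step)
    best_t, best_J = t0, score(t0)
    for dx in offsets:
        for dy in offsets:
            for dz in offsets:
                t = t0 + np.array([dx, dy, dz])
                J = score(t)
                if J > best_J:
                    best_J, best_t = J, t
    return best_t, best_J
```

A gradient-free optimizer (e.g. coordinate descent or Nelder-Mead) over all six pose parameters would be the natural next step once the coarse grid has localized the extremum.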
  • the process of determining the external parameter calibration information can be completed in about 1s. Based on this, the entire external parameter calibration process can be automatically completed within 10s, which greatly improves the speed of external parameter calibration.
  • the embodiments of the present disclosure determine the external parameter calibration information based on extracted target objects that are features of the road itself (lane lines and road poles), which saves the time of setting up calibration objects and improves the efficiency and speed of external parameter calibration. In addition, using target objects of the road itself with a straight-line shape for external parameter calibration can effectively reduce the amount of calculation and improve the calibration speed and the accuracy of the external parameter calibration.
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • the embodiments of the present disclosure also provide a calibration apparatus corresponding to the calibration method. Since the principle by which the apparatus solves the problem is similar to that of the above-mentioned calibration method, the implementation of the apparatus can refer to the implementation of the method, and repeated descriptions are omitted.
  • a schematic diagram of a calibration device provided in an embodiment of the present disclosure includes:
  • the first acquisition module 401 is used to acquire the target image and point cloud data in the same scene;
  • the second acquisition module 402 is configured to determine the target pixel point of the target object with the straight line shape in the target image, and acquire the target point cloud of the target object with the straight line shape in the point cloud data; the target object is the element associated with the road;
  • the determining module 403 is configured to determine, based on the target pixel points and the target point cloud, the external parameter calibration information between the shooting device that captures the target image and the radar that captures the point cloud data.
  • the target object includes multiple target objects located in different planes or in the same plane, wherein the different planes may be a road plane or a plane perpendicular to the road plane, or the like.
  • the second obtaining module 402 is configured to obtain the target category of the target object having a straight line shape
  • the target pixel point corresponding to the target category is selected from each pixel point.
  • the target category includes a ground straight line category
  • the second obtaining module 402 is configured to determine the pixels belonging to the ground category based on the category of each pixel;
  • a pixel point belonging to the ground straight line category is determined, and the pixel point belonging to the ground straight line category is used as the target pixel point.
  • the target category includes a road pole category
  • the second obtaining module 402 is configured to determine the pixel points belonging to the road pole category based on the category of each pixel point, and use the pixel points belonging to the road pole category as the target pixel point.
  • the second obtaining module 402 is configured to determine, based on the point cloud data, ground feature information of each point in the point cloud data;
  • a target point cloud of the target object having the straight line shape is determined from the point cloud data.
  • the second obtaining module 402 is configured to determine a point belonging to the ground based on the ground feature information of each point;
  • a point belonging to the category of straight lines on the ground is determined, and the points belonging to the category of straight lines on the ground are used as the target point cloud of the target object having the shape of the straight line.
  • the second obtaining module 402 is configured to determine a point that does not belong to the ground based on the ground feature information of each point;
  • the points included in the target object having the straight line shape are filtered out from the points included in the second object, and the filtered points included in the target object having the straight line shape are used as the target point cloud.
  • the second obtaining module 402 is configured to determine the distance from each of the second objects to the road boundary;
  • the points included in the second object whose distance is smaller than the second preset value are used as the target point cloud of the target object with the straight line shape; wherein the category of the target object with the straight line shape is a road pole category.
  • the second obtaining module 402 is configured to determine the number of points included in each of the second objects
  • the points included in the second object whose number is smaller than the third preset value are used as the target point cloud of the target object having the straight line shape.
  • the determining module 403 is configured to, based on the target pixel point, determine the first equation of the target object corresponding to the target pixel point on the two-dimensional plane;
  • the external parameter calibration information between the photographing device that photographs the target image and the radar that acquires the point cloud data is determined.
  • the determining module 403 is configured to select a first equation corresponding to a preset number of target objects from the first equation;
  • the external parameter calibration information is determined based on the initial external parameter information, the target pixel point and the target point cloud.
  • the preset number includes a first number and a second number
  • the determining module 403 is configured to select, from the first equation, the first equation corresponding to the target objects of the first number of road pole categories and the first equation corresponding to the target objects of the second number of ground straight lines. equation;
  • the determining module 403 is configured to convert the target image into a binarized image based on the target pixels;
  • based on the binarized image, the target pixels, the initial external parameter information and the target point cloud, the matching degree information between the target point cloud and the target pixels is determined;
  • based on the matching degree information, the initial external parameter information is adjusted to obtain the external parameter calibration information.
  • the determining module 403 is configured to determine, based on the binarized image, distance information between each pixel in the binarized image and the target object to which the target pixel belongs ;
  • the matching degree information is determined based on the distance information of each point in the target point cloud and the initial external parameter information.
  • the determining module 403 is configured to adjust the initial external parameter information based on the matching degree information until the matching degree corresponding to the matching degree information is the largest, and the matching degree is the largest
  • the adjusted initial external parameter information at the time is used as the external parameter calibration information.
  • An embodiment of the present disclosure further provides a computer device.
  • a schematic structural diagram of a computer device provided by an embodiment of the present disclosure includes:
  • a processor 51 and a memory 52; the memory 52 stores machine-readable instructions executable by the processor 51, and the processor 51 is configured to execute the machine-readable instructions stored in the memory 52. When the machine-readable instructions are executed by the processor 51, the processor 51 performs the following steps: acquiring the target image and point cloud data in the same scene; determining the target pixels of the target object with a straight-line shape in the target image, and acquiring the target point cloud of the target object with the straight-line shape in the point cloud data, the target object being an element associated with the road; and determining, based on the target pixels and the target point cloud, the external parameter calibration information between the shooting device that captures the target image and the radar that collects the point cloud data.
  • the above-mentioned memory 52 includes an internal memory 521 and an external memory 522; the internal memory 521 is used to temporarily store operation data in the processor 51 and data exchanged with the external memory 522, such as a hard disk, and the processor 51 exchanges data with the external memory 522 through the internal memory 521.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the calibration method described in the foregoing method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • the computer program product of the calibration method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program codes, and the instructions included in the program codes can be used to execute the steps of the calibration method described in the above method embodiments. Reference may be made to the foregoing method embodiments, and details are not described herein again.
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage medium includes: a U disk, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media that can store program codes.


Abstract

The present disclosure provides a calibration method, apparatus, computer device and storage medium, wherein the method includes: acquiring a target image and point cloud data in the same scene; determining target pixels of a target object with a straight-line shape in the target image, and acquiring a target point cloud of the target object with the straight-line shape in the point cloud data, the target object being an element associated with a road; and determining, based on the target pixels and the target point cloud, external parameter calibration information between the shooting device that captures the target image and the radar that collects the point cloud data. The embodiments of the present disclosure use target objects of the road itself that have a straight-line shape for external parameter calibration, which can effectively reduce the amount of calculation and improve the calibration speed and the accuracy of the external parameter calibration.

Description

A calibration method, apparatus, computer device and storage medium

Cross-reference to related applications

This application claims priority to Chinese patent application No. 2021102188956, filed with the Chinese Patent Office on February 26, 2021, the entire contents of which are incorporated into the present disclosure by reference.

Technical field

The present disclosure relates to the technical field of automatic driving, and in particular to a calibration method, apparatus, computer device and storage medium.

Background

In the field of automatic driving, a lidar and a camera are often used jointly to identify and detect objects. To guarantee the accuracy of object identification and detection, precise external parameter calibration of the lidar and the camera is required. The process of external parameter calibration mostly consists of performing feature extraction and feature matching, through the lidar and the camera, on specific markers placed in advance, such as a checkerboard calibration board, and then calibrating the external parameters between the lidar and the camera based on the feature matching results.

However, the process of placing specific markers in the above scheme is cumbersome, inflexible and time-consuming, which affects the speed and efficiency of external parameter calibration.

Summary

The embodiments of the present disclosure provide at least a calibration method, apparatus, computer device and storage medium.

In a first aspect, an embodiment of the present disclosure provides a calibration method, including: acquiring a target image and point cloud data in the same scene; determining target pixels of a target object with a straight-line shape in the target image, and acquiring a target point cloud of the target object with the straight-line shape in the point cloud data, the target object being an element associated with a road; and determining, based on the target pixels and the target point cloud, external parameter calibration information between the shooting device that captures the target image and the radar that collects the point cloud data.
在一种可能的实施方式中,所述目标对象包括位于不同平面内或同一平面内的多个目标对象。
在一种可能的实施方式中,确定所述目标图像中具有直线形状的目标对象的目标像素点,包括:获取所述具有直线形状的目标对象的目标类别;基于所述目标图像,确定所述目标图像中每一像素点的类别;基于所述每一像素点的类别,从所述每一像素点中筛选出所述目标类别对应的目标像素点。
在一种可能的实施方式中,所述目标类别包括地面直线类别;基于所述每一像素点的类别,从所述每一像素点中筛选出所述目标类别对应的目标像素点,包括:基于所述每一像素点的类别,确定属于地面类别的像素点;基于所述地面类别的像素点中每一像素点的像素亮度信息,确定属于所述地面直线类别的像素点,并将所述属于地面直线类别的像素点作为所述目标像素点。
在一种可能的实施方式中,所述目标类别包括路杆类别;基于所述每一像素点的类别,从所述每一像素点中筛选出所述目标类别对应的目标像素点,包括:基于所述每一像素点的类别,确定属于所述路杆类别的像素点,并将属于所述路杆类别的像素点作为所述目标像素点。
In a possible implementation, acquiring the target point cloud of the target object having the straight-line shape in the point cloud data includes: determining, based on the point cloud data, ground feature information of each point in the point cloud data; and determining, based on the ground feature information of each point, the target point cloud of the target object having the straight-line shape from the point cloud data.
In a possible implementation, determining, based on the ground feature information of each point, the target point cloud of the target object having the straight-line shape from the point cloud data includes: determining, based on the ground feature information of each point, the points belonging to the ground; and determining, based on the reflection intensity information of each of the points on the ground, the points belonging to the ground-line category, and taking those points as the target point cloud of the target object having the straight-line shape.
In a possible implementation, determining, based on the ground feature information of each point, the target point cloud of the target object having the straight-line shape from the point cloud data includes: determining, based on the ground feature information of each point, the points not belonging to the ground; filtering, based on the height of the first objects to which the non-ground points belong, the points included in second objects whose height is greater than a first preset value; and filtering out, from the points included in the second objects, the points included in the target object having the straight-line shape, and taking the filtered points as the target point cloud.
In a possible implementation, filtering out, from the points included in the second objects, the points included in the target object having the straight-line shape includes: determining the distance from each second object to the road boundary; and taking the points included in the second objects whose distance is less than a second preset value as the target point cloud of the target object having the straight-line shape, where the category of the target object having the straight-line shape is the road-pole category.
In a possible implementation, filtering out, from the points included in the second objects, the points included in the target object having the straight-line shape includes: determining the number of points included in each second object; and taking the points included in the second objects whose number is less than a third preset value as the target point cloud of the target object having the straight-line shape.
In a possible implementation, determining, based on the target pixels and the target point cloud, the extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data includes: determining, based on the target pixels, a first equation of the target object corresponding to the target pixels on a two-dimensional plane; determining, based on the target point cloud, a second equation of the target object corresponding to the target point cloud in three-dimensional space; and determining, based on the first equation and the second equation, the extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data.
In a possible implementation, determining, based on the first equation and the second equation, the extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data includes: selecting, from the first equations, the first equations corresponding to a preset number of target objects; determining the second equation matching the first equation corresponding to each selected target object; determining, based on the preset number of first equations and the second equations matching the preset number of first equations, initial extrinsic information between the capture device that captured the target image and the radar that collected the point cloud data; and determining the extrinsic calibration information based on the initial extrinsic information, the target pixels, and the target point cloud.
In a possible implementation, selecting, from the first equations, the first equations corresponding to a preset number of target objects includes: selecting, from the first equations, a first number of first equations corresponding to target objects of the road-pole category and a second number of first equations corresponding to target objects of the ground-line category; and determining the second equation matching the first equation corresponding to each selected target object includes: determining the second equation matching the first equation corresponding to each selected target object of the ground-line category, and the second equation matching the first equation corresponding to each selected target object of the road-pole category.
In a possible implementation, determining the extrinsic calibration information based on the initial extrinsic information, the target pixels, and the target point cloud includes: converting, based on the target pixels, the target image into a binarized image; determining, based on the binarized image, the target pixels, the initial extrinsic information, and the target point cloud, matching degree information between the target point cloud and the target pixels; and adjusting the initial extrinsic information based on the matching degree information to obtain the extrinsic calibration information.
In a possible implementation, determining, based on the binarized image, the target pixels, the initial extrinsic information, and the target point cloud, the matching degree information between the target point cloud and the target pixels includes: determining, based on the binarized image, distance information between each pixel of the binarized image and the target object to which the target pixels belong; determining, based on the initial extrinsic information, among the pixels of the binarized image the matching pixel that matches each point of the target point cloud; taking the distance information between the matching pixel and the target object to which the target pixels belong as the distance information of the point of the target point cloud corresponding to the matching pixel; and determining the matching degree information based on the distance information of each point of the target point cloud and the initial extrinsic information.
In a possible implementation, adjusting the initial extrinsic information based on the matching degree information to obtain the extrinsic calibration information includes: adjusting the initial extrinsic information based on the matching degree information until the matching degree corresponding to the matching degree information is largest, and taking the adjusted initial extrinsic information at the largest matching degree as the extrinsic calibration information.
In a second aspect, an embodiment of the present disclosure further provides a calibration apparatus, including: a first acquisition module, configured to acquire a target image and point cloud data of the same scene; a second acquisition module, configured to determine target pixels of a target object having a straight-line shape in the target image and acquire a target point cloud of the target object having the straight-line shape in the point cloud data, the target object being an element associated with a road; and a determination module, configured to determine, based on the target pixels and the target point cloud, extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data.
In a third aspect, an optional implementation of the present disclosure further provides a computer device, including a processor and a memory, the memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps of the above first aspect or of any possible implementation of the first aspect.
In a fourth aspect, an optional implementation of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run, a processor performs the steps of the above first aspect or of any possible implementation of the first aspect.
In a fifth aspect, an optional implementation of the present disclosure further provides a computer program, stored in a storage medium; when the computer program is run, a processor performs the steps of the above first aspect or of any possible implementation of the first aspect.
For descriptions of the effects of the above calibration apparatus, computer device, and computer-readable storage medium, reference may be made to the description of the above calibration method, not repeated here.
To make the above objects, features, and advantages of the present disclosure more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain its technical solutions. It should be understood that the following drawings show only certain embodiments of the present disclosure and should therefore not be regarded as limiting its scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of a calibration method provided by an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a binarized image provided by an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of projecting a target point cloud into a binarized image, provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a calibration apparatus provided by an embodiment of the present disclosure;
FIG. 5 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, not all of them. The components of the embodiments of the present disclosure, as generally described and illustrated here, may be arranged and designed in a variety of different configurations. The following detailed description of the embodiments is therefore not intended to limit the claimed scope of the present disclosure but merely represents selected embodiments of the present disclosure; all other embodiments obtained by those skilled in the art without creative effort, based on the embodiments of the present disclosure, fall within the scope of protection of the present disclosure.
In addition, the terms "first", "second", and the like in the specification, the claims, and the above drawings are used to distinguish similar objects and need not describe a particular order or sequence. It should be understood that terms so used are interchangeable where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described here.
"Multiple" or "several" herein means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects.
Research has found that, in the field of autonomous driving, a lidar and a camera are often used jointly to recognize and detect objects; to guarantee the accuracy of object recognition and detection, the lidar and the camera must be precisely calibrated with respect to their extrinsic parameters. At present, extrinsic calibration is mostly performed by having the lidar and the camera extract and match features on specific markers placed in advance, such as a checkerboard calibration board, and then calibrating the extrinsic parameters between the lidar and the camera based on the feature-matching result. However, placing specific markers in the above scheme is cumbersome and inflexible and consumes a great deal of time, which limits the speed and efficiency of extrinsic calibration.
Based on the above research, the present disclosure provides a calibration method, apparatus, computer device, and storage medium. Because the target objects are elements associated with the road, features of the road itself, the extrinsic calibration information is determined from the target pixels and the target point cloud of the target objects: no additionally placed calibration markers are needed, and the whole calibration process can be completed automatically, which both saves the time of placing markers and improves the efficiency and speed of extrinsic calibration. In addition, performing extrinsic calibration with the straight-line-shaped target objects of the road itself effectively reduces the computational load and improves the calibration speed and the accuracy of extrinsic calibration.
The defects of the above solutions are the result of the inventors' practice and careful study; the discovery of the above problems, and the solutions proposed below for them, should therefore all be regarded as the inventors' contribution to the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.
It should be noted that the specific terms mentioned in the embodiments of the present disclosure include:
CRF: Conditional Random Fields, a discriminative probabilistic undirected graphical model that, given one set of input random variables, outputs the conditional probability distribution of another set of random variables;
RANSAC: Random Sample Consensus, an algorithm whose input is a set of observations (often containing substantial noise or invalid points), a parameterized model explaining the observations, and some confidence parameters, and which reaches its goal by repeatedly selecting random subsets of the data;
Line Hough transform: a feature-detection technique that can be used to identify the straight-line features of objects.
To facilitate understanding of this embodiment, a calibration method disclosed by an embodiment of the present disclosure is first introduced in detail. The execution subject of the calibration method provided by the embodiments of the present disclosure is generally a computer device with certain computing power, including, for example, a terminal device, a server, or another processing device; the terminal device may be a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and so on. In some possible implementations, the calibration method may be implemented by a processor invoking computer-readable instructions stored in a memory.
The calibration method provided by the embodiments of the present disclosure is described below taking a computer device as the execution subject.
As shown in FIG. 1, a flowchart of a calibration method provided by an embodiment of the present disclosure, the method may include the following steps:
S101: acquiring a target image and point cloud data of the same scene.
Here, the scene may be a road scene, the target image may be a road image captured by a capture device mounted on a vehicle, and the point cloud data may be road point cloud data collected by a radar (for example, a lidar) mounted on the same vehicle.
The target image corresponds to the camera coordinate system and contains multiple objects, for example, road poles, roads, lane lines, vehicles, and trees. Each object may consist of a number of pixels in the target image, and each pixel corresponds to a camera coordinate in the camera coordinate system. The point cloud data corresponds to the radar coordinate system and contains the points that make up different objects, and each point corresponds to a radar coordinate in the radar coordinate system.
Using the same scene guarantees that the objects represented by all pixels of the target image and the objects represented by all points of the point cloud data are the same. In specific implementations, the target image and the point cloud data may be acquired at the same time and at the same location; the target pixels and the target point cloud that represent the same object can then be used to determine the extrinsic calibration information.
S102: determining target pixels of a target object having a straight-line shape in the target image, and acquiring a target point cloud of the target object having the straight-line shape in the point cloud data, the target object being an element associated with the road.
Here, the target object is selected from the multiple objects contained in the target image and from the multiple objects corresponding to the point cloud data, and the selected target objects are the same object in the real world.
A straight-line-shaped target object has a clear shape, is easy to extract, and admits a line equation describing the line on which it lies; for these reasons, the embodiments of the present disclosure take objects having a straight-line shape as target objects. Moreover, a target object may be an element associated with the road, a feature of the road itself; for example, a straight-line-shaped target object may be a road pole erected beside the road and/or a lane line painted on the road.
In specific implementations, all pixels of the target image may be classified by semantic segmentation to determine the category of each pixel, and the target pixels that make up the target object may then be determined among all pixels according to the category of each pixel. Using the category of each pixel in the target image, target pixels consistent with the target category can be obtained accurately, which improves the accuracy of the obtained target pixels.
For the target point cloud, after the point cloud data is acquired, the RANSAC algorithm may be used to fit the points of the point cloud data that belong to the ground, yielding a fitted ground plane. With the fitted ground plane, the whole point cloud can be split into two parts: points on the ground and points not on the ground. The target point cloud of the straight-line-shaped target object can then be determined from these two parts.
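The RANSAC ground-plane fit described above can be sketched as follows. This is a minimal illustration rather than the embodiment's implementation: the toy point cloud, the inlier threshold, and the iteration count are all invented for the example.

```python
import random

def ransac_ground_plane(points, n_iters=200, inlier_thresh=0.05, seed=0):
    """Fit a plane a*x + b*y + c*z + d = 0 via RANSAC and split the cloud
    into ground / non-ground points."""
    rng = random.Random(seed)
    best_inliers, best_plane = [], None
    for _ in range(n_iters):
        p1, p2, p3 = rng.sample(points, 3)
        # Plane normal = (p2 - p1) x (p3 - p1)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
        norm = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = [c / norm for c in n]
        d = -(n[0]*p1[0] + n[1]*p1[1] + n[2]*p1[2])
        inliers = [p for p in points
                   if abs(n[0]*p[0] + n[1]*p[1] + n[2]*p[2] + d) < inlier_thresh]
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, (n, d)
    ground = best_inliers
    non_ground = [p for p in points if p not in ground]
    return best_plane, ground, non_ground

# Toy cloud: a 10x10 ground grid at z = 0 plus a vertical pole at (5, 0).
cloud = [(x * 0.5, y * 0.5, 0.0) for x in range(10) for y in range(10)]
cloud += [(5.0, 0.0, 0.5 * k) for k in range(1, 8)]
plane, ground_pts, pole_pts = ransac_ground_plane(cloud)
```

On this synthetic data the fitted normal is (0, 0, ±1), the 100 grid points land in the ground set, and the 7 pole points remain as non-ground candidates for the later pole filtering.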
S103: determining, based on the target pixels and the target point cloud, extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data.
The calibration method provided by the embodiments of the present disclosure can be applied in the technical field of autonomous driving, specifically to the extrinsic calibration between the capture device associated with the camera coordinate system and the radar associated with the radar coordinate system.
Here, after the target pixels and the target point cloud are determined, note that the camera coordinate system of the target pixels is a coordinate system on a two-dimensional plane, while the radar coordinate system of the target point cloud is a coordinate system in three-dimensional space, so the target pixels and the target point cloud belong to different dimensions. The correspondence between the target pixels and the target point cloud across these dimensions can therefore be calibrated, and the extrinsic calibration information describing this correspondence can be determined. This extrinsic calibration information also reflects the correspondence between the capture device that captured the target image and the radar that collected the point cloud data; that is, the extrinsic calibration information between the capture device and the radar can be determined.
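The correspondence between the two coordinate systems can be made concrete with a small sketch: a radar-frame point p maps into the image as u ~ K(Rp + t), where R and t are the extrinsics being calibrated and K is the camera intrinsic matrix. The identity extrinsics and the intrinsic values below are invented for illustration.

```python
def project(point, R, t, K):
    """Map a radar-frame 3-D point into pixel coordinates: u ~ K (R p + t)."""
    # Rigid transform into the camera frame.
    pc = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    # Pinhole projection with intrinsics K = [[fx,0,cx],[0,fy,cy],[0,0,1]].
    x, y, z = pc
    u = K[0][0] * x / z + K[0][2]
    v = K[1][1] * y / z + K[1][2]
    return u, v

# Identity extrinsics and a toy intrinsic matrix (values invented for the example).
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
print(project((1.0, 0.5, 10.0), R, t, K))  # a point 10 m ahead -> (370.0, 265.0)
```

Calibration amounts to recovering the R and t that make projected radar points of a target object land on that object's pixels.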
In specific implementations, the camera coordinates of the target pixels may be used to determine a first equation, on the two-dimensional plane, of the line on which the target object corresponding to the target pixels lies; the radar coordinates of the target point cloud may be used to determine a second equation, in three-dimensional space, of the line on which the target object corresponding to the target point cloud lies; and the first and second equations may then be used to determine the extrinsic calibration information.
In this embodiment, because the target objects are elements associated with the road, features of the road itself, the extrinsic calibration information is determined from the target pixels and the target point cloud of the target objects: no additionally placed calibration markers are needed, and the whole calibration process can be completed automatically, which both saves the time of placing markers and improves the efficiency and speed of extrinsic calibration. In addition, performing extrinsic calibration with the straight-line-shaped objects of the road itself effectively reduces the computational load and improves the calibration speed and the accuracy of extrinsic calibration.
In a possible embodiment, after the target image and the point cloud data are acquired, the target pixels may be determined by the following steps:
Step 1: acquiring a target category of the target object having the straight-line shape;
Step 2: determining, based on the target image, the category of each pixel in the target image;
Step 3: filtering out, based on the category of each pixel, the target pixels corresponding to the target category.
Here, the category of each pixel can be determined from the semantic segmentation result, and the target pixels corresponding to the target category can then be filtered out according to the category of each pixel and the target category.
In a possible embodiment, the target object includes multiple target objects located in different planes or in the same plane, where the different planes may be the road plane, a plane perpendicular to the road plane, and so on. Performing extrinsic calibration based on multiple target objects in different planes or in the same plane effectively improves the calibration accuracy.
The target categories include the road-pole category and the ground-line category; the ground-line category may include the lane-line category. Target objects of the ground-line category may be parallel to one another, and target objects of the road-pole category may be perpendicular to the plane of the road. The way the target pixels are determined differs with the target category; the two target categories are described separately below.
When the target category is the ground-line category, all pixels belonging to the ground category can be determined from the semantic segmentation result. Because the target image contains road information, the ground-category pixels include the pixels of the road. In addition, every pixel of the target image has corresponding brightness information, and the brightness information of pixels belonging to the ground-line category differs from that of pixels elsewhere on the road. Therefore, after all ground-category pixels are determined, the pixels belonging to the ground-line category can be determined from the brightness information of each ground-category pixel and taken as the target pixels.
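Selecting ground-line pixels by brightness, as described above, might look like the following sketch; the grayscale patch and the threshold are invented for the example, and a real pipeline would take the ground mask from the semantic-segmentation output.

```python
def lane_pixels(image, ground_mask, brightness_thresh=200):
    """Keep ground-category pixels whose brightness exceeds a threshold
    (lane markings are painted brighter than the surrounding asphalt)."""
    h, w = len(image), len(image[0])
    return [(r, c) for r in range(h) for c in range(w)
            if ground_mask[r][c] and image[r][c] >= brightness_thresh]

# Toy 3x4 grayscale patch: one bright column of lane paint (values invented).
image = [[40, 250, 45, 50],
         [42, 248, 44, 51],
         [41, 251, 43, 49]]
ground = [[True] * 4 for _ in range(3)]
print(lane_pixels(image, ground))  # the bright column at c == 1
```

Only the bright column survives the filter, which is exactly the lane-line pixel set that later feeds the 2-D line fit.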
Among the ground-category pixels, each pixel has corresponding brightness information, and the brightness of pixels belonging to the ground-line category, such as lane lines on the road, is more pronounced. Using the brightness information of each pixel, the target pixels of the ground-line category can therefore be extracted accurately, which helps improve the accuracy of extrinsic calibration.
When the target category is the road-pole category, the pixels belonging to the road-pole category can be determined directly from the semantic segmentation result and taken as the target pixels. Road poles are straight-line-shaped elements associated with the road, and extracting target pixels of the road-pole category improves both the accuracy and the speed of extrinsic calibration. Moreover, extracting road-pole pixels based on the category of each pixel improves the accuracy of target-pixel extraction, further improving the accuracy of extrinsic calibration. In addition, performing extrinsic calibration with target pixels of the two categories allows calibration across the dimensions of different planes, which further improves the accuracy of the resulting extrinsic calibration.
Furthermore, after the pixels belonging to the road-pole category are determined, CRF techniques may be used to densify the pixels of each target object, yielding smoother and more complete pixels. This increases the density of the obtained target pixels and helps improve the accuracy of the subsequently determined first equations.
In specific implementations, extracting the target pixels from the semantic segmentation result takes about 5 s in total, which speeds up the subsequent determination of the extrinsic calibration information.
In addition, after the target pixels are determined, the target point cloud corresponding in the point cloud data to the target object of the target pixels also needs to be acquired; the extrinsic calibration information can then be determined from the target point cloud and the target pixels.
In specific implementations, each point of the acquired point cloud data may be analyzed to determine its ground feature information, where the ground feature information indicates whether a point of the point cloud data belongs to the ground category. The procedure for acquiring the target point cloud can then be determined from the ground feature information of each point and the target category of the target object.
Every point of the point cloud data has corresponding reflection intensity information, and the reflection intensity of points belonging to the ground-line category differs from that of points elsewhere on the ground: the reflection intensity of ground-line points is relatively high. Points of the ground-line category can therefore be extracted accurately, which helps improve the accuracy of extrinsic calibration.
In specific implementations, after the point cloud data is acquired, the RANSAC algorithm may be used to analyze each point of the point cloud data and determine its ground feature information, which indicates whether the point belongs to the ground category. The points belonging to the ground category may then be fitted using the ground feature information of each point, yielding a fitted ground plane. With the fitted ground plane, the whole point cloud can be split into two parts, points on the ground and points not on the ground, and the specific steps for acquiring the target point cloud can then be determined according to the target category of the target object.
In a possible embodiment, when the target category is the ground-line category, the points on the ground can be determined from the ground feature information of each point. Then, according to the reflection intensity information of each point on the ground, the points with relatively high reflection intensity are taken as points of the ground-line category, and the points determined to belong to the ground-line category are taken as the target point cloud of the straight-line-shaped target object. For example, the target object corresponding to points of the ground-line category may appear on the road as a straight lane line.
When the target category is the road-pole category, the target point cloud of the road-pole category needs to be determined among the points not on the ground. In specific implementations, for the points not on the ground, the target point cloud can be determined from the height of the object to which each point belongs (an object to which non-ground points belong is also called a first object). Besides the point clouds of objects such as road poles and lane lines, the point cloud data may also contain point clouds of other straight-line-shaped objects such as office buildings, houses, and stone obstacles. Therefore, a preset first preset value for height may be used: based on the height of the objects to which the non-ground points belong, the points included in objects whose height is greater than the first preset value (also called second objects) are filtered out of the non-ground points, which removes the points of low objects such as stone obstacles. Then, according to the distance from each second object to the road boundary, a preset second preset value for distance is used to filter out the points included in the second objects whose distance is less than the second preset value; the distance from a second object to the road boundary may be represented by the horizontal distance from the center point of the second object to the road boundary. This removes the points of objects far from the road boundary, such as office buildings and houses: roads are preferentially built away from such buildings, so these objects lie far from the road boundary, whereas road poles are preferentially installed close to the road boundary to guide driving. The second preset value therefore filters out the points of objects close to the road boundary, such as road poles, and the filtered points can be taken as the target point cloud of the straight-line-shaped target object.
The object corresponding to every point of the point cloud data has a height, so the first preset value can filter out objects of a certain height; a second round of filtering then accurately yields the points of the straight-line-shaped target object, for example the points of a road-pole target object.
To further improve the accuracy of the determined target point cloud, after the objects close to the road boundary are filtered out with the second preset value, they can be filtered further with a third preset value on the number of points. In specific implementations, road poles are mostly slender, whereas objects such as houses or office buildings are wider and taller than road poles, so each road pole contains far fewer points than a house or office building. The third preset value can therefore be used to keep the points of objects whose point count is less than the third preset value, which removes those objects that are taller than the first preset value and close to the road boundary but contain many points. The objects whose point count is less than the third preset value can then be taken as the straight-line-shaped target objects, and their points as the target point cloud. Filtering each object of the non-ground points three times, with the first, second, and third preset values, improves the accuracy of the obtained target point cloud.
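The three-stage filtering above (first, second, and third preset values) can be sketched as below. The cluster representation, the field names, and the thresholds are assumptions made for the example; the embodiment does not prescribe them.

```python
def select_pole_clusters(clusters, min_height=2.0, max_boundary_dist=1.5, max_points=40):
    """Three-stage filter over non-ground clusters:
    1) keep clusters taller than a height threshold (drops kerbstones etc.),
    2) keep clusters close to the road boundary (drops buildings),
    3) keep sparse clusters (poles contain far fewer points than facades)."""
    poles = []
    for c in clusters:
        zs = [p[2] for p in c["points"]]
        if max(zs) - min(zs) <= min_height:      # first preset value: height
            continue
        if c["dist_to_boundary"] >= max_boundary_dist:  # second: boundary distance
            continue
        if len(c["points"]) >= max_points:       # third: point count
            continue
        poles.append(c["name"])
    return poles

# Invented clusters: a pole, a low rock, and a dense building facade.
clusters = [
    {"name": "pole", "dist_to_boundary": 0.5,
     "points": [(5.0, 0.0, 0.3 * k) for k in range(12)]},   # tall, sparse, near road
    {"name": "rock", "dist_to_boundary": 0.4,
     "points": [(1.0, 1.0, 0.1 * k) for k in range(5)]},    # too low
    {"name": "building", "dist_to_boundary": 12.0,
     "points": [(20.0, y * 0.1, z * 0.1) for y in range(10) for z in range(50)]},
]
print(select_pole_clusters(clusters))  # ['pole']
```

Only the slender, nearby, tall cluster survives, which mirrors how the text whittles the non-ground points down to road poles.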
The filtered target point cloud of the ground-line category and the filtered target point cloud of the road-pole category can then be merged, yielding point cloud data that contains only the points of the target objects. In specific implementations, filtering out the target point cloud can be completed within 1 s.
In another embodiment, only one or more of the first, second, and third preset values may be used to filter the non-ground points and determine the target point cloud, which is not limited here.
In addition, if the target objects of the determined target pixels do not lie in the same plane, then in determining the target point cloud, the determined target point cloud must likewise not lie in the same plane and must have the same target categories as the target pixels. If the target objects of the determined target pixels lie in the same plane, the determined target point cloud must likewise lie in the same plane and have the same target categories as the target pixels.
Further, after the target pixels and the target point cloud are determined, the extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data may be determined by the following steps:
Step 1: determining, based on the target pixels, a first equation of the target object corresponding to the target pixels on the two-dimensional plane;
Step 2: determining, based on the target point cloud, a second equation of the target object corresponding to the target point cloud in three-dimensional space;
Step 3: determining, based on the first equation and the second equation, the extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data.
The first and second equations accurately characterize the position of the target object in the different coordinate systems. Using the first equation on the two-dimensional plane and the second equation in three-dimensional space simplifies the steps for determining the extrinsic calibration information, reduces its computational load, and increases its computation speed. In specific implementations, for the target pixels of each determined target object, the line Hough transform may be used to determine the first equation, in the camera coordinate system, of the line on which those target pixels lie; for the target point cloud of each determined target object, the RANSAC algorithm may be applied again to fit the target point cloud and determine the second equation, in the radar coordinate system, of the line on which the target point cloud lies. With the first and second equations, the extrinsic calibration process reduces to solving for the extrinsic calibration information given the first equations on the two-dimensional plane and the second equations in three-dimensional space. In specific implementations, the extrinsic calibration information may be an extrinsic matrix representing the pose relationship between the capture device and the radar, the extrinsic matrix comprising a rotation matrix and a translation matrix.
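Fitting the line equations can be sketched as follows. One hedge: the embodiment uses a line Hough transform in 2-D and RANSAC in 3-D, while this sketch substitutes a direct least-squares fit in 2-D and a centroid-plus-farthest-point direction in 3-D, which is only adequate for the clean synthetic data shown.

```python
import math

def fit_line_2d(pixels):
    """Fit a*u + b*v + c = 0 through pixel coordinates by total least squares."""
    n = len(pixels)
    mu = sum(u for u, _ in pixels) / n
    mv = sum(v for _, v in pixels) / n
    suu = sum((u - mu) ** 2 for u, _ in pixels)
    svv = sum((v - mv) ** 2 for _, v in pixels)
    suv = sum((u - mu) * (v - mv) for u, v in pixels)
    theta = 0.5 * math.atan2(2 * suv, suu - svv)  # principal direction of the scatter
    a, b = -math.sin(theta), math.cos(theta)      # unit normal to that direction
    c = -(a * mu + b * mv)
    return a, b, c

def fit_line_3d(points):
    """Return (point_on_line, unit_direction) for a 3-D line through the centroid."""
    n = len(points)
    cen = [sum(p[i] for p in points) / n for i in range(3)]
    # On clean data the direction is the centered vector to the farthest point.
    far = max(points, key=lambda p: sum((p[i] - cen[i]) ** 2 for i in range(3)))
    d = [far[i] - cen[i] for i in range(3)]
    norm = math.sqrt(sum(x * x for x in d))
    return cen, [x / norm for x in d]

# Collinear test data: the pixel line v = 2u + 1 and a vertical 3-D pole.
a, b, c = fit_line_2d([(u, 2 * u + 1) for u in range(5)])
p0, d3 = fit_line_3d([(5.0, 0.0, 0.5 * k) for k in range(7)])
```

The 2-D normal form `a*u + b*v + c = 0` and the 3-D point-plus-direction form are the first- and second-equation shapes the matching step consumes.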
In a possible implementation, the extrinsic calibration information may be determined by the following process. After the target pixels are determined, the target image may be converted into a binarized image using the target pixels, where the target pixels of each target object take the value 1 in the binarized image and the other pixels of the target image take the value 0; the binarized image thus accurately reflects the position of the target objects in the target image. As shown in FIG. 2, a schematic diagram of a binarized image provided by an embodiment of the present disclosure, the white parts represent the target objects corresponding to the target pixels and the black parts represent the other objects corresponding to the other pixels. Alternatively, the target pixels may be extracted directly and a binarized image of a preset size determined from them, which is not limited here.
Further, a preset number of target objects may be selected from the first equations, and the first equations of the selected target objects taken as the target equations. Based on the selected target objects, a matching preset number of target objects can then be filtered out of the target objects of the point cloud data, and the second equations of those target objects taken as the matched second equations. The matched first and second equations can then be used to determine initial extrinsic information, which roughly reflects the pose relationship between the capture device and the radar with a certain accuracy. To obtain more precise extrinsic calibration information, however, an inverse distance transform is applied to the determined binarized image, assigning each pixel of the binarized image a value that characterizes the distance information between that pixel and the target object to which the target pixels belong. The distance-information value of each target pixel is 1; the other pixels are then each assigned different distance information according to how far they are from the target object of the target pixels. In specific implementations, the distance to the target object may be the distance to the target pixel that lies on the line of the target object's first equation at the same height; for other pixels that lie on the line of the first equation but do not belong to the target object, the value may be determined from the distance to the nearest target pixel. For example, other pixels at distance 1 from a target pixel take the value 0.999, those at distance 2 take the value 0.998, and so on: the farther from the target pixels, the smaller the value.
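The inverse distance transform described above (targets scored 1, neighbours 0.999, then 0.998, and so on) can be sketched with a multi-source breadth-first search; the 4-neighbour metric and the 0.001 step are taken from the example values in the text.

```python
from collections import deque

def inverse_distance_transform(binary, step=0.001):
    """Give each pixel a score that decays with its (4-neighbour) distance to
    the nearest target pixel: targets get 1.0, their neighbours 0.999, and so on."""
    h, w = len(binary), len(binary[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for r in range(h):
        for c in range(w):
            if binary[r][c] == 1:
                dist[r][c] = 0
                q.append((r, c))
    while q:  # breadth-first wavefront expanding from all target pixels at once
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return [[1.0 - step * dist[r][c] for c in range(w)] for r in range(h)]

# A single vertical target line in column 1 of a 3x4 binary image.
H_line = inverse_distance_transform([[0, 1, 0, 0]] * 3)
print(H_line[0])  # [0.999, 1.0, 0.999, 0.998]
```

The resulting H_line image is what the projected point cloud is scored against in the next step: points landing on the line read 1.0, nearby misses read slightly less.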
The initial extrinsic information can then be used to project the target point cloud of the target objects of the successfully matched second equations into the inverse-distance-transformed binarized image, determining among the pixels of the binarized image the matching pixel for each point of the target point cloud, and taking the distance information between the matching pixel and the target object of the target pixels as the distance information of the corresponding point of the target point cloud. The distance information of each point of the target point cloud reflects how well the target point cloud matches the target pixels. FIG. 3 shows a schematic diagram, provided by an embodiment of the present disclosure, of projecting a target point cloud into a binarized image, in which the gray dotted parts represent the positions of the projected target point cloud. The binarized image accurately distinguishes the target pixels from the other pixels, and the matching degree information determined from the binarized image accurately reflects the overlap between the target point cloud, once projected into the binarized image, and the target pixels; using this matching degree information improves the accuracy of extrinsic calibration.
The matching degree information between the target point cloud and the target pixels can then be determined from the distance information of each point included in each target object and the initial extrinsic information. The matching degree information reflects how well the target point cloud matches the target pixels; the larger the matching degree value, the more accurate the determined initial extrinsic information. Projecting the target point cloud into the binarized image with the initial extrinsic information, combined with the distance information between each pixel and the target object of the target pixels, accurately determines the overlap between the projected point cloud and the target pixels of the binarized image, that is, the matching degree information between the projected point cloud and the target pixels. In specific implementations, the matching degree information may be determined from the sum of the distance-information values of the points of the target point cloud and the initial extrinsic information, and the matching degree information may be expressed as a concrete function.
For example, the matching degree information can be expressed by formula (1):

$$J=\sum_{c\in\{\mathrm{pole},\,\mathrm{lane}\}}\frac{1}{|P_{c}|}\sum_{p\in P_{c}}H_{\mathrm{line}}\big(K(Rp+t)\big)\tag{1}$$

where J denotes the matching degree information; the inner sum accumulates the distance-information values of the points of the target point cloud; R denotes the rotation matrix of the initial extrinsic information; p denotes the coordinates of a point of the target point cloud in the radar coordinate system; t denotes the translation vector of the initial extrinsic information; pole denotes the road-pole category and lane denotes the lane-line category; K denotes the camera intrinsic matrix; H_line denotes the binarized image after the inverse distance transform; P_c denotes the set of points of all target objects belonging to the road-pole category or to the lane-line category; and |P_c| denotes the number of points of all target objects belonging to the road-pole category or to the lane-line category.
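A sketch of the matching degree computation of formula (1): project each category's target points with the candidate extrinsics (R, t), read the inverse-distance score at the matched pixel, and average per category. The toy score image, intrinsics, and points below are invented; the per-category average follows the |P_c| normalization described for the formula.

```python
def matching_score(point_sets, R, t, K, H_line):
    """Evaluate the matching degree of candidate extrinsics (R, t): project each
    point, sample the inverse-distance image at the matched pixel, average per
    category, and sum over categories."""
    J = 0.0
    for pts in point_sets:                          # e.g. [pole_points, lane_points]
        s = 0.0
        for p in pts:
            pc = [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]
            u = int(round(K[0][0] * pc[0] / pc[2] + K[0][2]))
            v = int(round(K[1][1] * pc[1] / pc[2] + K[1][2]))
            if 0 <= v < len(H_line) and 0 <= u < len(H_line[0]):
                s += H_line[v][u]                   # distance score of the matched pixel
        J += s / len(pts)
    return J

# Invented toy setup: a 2x4 inverse-distance image whose best column is u == 1.
H = [[0.999, 1.0, 0.999, 0.998]] * 2
K = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
pts = [[(1.0, 0.0, 1.0), (1.0, 1.0, 1.0)]]          # both project onto column u == 1
print(matching_score(pts, I3, [0, 0, 0], K, H))     # both points on the line -> 1.0
```

With identity extrinsics both toy points land exactly on the scored line, so the per-category average reaches its maximum of 1.0; a misaligned candidate would sample smaller values.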
Here, in determining the initial extrinsic information, after a preset number of first equations are selected, the second equations matching them must be determined among the second equations of the target objects. To reduce the computation of this process, a first number of first equations of road-pole target objects and a second number of first equations of ground-line target objects may be selected from the target image as the target first equations; when the second number is greater than 2, the selected ground-line target objects may be parallel. Then, for the first number of first equations, a first number of second equations of road-pole target objects are selected in turn from all road-pole target objects of the target point cloud, and a second number of second equations of ground-line target objects are selected in turn from all ground-line target objects of the target point cloud. Each selected set of second equations, together with the target first equations, determines its corresponding initial extrinsic information, which is then used to project the target point cloud of the selected second equations into the inverse-distance-transformed binarized image to determine the matching degree information corresponding to that initial extrinsic information.
In specific implementations, three first equations and their corresponding second equations suffice to determine the initial extrinsic information; for example, the first number may be 1 and the second number may be 2. In a possible implementation, all first and second equations may also be used to determine the initial extrinsic information, which is not limited here.
Selecting a first number of road-pole first equations and a second number of ground-line first equations guarantees that first equations in different planes are selected, enabling extrinsic calibration across the dimensions of different planes and improving its accuracy. When selecting ground-line objects, the selected target objects may be parallel: mutually parallel target objects interfere less with one another when matching target pixels to the target point cloud, and the ground-line category contains many mutually parallel elements that are easy to select; the same reasoning applies to the road-pole category. On this basis, the target first equations may be matched in turn against the second equations of the target objects of the corresponding categories in the target point cloud, yielding several candidate initial extrinsic informations and their corresponding matching degree information. The matching degree values can then be compared: the matching degree information with the largest value is taken as the final matching degree information, its corresponding initial extrinsic information as the final initial extrinsic information, and the first and second equations used to determine that initial extrinsic information as the correctly matched equations.
Multiple rounds of matching in this way both improve the matching accuracy between the first and second equations and identify the most accurate initial extrinsic information, which in turn improves the accuracy of the resulting extrinsic calibration information. When the matching degree is largest, the target pixels agree best with the point cloud projected into the binarized image, and the initial extrinsic information used for that projection is the most accurate; taking that most accurate initial extrinsic information as the extrinsic calibration information guarantees the accuracy of extrinsic calibration.
In another embodiment, in determining the initial extrinsic information, a preset number of target objects may first be selected in the target image and their first equations then determined. In this way the first equations of all target objects need not be determined, only the equations of the selected target objects, which reduces the computation and speeds up extrinsic calibration. Further, the second equations of all target objects of the target point cloud may be determined, preset numbers of second equations selected in turn as the second equations matching the first equations, and different initial extrinsic information and matching degree information determined for each, from which the final initial extrinsic information and matching degree information can then be determined. The method of determining the initial extrinsic information and the matching degree information is not limited here.
In another embodiment, a preset number of second equations may also be selected first, the first equations matching them then determined, and finally the initial extrinsic information and the matching degree information determined, which is not limited here.
Further, after the initial extrinsic information and the matching degree information are determined, the initial extrinsic information may be adjusted to change the matching degree value corresponding to the matching degree information until that value is largest; the initial extrinsic information adjusted to that point is taken as the extrinsic calibration information. The larger the matching degree value, the better the target point cloud projected into the binarized image matches the target pixels, so the largest matching degree value indicates the best match.
Because the obtained initial extrinsic information already has a relatively high accuracy, it can be adjusted within a preset range around the coordinates of the target point cloud corresponding to the initial extrinsic information to obtain the extrinsic calibration information. In specific implementations, the initial extrinsic information may be expressed by formula (2):

$$T=\begin{bmatrix}R & t\\ \mathbf{0}^{\top} & 1\end{bmatrix},\qquad t=(x,\,y,\,z)^{\top}\tag{2}$$

where T denotes the initial extrinsic information or the adjusted extrinsic calibration information, R denotes the rotation matrix of the initial extrinsic information, t denotes the translation vector of the initial extrinsic information, and x, y, z are the components of the translation vector in three-dimensional space.
Taking the matching degree information as a function, its extreme point within the preset range may be taken as the value at which the matching degree is largest. In specific implementations, T may be searched within the preset range to determine different R and t, and the different R and t used to determine different matching degree values; when a matching degree value is found to be an extreme point, the initial extrinsic information at that point is determined and taken as the final extrinsic calibration information T. In this way, extrinsic calibration information T of higher accuracy is obtained, with which the pose relationship between the capture device and the radar can be calibrated precisely. In addition, determining the extrinsic calibration information can be completed in about 1 s, so the whole extrinsic calibration process can be completed automatically within 10 s, which greatly improves the calibration speed.
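The local search over T can be sketched as a grid search. For brevity only the translation is perturbed here, and the matching score is replaced by an invented concave stand-in that peaks at a known offset, so the sketch shows the search mechanism rather than the embodiment's optimizer.

```python
def refine_translation(score_fn, t0, radius=0.2, step=0.1):
    """Grid-search the translation within ±radius of the initial estimate t0 and
    keep the candidate with the highest matching score (rotation would be
    perturbed in the same way)."""
    offsets = [k * step for k in range(-int(radius / step), int(radius / step) + 1)]
    best_t, best_J = list(t0), score_fn(t0)
    for dx in offsets:
        for dy in offsets:
            for dz in offsets:
                cand = [t0[0] + dx, t0[1] + dy, t0[2] + dz]
                J = score_fn(cand)
                if J > best_J:
                    best_t, best_J = cand, J
    return best_t, best_J

# Invented concave score peaking at t = (0.1, -0.1, 0.0), standing in for formula (1).
true_t = (0.1, -0.1, 0.0)
score = lambda t: -sum((t[i] - true_t[i]) ** 2 for i in range(3))
t_star, J_star = refine_translation(score, [0.0, 0.0, 0.0])
print([round(x, 3) for x in t_star])  # [0.1, -0.1, 0.0]
```

Starting from the coarse initial estimate, the search recovers the offset at which the stand-in score peaks, mirroring how the adjusted T at the largest matching degree becomes the final extrinsic calibration.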
The embodiments of the present disclosure determine the extrinsic calibration information from extracted target objects that are features of the road itself (lane lines and road poles). No additionally placed calibration markers are needed, and the whole process can be completed automatically, which both saves the time of placing markers and improves the efficiency and speed of extrinsic calibration. In addition, performing extrinsic calibration with the straight-line-shaped target objects of the road itself effectively reduces the computational load and improves the calibration speed and the accuracy of extrinsic calibration.
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, embodiments of the present disclosure also provide a calibration apparatus corresponding to the calibration method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to the above calibration method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated parts are not described again.
As shown in FIG. 4, a schematic diagram of a calibration apparatus provided by an embodiment of the present disclosure, the apparatus includes:
a first acquisition module 401, configured to acquire a target image and point cloud data of the same scene;
a second acquisition module 402, configured to determine target pixels of a target object having a straight-line shape in the target image and acquire a target point cloud of the target object having the straight-line shape in the point cloud data, the target object being an element associated with the road;
a determination module 403, configured to determine, based on the target pixels and the target point cloud, extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data.
In a possible implementation, the target object includes multiple target objects located in different planes or in the same plane, where the different planes may be the road plane, a plane perpendicular to the road plane, and so on.
In a possible implementation, the second acquisition module 402 is configured to acquire a target category of the target object having the straight-line shape; determine, based on the target image, the category of each pixel in the target image; and filter out, based on the category of each pixel, the target pixels corresponding to the target category.
In a possible implementation, the target category includes a ground-line category; the second acquisition module 402 is configured to determine, based on the category of each pixel, the pixels belonging to the ground category; and determine, based on the brightness information of each pixel belonging to the ground category, the pixels belonging to the ground-line category, taking the pixels belonging to the ground-line category as the target pixels.
In a possible implementation, the target category includes a road-pole category; the second acquisition module 402 is configured to determine, based on the category of each pixel, the pixels belonging to the road-pole category, taking the pixels belonging to the road-pole category as the target pixels.
In a possible implementation, the second acquisition module 402 is configured to determine, based on the point cloud data, ground feature information of each point in the point cloud data; and determine, based on the ground feature information of each point, the target point cloud of the target object having the straight-line shape from the point cloud data.
In a possible implementation, the second acquisition module 402 is configured to determine, based on the ground feature information of each point, the points belonging to the ground; and determine, based on the reflection intensity information of each of the points on the ground, the points belonging to the ground-line category, taking those points as the target point cloud of the target object having the straight-line shape.
In a possible implementation, the second acquisition module 402 is configured to determine, based on the ground feature information of each point, the points not belonging to the ground; filter, based on the height of the first objects to which the non-ground points belong, the points included in second objects whose height is greater than a first preset value from the non-ground points; and filter out, from the points included in the second objects, the points included in the target object having the straight-line shape, taking the filtered points as the target point cloud.
In a possible implementation, the second acquisition module 402 is configured to determine the distance from each of the second objects to the road boundary; and take the points included in the second objects whose distance is less than a second preset value as the target point cloud of the target object having the straight-line shape, where the category of the target object having the straight-line shape is the road-pole category.
In a possible implementation, the second acquisition module 402 is configured to determine the number of points included in each of the second objects; and take the points included in the second objects whose number is less than a third preset value as the target point cloud of the target object having the straight-line shape.
In a possible implementation, the determination module 403 is configured to determine, based on the target pixels, a first equation of the corresponding target object on the two-dimensional plane; determine, based on the target point cloud, a second equation of the corresponding target object in three-dimensional space; and determine, based on the first equation and the second equation, the extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data.
In a possible implementation, the determination module 403 is configured to select, from the first equations, the first equations corresponding to a preset number of target objects; determine the second equation matching the first equation corresponding to each selected target object; determine, based on the preset number of first equations and the second equations matching the preset number of first equations, initial extrinsic information between the capture device that captured the target image and the radar that collected the point cloud data; and determine the extrinsic calibration information based on the initial extrinsic information, the target pixels, and the target point cloud.
In a possible implementation, the preset number includes a first number and a second number; the determination module 403 is configured to select, from the first equations, the first number of first equations corresponding to target objects of the road-pole category and the second number of first equations corresponding to target objects of the ground-line category; and to determine the second equation matching the first equation corresponding to each selected target object of the ground-line category, and the second equation matching the first equation corresponding to each selected target object of the road-pole category.
In a possible implementation, the determination module 403 is configured to convert, based on the target pixels, the target image into a binarized image; determine, based on the binarized image, the target pixels, the initial extrinsic information, and the target point cloud, matching degree information between the target point cloud and the target pixels; and adjust the initial extrinsic information based on the matching degree information to obtain the extrinsic calibration information.
In a possible implementation, the determination module 403 is configured to determine, based on the binarized image, distance information between each pixel of the binarized image and the target object to which the target pixels belong; determine, based on the initial extrinsic information, among the pixels of the binarized image the matching pixel that matches each point of the target point cloud; take the distance information between the matching pixel and the target object to which the target pixels belong as the distance information of the point of the target point cloud corresponding to the matching pixel; and determine the matching degree information based on the distance information of each point of the target point cloud and the initial extrinsic information.
In a possible implementation, the determination module 403 is configured to adjust the initial extrinsic information based on the matching degree information until the matching degree corresponding to the matching degree information is largest, taking the adjusted initial extrinsic information at the largest matching degree as the extrinsic calibration information.
For descriptions of the processing flow of each module of the apparatus and the interaction flow between the modules, reference may be made to the relevant descriptions in the above method embodiments, not detailed here.
An embodiment of the present disclosure also provides a computer device. As shown in FIG. 5, a schematic structural diagram of a computer device provided by an embodiment of the present disclosure, the device includes:
a processor 51 and a memory 52; the memory 52 stores machine-readable instructions executable by the processor 51, and the processor 51 is configured to execute the machine-readable instructions stored in the memory 52. When the machine-readable instructions are executed by the processor 51, the processor 51 performs the following steps: acquiring a target image and point cloud data of the same scene; determining target pixels of a target object having a straight-line shape in the target image and acquiring a target point cloud of the target object having the straight-line shape in the point cloud data, the target object being an element associated with the road; and determining, based on the target pixels and the target point cloud, extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data.
The memory 52 includes an internal memory 521 and an external memory 522; the internal memory 521 temporarily stores the operation data of the processor 51 and the data exchanged with the external memory 522, such as a hard disk, and the processor 51 exchanges data with the external memory 522 through the internal memory 521.
For the specific execution process of the above instructions, reference may be made to the steps of the calibration method described in the embodiments of the present disclosure, not repeated here.
An embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the calibration method described in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the calibration method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to perform the steps of the calibration method described in the above method embodiments, for which reference may be made to the above method embodiments, not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, for example a software development kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working process of the systems and apparatuses described above, reference may be made to the corresponding process in the foregoing method embodiments, not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; as another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such an understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present disclosure. The aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the embodiments described above are merely specific implementations of the present disclosure, intended to illustrate rather than limit its technical solutions, and the scope of protection of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the art may, within the technical scope disclosed herein, still modify the technical solutions recorded in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some of the technical features; such modifications, changes, or substitutions do not remove the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall all be covered by the scope of protection of the present disclosure. The scope of protection of the present disclosure shall therefore be subject to the scope of protection of the claims.

Claims (20)

  1. A calibration method, characterized in that it comprises:
    acquiring a target image and point cloud data of the same scene;
    determining target pixels of a target object having a straight-line shape in the target image, and acquiring a target point cloud of the target object having the straight-line shape in the point cloud data, the target object being an element associated with a road;
    determining, based on the target pixels and the target point cloud, extrinsic calibration information between a capture device that captured the target image and a radar that collected the point cloud data.
  2. The method according to claim 1, characterized in that the target object comprises multiple target objects located in different planes or in the same plane.
  3. The method according to claim 1 or 2, characterized in that determining the target pixels of the target object having the straight-line shape in the target image comprises:
    acquiring a target category of the target object having the straight-line shape;
    determining, based on the target image, the category of each pixel in the target image;
    filtering out, based on the category of each pixel, the target pixels corresponding to the target category.
  4. The method according to claim 3, characterized in that the target category comprises a ground-line category;
    filtering out, based on the category of each pixel, the target pixels corresponding to the target category comprises:
    determining, based on the category of each pixel, the pixels belonging to the ground category;
    determining, based on the brightness information of each pixel belonging to the ground category, the pixels belonging to the ground-line category, and taking the pixels belonging to the ground-line category as the target pixels.
  5. The method according to claim 3 or 4, characterized in that the target category comprises a road-pole category;
    filtering out, based on the category of each pixel, the target pixels corresponding to the target category comprises:
    determining, based on the category of each pixel, the pixels belonging to the road-pole category, and taking the pixels belonging to the road-pole category as the target pixels.
  6. The method according to any one of claims 1 to 5, characterized in that acquiring the target point cloud of the target object having the straight-line shape in the point cloud data comprises:
    determining, based on the point cloud data, ground feature information of each point in the point cloud data;
    determining, based on the ground feature information of each point, the target point cloud of the target object having the straight-line shape from the point cloud data.
  7. The method according to claim 6, characterized in that determining, based on the ground feature information of each point, the target point cloud of the target object having the straight-line shape from the point cloud data comprises:
    determining, based on the ground feature information of each point, the points belonging to the ground;
    determining, based on the reflection intensity information of each of the points on the ground, the points belonging to the ground-line category, and taking the points belonging to the ground-line category as the target point cloud of the target object having the straight-line shape.
  8. The method according to claim 6, characterized in that determining, based on the ground feature information of each point, the target point cloud of the target object having the straight-line shape from the point cloud data comprises:
    determining, based on the ground feature information of each point, the points not belonging to the ground;
    filtering, based on the height of the first objects to which the points not on the ground belong, the points included in second objects whose height is greater than a first preset value from the points not on the ground;
    filtering out, from the points included in the second objects, the points included in the target object having the straight-line shape, and taking the filtered points included in the target object having the straight-line shape as the target point cloud.
  9. The method according to claim 8, characterized in that filtering out, from the points included in the second objects, the points included in the target object having the straight-line shape comprises:
    determining the distance from each of the second objects to the road boundary;
    taking the points included in the second objects whose distance is less than a second preset value as the target point cloud of the target object having the straight-line shape, wherein the category of the target object having the straight-line shape is a road-pole category.
  10. The method according to claim 8, characterized in that filtering out, from the points included in the second objects, the points included in the target object having the straight-line shape comprises:
    determining the number of points included in each of the second objects;
    taking the points included in the second objects whose number is less than a third preset value as the target point cloud of the target object having the straight-line shape.
  11. The method according to any one of claims 1 to 10, characterized in that determining, based on the target pixels and the target point cloud, the extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data comprises:
    determining, based on the target pixels, a first equation of the target object corresponding to the target pixels on a two-dimensional plane;
    determining, based on the target point cloud, a second equation of the target object corresponding to the target point cloud in three-dimensional space;
    determining, based on the first equation and the second equation, the extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data.
  12. The method according to claim 11, characterized in that determining, based on the first equation and the second equation, the extrinsic calibration information between the capture device that captured the target image and the radar that collected the point cloud data comprises:
    selecting, from the first equations, the first equations corresponding to a preset number of target objects;
    determining the second equation matching the first equation corresponding to each selected target object;
    determining, based on the preset number of first equations and the second equations matching the preset number of first equations, initial extrinsic information between the capture device that captured the target image and the radar that collected the point cloud data;
    determining the extrinsic calibration information based on the initial extrinsic information, the target pixels, and the target point cloud.
  13. The method according to claim 12, characterized in that selecting, from the first equations, the first equations corresponding to a preset number of target objects comprises:
    selecting, from the first equations, a first number of first equations corresponding to target objects of the road-pole category and a second number of first equations corresponding to target objects of the ground-line category;
    determining the second equation matching the first equation corresponding to each selected target object comprises:
    determining the second equation matching the first equation corresponding to each selected target object of the ground-line category, and the second equation matching the first equation corresponding to each selected target object of the road-pole category.
  14. The method according to claim 12 or 13, characterized in that determining the extrinsic calibration information based on the initial extrinsic information, the target pixels, and the target point cloud comprises:
    converting, based on the target pixels, the target image into a binarized image;
    determining, based on the binarized image, the target pixels, the initial extrinsic information, and the target point cloud, matching degree information between the target point cloud and the target pixels;
    adjusting the initial extrinsic information based on the matching degree information to obtain the extrinsic calibration information.
  15. The method according to claim 14, characterized in that determining, based on the binarized image, the target pixels, the initial extrinsic information, and the target point cloud, the matching degree information between the target point cloud and the target pixels comprises:
    determining, based on the binarized image, distance information between each pixel of the binarized image and the target object to which the target pixels belong;
    determining, based on the initial extrinsic information, among the pixels of the binarized image the matching pixel that matches each point of the target point cloud;
    taking the distance information between the matching pixel and the target object to which the target pixels belong as the distance information of the point of the target point cloud corresponding to the matching pixel;
    determining the matching degree information based on the distance information of each point of the target point cloud and the initial extrinsic information.
  16. The method according to claim 14 or 15, characterized in that adjusting the initial extrinsic information based on the matching degree information to obtain the extrinsic calibration information comprises:
    adjusting the initial extrinsic information based on the matching degree information until the matching degree corresponding to the matching degree information is largest, and taking the adjusted initial extrinsic information at the largest matching degree as the extrinsic calibration information.
  17. A calibration apparatus, characterized in that it comprises:
    a first acquisition module, configured to acquire a target image and point cloud data of the same scene;
    a second acquisition module, configured to determine target pixels of a target object having a straight-line shape in the target image and acquire a target point cloud of the target object having the straight-line shape in the point cloud data, the target object being an element associated with a road;
    a determination module, configured to determine, based on the target pixels and the target point cloud, extrinsic calibration information between a capture device that captured the target image and a radar that collected the point cloud data.
  18. A computer device, characterized in that it comprises a processor and a memory, the memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps of the calibration method according to any one of claims 1 to 16.
  19. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium; when the computer program is run by a computer device, the computer device performs the steps of the calibration method according to any one of claims 1 to 16.
  20. A computer program, stored on a storage medium; when the computer program is run by a computer device, the computer device performs the steps of the calibration method according to any one of claims 1 to 16.
PCT/CN2022/077622 2021-02-26 2022-02-24 一种标定方法、装置、计算机设备和存储介质 WO2022179549A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110218895.6A CN112802126A (zh) 2021-02-26 2021-02-26 一种标定方法、装置、计算机设备和存储介质
CN202110218895.6 2021-02-26

Publications (1)

Publication Number Publication Date
WO2022179549A1 true WO2022179549A1 (zh) 2022-09-01

Family

ID=75816077

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/077622 WO2022179549A1 (zh) 2021-02-26 2022-02-24 一种标定方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN112802126A (zh)
WO (1) WO2022179549A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802126A (zh) * 2021-02-26 2021-05-14 上海商汤临港智能科技有限公司 一种标定方法、装置、计算机设备和存储介质
CN114241063A (zh) * 2022-02-22 2022-03-25 聚时科技(江苏)有限公司 一种基于深度霍夫变换的多传感器外参在线标定方法
CN114862866B (zh) * 2022-07-11 2022-09-20 深圳思谋信息科技有限公司 标定板的检测方法、装置、计算机设备和存储介质
CN115861439B (zh) * 2022-12-08 2023-09-29 重庆市信息通信咨询设计院有限公司 一种深度信息测量方法、装置、计算机设备和存储介质

Citations (6)

Publication number Priority date Publication date Assignee Title
CN108345822A (zh) * 2017-01-22 2018-07-31 腾讯科技(深圳)有限公司 一种点云数据处理方法及装置
US20180224289A1 (en) * 2017-02-03 2018-08-09 Ushr, Inc. Active driving map for self-driving road vehicle
CN110766731A (zh) * 2019-10-21 2020-02-07 武汉中海庭数据技术有限公司 一种全景影像与点云自动配准的方法、装置及存储介质
CN111383279A (zh) * 2018-12-29 2020-07-07 阿里巴巴集团控股有限公司 外参标定方法、装置及电子设备
CN112154446A (zh) * 2019-09-19 2020-12-29 深圳市大疆创新科技有限公司 立体车道线确定方法、装置和电子设备
CN112802126A (zh) * 2021-02-26 2021-05-14 上海商汤临港智能科技有限公司 一种标定方法、装置、计算机设备和存储介质

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JP5189556B2 (ja) * 2009-05-22 2013-04-24 富士重工業株式会社 車線検出装置
CN105701478B (zh) * 2016-02-24 2019-03-26 腾讯科技(深圳)有限公司 杆状地物提取的方法和装置
CN108304749A (zh) * 2017-01-13 2018-07-20 比亚迪股份有限公司 道路减速线识别方法、装置及车辆
US10754035B2 (en) * 2017-01-17 2020-08-25 Aptiv Technologies Limited Ground classifier system for automated vehicles
CN112069856B (zh) * 2019-06-10 2024-06-14 商汤集团有限公司 地图生成方法、驾驶控制方法、装置、电子设备及系统
CN111127563A (zh) * 2019-12-18 2020-05-08 北京万集科技股份有限公司 联合标定方法、装置、电子设备及存储介质
CN112017248B (zh) * 2020-08-13 2022-04-01 河海大学常州校区 一种基于点线特征的2d激光雷达相机多帧单步标定方法
CN112017240B (zh) * 2020-08-18 2022-08-26 浙江大学 一种面向无人叉车的托盘识别定位方法

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN108345822A (zh) * 2017-01-22 2018-07-31 腾讯科技(深圳)有限公司 一种点云数据处理方法及装置
US20180224289A1 (en) * 2017-02-03 2018-08-09 Ushr, Inc. Active driving map for self-driving road vehicle
CN111383279A (zh) * 2018-12-29 2020-07-07 阿里巴巴集团控股有限公司 外参标定方法、装置及电子设备
CN112154446A (zh) * 2019-09-19 2020-12-29 深圳市大疆创新科技有限公司 立体车道线确定方法、装置和电子设备
CN110766731A (zh) * 2019-10-21 2020-02-07 武汉中海庭数据技术有限公司 一种全景影像与点云自动配准的方法、装置及存储介质
CN112802126A (zh) * 2021-02-26 2021-05-14 上海商汤临港智能科技有限公司 一种标定方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN112802126A (zh) 2021-05-14

Similar Documents

Publication Publication Date Title
WO2022179549A1 (zh) 一种标定方法、装置、计算机设备和存储介质
CN111968172B (zh) 料场物料的体积测量方法及系统
Fan et al. Registration of optical and SAR satellite images by exploring the spatial relationship of the improved SIFT
CN109520500B (zh) 一种基于终端拍摄图像匹配的精确定位及街景库采集方法
CN112581629A (zh) 增强现实显示方法、装置、电子设备及存储介质
CN111028358B (zh) 室内环境的增强现实显示方法、装置及终端设备
WO2021136386A1 (zh) 数据处理方法、终端和服务器
CN105721853A (zh) 用于深度图生成的数码相机的配置设置
CN112200854B (zh) 一种基于视频图像的叶类蔬菜三维表型测量方法
CN104331682A (zh) 一种基于傅里叶描述子的建筑物自动识别方法
CN110533774B (zh) 一种基于智能手机的三维模型重建方法
CN109697749A (zh) 一种用于三维建模的方法和装置
WO2023279584A1 (zh) 一种目标检测方法、目标检测装置及机器人
CN109242787A (zh) 一种中小学艺术测评中绘画录入方法
CN115908774B (zh) 一种基于机器视觉的变形物资的品质检测方法和装置
CN111507340B (zh) 一种基于三维点云数据的目标点云数据提取方法
CN114972646B (zh) 一种实景三维模型独立地物的提取与修饰方法及系统
CN113077523A (zh) 一种标定方法、装置、计算机设备及存储介质
CN116664892A (zh) 基于交叉注意与可形变卷积的多时相遥感图像配准方法
CN116051537A (zh) 基于单目深度估计的农作物株高测量方法
CN110910379A (zh) 一种残缺检测方法及装置
CN114119695A (zh) 一种图像标注方法、装置及电子设备
CN112446926B (zh) 一种激光雷达与多目鱼眼相机的相对位置标定方法及装置
CN117745850A (zh) 地图矢量化生成方法、装置和服务器
CN114913246B (zh) 相机标定方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22758913

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22758913

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.02.2024)