CN111123912B - Calibration method and device for vehicle positioning coordinates

Calibration method and device for vehicle positioning coordinates

Info

Publication number: CN111123912B
Authority: CN (China)
Prior art keywords: calibration, target, coordinates, image, point
Legal status: Active (granted)
Application number: CN201911195997.XA
Other languages: Chinese (zh)
Other versions: CN111123912A (application publication)
Inventors: 龚伟林, 郭晋文, 郭立鹏
Assignee: Suzhou Zhijia Technology Co Ltd
Filing: application CN201911195997.XA filed by Suzhou Zhijia Technology Co Ltd; granted as CN111123912B

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Abstract

An embodiment of the present application provides a method and a device for calibrating vehicle positioning coordinates. The method comprises the following steps: fixing a calibration plate in advance, where the edge region and the inner region of the calibration plate carry reflectors of different colors; during calibration, controlling the target vehicle to move and, at a plurality of different time points, acquiring a plurality of point cloud data sets and a plurality of image data sets of the calibration plate; synchronously matching the point cloud data and the image data by their corresponding time points to obtain matching groups, each containing the point cloud data and the image data of one time point; determining, from the point cloud data and the image data in each matching group, the radar coordinates, left-lens image coordinates, and right-lens image coordinates for the time point corresponding to that group, thereby obtaining matched coordinate pairs; and determining the calibration transformation parameters of the positioning coordinates from the matched coordinate pairs. This resolves the low efficiency and poor accuracy of existing calibration methods.

Description

Method and device for calibrating vehicle positioning coordinates
Technical Field
The present application relates to the technical field of autonomous driving, and in particular to a method and a device for calibrating vehicle positioning coordinates.
Background
When a vehicle operates in an autonomous or assisted driving mode, it typically detects target objects in the driving environment through two different positioning modalities: one based on a lidar and one based on a binocular camera comprising a left lens and a right lens. The detection data obtained from the two modalities are then combined to determine the position of each target object relative to the vehicle, so that the vehicle can be controlled to avoid the object and drive safely.
Before the detection data from the lidar and the binocular camera can be combined to determine the position of a target object relative to the vehicle, the positioning coordinates that the target vehicle obtains through the two modalities must be calibrated.
Existing calibration methods are cumbersome to carry out, which makes them both inefficient and inaccurate, and no effective solution to these problems has yet been proposed.
Disclosure of Invention
An embodiment of the present application provides a method and a device for calibrating vehicle positioning coordinates, which solve the low efficiency and poor accuracy of existing methods, simplify the calibration process, and achieve efficient, accurate calibration by tracking a target point.
An embodiment of the present application provides a method for calibrating vehicle positioning coordinates, comprising the following steps:
in response to a calibration request for the positioning coordinates, controlling a target vehicle to be calibrated to move and, at a plurality of time points, acquiring a plurality of point cloud data sets of a calibration plate through a lidar and a plurality of image data sets of the calibration plate through a binocular camera; where the calibration plate carries a target point, a reflector of a first color is arranged in the edge region of the calibration plate, a reflector of a second color is arranged in the region between the edge region and the target point, and the calibration plate is fixed at a preset position;
establishing a plurality of matching groups according to the time points corresponding to the point cloud data and to the image data, where each matching group contains the point cloud data and the image data corresponding to one and the same time point;
determining, from the point cloud data and the image data contained in each matching group, the radar coordinates, left-lens image coordinates, and right-lens image coordinates of the target point relative to the target vehicle measured at the time point corresponding to that matching group, thereby obtaining multiple sets of matched coordinate pairs;
determining calibration transformation parameters for the positioning coordinates according to the multiple sets of matched coordinate pairs, where the calibration transformation parameters comprise: the intrinsic parameters of the left lens, the intrinsic parameters of the right lens, the extrinsic parameters between the left lens and the right lens, and the extrinsic parameters between the binocular camera and the lidar.
In one embodiment, after the calibration transformation parameters of the positioning coordinates are determined from the multiple sets of matched coordinate pairs, the method further includes:
acquiring point cloud data of a target obstacle through the lidar, and acquiring image data of the target obstacle through the binocular camera;
determining the current position coordinates of the target obstacle relative to the target vehicle from the point cloud data of the target obstacle, the image data of the target obstacle, and the calibration transformation parameters;
controlling the target vehicle to drive around the target obstacle according to the current relative position coordinates.
In one embodiment, determining the left-lens image coordinates and right-lens image coordinates of the target point relative to the target vehicle measured at the time point corresponding to a matching group from the image data contained in that group includes:
searching the left lens image and the right lens image for regions whose brightness contrast exceeds a preset contrast threshold, thereby determining the image data of the calibration plate region in the left lens image and in the right lens image, where the image data comprise the left lens image and the right lens image;
determining, through a Hough transform, the left-lens image coordinates of the target point relative to the target vehicle from the image data of the calibration plate region in the left lens image, and the right-lens image coordinates of the target point relative to the target vehicle from the image data of the calibration plate region in the right lens image.
In one embodiment, determining the radar coordinates of the target point relative to the target vehicle measured at the time point corresponding to a matching group from the point cloud data contained in that group includes:
extracting the point cloud data located in a preset region of interest from the point cloud data;
separating, according to point cloud intensity, the points whose intensity exceeds a preset intensity from the point cloud data of the preset region of interest, to serve as the point cloud data of the calibration plate;
determining the three-dimensional coordinates of the target point from the point cloud data of the calibration plate, as the radar coordinates of the target point relative to the target vehicle.
In one embodiment, after the calibration transformation parameters of the positioning coordinates are determined from the multiple sets of matched coordinate pairs, the method further includes:
reconstructing the three-dimensional image coordinates of the target point relative to the target vehicle from the left-lens and right-lens image coordinates in each matched coordinate pair;
determining an alignment error from the three-dimensional image coordinates and the radar coordinates in the matched coordinate pair;
optimizing the calibration transformation parameters according to the alignment error.
An embodiment of the present application further provides a device for calibrating vehicle positioning coordinates, comprising:
an acquisition module, configured to respond to a calibration request for the positioning coordinates by controlling a target vehicle to be calibrated to move and, at a plurality of time points, acquiring a plurality of point cloud data sets of a calibration plate through a lidar and a plurality of image data sets of the calibration plate through a binocular camera; where the calibration plate carries a target point, a reflector of a first color is arranged in the edge region of the calibration plate, a reflector of a second color is arranged in the region between the edge region and the target point, and the calibration plate is fixed at a preset position;
a matching module, configured to establish a plurality of matching groups according to the time points corresponding to the point cloud data and to the image data, where each matching group contains the point cloud data and the image data corresponding to one and the same time point;
a first determining module, configured to determine, from the point cloud data and the image data contained in each matching group, the radar coordinates, left-lens image coordinates, and right-lens image coordinates of the target point relative to the target vehicle measured at the time point corresponding to that matching group, thereby obtaining multiple sets of matched coordinate pairs;
a second determining module, configured to determine calibration transformation parameters for the positioning coordinates according to the multiple sets of matched coordinate pairs, where the calibration transformation parameters comprise: the intrinsic parameters of the left lens, the intrinsic parameters of the right lens, the extrinsic parameters between the left lens and the right lens, and the extrinsic parameters between the binocular camera and the lidar.
In one embodiment, the device further comprises an application module, configured to acquire point cloud data of a target obstacle through the lidar and image data of the target obstacle through the binocular camera; determine the current position coordinates of the target obstacle relative to the target vehicle from the point cloud data of the target obstacle, the image data of the target obstacle, and the calibration transformation parameters; and control the target vehicle to drive around the target obstacle according to the current relative position coordinates.
In one embodiment, the device further comprises an optimization module, configured to reconstruct the three-dimensional image coordinates of the target point relative to the target vehicle from the left-lens and right-lens image coordinates in each matched coordinate pair; determine an alignment error from the three-dimensional image coordinates and the radar coordinates in the matched coordinate pair; and optimize the calibration transformation parameters according to the alignment error.
An embodiment of the present application further provides an electronic device, comprising a processor and a memory storing instructions executable by the processor. When executing the instructions, the processor implements the following: in response to a calibration request for the positioning coordinates, controlling a target vehicle to be calibrated to move and, at a plurality of time points, acquiring a plurality of point cloud data sets of a calibration plate through a lidar and a plurality of image data sets of the calibration plate through a binocular camera, where the calibration plate carries a target point, a reflector of a first color is arranged in the edge region of the calibration plate, a reflector of a second color is arranged in the region between the edge region and the target point, and the calibration plate is fixed at a preset position; establishing a plurality of matching groups according to the time points corresponding to the point cloud data and to the image data, where each matching group contains the point cloud data and the image data corresponding to one and the same time point; determining, from the point cloud data and the image data contained in each matching group, the radar coordinates, left-lens image coordinates, and right-lens image coordinates of the target point relative to the target vehicle measured at the time point corresponding to that matching group, thereby obtaining multiple sets of matched coordinate pairs; and determining calibration transformation parameters for the positioning coordinates according to the multiple sets of matched coordinate pairs, where the calibration transformation parameters comprise the intrinsic parameters of the left lens, the intrinsic parameters of the right lens, the extrinsic parameters between the left lens and the right lens, and the extrinsic parameters between the binocular camera and the lidar.
An embodiment of the present application further provides a computer-readable storage medium storing computer instructions that, when executed, implement the same steps as described above for the electronic device.
In the embodiments of the present application, a calibration plate is fixed at a preset position before calibration, with reflectors of different colors arranged in its edge region and inner region. During calibration, the target vehicle is first controlled to move, and at a plurality of different time points a plurality of point cloud data sets of the calibration plate are acquired through the lidar while a plurality of image data sets of the plate are acquired through the binocular camera. Matching groups containing the point cloud data and image data of the same time point are then established by synchronous matching according to the corresponding time points. Further, the radar coordinates, left-lens image coordinates, and right-lens image coordinates for the time point corresponding to each matching group are determined from the point cloud data and image data in that group, yielding a plurality of matched coordinate pairs, from which the calibration transformation parameters of the positioning coordinates are determined. Repeated calibration passes are therefore unnecessary, which resolves the low efficiency and poor accuracy of existing calibration methods, simplifies the calibration process, and achieves efficient, accurate calibration by tracking the target point.
Drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some of the embodiments described in the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for calibrating vehicle positioning coordinates according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the method for calibrating vehicle positioning coordinates applied in an example scenario, according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the method for calibrating vehicle positioning coordinates applied in another example scenario, according to an embodiment of the present application;
FIG. 4 is a structural diagram of a device for calibrating vehicle positioning coordinates according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device based on the method for calibrating vehicle positioning coordinates according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are obviously only a part of the embodiments of the present application, not all of them; all other embodiments derived by those skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Generally, when a vehicle enters an autonomous driving mode or a semi-autonomous mode such as assisted driving, several positioning modalities are usually combined to detect obstacles, so that the relative position of each obstacle and the vehicle in the driving environment can be determined accurately and a corresponding driving strategy generated to avoid it.
For example, the vehicle acquires point cloud data of an obstacle through an on-board lidar and computes the obstacle's radar coordinates in a radar coordinate system (a three-dimensional coordinate system). Meanwhile, the vehicle captures image data of the obstacle through an on-board binocular camera comprising a left lens and a right lens and computes the obstacle's image coordinates in an image coordinate system (a two-dimensional coordinate system); the depth can be recovered from the two images using their parallax. Combining the two types of positioning coordinates then allows the positioning coordinates of the obstacle relative to the current vehicle to be determined more accurately.
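The parallax relation mentioned above can be made concrete with a short sketch (illustrative only, not part of the patent; the function name and the sample focal length, baseline, and pixel columns are assumptions):

```python
def depth_from_disparity(x_left: float, x_right: float,
                         focal_px: float, baseline_m: float) -> float:
    """Depth of a point seen by a rectified stereo pair.

    For rectified left/right lenses, Z = f * B / d, where the disparity
    d = x_left - x_right is measured in pixels, f is the focal length in
    pixels, and B is the baseline between the lenses in meters.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("a point in front of the rig needs positive disparity")
    return focal_px * baseline_m / disparity

# e.g. f = 1200 px, B = 0.5 m, disparity = 640 - 608 = 32 px -> Z = 18.75 m
print(depth_from_disparity(640.0, 608.0, 1200.0, 0.5))
```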
However, before the two types of positioning coordinates obtained through these two positioning modalities (radar coordinates and image coordinates for short) can be used together to determine the position of an obstacle relative to the vehicle, the positioning coordinates that the vehicle obtains through the two modalities must be calibrated in advance.
The specific implementation of existing calibration methods is complex. For example, in an existing method a checkerboard is used for a first calibration, in which the intrinsic parameters of the left lens and of the right lens are calibrated separately. After the intrinsic parameters of both lenses have been determined, the checkerboard is reused for a second calibration to determine the extrinsic parameters between the left lens and the right lens. Finally, a third calibration is performed between the lidar and the camera to determine the extrinsic parameters between the camera and the radar. Moreover, because several calibration passes are needed and each pass may introduce new errors, the accumulated transmission of errors across passes makes the final calibration result relatively inaccurate.
To address these problems, a calibration plate can be fixed at a preset position before calibration, with reflectors of different colors arranged in its edge region and inner region. During calibration, the target vehicle is first controlled to move, and at a plurality of different time points point cloud data sets of the calibration plate are acquired through the lidar while image data sets of the plate are acquired through the binocular camera. By controlling the vehicle's movement and tracking the target point, many point cloud and image data sets of high reference value can be acquired efficiently. Matching groups containing the point cloud data and image data of the same time point are then established by synchronous matching according to the corresponding time points. Further, the radar coordinates, left-lens image coordinates, and right-lens image coordinates for the time point corresponding to each matching group are determined from the data in that group, yielding a plurality of matched coordinate pairs, from which the calibration transformation parameters of the positioning coordinates are determined. In this way, a single calibration pass suffices to determine the intrinsic parameters of the left lens, the intrinsic parameters of the right lens, the extrinsic parameters between the left and right lenses of the binocular camera, and the extrinsic parameters between the binocular camera and the lidar. This resolves the low efficiency and poor accuracy of existing calibration methods, simplifies the calibration process, and achieves efficient, accurate calibration by tracking the target point.
Based on this idea, an embodiment of the present application provides a method for calibrating vehicle positioning coordinates. Refer to the processing flowchart shown in FIG. 1. In a specific implementation, the method may comprise the following steps.
S101: in response to a calibration request for the positioning coordinates, controlling a target vehicle to be calibrated to move and, at a plurality of time points, acquiring a plurality of point cloud data sets of a calibration plate through a lidar and a plurality of image data sets of the calibration plate through a binocular camera; where the calibration plate carries a target point, a reflector of a first color is arranged in the edge region of the calibration plate, a reflector of a second color is arranged in the region between the edge region and the target point, and the calibration plate is fixed at a preset position.
In one embodiment, the target vehicle may be a vehicle equipped with a lidar, a binocular camera, and other related electronic devices (e.g., a processor). The lidar collects point cloud data from which radar coordinates in a radar coordinate system are determined. The binocular camera comprises a left lens and a right lens; in a specific implementation, two different images of a target object, a left lens image and a right lens image, are captured through the two lenses as the image data of the object measured by the binocular camera. The depth can then be computed from the two images using their parallax, the position of the object relative to the target vehicle determined, and the corresponding image coordinates in an image coordinate system obtained. By integrating the two types of coordinates, the target vehicle can determine the relative position of a target object in the driving environment, such as an obstacle, more accurately, yielding specific positioning coordinates for the object. A corresponding driving control strategy can be generated from these positioning coordinates to control the target vehicle to drive around the obstacle, or corresponding prompts or assistance strategies can be generated to help the driver steer around it.
Specifically, the target vehicle may be a vehicle that supports an autonomous driving mode, a semi-autonomous driving mode, or only an assisted driving mode.
In one embodiment, the method may be applied in a scenario where a manufacturer, before a target vehicle leaves the factory, calibrates the positioning coordinates the vehicle determines through the lidar against those it determines through the binocular camera. It may equally be applied in a scenario where a user recalibrates a target vehicle that is already in use.
In one embodiment, the target vehicle may receive a calibration request for the positioning coordinates from the manufacturer or from a user, and then carry out the calibration.
In one embodiment, before the calibration itself, a calibration plate may be fixed in place as the object used in the positioning-coordinate calibration. Specifically, as shown in FIG. 2, the calibration plate may be a circular flat plate, for example one with a diameter of 1.8 meters. In a specific implementation, the center of the plate, e.g., the center of the circular plate, may be chosen as the target point. The shape and size of the calibration plate and the choice of target point are of course only illustrative: depending on the situation, a plate of another shape and size, such as a square plate with a side length of 2 meters, may be used, and another point on the plate may be chosen as the target point.
To ease the acquisition and processing of the relevant data later in the calibration, a reflector of a first color may be arranged in the edge region of the calibration plate, e.g., the ring at the edge of the circular plate shown in FIG. 2, and a reflector of a second color in the region between the edge region and the target point, e.g., the interior of the circular plate enclosed by that ring. The first and second colors are different; and so that the image data of the calibration plate captured by the binocular camera can be processed more accurately, they may be chosen with a pronounced contrast between them, the first color darker, such as red or blue, and the second lighter, such as white or gray. In this embodiment, red reflective strips may be attached to the edge region as the first-color reflector and white reflective strips to the region between the edge region and the target point as the second-color reflector. When the image data captured by the binocular camera are processed later, the region containing the calibration plate can then be found quickly by detecting the region of the image with the largest contrast values and the most pronounced color contrast, and separated accurately from the image data of the plate's immediate surroundings.
In one embodiment, note that once the calibration plate has been fixed at the preset position, its position no longer changes. During calibration, the target vehicle can be controlled to move within the calibration scene while tracking the target point on the fixed plate, yielding the point cloud data and image data acquired at each of a plurality of time points along the way.
In one embodiment, in a specific implementation, in response to a calibration request for the positioning coordinates, the target vehicle to be calibrated is controlled to move across the calibration field where the calibration plate is fixed, and its lidar and binocular camera are kept facing the plate as it moves. At each of a series of time points during the movement, the vehicle acquires point cloud data of the calibration plate through the lidar and, at the same time point, captures a left lens image and a right lens image of the plate through the binocular camera as the image data of the plate. This yields a plurality of point cloud data sets and a plurality of image data sets, each point cloud data set corresponding to one time point and each image data set corresponding to one time point.
In one embodiment, the time points may be equally spaced, for example 30 seconds apart, or unequally spaced, for example with 30 seconds between the first and second time points, 60 seconds between the second and third, and so on. The present application is not limited in this respect.
In this embodiment, because of the movement, the target vehicle may be at different positions, or at the same position, at different time points.
Specifically, as shown in FIG. 3, a field of 100 × 30 meters is chosen as the calibration field, and the calibration plate is fixed at a preset position 5 meters from the target vehicle. The target vehicle is then controlled to adjust its heading so that the lidar and binocular camera mounted on it can observe the calibration plate. After the data acquisition program is started, with the start time as the first time point, the lidar is invoked to acquire the point cloud data of the calibration plate at that time point, and the binocular camera is invoked to capture a left lens image and a right lens image containing the plate as the image data for that time point. Once the acquisition for the first time point is finished, the target vehicle continues to move within the calibration field; when the second time point is reached, the lidar is invoked again to acquire the point cloud data of the plate for that time point, and the binocular camera is invoked to capture a left lens image and a right lens image containing the plate as the corresponding image data, completing the acquisition for the second time point. In this manner, point cloud data and image data of high reference value, each corresponding to one of the time points, can be acquired by tracking the target point on the calibration plate.
In this embodiment, to make the subsequent calibration more accurate, a larger number of time points may be set and a correspondingly larger amount of point cloud data and image data collected; for example, 500 time points may be set, yielding 500 corresponding point cloud data sets and image data sets. A sketch of such an acquisition loop follows.
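A minimal sketch of the acquisition loop is given below; the `lidar` and `camera` objects and their `capture` methods are hypothetical stand-ins for the vehicle's actual sensor interfaces, and the 30-second interval is just one of the spacings mentioned above:

```python
import time

def collect_samples(lidar, camera, num_points: int = 500, interval_s: float = 30.0):
    """Collect timestamped lidar scans and stereo image pairs while the
    target vehicle moves, one sample of each per time point."""
    clouds, images = [], []
    for _ in range(num_points):
        stamp = time.time()
        clouds.append((stamp, lidar.capture()))        # point cloud of the plate
        images.append((stamp, camera.capture_pair()))  # (left image, right image)
        time.sleep(interval_s)                         # wait for the next time point
    return clouds, images
```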
S102: establishing a plurality of matching groups according to the time points corresponding to the point cloud data and to the image data, where each matching group contains the point cloud data and the image data corresponding to one and the same time point.
In one embodiment, aligning the different types of positioning coordinates point by point makes the subsequent calibration easier. In a specific implementation, the acquired point cloud data and image data can be synchronously matched according to their corresponding time points, and the point cloud data and image data corresponding to the same time point placed into one matching group, yielding a plurality of matching groups. The point cloud data and the image data contained in each of these matching groups correspond to the same time point; equivalently, the point cloud data and image data contained in one matching group correspond to one and the same time point.
In one embodiment, it can further be observed that the point cloud data and image data corresponding to the same time point were acquired with the target vehicle at the same position during its movement. The matching groups can therefore also be established by determining the positions at which the point cloud data and the image data were acquired and matching the point cloud data sets with the image data sets by position, in which case the point cloud data and image data contained in each matching group correspond to the same position.
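The timestamp-based synchronous matching can be sketched as follows (the data layout and the 50 ms tolerance are illustrative assumptions; in practice the tolerance depends on the sensors' trigger accuracy):

```python
def build_matching_groups(clouds, images, tol_s: float = 0.05):
    """Pair each point cloud with the stereo frame whose timestamp is
    closest; pairs within `tol_s` seconds form one matching group.

    `clouds` and `images` are lists of (timestamp, data) tuples, as
    produced by an acquisition loop like the one sketched earlier.
    """
    groups = []
    for t_cloud, cloud in clouds:
        t_img, frame = min(images, key=lambda item: abs(item[0] - t_cloud))
        if abs(t_img - t_cloud) <= tol_s:
            groups.append({"time": t_cloud, "cloud": cloud, "frame": frame})
    return groups
```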
S103: determining, from the point cloud data and the image data contained in each matching group, the radar coordinates, left-lens image coordinates, and right-lens image coordinates of the target point relative to the target vehicle measured at the time point corresponding to that matching group, thereby obtaining multiple sets of matched coordinate pairs.
In one embodiment, in a specific implementation, the point cloud data in each matching group can be processed to determine the radar coordinates of the target point relative to the target vehicle measured at the time point corresponding to that group, and the image data in each group (comprising the left lens image and the right lens image) can be processed to determine the left-lens and right-lens image coordinates of the target point relative to the target vehicle measured at that time point.
In one embodiment, since the calibration plate carries reflectors of different colors, the plate region shows a pronounced color-brightness contrast in the image data. To determine the left-lens and right-lens image coordinates of the target point measured at the time point corresponding to a matching group, the left lens image and the right lens image of that group can therefore each be searched for the region whose brightness contrast exceeds a preset contrast threshold, i.e., the region with the most pronounced color contrast. The image data within that region of the left lens image are taken as the image data of the calibration plate region in the left lens image, and likewise for the right lens image.
A Hough transform can then be applied to the image data of the calibration plate region in each image: the circle center extracted from the plate region of the left lens image is taken as the target point, giving the left-lens image coordinates of the target point relative to the target vehicle for that matching group, and the circle center extracted from the plate region of the right lens image likewise gives the right-lens image coordinates. In this way, the left-lens and right-lens image coordinates of the target point in each matching group can be determined efficiently and accurately. Both are two-dimensional coordinates in the image coordinate system.
In one embodiment, to further improve processing efficiency, before searching the left and right lens images for the regions whose brightness contrast exceeds the preset threshold, a preset region of interest can first be located roughly in each image by image recognition, and the contrast search then restricted to that region of interest in each image. This effectively narrows the search range and speeds up the search. The preset region of interest is a range containing the calibration plate; it covers the image data of the range where the plate is located and of the range immediately adjacent to it.
In the above manner, the image coordinates of the target point relative to the target vehicle measured at each time point, i.e., the left-lens and right-lens image coordinates, can be computed; each shares the time point of the matching group whose image data it was determined from.
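The plate search and Hough-transform step for one lens image might look like the following OpenCV sketch (the blur size and Hough parameters are illustrative assumptions and would need tuning to the actual plate and lighting):

```python
import cv2
import numpy as np

def target_point_pixel(gray: np.ndarray, roi=None):
    """Locate the calibration-plate center (the target point) in one
    grayscale lens image and return its pixel coordinates.

    The reflective ring against the lighter interior gives the plate
    region a strong brightness contrast, so a circular Hough transform
    on the (optionally ROI-cropped) image recovers the circle center.
    """
    x0, y0 = 0, 0
    if roi is not None:               # optional region of interest: (x, y, w, h)
        x0, y0, w, h = roi
        gray = gray[y0:y0 + h, x0:x0 + w]
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=200, param1=120, param2=40)
    if circles is None:
        return None
    cx, cy, _ = circles[0, 0]         # strongest circle: (x, y, radius)
    return float(cx + x0), float(cy + y0)
```

Running this on the left and right lens images of one matching group yields the left-lens and right-lens image coordinates of the target point for that time point.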
In one embodiment, to determine the radar coordinates of the target point relative to the target vehicle measured at the time point corresponding to a matching group, the point cloud data located in a preset region of interest can first be extracted from the point cloud data of that group. The preset region of interest is a range containing the calibration plate; it covers the point cloud data of the range where the plate is located and of the range immediately adjacent to it.
Further, since the calibration plate carries reflectors, the points whose intensity exceeds a preset intensity can be screened out of the point cloud data of the preset region of interest, according to the point cloud intensity, and taken as the point cloud data of the calibration plate. This accurately separates the plate's point cloud data from that of the surrounding environment.
The three-dimensional coordinates of the target point are then determined from the point cloud data of the calibration plate, as the radar coordinates of the target point relative to the target vehicle.
Specifically, for example, the point cloud data at the position of the plate's center point (i.e., the target point) can be found from the plate's point cloud data using the minimum circumscribed circle, and the radar coordinates of the target point relative to the target vehicle at the time point corresponding to the matching group computed from the point cloud data at that position.
In the above manner, the radar coordinates of the target point relative to the target vehicle measured at each time point can be computed; each shares the time point of the matching group whose point cloud data it was determined from.
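A sketch of the point cloud side, under the assumption that each scan arrives as an (N, 4) array of x, y, z, intensity; the intensity threshold is illustrative, and the centroid of the plate returns stands in here for the minimum-circumscribed-circle construction described above:

```python
import numpy as np

def plate_center_from_cloud(points: np.ndarray, roi_min, roi_max,
                            intensity_min: float = 180.0):
    """Estimate the radar coordinates of the target point from one scan.

    Points inside the preset region of interest whose intensity exceeds
    the reflective-tape threshold are kept as calibration-plate returns,
    and their centroid approximates the plate center (the target point).
    """
    xyz, intensity = points[:, :3], points[:, 3]
    in_roi = np.all((xyz >= np.asarray(roi_min)) &
                    (xyz <= np.asarray(roi_max)), axis=1)
    plate = xyz[in_roi & (intensity > intensity_min)]
    if len(plate) == 0:
        return None
    return plate.mean(axis=0)         # 3-D radar coordinates of the target point
```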
The radar coordinates, left-lens image coordinates, and right-lens image coordinates for the time point corresponding to each matching group can thus be determined from the point cloud data and image data in that group. Combining the radar coordinates, left-lens image coordinates, and right-lens image coordinates of the same time point yields multiple sets of matched coordinate pairs, each set containing the radar coordinates, left-lens image coordinates, and right-lens image coordinates of one time point.
S104: determining calibration transformation parameters for the positioning coordinates according to the multiple sets of matched coordinate pairs, where the calibration transformation parameters comprise: the intrinsic parameters of the left lens, the intrinsic parameters of the right lens, the extrinsic parameters between the left lens and the right lens, and the extrinsic parameters between the binocular camera and the lidar.
In one embodiment, each set of matched coordinate pairs contains the radar coordinates, left-lens image coordinates, and right-lens image coordinates of the same time point. The data of one set therefore simultaneously reflect the parameter relationships between the left and right lenses of the binocular camera, within the left lens, within the right lens, between each lens and the lidar, and between the binocular camera as a whole and the lidar. The calibration transformation parameters, including the intrinsic parameters of the left lens, the intrinsic parameters of the right lens, the extrinsic parameters between the left and right lenses, and the extrinsic parameters between the binocular camera and the lidar, can thus be determined by a single calibration solution over the multiple sets of matched coordinate pairs. Only one calibration pass is needed to obtain all the required calibration transformation parameters, which reduces the errors introduced by repeated passes and improves calibration accuracy.
In one embodiment, each set of matched coordinate pairs contains both a point pair composed of left-lens image coordinates and radar coordinates and a point pair composed of right-lens image coordinates and radar coordinates. An equivalent large-scale calibration target can therefore be constructed from these matched point pairs during the calibration itself.
In one embodiment, when determining the intrinsic parameters of the left lens, the intrinsic parameters of the right lens, the extrinsic parameters between the left and right lenses, and the extrinsic parameters between the binocular camera and the lidar from the multiple sets of matched coordinate pairs, a PnP (Perspective-n-Point) problem can be solved over the matched coordinate pairs, using the initial intrinsic parameters of the left lens and of the right lens from the camera specifications, to obtain the intrinsic parameters of the left lens, the intrinsic parameters of the right lens, and the extrinsic parameters between the binocular camera and the lidar; the extrinsic parameters between the left and right lenses then follow by composition.
In one embodiment, when solving for the calibration transformation parameters in this manner, it should also be considered that the acquired target points often lie approximately on a plane, which makes the solution less stable. To solve for the calibration transformation parameters more efficiently and stably, the relative pose of the binocular camera and the lidar can first be fixed, their fore-aft offset from the mechanical installation taken as an initial value, and the optimal solution computed from that initial value, so that the calibration transformation parameters are obtained quickly and calibration efficiency further improved.
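One way to realize the single-pass solution for one lens is sketched below with OpenCV; the patent does not prescribe a library, so `cv2.calibrateCamera` (seeded with the mechanical-design intrinsics) and `cv2.solvePnP` are used here as stand-ins for the PnP solution described above:

```python
import cv2
import numpy as np

def calibrate_lens(radar_pts: np.ndarray, pixel_pts: np.ndarray,
                   image_size, K0: np.ndarray):
    """Refine one lens's intrinsics and recover its extrinsics w.r.t.
    the lidar from matched (radar 3-D point, image pixel) pairs.

    radar_pts:  (N, 3) target-point radar coordinates, one per matching group.
    pixel_pts:  (N, 2) image coordinates of the target point in this lens.
    K0:         initial 3x3 intrinsic matrix (mechanical-design values).
    """
    obj = [radar_pts.astype(np.float32)]
    img = [pixel_pts.astype(np.float32).reshape(-1, 1, 2)]
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj, img, image_size, K0.copy(), None,
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)
    ok, rvec, tvec = cv2.solvePnP(radar_pts.astype(np.float32),
                                  pixel_pts.astype(np.float32), K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return K, dist, R, tvec           # intrinsics + lidar-to-lens extrinsics

# Calibrating both lenses this way, the left-to-right extrinsics follow
# by composition: R_lr = R_right @ R_left.T, t_lr = t_right - R_lr @ t_left.
```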
In one embodiment, when the target vehicle is driving and a target obstacle is detected on its path, point cloud data of the obstacle can be acquired through the lidar while a left lens image of the obstacle is captured through the left lens of the binocular camera and a right lens image through the right lens, as the image data of the obstacle. The position of the obstacle relative to the target vehicle can then be determined accurately from the point cloud data of the obstacle, the image data of the obstacle, and the previously determined calibration transformation parameters, making full use of the strengths of the two positioning modalities. The target vehicle is then controlled, according to the position of the obstacle relative to it, to drive around the obstacle, or the driver is assisted in steering around it in time, ensuring the vehicle's driving safety.
In the embodiments of the present application, in contrast to existing methods, a calibration plate is fixed at a preset position before calibration, with reflectors of different colors arranged in its edge region and inner region. During calibration, the target vehicle is first controlled to move, and at a plurality of different time points point cloud data of the calibration plate are acquired through the lidar while image data of the plate are acquired through the binocular camera. Matching groups containing the point cloud data and image data of the same time point are then established by synchronous matching according to the corresponding time points; the radar coordinates, left-lens image coordinates, and right-lens image coordinates for the time point corresponding to each matching group are determined from the data in that group, yielding a plurality of matched coordinate pairs; and the calibration transformation parameters of the positioning coordinates are determined from those pairs. Repeated calibration passes are therefore unnecessary, resolving the low efficiency and poor accuracy of existing calibration methods, simplifying the calibration process, and achieving efficient, accurate calibration by tracking the target point.
In an embodiment, after determining the calibration transformation parameters of the positioning coordinates according to the plurality of sets of matching coordinate pairs, the method may further include the following steps: acquiring point cloud data aiming at a target obstacle through a laser radar, and acquiring image data aiming at the target obstacle through a binocular camera; determining the current relative position coordinates of the target obstacle relative to the target vehicle according to the point cloud data aiming at the target obstacle, the image data aiming at the target obstacle and the calibration transformation parameters; and controlling the target vehicle to run according to the current relative position coordinate so as to bypass the target obstacle.
In an embodiment, after the current relative position coordinates of the target obstacle relative to the target vehicle have been determined in the above manner, a quantified error value may be calculated from the current relative position coordinates, the radar coordinates obtained from the point cloud data, and the image coordinates obtained from the image data. The calibration precision can then be evaluated against this quantified error value, so that the calibration transformation parameters in use can be adjusted and optimized in a targeted manner according to the evaluation result, further improving their accuracy.
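One way such a quantified error could be computed (a sketch, assuming a reprojection-based metric; names and the metric choice are assumptions, not from the original disclosure):

```python
import cv2
import numpy as np

def quantified_error(lidar_pts, pixels, rvec, tvec, K, dist):
    """RMS pixel error of lidar points reprojected with the calibrated
    parameters; a growing value signals that the calibration transformation
    parameters should be re-optimized."""
    proj, _ = cv2.projectPoints(lidar_pts, rvec, tvec, K, dist)
    err = np.linalg.norm(proj.reshape(-1, 2) - pixels, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```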
In an embodiment, determining the left lens image coordinates and right lens image coordinates of the target point relative to the target vehicle, measured at the time point corresponding to a matching group, from the image data contained in that group (the image data comprising the left lens image and the right lens image) may include the following steps: searching the left lens image and the right lens image for regions whose brightness contrast values exceed a preset contrast threshold, thereby determining the image data of the calibration plate area in each image; determining the left lens image coordinates of the target point relative to the target vehicle from the calibration plate area of the left lens image through a Hough transform; and determining the right lens image coordinates of the target point relative to the target vehicle from the calibration plate area of the right lens image through a Hough transform.
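A minimal sketch of this detection for one lens (Python with OpenCV; a circular target point and the threshold values are assumptions, not from the original disclosure):

```python
import cv2
import numpy as np

def locate_target_point(gray):
    """Find the calibration plate by its brightness contrast, then locate
    the target point inside it with a Hough transform."""
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # reflector is bright
    nz = cv2.findNonZero(mask)
    if nz is None:
        return None
    x, y, w, h = cv2.boundingRect(nz)            # calibration plate area
    roi = gray[y:y + h, x:x + w]
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30)
    if circles is None:
        return None
    cx, cy, _ = circles[0][0]
    return float(x + cx), float(y + cy)          # target-point pixel coordinates
```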
In an embodiment, determining the radar coordinates of the target point relative to the target vehicle, measured at the time point corresponding to a matching group, from the point cloud data contained in that group may include the following steps: extracting the point cloud data located in a preset region of interest; separating out, by point cloud intensity, the points whose intensity exceeds a preset intensity threshold, and taking them as the point cloud data of the calibration plate; and determining the three-dimensional coordinates of the target point from the point cloud data of the calibration plate, which serve as the radar coordinates of the target point relative to the target vehicle.
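A sketch of this filtering (Python with NumPy; the intensity threshold and the centroid step are assumptions, not from the original disclosure):

```python
import numpy as np

def target_radar_coords(cloud, roi_min, roi_max, min_intensity=180.0):
    """cloud: (N, 4) array of x, y, z, intensity in the lidar frame.
    Crop to the preset region of interest, keep only high-intensity
    (reflector) returns, and take their centroid as the target point."""
    xyz, intensity = cloud[:, :3], cloud[:, 3]
    in_roi = np.all((xyz >= roi_min) & (xyz <= roi_max), axis=1)
    board = xyz[in_roi & (intensity > min_intensity)]  # calibration plate points
    if board.shape[0] == 0:
        return None
    return board.mean(axis=0)  # radar coordinates relative to the vehicle
```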
In an embodiment, after the calibration transformation parameters of the positioning coordinates have been determined from the plurality of sets of matching coordinate pairs, the method may further include the following steps: reconstructing the image three-dimensional coordinates of the target point relative to the target vehicle from the left lens image coordinates and right lens image coordinates in the matching coordinate pairs; determining an alignment error from these image three-dimensional coordinates and the radar coordinates in the matching coordinate pairs; and optimizing the calibration transformation parameters according to the alignment error.
In this embodiment, it is further considered that the left lens image coordinates and right lens image coordinates in a matching coordinate pair are two-dimensional while the radar coordinates are three-dimensional, and that determining the calibration transformation parameters from these coordinates accounts only for the reprojection error. Calibration transformation parameters obtained in this way therefore position well in the two-dimensional image plane but may still carry some error in three-dimensional space. To obtain a better positioning effect when the calibration transformation parameters are applied in three-dimensional space, and to further improve their accuracy, the image three-dimensional coordinates of the target point relative to the target vehicle can be reconstructed from the left and right lens image coordinates of each matching coordinate pair; an alignment error can be determined from these image three-dimensional coordinates and the corresponding radar coordinates; and the calibration transformation parameters can then be jointly optimized using this alignment error, giving them higher accuracy. As a concrete example, after joint optimization in this manner, the in-application error of the calibrated transformation parameters can be reduced by 1/3, a markedly better result.
In an embodiment, the reconstruction of the image three-dimensional coordinates of the target point relative to the target vehicle from the left lens image coordinates and right lens image coordinates in a matching coordinate pair may, in a specific implementation, be carried out by triangulation.
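A sketch of the triangulation and the resulting alignment error (Python with OpenCV; names are hypothetical, and the radar points are assumed to have been transformed into the left-camera frame with the current extrinsic estimate beforehand):

```python
import cv2
import numpy as np

def alignment_errors(left_px, right_px, P_left, P_right, radar_pts_cam):
    """Triangulate the target from each stereo pair and compare with the
    lidar measurement of the same time point.
    left_px / right_px: (N, 2); P_left / P_right: 3x4 projection matrices;
    radar_pts_cam: (N, 3) radar coordinates expressed in the camera frame."""
    pts4 = cv2.triangulatePoints(P_left, P_right,
                                 left_px.T.astype(np.float64),
                                 right_px.T.astype(np.float64))
    pts3 = (pts4[:3] / pts4[3]).T                        # homogeneous -> 3-D
    return np.linalg.norm(pts3 - radar_pts_cam, axis=1)  # per-point alignment error
```

These per-point errors can then be folded into the joint optimization described above as an additional residual term alongside the reprojection error.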
In one embodiment, it is further considered that the checkerboard used for calibration in existing methods is small, so calibration can only be performed at a short distance from the board, and the resulting calibration transformation parameters carry relatively large errors when applied to positioning over long ranges. In view of this, the embodiment of the present application uses a calibration plate larger than such a checkerboard, fixes it at a preset position, and controls the target vehicle to move while tracking and detecting the target point on the plate. A plurality of image data and radar data can thus be acquired accurately from long range (for example, 100 meters from the calibration plate), so that the calibration transformation parameters determined from these data also retain good precision when applied over long ranges.
From the above description, it can be seen that in the calibration method for driving positioning coordinates provided in the embodiment of the present application, a calibration plate is fixed at a preset position before calibration, with reflectors of different colors arranged in its edge area and its inner area. During calibration, the target vehicle is first controlled to move; a plurality of point cloud data for the calibration plate are acquired through the laser radar at a plurality of different time points, and a plurality of image data for the calibration plate are acquired through the binocular camera at the same time. Matching groups containing the point cloud data and image data of the same time point are then established by synchronous matching; for each matching group, the radar coordinates, left lens image coordinates, and right lens image coordinates of the corresponding time point are determined, yielding a plurality of matching coordinate pairs; and the calibration transformation parameters of the positioning coordinates are determined from these pairs. This solves the technical problems of low calibration efficiency and poor accuracy in existing calibration methods, and achieves the technical effects of simplifying the calibration process and calibrating efficiently and accurately by tracking the target point. Further, the image three-dimensional coordinates of the target point relative to the target vehicle can be reconstructed from the left and right lens image coordinates in the matching coordinate pairs; an alignment error can be determined from these coordinates and the corresponding radar coordinates; and the calibration transformation parameters can be optimized using that alignment error, further improving the calibration accuracy and reducing the calibration error.
Based on the same inventive concept, an embodiment of the present application further provides a calibration device for driving positioning coordinates, as described in the following embodiments. Since the principle by which this device solves the problem is similar to that of the calibration method for driving positioning coordinates, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the device described in the following embodiments is preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated. Fig. 4 is a structural diagram of a calibration device for driving positioning coordinates according to an embodiment of the present application; the device may specifically include an obtaining module 401, a matching module 402, a first determining module 403, and a second determining module 404, which are described in detail below.
The obtaining module 401 may be specifically configured to respond to a calibration request of a positioning coordinate, control a target vehicle to be calibrated to move, obtain, at multiple time points, multiple point cloud data for a calibration plate through a laser radar, and obtain, through a binocular camera, multiple image data for the calibration plate; the calibration plate is provided with a target point, the edge area of the calibration plate is provided with a reflector with a first color, the area between the edge area of the calibration plate and the target point is provided with a reflector with a second color, and the calibration plate is fixed at a preset position point;
the matching module 402 may be specifically configured to establish a plurality of matching groups according to time points corresponding to the point cloud data and time points corresponding to the image data, where the matching groups include the point cloud data and the image data corresponding to the same time points;
the first determining module 403 may be specifically configured to determine, according to the point cloud data and the image data included in the matching group, radar coordinates, left lens image coordinates, and right lens image coordinates of a target point, which are measured at a time point corresponding to the matching group, with respect to the target vehicle, so as to obtain a plurality of sets of matching coordinate pairs;
the second determining module 404 may be specifically configured to determine the calibration transformation parameters of the positioning coordinates according to the multiple sets of matching coordinate pairs, where the calibration transformation parameters include: the internal parameters of the left lens, the internal parameters of the right lens, the external parameters between the left lens and the right lens, and the external parameters between the binocular camera and the laser radar.
In one embodiment, the apparatus may further include an application module, which may be specifically configured to acquire point cloud data for a target obstacle through a lidar and acquire image data for the target obstacle through a binocular camera; determining the current relative position coordinates of the target obstacle relative to the target vehicle according to the point cloud data aiming at the target obstacle, the image data aiming at the target obstacle and the calibration transformation parameters; and controlling the target vehicle to run according to the current relative position coordinate so as to bypass the target obstacle.
In one embodiment, the apparatus may further include an optimization module, which may be specifically configured to reconstruct the image three-dimensional coordinates of the target point relative to the target vehicle according to the left lens image coordinates and right lens image coordinates in the matching coordinate pairs; determine an alignment error according to the image three-dimensional coordinates and the radar coordinates in the matching coordinate pairs; and optimize the calibration transformation parameters according to the alignment error.
In one embodiment, the first determining module 403 may be specifically configured to search the left lens image and the right lens image for regions whose brightness contrast values exceed a preset contrast threshold, thereby determining the image data of the calibration plate area in the left lens image and in the right lens image (the image data comprising the left lens image and the right lens image); determine the left lens image coordinates of the target point relative to the target vehicle from the calibration plate area of the left lens image through a Hough transform; and determine the right lens image coordinates of the target point relative to the target vehicle from the calibration plate area of the right lens image through a Hough transform.
In an embodiment, the first determining module 403 may be further configured to extract point cloud data located in a preset interest region from the point cloud data; according to the point cloud intensity, separating point cloud data with the point cloud intensity larger than the preset intensity from the point cloud data of the preset interest area to serve as the point cloud data of the calibration plate; and determining the three-dimensional coordinates of the target point according to the point cloud data of the calibration plate, and using the three-dimensional coordinates as radar coordinates of the target point relative to the target vehicle.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the system embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
It should be noted that the systems, devices, modules, or units described in the above embodiments may be implemented by a computer chip or entity, or by a product having certain functions. For convenience of description, the above devices are described as divided into various units by function. Of course, when the present application is implemented, the functions of the various units may be realized in the same piece or pieces of software and/or hardware.
Moreover, in the subject specification, adjectives such as first and second may only be used to distinguish one element or action from another element or action without necessarily requiring or implying any actual such relationship or order. References to an element or component or step (etc.) should not be construed as limited to only one of the element, component, or step, but rather to one or more of the element, component, or step, etc., where the context permits.
From the above description, it can be seen that in the calibration device for driving positioning coordinates provided in the embodiment of the present application, a calibration plate is fixed at a preset position before calibration, with reflectors of different colors arranged in its edge area and its inner area. During calibration, the obtaining module controls the target vehicle to move, acquiring a plurality of point cloud data for the calibration plate through the laser radar at a plurality of different time points and a plurality of image data for the calibration plate through the binocular camera. The matching module then establishes matching groups containing the point cloud data and image data of the same time point by synchronous matching according to the time points of the data. The first determining module determines, for each matching group, the radar coordinates, left lens image coordinates, and right lens image coordinates of the corresponding time point, yielding a plurality of matching coordinate pairs, and the second determining module determines the calibration transformation parameters of the positioning coordinates from these pairs. This solves the technical problems of low calibration efficiency and poor accuracy in existing calibration methods, and achieves the technical effects of simplifying the calibration process and calibrating efficiently and accurately by tracking the target point.
An embodiment of the present application further provides an electronic device; reference may be made to Fig. 5, a schematic structural diagram of an electronic device for implementing calibration of driving positioning coordinates according to an embodiment of the present application. The electronic device may specifically include a detection device 51, a processor 52, and a memory 53. The detection device 51 may be specifically configured to respond to a calibration request for the positioning coordinates, control the target vehicle to be calibrated to move, acquire a plurality of point cloud data for the calibration plate through a laser radar at a plurality of time points, and acquire a plurality of image data for the calibration plate through a binocular camera, where the calibration plate is provided with a target point, the edge area of the calibration plate is provided with a reflector of a first color, the area between the edge area and the target point is provided with a reflector of a second color, and the calibration plate is fixed at a preset position point. The processor 52 may be specifically configured to establish a plurality of matching groups according to the time points corresponding to the point cloud data and the time points corresponding to the image data, where a matching group contains the point cloud data and image data corresponding to the same time point; determine, according to the point cloud data and image data contained in each matching group, the radar coordinates, left lens image coordinates, and right lens image coordinates of the target point relative to the target vehicle measured at the corresponding time point, obtaining multiple sets of matching coordinate pairs; and determine the calibration transformation parameters of the positioning coordinates according to the multiple sets of matching coordinate pairs, where the calibration transformation parameters include: the internal parameters of the left lens, the internal parameters of the right lens, the external parameters between the left lens and the right lens, and the external parameters between the binocular camera and the laser radar.
In this embodiment, the detection device may specifically include testers, sensors, and other instruments capable of collecting the relevant characteristic data. The processor may be implemented in any suitable manner; for example, it may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so on. The memory may specifically be any memory device used in modern information technology for storing information. The memory may comprise multiple levels: in a digital system, anything that can store binary data may serve as memory; in an integrated circuit, a circuit with a storage function but no physical form is also called a memory, such as a RAM or a FIFO; and in a system, a storage device in physical form is likewise called a memory, such as a memory bank or a TF card.
In this embodiment, the functions and effects specifically realized by the electronic device may be explained in comparison with other embodiments, and are not described herein again.
An embodiment of the present application further provides a computer storage medium for the above calibration method for driving positioning coordinates, the computer storage medium storing computer program instructions which, when executed, implement: responding to a calibration request for the positioning coordinates, controlling the target vehicle to be calibrated to move, acquiring a plurality of point cloud data for the calibration plate through a laser radar at a plurality of time points, and acquiring a plurality of image data for the calibration plate through a binocular camera, where the calibration plate is provided with a target point, the edge area of the calibration plate is provided with a reflector of a first color, the area between the edge area and the target point is provided with a reflector of a second color, and the calibration plate is fixed at a preset position point; establishing a plurality of matching groups according to the time points corresponding to the point cloud data and the time points corresponding to the image data, where a matching group contains the point cloud data and image data corresponding to the same time point; determining, according to the point cloud data and image data contained in each matching group, the radar coordinates, left lens image coordinates, and right lens image coordinates of the target point relative to the target vehicle measured at the corresponding time point, obtaining multiple sets of matching coordinate pairs; and determining the calibration transformation parameters of the positioning coordinates according to the multiple sets of matching coordinate pairs, where the calibration transformation parameters include: the internal parameters of the left lens, the internal parameters of the right lens, the external parameters between the left lens and the right lens, and the external parameters between the binocular camera and the laser radar.
In this embodiment, the storage medium includes, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Cache (Cache), a Hard Disk Drive (HDD), or a Memory Card (Memory Card). The memory may be used to store computer program instructions. The network communication unit may be an interface for performing network connection communication, which is set in accordance with a standard prescribed by a communication protocol.
In this embodiment, the functions and effects of the specific implementation of the program instructions stored in the computer storage medium can be explained in comparison with other embodiments, and are not described herein again.
Although various specific embodiments are mentioned in the disclosure of the present application, the present application is not limited to what industry standards or the examples describe; embodiments slightly modified from those described here, whether on the basis of an industry standard, custom practice, or the examples, can achieve the same, equivalent, or similar implementation effects, or effects that are predictable after such modification. Embodiments employing such modified or transformed methods of data acquisition, processing, output, and determination may still fall within the scope of alternative embodiments of the present application.
Although the present application provides method steps as described in the examples or flowcharts, more or fewer steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When implemented in practice, an apparatus or client product may execute sequentially or in parallel (e.g., in a parallel processor or multithreaded processing environment, or even in a distributed data processing environment) in accordance with the embodiments or methods depicted in the figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded.
The apparatuses or modules and the like explained in the above embodiments may be specifically implemented by a computer chip or an entity, or by a product having a certain function. For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, in implementing the present application, the functions of each module may be implemented in one or more pieces of software and/or hardware, or a module that implements the same function may be implemented by a combination of a plurality of sub-modules, and the like. The above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and other divisions may be realized in practice, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, classes, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, or the like, and includes several instructions for enabling a computer device (which may be a personal computer, a mobile terminal, a server, or a network device) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same or similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable electronic devices, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
While the present application has been described by way of examples, those of ordinary skill in the art will appreciate that there are numerous variations and permutations of the present application that do not depart from its spirit, and it is intended that the appended claims cover such variations and permutations without departing from the present application.

Claims (9)

1. A calibration method for driving positioning coordinates is characterized by comprising the following steps:
responding to a calibration request of a positioning coordinate, controlling a target vehicle to be calibrated to move, acquiring a plurality of point cloud data aiming at a calibration plate through a laser radar at a plurality of time points, and acquiring a plurality of image data aiming at the calibration plate through a binocular camera; the calibration plate is provided with a target point, the edge area of the calibration plate is provided with a reflector with a first color, the area between the edge area of the calibration plate and the target point is provided with a reflector with a second color, and the calibration plate is fixed at a preset position point;
establishing a plurality of matching groups according to the time points corresponding to the point cloud data and the time points corresponding to the image data, wherein the matching groups comprise the point cloud data and the image data corresponding to the same time points;
determining the left lens image coordinates and the right lens image coordinates of a target point relative to the target vehicle, measured at the time point corresponding to the matching group, according to the image data contained in the matching group; and determining, according to the point cloud data contained in the matching group, radar coordinates of the target point relative to the target vehicle measured at the time point corresponding to the matching group, so as to establish a matching coordinate pair; wherein determining the left lens image coordinates and the right lens image coordinates of the target point relative to the target vehicle according to the image data contained in the matching group comprises the following steps: searching the left lens image and the right lens image respectively for areas whose brightness contrast values are greater than a preset contrast threshold, and determining the image data of the calibration plate area in the left lens image and the image data of the calibration plate area in the right lens image, wherein the image data comprise the left lens image and the right lens image; determining the left lens image coordinates of the target point relative to the target vehicle according to the image data of the calibration plate area in the left lens image through a Hough transform; and determining the right lens image coordinates of the target point relative to the target vehicle according to the image data of the calibration plate area in the right lens image through a Hough transform;
determining calibration transformation parameters of the positioning coordinates according to the multiple sets of matching coordinate pairs, wherein the calibration transformation parameters comprise: internal parameters of the left lens, internal parameters of the right lens, external parameters between the left lens and the right lens, and external parameters between the binocular camera and the laser radar.
2. The method of claim 1, wherein after determining calibration transformation parameters for the positioning coordinates based on the plurality of sets of matching coordinate pairs, the method further comprises:
acquiring point cloud data aiming at a target obstacle through a laser radar, and acquiring image data aiming at the target obstacle through a binocular camera;
determining the current relative position coordinates of the target obstacle relative to the target vehicle according to the point cloud data aiming at the target obstacle, the image data aiming at the target obstacle and the calibration transformation parameters;
and controlling the target vehicle to run according to the current relative position coordinate so as to bypass the target obstacle.
3. The method of claim 1, wherein determining radar coordinates of a target point relative to the target vehicle measured at a time point corresponding to the matching group according to the point cloud data included in the matching group comprises:
extracting point cloud data located in a preset interest area from the point cloud data;
according to the point cloud intensity, separating point cloud data whose intensity is greater than a preset intensity threshold from the point cloud data of the preset interest area, to serve as the point cloud data of the calibration plate;
and determining the three-dimensional coordinates of the target point according to the point cloud data of the calibration plate, and using the three-dimensional coordinates as radar coordinates of the target point relative to the target vehicle.
4. The method of claim 1, wherein after determining calibration transformation parameters for the positioning coordinates from the plurality of sets of matching coordinate pairs, the method further comprises:
reconstructing an image three-dimensional coordinate of the target point relative to the target vehicle according to the left lens image coordinate and the right lens image coordinate in the matched coordinate pair;
according to the three-dimensional coordinates of the image and the radar coordinates in the matched coordinate pair, determining an alignment error;
and optimizing the calibration transformation parameters according to the alignment error.
5. A calibration device for driving positioning coordinates is characterized by comprising:
the acquisition module is used for responding to a calibration request of the positioning coordinates, controlling a target vehicle to be calibrated to move, acquiring a plurality of point cloud data aiming at the calibration plate through a laser radar at a plurality of time points, and acquiring a plurality of image data aiming at the calibration plate through a binocular camera; the calibration plate is provided with a target point, the edge area of the calibration plate is provided with a reflector with a first color, the area between the edge area of the calibration plate and the target point is provided with a reflector with a second color, and the calibration plate is fixed at a preset position point;
the matching module is used for establishing a plurality of matching groups according to the time points corresponding to the point cloud data and the time points corresponding to the image data, wherein the matching groups comprise the point cloud data and the image data corresponding to the same time points;
the first determining module is configured to determine the left lens image coordinates and the right lens image coordinates of a target point relative to the target vehicle, measured at the time point corresponding to the matching group, according to the image data contained in the matching group; and to determine, according to the point cloud data contained in the matching group, radar coordinates of the target point relative to the target vehicle measured at the time point corresponding to the matching group, so as to establish a matching coordinate pair; the first determining module is specifically configured to search the left lens image and the right lens image respectively for areas whose brightness contrast values are greater than a preset contrast threshold, and determine the image data of the calibration plate area in the left lens image and the image data of the calibration plate area in the right lens image, wherein the image data comprise the left lens image and the right lens image; determine the left lens image coordinates of the target point relative to the target vehicle according to the image data of the calibration plate area in the left lens image through a Hough transform; and determine the right lens image coordinates of the target point relative to the target vehicle according to the image data of the calibration plate area in the right lens image through a Hough transform;
a second determining module, configured to determine calibration transformation parameters of the positioning coordinates according to the multiple sets of matching coordinate pairs, wherein the calibration transformation parameters comprise: internal parameters of the left lens, internal parameters of the right lens, external parameters between the left lens and the right lens, and external parameters between the binocular camera and the laser radar.
6. The apparatus of claim 5, further comprising an application module for acquiring point cloud data for a target obstacle by lidar and image data for the target obstacle by a binocular camera; determining the current relative position coordinates of the target obstacle relative to the target vehicle according to the point cloud data aiming at the target obstacle, the image data aiming at the target obstacle and the calibration transformation parameters; and controlling the target vehicle to run according to the current relative position coordinates so as to bypass the target obstacle.
7. The apparatus of claim 5, further comprising an optimization module configured to reconstruct image three-dimensional coordinates of the target point relative to the target vehicle from the left lens image coordinates and the right lens image coordinates in the matched coordinate pair; determine an alignment error according to the image three-dimensional coordinates and the radar coordinates in the matched coordinate pair; and optimize the calibration transformation parameters according to the alignment error.
8. An electronic device, comprising a processor and a memory for storing processor-executable instructions, the instructions when executed by the processor implementing the steps of the method of any one of claims 1 to 4.
9. A computer readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1 to 4.
CN201911195997.XA 2019-11-29 2019-11-29 Calibration method and device for travelling crane positioning coordinates Active CN111123912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911195997.XA CN111123912B (en) 2019-11-29 2019-11-29 Calibration method and device for travelling crane positioning coordinates


Publications (2)

Publication Number Publication Date
CN111123912A CN111123912A (en) 2020-05-08
CN111123912B true CN111123912B (en) 2023-01-31

Family

ID=70497312


Country Status (1)

Country Link
CN (1) CN111123912B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882655B (en) * 2020-06-19 2023-07-18 杭州易现先进科技有限公司 Method, device, system, computer equipment and storage medium for three-dimensional reconstruction
CN111783597B (en) * 2020-06-24 2022-12-13 中国第一汽车股份有限公司 Method and device for calibrating driving trajectory, computer equipment and storage medium
CN111754798A (en) * 2020-07-02 2020-10-09 上海电科智能系统股份有限公司 Method for realizing detection of vehicle and surrounding obstacles by fusing roadside laser radar and video
CN112362054B (en) * 2020-11-30 2022-12-16 上海商汤临港智能科技有限公司 Calibration method, calibration device, electronic equipment and storage medium
CN112509062B (en) * 2020-12-17 2023-09-12 广东工业大学 Calibration plate, calibration system and calibration method
CN113487686A (en) * 2021-08-02 2021-10-08 固高科技股份有限公司 Calibration method and device for multi-view camera, multi-view camera and storage medium
CN218299035U (en) * 2022-05-27 2023-01-13 华为技术有限公司 Calibration plate and calibration control equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109270534A (en) * 2018-05-07 2019-01-25 西安交通大学 A kind of intelligent vehicle laser sensor and camera online calibration method
CN109472831A (en) * 2018-11-19 2019-03-15 东南大学 Obstacle recognition range-measurement system and method towards road roller work progress
CN109920011A (en) * 2019-05-16 2019-06-21 长沙智能驾驶研究院有限公司 Outer ginseng scaling method, device and the equipment of laser radar and binocular camera
CN110135453A (en) * 2019-03-29 2019-08-16 初速度(苏州)科技有限公司 A kind of laser point cloud data mask method and device
CN110390695A (en) * 2019-06-28 2019-10-29 东南大学 The fusion calibration system and scaling method of a kind of laser radar based on ROS, camera


Also Published As

Publication number Publication date
CN111123912A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111123912B (en) Calibration method and device for travelling crane positioning coordinates
CN109949372B (en) Laser radar and vision combined calibration method
CN111060132B (en) Calibration method and device for travelling crane positioning coordinates
US11587248B2 (en) Method and system for multiple stereo based depth estimation and collision warning/avoidance utilizing the same
US20200042803A1 (en) Information processing method, information processing apparatus, and recording medium
US10560686B2 (en) Photographing device and method for obtaining depth information
US11042966B2 (en) Method, electronic device, and storage medium for obtaining depth image
US20220383530A1 (en) Method and system for generating a depth map
CN109191513B (en) Power equipment stereo matching method based on global optimization
CN113744348A (en) Parameter calibration method and device and radar vision fusion detection equipment
CN111080784A (en) Ground three-dimensional reconstruction method and device based on ground image texture
CN115267796B (en) Positioning method, positioning device, robot and storage medium
CN109073398B (en) Map establishing method, positioning method, device, terminal and storage medium
CN111583338B (en) Positioning method and device for unmanned equipment, medium and unmanned equipment
CN113658265A (en) Camera calibration method and device, electronic equipment and storage medium
CN110570468A (en) Binocular vision depth estimation method and system based on depth learning
US10331977B2 (en) Method for the three-dimensional detection of objects
CN116129378A (en) Lane line detection method, device, equipment, vehicle and medium
CN112950709B (en) Pose prediction method, pose prediction device and robot
CN111260538A (en) Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera
CN112686155A (en) Image recognition method, image recognition device, computer-readable storage medium and processor
CN109587303B (en) Electronic equipment and mobile platform
US8804251B2 (en) Accurate auto-focus method
US20200320725A1 (en) Light projection systems
CN110763232B (en) Robot and navigation positioning method and device thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant