WO2021035748A1 - Pose acquisition method, system and movable platform - Google Patents

Pose acquisition method, system and movable platform

Info

Publication number
WO2021035748A1
Authority
WO
WIPO (PCT)
Prior art keywords
laser point
point cloud
time
moments
pose
Prior art date
Application number
PCT/CN2019/103871
Other languages
English (en)
French (fr)
Inventor
朱振宇
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980034308.9A (published as CN112204344A)
Priority to PCT/CN2019/103871
Publication of WO2021035748A1

Links

Images

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/10 - Navigation by using measurements of speed or acceleration
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/26 - Navigation specially adapted for navigation in a road network
    • G01C21/28 - Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 - Map- or contour-matching
    • G01C21/32 - Structuring or formatting of map data

Definitions

  • the embodiments of the present application relate to the technical field of unmanned aerial vehicles, and in particular to a method, system and movable platform for acquiring a pose.
  • A navigation electronic map can be used to assist a human driver with navigation while driving a car.
  • The absolute coordinate accuracy of this kind of navigation electronic map is about 10 meters.
  • However, with the development of autonomous driving technology, driverless cars will become a travel trend, and a driverless car needs to know its position on the road accurately while driving.
  • The vehicle is often only a few tens of centimeters away from the curb and the adjacent lane, so a higher-precision map is required.
  • The absolute accuracy of such a map is generally at the sub-meter level, that is, within 1 meter, and the lateral relative accuracy (for example, between lanes, or between a lane and its lane line) is often even higher.
  • Such a high-precision map not only contains high-precision coordinates but also accurate road shapes, and the slope, curvature, heading, elevation, and roll data of each lane are also included, to ensure the safety and accuracy of the driverless car when driving on the road.
  • High-precision maps are therefore very important for driverless cars.
  • High-precision maps are currently reconstructed from data collected by the camera while the vehicle is driving. However, because the pose data of the vehicle itself contains errors, the accuracy of the high-precision map is affected.
  • the embodiments of the present application provide a method, a system and a movable platform for acquiring a pose, which are used to improve the accuracy of acquiring a pose, so as to improve the accuracy of the established map.
  • In a first aspect, an embodiment of the present application provides a pose acquisition method applied to a movable platform, where the movable platform is provided with a first detection device for acquiring laser point cloud data and a second detection device for acquiring pose information; the method includes:
  • acquiring the laser point cloud data and the pose information at each moment obtained while the movable platform moves within a target area;
  • obtaining, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments;
  • correcting, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment, to obtain the target pose of the laser point cloud at each moment.
  • an embodiment of the present application provides a pose acquisition system, which is applied to a movable platform, and the system includes:
  • the first detection device is used to obtain laser point cloud data
  • the second detection device is used to obtain pose information
  • a processor configured to: acquire the laser point cloud data at each moment acquired by the first detection device and the pose information at each moment acquired by the second detection device while the movable platform moves within the target area; obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment, to obtain the target pose of the laser point cloud at each moment.
  • an embodiment of the present application provides a movable platform, including:
  • the first detection device is used to obtain laser point cloud data
  • the second detection device is used to obtain pose information
  • a processor configured to: acquire the laser point cloud data at each moment acquired by the first detection device and the pose information at each moment acquired by the second detection device while the movable platform moves within the target area; obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment, to obtain the target pose of the laser point cloud at each moment.
  • In a fourth aspect, an embodiment of the present application provides a readable storage medium on which a computer program is stored; when executed, the computer program implements the pose acquisition method described in the first aspect of the embodiments of the present application.
  • In a fifth aspect, an embodiment of the present application provides a program product, the program product including a computer program stored in a readable storage medium; at least one processor of a movable platform can read the computer program from the readable storage medium, and the at least one processor executes the computer program to cause the movable platform to implement the pose acquisition method described in the first aspect of the embodiments of the present application.
  • The pose acquisition method, system, and movable platform provided by the embodiments of the present application acquire the laser point cloud data and pose information at each moment obtained while the movable platform moves within the target area; obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment, to obtain the target pose of the laser point cloud at each moment.
  • the target pose of the laser point cloud can more accurately reflect the actual pose of the movable platform, so that the accuracy of the map is improved when the map is established based on the target pose of the laser point cloud.
  • FIG. 1 is a schematic architecture diagram of an autonomous driving vehicle 100 according to an embodiment of the present application
  • Figure 2 is a schematic diagram of an application scenario provided by an embodiment of the application
  • FIG. 3 is a flowchart of a pose acquisition method provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of the relative pose relationship of the laser point cloud at each time according to an embodiment of the application
  • FIG. 5 is a schematic structural diagram of a pose acquisition system provided by an embodiment of this application.
  • FIG. 6 is a schematic structural diagram of a pose acquisition system provided by another embodiment of this application.
  • FIG. 7 is a schematic structural diagram of a movable platform provided by an embodiment of this application.
  • FIG. 8 is a schematic structural diagram of a movable platform provided by another embodiment of the application.
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component or an intervening component may also exist. When a component is considered to be "connected" to another component, it can be directly connected to the other component or an intervening component may exist at the same time.
  • The embodiments of the present application provide a pose acquisition method, system, and movable platform, where the movable platform may be a handheld phone, a handheld gimbal, a drone, an unmanned vehicle, an unmanned ship, a robot, or an autonomous vehicle, etc.
  • FIG. 1 is a schematic architecture diagram of an autonomous driving vehicle 100 according to an embodiment of the present application.
  • the autonomous vehicle 100 may include a sensing system 110, a control system 120, and a mechanical system 130.
  • the perception system 110 is used to measure the state information of the autonomous vehicle 100, that is, the perception data of the autonomous vehicle 100.
  • The perception data may represent position information and/or state information of the autonomous vehicle 100, for example, position, angle, speed, acceleration, and angular velocity.
  • The perception system 110 may include, for example, at least one of sensors such as a visual sensor (for example, including multiple monocular or binocular vision devices), a lidar, a millimeter-wave radar, an inertial measurement unit (IMU), a global navigation satellite system, a gyroscope, an ultrasonic sensor, an electronic compass, and a barometer.
  • the global navigation satellite system may be the Global Positioning System (GPS).
  • After the perception system 110 obtains the perception data, it can transmit the perception data to the control system 120.
  • The control system 120 is used to make decisions, based on the perception data, on how to control the autonomous vehicle 100, for example: at what speed to travel, with how much braking acceleration to brake, whether to change lanes, or whether to turn left or right, etc.
  • the control system 120 may include, for example, a computing platform, such as a vehicle-mounted super-computing platform, or at least one device having processing functions such as a central processing unit and a distributed processing unit.
  • the control system 120 may also include communication links for various data transmission on the vehicle.
  • the control system 120 may output one or more control commands to the mechanical system 130 according to the determined decision.
  • the mechanical system 130 is used to control the autonomous vehicle 100 in response to one or more control commands from the control system 120 to complete the above-mentioned decision.
  • For example, the mechanical system 130 can drive the wheels of the autonomous vehicle 100 to rotate, thereby providing power for the driving of the autonomous vehicle 100, where the rotation speed of the wheels can affect the speed of the vehicle.
  • The mechanical system 130 may include, for example, at least one of a body engine/motor, a drive-by-wire system, and the like.
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of this application.
  • As shown in FIG. 2, the autonomous vehicle can drive on the ground, and while driving on the ground within the target area it can collect perception data (for example, through the above-mentioned perception system 110).
  • The perception data may include laser point cloud data, pose information, VINS pose data, and the like; pose correction is then performed on the laser point cloud, as described in the following embodiments of the present application.
  • FIG. 3 is a flowchart of a pose acquisition method provided by an embodiment of the application. As shown in FIG. 3, the method of this embodiment can be applied to a movable platform, and can also be applied to an electronic device other than the movable platform. The method includes:
  • S301: Acquire the laser point cloud data and pose information at each moment obtained while the movable platform moves within the target area.
  • S302: Obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments.
  • S303: Correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment, to obtain the target pose of the laser point cloud at each moment.
  • This embodiment takes application to a movable platform as an example.
  • The movable platform is equipped with a first detection device and a second detection device, where the first detection device is used to obtain laser point cloud data and the second detection device is used to obtain pose information.
  • the pose information may be the pose information of the location where the second detection device is located.
  • the second detection device may be set on the movable platform, so the pose information may be the pose information of the movable platform.
  • the first detection device acquires laser point cloud data at each time
  • the second detection device acquires pose information at each time.
  • the movable platform obtains the laser point cloud data at each time obtained by the first detection device and the pose information at each time obtained by the second detection device.
  • the first detection device is, for example, a laser sensor
  • the second detection device is, for example, a GPS.
  • If this embodiment is applied to an electronic device other than the movable platform, the electronic device can obtain from the movable platform the laser point cloud data and pose information at each moment acquired while the movable platform moves within the target area, for example, by receiving the above-mentioned information sent by the movable platform, or by reading the above-mentioned information from a storage device of the movable platform.
  • After the movable platform of this embodiment obtains the laser point cloud data and the pose information at each moment, it obtains the relative pose relationship of the laser point clouds at two adjacent moments according to the laser point cloud data at those two adjacent moments; in this way, the relative pose relationship of the laser point clouds at every two adjacent moments can be obtained. Taking moments 1 through 10 as an example, nine relative pose relationships of the laser point clouds at adjacent moments can be obtained accordingly.
  • the relative pose relationship includes at least one of the following: a rotation matrix and a displacement matrix.
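  • For illustration only (the application states that the relative pose relationship may include a rotation matrix and a displacement matrix, but does not prescribe a storage format), such a relationship can be packed into a 4x4 homogeneous transform, in which form the relative poses of adjacent moments can be chained by matrix multiplication. A minimal sketch with made-up values:

```python
import numpy as np

def make_transform(R, t):
    """Pack a 3x3 rotation matrix R and a 3-vector displacement t into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical relative poses: moment 1 -> moment 2 and moment 2 -> moment 3.
T_12 = make_transform(np.eye(3), np.array([1.0, 0.0, 0.0]))
T_23 = make_transform(np.eye(3), np.array([0.8, 0.1, 0.0]))

# Chaining the two adjacent relative poses yields the relative pose between moment 1 and moment 3.
T_13 = T_12 @ T_23
```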
  • After the relative pose relationships of the laser point clouds at every two adjacent moments are obtained, the pose of the laser point cloud at each moment is corrected according to these relative pose relationships and the pose information at each moment, so as to obtain the target pose of the laser point cloud at each moment.
  • The pose acquisition method provided by this embodiment acquires the laser point cloud data and pose information at each moment obtained while the movable platform moves within the target area; obtains, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and corrects, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment, to obtain the target pose of the laser point cloud at each moment.
  • In this way, the target pose of the laser point cloud can more accurately reflect the actual pose of the movable platform, so that when a map is established based on the target pose of the laser point cloud, the accuracy of the map is improved.
  • In some embodiments, after S303 is performed, a map is further established according to the target pose of the laser point cloud at each moment. Since the target pose of the laser point cloud at each moment is very close to the actual pose, the map established accordingly has higher accuracy.
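  • As a hedged illustration of this map-building step (the application does not prescribe a map data structure; a plain accumulated point cloud is assumed here), each frame of laser points can be transformed by that frame's target pose and concatenated into one global cloud:

```python
import numpy as np

def build_map(frames, target_poses):
    """frames: list of (N_i, 3) arrays of laser points in the sensor frame at each moment.
    target_poses: list of 4x4 corrected poses (sensor frame -> world frame) for the same moments."""
    world_points = []
    for points, T in zip(frames, target_poses):
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])  # (N_i, 4)
        world_points.append((homogeneous @ T.T)[:, :3])                   # move points into the world frame
    return np.vstack(world_points)                                        # naive concatenated map
```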
  • In some embodiments, a possible implementation of the foregoing S302 may include S3021 and S3022:
  • S3021: Acquire the estimated relative pose relationship of the laser point clouds at two adjacent moments.
  • S3022: Obtain the relative pose relationship of the laser point clouds at the two adjacent moments according to the estimated relative pose relationship and the laser point cloud data at the two adjacent moments.
  • In this embodiment, the estimated relative pose relationship of the laser point clouds at two adjacent moments is acquired first, and then the relative pose relationship of the laser point clouds at the two adjacent moments is obtained according to that estimated relative pose relationship and the laser point cloud data at the two adjacent moments. Because the acquired estimated relative pose relationship has a relatively large error, it is corrected according to the laser point cloud data to obtain a more accurate relative pose relationship.
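  • The application does not name the algorithm used for this refinement; one common choice that fits the description is an ICP-style iteration that starts from the estimated relative pose and aligns the two laser point clouds. The following is only a point-to-point sketch under that assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_relative_pose(src, dst, T_init, iters=20):
    """Refine an estimated relative pose T_init (4x4) so that src (points of the earlier moment)
    aligns with dst (points of the later moment). Point-to-point ICP, no outlier handling."""
    T = T_init.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        src_h = np.hstack([src, np.ones((len(src), 1))])
        moved = (src_h @ T.T)[:, :3]                # earlier cloud expressed in the later frame
        _, idx = tree.query(moved)                  # nearest-neighbour correspondences
        p, q = moved, dst[idx]
        mu_p, mu_q = p.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((p - mu_p).T @ (q - mu_q))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_q - R @ mu_p
        dT = np.eye(4)
        dT[:3, :3], dT[:3, 3] = R, t
        T = dT @ T                                  # accumulate the incremental correction
    return T
```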
  • In some embodiments, a possible implementation of the foregoing S3021 may include A1 and A2:
  • A1: Acquire the VINS pose data acquired by the movable platform at the two adjacent moments.
  • A2: Obtain the estimated relative pose relationship of the laser point clouds at the two adjacent moments according to the VINS pose data at the two adjacent moments.
  • In this embodiment, the movable platform is also provided with a third detection device, and the third detection device can obtain VINS (visual-inertial navigation system) pose data.
  • While the movable platform moves within the target area, the third detection device can obtain the VINS pose data at each moment; correspondingly, the movable platform can obtain the VINS pose data at each moment obtained by the third detection device and then take from it the VINS pose data of two adjacent moments.
  • The VINS pose data at each moment can represent the estimated pose of the laser point cloud at that moment, and the estimated relative pose relationship of the laser point clouds at the two adjacent moments is obtained according to the VINS pose data at the two adjacent moments.
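  • Assuming the VINS pose data at each moment is an absolute pose of the sensor (a 4x4 transform from the sensor frame to a common reference frame; this representation is an assumption, not stated by the application), the estimated relative pose relationship of two adjacent moments follows directly:

```python
import numpy as np

def estimated_relative_pose(T_prev, T_next):
    """T_prev, T_next: 4x4 VINS poses (sensor -> reference frame) at two adjacent moments.
    Returns the transform that expresses points of the later sensor frame in the earlier one."""
    return np.linalg.inv(T_prev) @ T_next
```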
  • In some embodiments, a possible implementation of the foregoing S3021 may include B1 and B2:
  • B1: Estimate the laser point cloud data at the later of the two adjacent moments according to the laser point cloud data at the earlier moment and the estimated relative pose relationship.
  • B2: Obtain the relative pose relationship of the laser point clouds at the two adjacent moments according to the estimated laser point cloud data at the later moment, the laser point cloud data at the later moment acquired by the first detection device, and the estimated relative pose relationship.
  • In this embodiment, taking time 2 and time 3 as the two adjacent moments, the laser point cloud data at time 3 is estimated according to the laser point cloud data at time 2 and the estimated relative pose relationship between the laser point clouds at time 2 and time 3.
  • The estimated laser point cloud data at time 3 may differ from the laser point cloud data at time 3 actually acquired by the first detection device. Then, according to the estimated laser point cloud data at time 3, the laser point cloud data at time 3 acquired by the first detection device, and the above estimated relative pose relationship, the relative pose relationship of the laser point clouds at time 2 and time 3 is obtained.
  • Optionally, a possible implementation of the above B2 is: determining a relative pose relationship deviation according to the estimated laser point cloud data at the later moment and the laser point cloud data at the later moment acquired by the first detection device; and obtaining the relative pose relationship of the laser point clouds at the two moments according to the relative pose relationship deviation and the estimated relative pose relationship.
  • That is, according to the estimated laser point cloud data at time 3 and the laser point cloud data at time 3 acquired by the first detection device, the relative pose relationship between the estimated data and the acquired data is determined and is called the relative pose relationship deviation; then, the relative pose relationship of the laser point clouds at time 2 and time 3 is obtained according to this deviation and the estimated relative pose relationship of the laser point clouds at time 2 and time 3.
  • In some embodiments, a possible implementation of obtaining the relative pose relationship of the laser point clouds at two adjacent moments according to the laser point cloud data at the two adjacent moments is: determining, according to the laser point cloud data at the two adjacent moments, whether the laser point cloud data at the two adjacent moments match; and, if they match, obtaining the relative pose relationship of the laser point clouds at the two adjacent moments according to the laser point cloud data at the two adjacent moments.
  • In this embodiment, whether the laser point cloud data at two adjacent moments match is determined according to the laser point cloud data at the two adjacent moments. If they match, the above scheme of obtaining the relative pose relationship of the two adjacent moments according to their laser point cloud data is executed; if they do not match, the relative pose relationship of the two adjacent moments is not obtained. In this way, only the relative pose relationships of matched laser point clouds at adjacent moments are obtained; these relative pose relationships are more accurate, which facilitates the subsequent correction of the laser point clouds.
  • Optionally, whether the laser point cloud data at two adjacent moments match can be determined as follows: obtain the normal vector distance between the laser point cloud data at the two adjacent moments; if the normal vector distance is less than a preset value, it is determined that the laser point cloud data at the two adjacent moments match, and if the normal vector distance is greater than or equal to the preset value, it is determined that they do not match.
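  • The exact metric behind the "normal vector distance" is not spelled out; one hedged reading is an average point-to-plane distance, where each point of one cloud is projected, along the normal of its nearest neighbour in the other cloud, onto that neighbour's local plane. A sketch under that assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def clouds_match(cloud_a, cloud_b, normals_b, preset_value):
    """cloud_a: (N, 3) points; cloud_b: (M, 3) points; normals_b: (M, 3) unit normals of cloud_b.
    Returns True when the mean normal-direction distance is below the preset value."""
    tree = cKDTree(cloud_b)
    _, idx = tree.query(cloud_a)                                   # nearest neighbour in cloud_b
    offsets = cloud_a - cloud_b[idx]
    normal_dist = np.abs(np.einsum('ij,ij->i', offsets, normals_b[idx]))
    return normal_dist.mean() < preset_value
```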
  • a possible implementation manner of the foregoing S303 may include S3031-S3033:
  • S3031: For any moment i, determine, according to the pose information at each moment, K moments whose pose is within a preset distance of the pose at moment i, where K and i are integers greater than or equal to 1.
  • S3032: Obtain, according to the laser point cloud data at the K moments and the laser point cloud data at moment i, the relative pose relationship between the laser point cloud at moment i and that at each of the K moments.
  • S3033: Correct the pose of the laser point cloud at each moment according to the relative pose relationship between each moment i and each of its K moments, and the relative pose relationship of the laser point clouds at each two adjacent moments.
  • In this embodiment, taking any one of the moments as an example, for example moment i, where i is an integer greater than or equal to 1, the K moments whose pose is within a preset distance of the pose at moment i are determined according to the pose information at each moment.
  • Optionally, the pose information includes GPS data.
  • Accordingly, what is determined are the K moments whose position is within the preset distance of the position at moment i.
  • Then, according to the laser point cloud data at the K moments and at moment i, the relative pose relationship between the laser point cloud at moment i and that at each of the K moments is obtained; in this way, for each moment, the relative pose relationships with the laser point clouds at the moments whose pose lies within the preset distance are obtained.
  • Then, the pose of the laser point cloud at each moment is corrected according to these relative pose relationships and the relative pose relationships of the laser point clouds at each two adjacent moments obtained in S302, which improves the accuracy of the corrected laser point cloud poses.
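  • A minimal sketch of this candidate selection, assuming the pose information at each moment has been reduced to a position (for example a GPS position) and that an optional cap K is applied to save processing resources:

```python
import numpy as np

def nearby_moments(positions, i, preset_distance, K=None):
    """positions: (T, 3) positions for all moments. Returns the moments (other than i) whose
    position lies within preset_distance of moment i, optionally keeping only the K closest."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    candidates = [int(t) for t in np.argsort(d) if t != i and d[t] < preset_distance]
    return candidates if K is None else candidates[:K]
```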
  • a possible implementation manner of the foregoing S3033 may be C1:
  • C1: Correct the poses of the laser point clouds from moment i to moment j according to the relative pose relationship between the laser point cloud at moment i and that at moment j (one of the K moments), and the relative pose relationships of the laser point clouds at the adjacent moments between moment i and moment j.
  • In this embodiment, taking any one of the K moments as an example, let that moment be time j; assume time i is time 1 and time j is time 4. As shown in Figure 4, the poses of the laser point clouds at time 1, time 2, time 3, and time 4 can be corrected according to the relative pose relationship of the laser point clouds at time 1 and time 4 (obtained through S3032), the relative pose relationship of the laser point clouds at time 1 and time 2, that at time 2 and time 3, and that at time 3 and time 4.
  • In some embodiments, a possible implementation of the foregoing C1 may include C11-C13:
  • C11: Determine the relative pose relationship error between the laser point clouds at time i and time j according to the relative pose relationships of the laser point clouds at the adjacent moments from time i to time j and the relative pose relationship of the laser point clouds at time i and time j.
  • C12: Correct, according to the relative pose relationship error, the relative pose relationships of the laser point clouds at the adjacent moments from time i to time j.
  • C13: Correct the laser point cloud data from time i to time j according to the corrected relative pose relationships of the laser point clouds at the adjacent moments from time i to time j.
  • For example, taking the case where the relative pose relationship includes at least one of a rotation matrix and a displacement matrix, the relative pose relationship of the laser point clouds at time 1 and time 2, that at time 2 and time 3, and that at time 3 and time 4 are multiplied together to obtain a calculated relative pose relationship of the laser point clouds at time 1 and time 4.
  • Then the difference between the directly obtained relative pose relationship of the laser point clouds at time 1 and time 4 and this calculated relative pose relationship is obtained, and this difference is called the relative pose relationship error of the laser point clouds at time 1 and time 4.
  • Then, according to the relative pose relationship error of the laser point clouds at time 1 and time 4, the relative pose relationship of the laser point clouds at time 1 and time 2, that at time 2 and time 3, and that at time 3 and time 4 are corrected.
  • Then, according to the corrected relative pose relationship of the laser point clouds at time 1 and time 2, the corrected relative pose relationship at time 2 and time 3, and the corrected relative pose relationship at time 3 and time 4, the poses of the laser point clouds at time 1, time 2, time 3, and time 4 are corrected.
  • In this way, after the corrected laser point cloud data is transformed according to the corrected relative pose relationships, it is as consistent as possible with the corresponding laser point cloud data.
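  • Staying with the time-1-to-time-4 example and again assuming 4x4 homogeneous transforms (an illustrative choice, not mandated by the application), the relative pose relationship error can be formed as the discrepancy between the directly obtained relative pose and the product of the adjacent relative poses:

```python
import numpy as np

def loop_error(T_direct, chain):
    """T_direct: relative pose of the laser point clouds at time 1 and time 4, obtained directly.
    chain: [T_12, T_23, T_34], relative poses of the adjacent moments in between."""
    T_chained = np.eye(4)
    for T in chain:
        T_chained = T_chained @ T       # calculated relative pose of time 1 and time 4
    # Discrepancy transform: the identity when the chained poses agree with the direct measurement.
    return np.linalg.inv(T_chained) @ T_direct
```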
  • Optionally, the relative pose relationship error between the laser point clouds at time i and time j depends on the relative pose relationship of the laser point clouds at time i and time j and on the relative pose relationships of the laser point clouds at the adjacent moments from time i to time j. Therefore, the relative pose relationships of the laser point cloud data at the adjacent moments from time i to time j can be corrected by minimizing the relative pose relationship error between the laser point clouds at time i and time j. Alternatively,
  • the sum of the relative pose relationship errors between the laser point cloud at time i and those at each of the K moments can be minimized, and the relative pose relationships of the laser point cloud data at the adjacent moments between time i and each of the K moments can be corrected accordingly. Alternatively,
  • the sum of the relative pose relationship errors between each moment i and its corresponding K moments can be minimized, and the relative pose relationships of the laser point clouds at all adjacent moments can be corrected.
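  • The application does not prescribe how this minimization is carried out. Purely as an illustration, the sketch below restricts the problem to 2-D poses (x, y, yaw), builds residuals from hypothetical adjacent-moment measurements plus one direct (time-i-to-time-j style) measurement, and lets a generic least-squares solver correct the poses so that the summed error is small:

```python
import numpy as np
from scipy.optimize import least_squares

def relative(p_i, p_j):
    """Relative 2-D pose (dx, dy, dyaw) of pose j expressed in the frame of pose i."""
    dx, dy = p_j[:2] - p_i[:2]
    c, s = np.cos(p_i[2]), np.sin(p_i[2])
    dyaw = (p_j[2] - p_i[2] + np.pi) % (2 * np.pi) - np.pi
    return np.array([c * dx + s * dy, -s * dx + c * dy, dyaw])

def optimize(initial_poses, edges):
    """initial_poses: list of (x, y, yaw) arrays; edges: (i, j, measured_relative_pose) tuples.
    The first pose is held fixed as the reference."""
    def residuals(x):
        poses = np.vstack([initial_poses[0], x.reshape(-1, 3)])
        return np.concatenate([relative(poses[i], poses[j]) - meas for i, j, meas in edges])
    x0 = np.asarray(initial_poses[1:]).ravel()
    sol = least_squares(residuals, x0)
    return np.vstack([initial_poses[0], sol.x.reshape(-1, 3)])

# Hypothetical example: four moments, three adjacent-moment edges and one direct 0 -> 3 edge.
init = [np.zeros(3), np.array([1.1, 0.0, 0.0]), np.array([2.1, 0.1, 0.0]), np.array([3.2, 0.1, 0.0])]
edges = [(0, 1, np.array([1.0, 0.0, 0.0])),
         (1, 2, np.array([1.0, 0.0, 0.0])),
         (2, 3, np.array([1.0, 0.0, 0.0])),
         (0, 3, np.array([3.0, 0.0, 0.0]))]
corrected_poses = optimize(init, edges)
```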
  • In some embodiments, a possible implementation of S3031 is: according to the pose information at each moment, determine all the moments whose pose is within the preset distance of the pose at moment i, and take all of these moments as the K moments. It should be noted that for different values of i, the value of K may also differ. Alternatively, the value of K is preset, and the K moments whose pose is within the preset distance of the pose at moment i are determined according to the pose information at each moment; for example, if K is preset to 2 and there are more than 2 moments whose pose is within the preset distance of the pose at moment i, this embodiment acquires 2 of them to save processing resources.
  • In some embodiments, another possible implementation of S3031 is: according to the pose information at each moment, determine M moments whose pose is within the preset distance of the pose at moment i, where M is an integer greater than or equal to K; and, according to the laser point cloud data at moment i and the laser point cloud data at the M moments, determine the K moments among the M moments whose laser point cloud data matches the laser point cloud data at moment i.
  • In this embodiment, all the moments whose pose is within the preset distance of the pose at moment i are determined according to the pose information at each moment, and these moments are the M moments. Then, according to the laser point cloud data at the M moments and at moment i, the above K moments whose laser point cloud data matches the laser point cloud data at moment i are determined. For example, all the moments whose laser point cloud data matches the laser point cloud data at moment i are taken as the K moments. Alternatively, the value of K is preset and the K matching moments are selected from the M moments; for example, if K is preset to 2 and there are three moments among the M moments whose laser point cloud data matches that at moment i, this embodiment acquires two of them to save processing resources.
  • The way to determine whether the laser point cloud data at two moments match can refer to the way of determining whether the laser point cloud data at two adjacent moments match, which will not be repeated here.
  • An embodiment of the present application also provides a computer storage medium. The computer storage medium stores program instructions, and when the program is executed, it may perform some or all of the steps of the pose acquisition method in FIG. 3 and its corresponding embodiments.
  • FIG. 5 is a schematic structural diagram of a pose acquisition system provided by an embodiment of this application.
  • the pose acquisition system 500 of this embodiment may include: a first detection device 501, a second detection device 502, and a processor 503. Wherein, the first detection device 501, the second detection device 502, and the processor 503 may be connected through a bus.
  • the pose acquisition system 500 may further include a third detection device 504, and the third detection device 504 may be connected to the above-mentioned components through a bus.
  • the first detection device 501 is used to obtain laser point cloud data.
  • the second detection device 502 is used to obtain pose information.
  • the processor 503 is configured to obtain the laser point cloud data at each time acquired by the first detection device and the pose information at each time acquired by the second detection device when the movable platform moves in the target area ;According to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at two adjacent moments is obtained; according to the relative pose relationship of the laser point clouds at each adjacent moment and the pose information at each moment, The pose of the laser point cloud at each time is corrected to obtain the target pose of the laser point cloud at each time.
  • the processor 503 is specifically configured to:
  • for any moment i, determine, according to the pose information at each moment, K moments whose pose is within a preset distance of the pose at moment i, where K and i are integers greater than or equal to 1; obtain, according to the laser point cloud data at the K moments and at moment i, the relative pose relationship between the laser point cloud at moment i and that at each of the K moments;
  • and correct the pose of the laser point cloud at each moment according to these relative pose relationships and the relative pose relationships of the laser point clouds at each two adjacent moments.
  • the processor 503 is specifically configured to:
  • the processor 503 is specifically configured to:
  • the laser point cloud data from time i to time j is corrected according to the relative pose relationship of the laser point cloud at each adjacent time from time i to time j after correction.
  • the processor 503 is specifically configured to:
  • the relative pose relationship of the laser point cloud at each adjacent time is corrected.
  • the corrected pose of the laser point cloud includes at least one of the following: altitude information and heading information.
  • the processor 503 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two adjacent moments is obtained according to the laser point cloud data at two adjacent moments.
  • the moments within the preset distance between the pose and the pose of moment i are M moments, where M is an integer greater than or equal to K;
  • the laser point cloud data at time i and the laser point cloud data at the M time points determine K time points at which the laser point cloud data matches the laser point cloud data at time i respectively among the M time points.
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the third detection device 504 is used to obtain vins pose data
  • the processor 503 is specifically configured to: obtain the vins pose data acquired by the third detection device 504 at two moments; obtain the estimated relative pose relationship of the laser point cloud at the two moments according to the vins pose data at the two moments .
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the processor 503 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two moments is obtained.
  • the pose information includes GPS data.
  • the relative pose relationship includes at least one of the following: a rotation matrix and a displacement matrix.
  • the processor 503 is further configured to establish a map according to the target pose of the laser point cloud at each time.
  • Optionally, the pose acquisition system 500 of this embodiment may further include a memory (not shown in the figure) for storing program codes; when the program codes are executed, the pose acquisition system 500 can implement the above-mentioned technical solutions.
  • the pose acquisition system of this embodiment can be used to implement the technical solutions of FIG. 3 and the corresponding method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 6 is a schematic structural diagram of a pose acquisition system provided by another embodiment of this application.
  • the pose acquisition system 600 of this embodiment may include a memory 601 and a processor 602. Wherein, the memory 601 and the processor 602 may be connected through a bus.
  • the memory 601 is used to store program codes
  • the processor 602 is configured to, when the program code is invoked: acquire the laser point cloud data and the pose information at each moment obtained while the movable platform moves within the target area; obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment, to obtain the target pose of the laser point cloud at each moment.
  • the processor 602 is specifically configured to:
  • for any moment i, determine, according to the pose information at each moment, K moments whose pose is within a preset distance of the pose at moment i, where K and i are integers greater than or equal to 1; obtain, according to the laser point cloud data at the K moments and at moment i, the relative pose relationship between the laser point cloud at moment i and that at each of the K moments;
  • and correct the pose of the laser point cloud at each moment according to these relative pose relationships and the relative pose relationships of the laser point clouds at each two adjacent moments.
  • the processor 602 is specifically configured to:
  • the processor 602 is specifically configured to:
  • the laser point cloud data from time i to time j is corrected according to the relative pose relationship of the laser point cloud at each adjacent time from time i to time j after correction.
  • the processor 602 is specifically configured to:
  • the relative pose relationship of the laser point cloud at each adjacent time is corrected.
  • the corrected pose of the laser point cloud includes at least one of the following: altitude information and heading information.
  • the processor 602 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two adjacent moments is obtained according to the laser point cloud data at two adjacent moments.
  • the processor 602 is specifically configured to:
  • the moments within the preset distance between the pose and the pose of moment i are M moments, where M is an integer greater than or equal to K;
  • the laser point cloud data at time i and the laser point cloud data at the M time points determine K time points at which the laser point cloud data matches the laser point cloud data at time i respectively among the M time points.
  • the laser point cloud data at the two moments match if the distance of the normal vector between the laser point cloud data at two moments is less than the preset value.
  • the processor 602 is specifically configured to:
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the processor 602 is specifically configured to: obtain the vins pose data obtained by the third detection device at two moments; and obtain the laser point cloud data at the two moments according to the vins pose data at the two moments. Estimate the relative pose relationship.
  • the processor 602 is specifically configured to:
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the processor 602 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two moments is obtained.
  • the pose information includes GPS data.
  • the relative pose relationship includes at least one of the following: a rotation matrix and a displacement matrix.
  • the processor 602 is further configured to build a map according to the target pose of the laser point cloud at each time.
  • the pose acquisition system of this embodiment can be used to implement the technical solutions of FIG. 3 and the corresponding method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 7 is a schematic structural diagram of a movable platform provided by an embodiment of this application.
  • the movable platform 700 of this embodiment may include: a first detection device 701, a second detection device 702, and a processor 703. Wherein, the first detection device 701, the second detection device 702 and the processor 703 may be connected by a bus.
  • the movable platform 700 may further include a third detection device 704, and the third detection device 704 may be connected to the foregoing components through a bus.
  • the first detection device 701 is used to obtain laser point cloud data.
  • the second detection device 702 is used to obtain pose information.
  • the processor 703 is configured to acquire the laser point cloud data at each moment acquired by the first detection device 701 and the pose information at each moment acquired by the second detection device 702 while the movable platform 700 moves within the target area;
  • obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment, to obtain the target pose of the laser point cloud at each moment.
  • the processor 703 is specifically configured to:
  • for any moment i, determine, according to the pose information at each moment, K moments whose pose is within a preset distance of the pose at moment i, where K and i are integers greater than or equal to 1; obtain, according to the laser point cloud data at the K moments and at moment i, the relative pose relationship between the laser point cloud at moment i and that at each of the K moments;
  • and correct the pose of the laser point cloud at each moment according to these relative pose relationships and the relative pose relationships of the laser point clouds at each two adjacent moments.
  • the processor 703 is specifically configured to:
  • the processor 703 is specifically configured to:
  • the laser point cloud data from time i to time j is corrected according to the relative pose relationship of the laser point cloud at each adjacent time from time i to time j after correction.
  • the processor 703 is specifically configured to:
  • the relative pose relationship of the laser point cloud at each adjacent time is corrected.
  • the corrected pose of the laser point cloud includes at least one of the following: altitude information and heading information.
  • the processor 703 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two adjacent moments is obtained according to the laser point cloud data at two adjacent moments.
  • the processor 703 is specifically configured to:
  • the moments within the preset distance between the pose and the pose of moment i are M moments, where M is an integer greater than or equal to K;
  • the laser point cloud data at time i and the laser point cloud data at the M time points determine K time points at which the laser point cloud data matches the laser point cloud data at time i respectively among the M time points.
  • the laser point cloud data at the two moments match if the distance of the normal vector between the laser point cloud data at two moments is less than the preset value.
  • the processor 703 is specifically configured to:
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the third detection device 704 is used to obtain vins pose data
  • the processor 703 is specifically configured to: obtain the vins pose data obtained by the third detection device 704 at two moments; obtain the estimated relative pose relationship of the laser point cloud at the two moments according to the vins pose data at the two moments .
  • the processor 703 is specifically configured to:
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the processor 703 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two moments is obtained.
  • the pose information includes GPS data.
  • the relative pose relationship includes at least one of the following: a rotation matrix and a displacement matrix.
  • the processor 703 is further configured to establish a map according to the target pose of the laser point cloud at each time.
  • the movable platform 700 of this embodiment may further include: a memory (not shown in the figure) for storing program codes, the memory is used for storing program codes, and when the program codes are executed, the movable platform 700 can implement the above-mentioned technical solutions.
  • the movable platform of this embodiment can be used to implement the technical solutions of FIG. 3 and the corresponding method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 8 is a schematic structural diagram of a movable platform provided by another embodiment of this application.
  • the movable platform 800 of this embodiment may include: a movable platform body 801 and a pose acquisition system 802.
  • the pose acquisition system 802 is installed on the movable platform body 801.
  • the pose acquisition system 802 may be a device independent of the movable platform body 801.
  • the pose acquisition system 802 may adopt the structure of the device embodiments shown in FIG. 5 or FIG. 6, and correspondingly, it can implement the technical solutions of FIG. 3 and its corresponding method embodiments; the implementation principles and technical effects are similar and will not be repeated here.
  • the movable platform 800 includes a handheld phone, a handheld PTZ, a drone, an unmanned vehicle, an unmanned boat, a robot, or an autonomous vehicle.
  • A person of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be implemented by a program instructing relevant hardware.
  • The foregoing program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

A pose acquisition method, a system (500, 600), and a movable platform (700, 800). The method comprises: acquiring laser point cloud data and pose information at each moment obtained while the movable platform (700, 800) moves within a target area (S301); obtaining, according to the laser point cloud data at two adjacent moments, a relative pose relationship of the laser point clouds at the two adjacent moments (S302); and correcting, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment, to obtain a target pose of the laser point cloud at each moment (S303). In this way, the target pose of the laser point cloud more accurately reflects the actual pose of the movable platform (700, 800), so that when a map is established based on the target pose of the laser point cloud, the accuracy of the map is improved.

Description

位姿获取方法、系统和可移动平台 技术领域
本申请实施例涉及无人机技术领域,尤其涉及一种位姿获取方法、系统和可移动平台。
背景技术
导航电子地图,可用于辅助人工驾驶员在驾驶汽车时做导航使用,这种导航电子地图的绝对坐标精度大约在10米左右。但是,随着无人驾驶技术的发展,无人驾驶汽车会成为一种出行趋势,而无人驾驶汽车在道路上行驶时需要精确的知道自己在路上的位置,往往车辆离马路牙子和旁边的车道也就几十厘米左右,所以需要更高精度的地图,这种地图的绝对精度一般都会在亚米级,也就是1米以内的精度,而且横向的相对精度(比如,车道和车道,车道和车道线的相对位置精度)往往还要更高。这种高精度的地图不仅有高精度的坐标,同时还有准确的道路形状,并且每个车道的坡度、曲率、航向、高程、侧倾的数据也都含有,以保证无人驾驶汽车在道路上行驶的安全性和准确性。
因此,高精度的地图对无人驾驶汽车至关重要,高精度的地图目前是根据车辆在行驶过程中拍摄装置采集到的数据重建而成。但是由于车辆本身的位姿数据存在误差,这会影响高精度的地图的精度。
发明内容
本申请实施例提供一种位姿获取方法、系统和可移动平台,用于提高获取位姿的准确性,以提高建立的地图的精度。
第一方面,本申请实施例提供一种位姿获取方法,应用于可移动平台,所述可移动平台设有第一探测装置,用于获取激光点云数据,还设有第二探测装置,用于获取位姿信息;该方法包括:
获取可移动平台在目标区域内移动的过程中获取的各时刻的激光点云数据和所述位姿信息;
根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系;
根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的位姿,以获得所述各时刻的激光点云的目标位姿。
第二方面,本申请实施例提供一种位姿获取系统,应用于可移动平台中,所述系统包括:
第一探测装置,用于获取激光点云数据;
第二探测装置,用于获取位姿信息;
处理器,用于获取可移动平台在目标区域内移动的过程中所述第一探测装置获取的各时刻的激光点云数据和所述第二探测装置获取的各时刻的所述位姿信息;根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系;根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的位姿,以获得所述各时刻的激光点云的目标位姿。
第三方面,本申请实施例提供一种可移动平台,包括:
第一探测装置,用于获取激光点云数据;
第二探测装置,用于获取位姿信息;
处理器,用于获取可移动平台在目标区域内移动的过程中所述第一探测装置获取的各时刻的激光点云数据和所述第二探测装置获取的各时刻的所述位姿信息;根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系;根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的位姿,以获得所述各时刻的激光点云的目标位姿。
第四方面,本申请实施例提供一种可读存储介质,所述可读存储介质上存储有计算机程序;所述计算机程序在被执行时,实现如第一方面本申请实施例所述的位姿获取方法。
第五方面,本申请实施例提供一种程序产品,所述程序产品包括计算机程序,所述计算机程序存储在可读存储介质中,可移动平台的至少一个处理器可以从所述可读存储介质读取所述计算机程序,所述至少一个处理器执行 所述计算机程序使得可移动平台实施如第一方面本申请实施例所述的位姿获取方法。
本申请实施例提供的位姿获取方法、系统和可移动平台,通过获取可移动平台在目标区域内移动的过程中获取的各时刻的激光点云数据和位姿信息;根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系;根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的位姿,以获得所述各时刻的激光点云的目标位姿。从而使得激光点云的目标位姿可以更加准确地反映出可移动平台的实际位姿,以便在根据激光点云的目标位姿建立地图时,提高了地图的精度。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是根据本申请的实施例的自动驾驶车辆100的示意性架构图;
图2为本申请一实施例提供的应用场景示意图;
图3为本申请一实施例提供的位姿获取方法的流程图;
图4为本申请一实施例提供的各时刻间的激光点云的相对位姿关系的一种示意图;
图5为本申请一实施例提供的位姿获取系统的结构示意图;
图6为本申请另一实施例提供的位姿获取系统的结构示意图;
图7为本申请一实施例提供的可移动平台的结构示意图;
图8为本申请另一实施例提供的可移动平台的结构示意图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获 得的所有其他实施例,都属于本申请保护的范围。
需要说明的是,当组件被称为“固定于”另一个组件,它可以直接在另一个组件上或者也可以存在居中的组件。当一个组件被认为是“连接”另一个组件,它可以是直接连接到另一个组件或者可能同时存在居中组件。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中在本申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本申请。本文所使用的术语“及/或”包括一个或多个相关的所列项目的任意的和所有的组合。
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
本申请的实施例提供了位姿获取方法、系统和可移动平台,其中,可移动平台可以是手持电话、手持云台、无人机、无人车、无人船、机器人或自动驾驶汽车等。
下面对本申请可移动平台的描述使用自动驾驶车辆作为示例。图1是根据本申请的实施例的自动驾驶车辆100的示意性架构图。
自动驾驶车辆100可以包括感知系统110、控制系统120和机械系统130。
其中,感知系统110用于测量自动驾驶车辆100的状态信息,即自动驾驶车辆100的感知数据,感知数据可以表示自动驾驶车辆100的位置信息和/或状态信息,例如,位置、角度、速度、加速度和角速度等。感知系统110例如可以包括视觉传感器(例如包括多个单目或双目视觉装置)、激光雷达、毫米波雷达、惯性测量单元(Inertial Measurement Unit,IMU)、全球导航卫星系统、陀螺仪、超声传感器、电子罗盘、和气压计等传感器中的至少一种。例如,全球导航卫星系统可以是全球定位系统(Global Positioning System,GPS)。
感知系统110获取到感知数据后,可以将感知数据传输给控制系统120。其中,控制系统120用于根据感知数据做出用于控制自动驾驶车辆100如何行驶的决策,例如:以多少的速度行驶,或者,以多少的刹车加速度刹车,或者,是否变道行驶,或者,左/右转行驶等。控制系统120例如可以包括:计算平台,例如车载超算平台,或者中央处理器、分布式处理单元等具有处理功能器件的至少一种。控制系统120还可以包括车辆上各种数据传输的通 信链路。
控制系统120可以根据确定的决策向机械系统130输出一个或多个控制指令。其中,机械系统130用于响应来自控制系统120的一个或多个控制指令对自动驾驶车辆100进行控制,以完成上述决策,例如:机械系统130可以驱动自动驾驶车辆100的车轮转动,从而为自动驾驶车辆100的行驶提供动力,其中,车轮的转动速度可以影响到无人驾驶车辆的速度。其中,机械系统130例如可以包括:机械的车身发动机/电动机、控制的线控系统等等中的至少一种。
应理解,上述对于无人驾驶车辆各组成部分的命名仅是出于标识的目的,并不应理解为对本申请的实施例的限制。
其中,图2为本申请一实施例提供的应用场景示意图,如图2所示,自动驾驶车辆可以地面上行驶,并且自动驾驶车辆在目标区域的地面行驶的过程中,可以(例如通过上述的感知系统110)采集感知数据,该感知数据可以包括激光点云数据、位姿信息和Vins位姿数据等,然后对激光点云进行位姿修正,具体如何处理可以参见本申请下述各实施例所述。
图3为本申请一实施例提供的位姿获取方法的流程图,如图3所示,本实施例的方法可以应用于可移动平台中,也可应用于该可移动平台之外的其它电子设备。该方法包括:
S301、获取可移动平台在目标区域内移动的过程中获取的各时刻的激光点云数据和位姿信息。
S302、根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系。
S303、根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的位姿,以获得所述各时刻的激光点云的目标位姿。
本实施例以应用于可移动平台为例,可移动平台设备第一探测装置和第二探测装置,其中,第一探测装置用于获取激光点云数据,第二探测装置用于获取位姿信息。位姿信息可以是第二探测装置所在位置的位姿信息。第二探测装置可以设置于可移动平台,因此位姿信息可以为可移动平台的位姿信息。其中,当可移动平台在目标区域内移动的过程中,第一探测装 置获取各时刻的激光点云数据,第二探测装置获取各时刻的位姿信息。相应地,可移动平台获取第一探测装置获取到的各时刻的激光点云数据以及第二探测装置获取到的各时刻的位姿信息。第一探测装置例如为激光传感器,第二探测装置例如为GPS。
若本实施例应用于该可移动平台之外的其它电子设备,则电子设备可以从可移动平台处获取可移动平台在目标区域内移动的过程中获取的各时刻的激光点云数据和位姿信息,例如接收可移动平台发送的上述信息,或者,从可移动平台的存储装置中读取上述信息。
本实施例的可移动平台获取各时刻的激光点云数据和各时刻的位姿信息之后,根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系,从而可以获得各相邻两时刻的激光点云的相对位姿关系。以各时刻分别为时刻1至时刻10为例,相应地可以获得9个相邻两时刻的激光点云的相对位姿关系。可选地,该相对位姿关系包括以下至少一项:旋转矩阵和位移矩阵。
在获得各相邻两时刻的激光点云的相对位姿关系之后,根据各相邻两时刻的激光点云的相对位姿关系以及各时刻的位姿信息,对各时刻的激光点云的位姿进行修正,从而获得各时刻的激光点云的目标位姿。
本实施例提供的位姿获取方法,通过获取可移动平台在目标区域内移动的过程中获取的各时刻的激光点云数据和位姿信息;根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系;根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的位姿,以获得所述各时刻的激光点云的目标位姿。从而使得激光点云的目标位姿可以更加准确地反映出可移动平台的实际位姿,以便在根据激光点云的目标位姿建立地图时,提高了地图的精度。
在一些实施例中,在执行S303之后,还根据所述各时刻的激光点云的目标位姿,建立地图。由于各时刻的激光点云的目标位姿与实际位姿非常接近,所以据此建立的地图的精度更高。
在一些实施例中,修正的激光点云的位姿包括如下至少一项:高度信息和航向信息。本实施例中,根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的高度信息,使得修正 后的激光点云的高度更准确;或者,根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的航向信息,使得修正后的激光点云的航向更准确;或者,根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的高度信息和航向信息,使得修正后的激光点云的高度和航向更准确。
在一些实施例中,上述S302的一种可能的实现方式可以包括S3021和S3022:
S3021、获取相邻两时刻的激光点云的估计相对位姿关系。
S3022、根据所述估计相对位姿关系和所述相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系。
本实施例中,获取相邻两时刻的激光点云的估计相对位姿关系,然后根据该相邻两时刻的激光点云的估计相对位姿关系和该相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系。由于获取的相对估计位姿关系误差较大,因此再根据激光点云数据,对估计相对位姿关系进行修正,以获得更加准确的相对位姿关系。
在一些实施例中,上述S3021的一种可能的实现方式可以包括A1和A2:
A1、获取所述可移动平台在相邻两时刻获取的vins位姿数据;
A2、根据相邻两时刻的vins位姿数据,获取相邻两时刻的激光点云的估计相对位姿关系。
本实施例中,可移动平台还设有第三探测装置,第三探测装置可以获取vins位姿数据。当可移动平台在目标区域内移动的过程中,第三探测装置可以获取各时刻的vins位姿数据,相应地,可移动平台可以获取第三探测装置获取的各时刻的vins位姿数据。然后,可移动平台从中获取相邻两时刻的vins位姿数据,每时刻的vins位姿数据可以表示每时刻的激光点云的估计位姿,并根据相邻两时刻的vins位姿数据,获取两时刻的激光点云的估计相对位姿关系。
在一些实施例中,上述S3021的一种可能的实现方式可以包括B1和B2:
B1、根据相邻两时刻中前一时刻的激光点云数据和所述估计相对位姿关系,估计所述相邻两时刻中后一时刻的激光点云数据。
B2、根据估计的后一时刻的激光点云数据、第一探测装置获取的后一时 刻的激光点云数据以及所述估计相对位姿关系,获得相邻两时刻的激光点云的相对位姿关系。
本实施例中,以相邻两时刻为时刻2和时刻3为例,根据时刻2的激光点云数据的时刻2与时刻3的激光点云数据的估计相对位姿关系,估计时刻3的激光点云数据,估计出的时刻3的激光点云数据与第一探测装置获取的时刻3的激光点云数据可能不同。然后再根据估计的时刻3的激光点云数据、第一探测装置获取的时刻3的激光点云数据以及上述的估计相对位姿关系,获得时刻2与时刻3的激光点云的相对位姿关系。
可选地,上述B2的一种可能的实现方式为:根据估计的后一时刻的激光点云数据和所述第一探测装置获取的后一时刻的激光点云数据,确定相对位姿关系偏差;根据所述相对位姿关系偏差和所述估计相对位姿关系,获得两时刻的激光点云的相对位姿关系。
根据估计的时刻3的激光点云数据以及第一探测装置获取的时刻3的激光点云数据,确定估计的与第一探测装置获取的时刻3的激光点云数据的相对位姿关系,称为相对位姿关系偏差,然后再根据该相对位姿关系偏差以及上述时刻2与时刻3的激光点云的估计相对位姿关系,获取时刻2与时刻3的激光点云的相对位姿关系。
在一些实施例中,上述根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系的一种可能的实现方式可以:根据相邻两时刻的激光点云数据,确定相邻两时刻的激光点云数据是否匹配。若匹配,则根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系。
本实施例中,根据相邻两时刻的激光点云数据,确定相邻两时刻的激光点云数据是否匹配,如果匹配,则执行上述根据相邻两时刻的激光点云数据,获得该相邻两时刻的相对位姿关系的方案,如果不匹配,则不获取该相邻两时刻的相对位位姿关系。这样获取的是相匹配的相邻两时刻的激光点云的相对位姿关系,这些相对位姿关系更加准确,以利于后续修正激光点云。
可选地,判断相邻两时刻的激光点云数据是否匹配,例如:获取相邻两时刻的激光点云数据间的法向量的距离,如果该法向量的距离小于预设值,则确定相邻两时刻的激光点云数据匹配,如果该法向量的距离大于等于预设值,则确定相邻两时刻的激光点云数据不匹配。
在一些实施例中,上述S303的一种可能的实现方式可以包括S3031-S3033:
S3031、针对任一时刻i,根据各时刻的位姿信息,确定位姿与时刻i的位姿相距预设距离内的K个时刻,K、i为大于等于1的整数。
S3032、根据所述K个时刻的激光点云数据和所述时刻i的激光点云数据,获取时刻i与K个时刻中每一时刻的激光点云的相对位姿关系。
S3033、根据各时刻i与K个时刻中每一时刻的激光点云的相对位姿关系、各相邻两时刻的激光点云的相对位姿关系,修正各时刻的激光点云的位姿。
本实施例中,以各时刻中的任一时刻为例,例如时刻i,i为大于等于1的整数,根据各时刻的位姿信息,确定位姿与时刻i的位姿相距预设距离内的K个时刻。可选地,位姿信息包括GPS数据,相应地,确定的是位置与时刻i的位置相距预设距离内的K个时刻。然后根据该K个时刻的激光点云数据和时刻i的激光点云数据,获取时刻i与K个时刻中每一时刻的激光点云的相对位姿关系,从而获得各个时刻中每个时刻分别与位姿相距预设距离内的至少一个时刻的激光点云的位姿。
其中,获取时刻i与K个时刻中每一时刻的激光点云的相对位姿关系的方式可以参见上述实施例中获取相邻两时刻的激光点云的相对位姿关系的方式,此处不再赘述。
然后,根据各个时刻中每个时刻分别与位姿相距预设距离内的至少一个时刻的激光点云的相对位姿关系、上述S302中获得的各相邻两时刻的激光点云的相对位姿关系,修正各时刻的激光点云的位姿。提高了修正后的激光点云的位姿的准确性。
在一些实施例中,上述S3033的一种可能的实现方式可以为C1:
C1、根据时刻i与K个时刻中时刻j的激光点云的相对位姿关系,以及时刻i与时刻j之间的各相邻时刻的激光点云的相对位姿关系,修正时刻i到时刻j的激光点云的位姿。
本实施例中,以K个时刻中的任一时刻为例,该任一时刻为时刻j,假设时刻i为时刻1,时刻j为时刻4,如图4所示,根据时刻1与时刻4的激光点云的相对位姿关系(通过S3032获得的),以及时刻1与时刻2的激光点云的相对位姿关系、时刻2与时刻3的激光点云的相对位姿关系、时刻3 与时刻4的激光点云的相对位姿关系,可以修正时刻1、时刻2、时刻3、时刻4的激光点云的位姿。
在一些实施例中,上述C1的一种可能的实现方式可以包括C11-C13:
C11、根据时刻i到时刻j中的各相邻时刻的激光点云的相对位姿关系,以及时刻i与时刻j的激光点云的相对位姿关系,确定时刻i与时刻j的激光点云的相对位姿关系误差。
C12、根据所述相对位姿关系误差,修正时刻i到时刻j中的各相邻时刻的激光点云的相对位姿关系。
C13、根据修正后的时刻i到时刻j中的各相邻时刻的激光点云的相对位姿关系,修正时刻i到时刻j的激光点云数据。
本实施例中,以时刻i为时刻1,时刻j为时刻4为例,根据时刻1与时刻2的激光点云的相对位姿关系、时刻2与时刻3的激光点云的位姿关系、时刻3与时刻4的激光点云的位姿关系以及时刻1与时刻4的激光点云的位姿关系,确定时刻i与时刻j的激光点云的相对位姿关系误差。例如:以相对位姿关系包括如下至少一项:旋转矩阵和位移矩阵为例,将时刻1与时刻2的激光点云的相对位姿关系、时刻2与时刻3的激光点云的相对位姿关系、时刻3与时刻4的激光点云的相对位姿关系相乘,获得一个时刻1与时刻4的激光点云的计算相对位姿关系,再获取时刻1与时刻4的激光点云的位姿关系与时刻1与时刻4的激光点云的计算相对位姿关系的差值,该差值称为时刻1与时刻4的激光点云的相对位姿关系误差。
然后根据时刻1与时刻4的激光点云的相对位姿关系误差,修正时刻1与时刻2的激光点云的相对位姿关系、时刻2与时刻3的激光点云的相对位姿关系、时刻3与时刻4的激光点云的相对位姿关系。然后,根据修正后的时刻1与时刻2的激光点云的相对位姿关系、修正后的时刻2与时刻3的激光点云的相对位姿关系、修正后的时刻3与时刻4的激光点云的相对位姿关系,修正时刻1、时刻2、时刻3、时刻4的激光点云的位姿。使得修正后的激光点云数据根据修正后的相对位姿关系变换后,可以尽可能与相应的激光点云数据相同。
可选地,时刻i与时刻j的激光点云的相对位姿关系误差与,时刻i和时刻j的激光点云的相对位姿关系、时刻i到时刻j中各相邻两时刻的激光点云 的相对位姿关系有关,因此,令时刻i与时刻j的激光点云的相对位姿关系误差最小,修正时刻i到时刻j中各相邻时刻的激光点云数据的相对位姿关系。或者,
可选地,可以令时刻i与K个时刻中每个时刻的激光点云的相对位姿关系误差之和最小,修正时刻i到K个时刻中每个时刻中各相邻时刻的激光点云数据的相对位姿关系。或者,
可选地,可以令各个时刻i与对应的K个时刻中每个时刻的激光点云的相对位姿关系误差之和最小,修正各相邻时刻的激光点云的相对位姿关系。
在一些实施例中,S3031的一种可能的实现方式为:根据各时刻的位姿信息,确定位姿与时刻i的位姿相距预设距离内的所有时刻,该所有时刻为K个时刻,需要说明的是,i的取值不,K的值取也可能不同。或者,K的数值是预先设定好的,根据各时刻的位姿信息,确定位姿与时刻i的位姿相距预设距离内的K个时刻,例如:K预先取值为2,若位姿与时刻i的位姿相距预设距离内的所有时刻大于2个时刻,则本实施例获取其中的2个时刻,以节省处理资源。
在一些实施例中,S3031的另一种可能的实现方式为:根据各时刻的位姿信息,确定位姿与时刻i的位姿相距预设距离内的时刻为M个时刻,所述M为大于等于K的整数;根据时刻i的激光点云数据和所述M个时刻的激光点云数据,确定M个时刻中激光点云数据分别与时刻i的激光点云数据匹配的K个时刻。
本实施例中,根据各时刻的位姿信息,确定位姿与时刻i的位姿相距预设距离内的所有时刻,该所有时刻为M个时刻。然后根据M个时刻的激光点云数据和时刻i的激光点云数据,确定激光点云数据与上述时刻i的激光点云数据匹配的上述K个时刻。例如:将激光点云数据与时刻i的激光点云数据匹配的所有时刻确定为K个时刻。或者,K的数值是预先设定好的,从M个时刻中激光点云数据与时刻i的激光点云数据匹配的K个时刻,例如:K预先取值为2,若M个激光点云数据中激光点云数据与时刻i的激光点云数据匹配的时刻为3个,则本实施例获取其中的2个时刻,以节省处理资源。
其中,如何确定两个时刻的激光点云数据匹配的方式可以参见确定相邻两时刻的激光点云数据匹配的方式,此处不再赘述。
本申请实施例中还提供了一种计算机存储介质,该计算机存储介质中存储有程序指令,所述程序执行时可包括如图3及其对应实施例中的位姿获取方法的部分或全部步骤。
图5为本申请一实施例提供的位姿获取系统的结构示意图,如图5所示,本实施例的位姿获取系统500可以包括:第一探测装置501、第二探测装置502和处理器503。其中,第一探测装置501、第二探测装置502和处理器503可以通过总线连接。可选地,位姿获取系统500还可以包括第三探测装置504,第三探测装置504可以通过总线与上述部件连接。
第一探测装置501,用于获取激光点云数据。
第二探测装置502,用于获取位姿信息。
处理器503,用于获取可移动平台在目标区域内移动的过程中所述第一探测装置获取的各时刻的激光点云数据和所述第二探测装置获取的各时刻的所述位姿信息;根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系;根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的位姿,以获得所述各时刻的激光点云的目标位姿。
在一些实施例中,所述处理器503,具体用于:
针对任一时刻i,根据各时刻的位姿信息,确定位姿与时刻i的位姿相距预设距离内的K个时刻,K、i为大于等于1的整数;
根据所述K个时刻的激光点云数据和所述时刻i的激光点云数据,获取时刻i与K个时刻中每一时刻的激光点云的相对位姿关系;
根据各时刻i与K个时刻中每一时刻的激光点云的相对位姿关系、各相邻两时刻的激光点云的相对位姿关系,修正各时刻的激光点云的位姿。
在一些实施例中,所述处理器503,具体用于:
根据时刻i与K个时刻中时刻j的激光点云的相对位姿关系,以及时刻i与时刻j之间的各相邻时刻的激光点云的相对位姿关系,修正时刻i到时刻j的激光点云的位姿。
在一些实施例中,所述处理器503,具体用于:
根据时刻i到时刻j中的各相邻时刻的激光点云的相对位姿关系,以及时刻i与时刻j的激光点云的相对位姿关系,确定时刻i与时刻j的激光点云的 相对位姿关系误差;
根据所述相对位姿关系误差,修正时刻i到时刻j中的各相邻时刻的激光点云的相对位姿关系;
根据修正后的时刻i到时刻j中的各相邻时刻的激光点云的相对位姿关系,修正时刻i到时刻j的激光点云数据。
在一些实施例中,所述处理器503,具体用于:
根据所各时刻i对应的相对位姿关系误差之和最小,修正各相邻时刻的激光点云的相对位姿关系。
在一些实施例中,修正的激光点云的位姿包括如下至少一种:高度信息和航向信息。
在一些实施例中,所述处理器503,具体用于:
根据相邻两时刻的激光点云数据,确定相邻两时刻的激光点云数据是否匹配;
若匹配,则根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系。
在一些实施例中,所述处理器503,具体用于:
根据各时刻的位姿信息,确定位姿与时刻i的位姿相距预设距离内的时刻为M个时刻,所述M为大于等于K的整数;
根据时刻i的激光点云数据和所述M个时刻的激光点云数据,确定M个时刻中激光点云数据分别与时刻i的激光点云数据匹配的K个时刻。
在一些实施例中,若两时刻的激光点云数据间的法向量的距离小于预设值,则该两时刻的激光点云数据匹配。
在一些实施例中,所述处理器503,具体用于:
获取两时刻的激光点云的估计相对位姿关系;
根据所述估计相对位姿关系和所述两时刻的激光点云数据,获得两时刻的激光点云的相对位姿关系。
在一些实施例中,所述第三探测装置504,用于获取vins位姿数据;
所述处理器503,具体用于:获取所述第三探测装置504在两时刻获取的vins位姿数据;根据两时刻的vins位姿数据,获取两时刻的激光点云的估计相对位姿关系。
在一些实施例中,所述处理器503,具体用于:
根据两时刻中前一时刻的激光点云数据和所述估计相对位姿关系,估计所述两时刻中后一时刻的激光点云数据;
根据估计的后一时刻的激光点云数据、所述第一探测装置获取的后一时刻的激光点云数据以及所述估计相对位姿关系,获得两时刻的激光点云的相对位姿关系。
在一些实施例中,所述处理器503,具体用于:
根据估计的后一时刻的激光点云数据和所述第一探测装置获取的后一时刻的激光点云数据,确定相对位姿关系偏差;
根据所述相对位姿关系偏差和所述估计相对位姿关系,获得两时刻的激光点云的相对位姿关系。
在一些实施例中,所述位姿信息包括GPS数据。
在一些实施例中,所述相对位姿关系包括以下至少一项:旋转矩阵和位移矩阵。
在一些实施例中,所述处理器503,还用于根据所述各时刻的激光点云的目标位姿,建立地图。
可选地,本实施例的位姿获取系统500还可以包括:用于存储程序代码的存储器(图中未示出),存储器用于存储程序代码,当程序代码被执行时,所述位姿获取系统500可以实现上述的技术方案。
本实施例的位姿获取系统,可以用于执行图3及对应方法实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
图6为本申请另一实施例提供的位姿获取系统的结构示意图,如图6所示,本实施例的位姿获取系统600可以包括:存储器601和处理器602。其中,存储器601和处理器602可以通过总线连接。
存储器601,用于存储程序代码;
处理器602,用于当程序代码被调用时,执行:
获取可移动平台在目标区域内移动的过程中获取的各时刻的激光点云数据和所述位姿信息;根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系;根据各相邻两时刻的激光点云的相对位姿关系和各时刻的位姿信息,修正所述各时刻的激光点云的位姿,以获得所述各时 刻的激光点云的目标位姿。
在一些实施例中,所述处理器602,具体用于:
针对任一时刻i,根据各时刻的位姿信息,确定位姿与时刻i的位姿相距预设距离内的K个时刻,K、i为大于等于1的整数;
根据所述K个时刻的激光点云数据和所述时刻i的激光点云数据,获取时刻i与K个时刻中每一时刻的激光点云的相对位姿关系;
根据各时刻i与K个时刻中每一时刻的激光点云的相对位姿关系、各相邻两时刻的激光点云的相对位姿关系,修正各时刻的激光点云的位姿。
在一些实施例中,所述处理器602,具体用于:
根据时刻i与K个时刻中时刻j的激光点云的相对位姿关系,以及时刻i与时刻j之间的各相邻时刻的激光点云的相对位姿关系,修正时刻i到时刻j的激光点云的位姿。
在一些实施例中,所述处理器602,具体用于:
根据时刻i到时刻j中的各相邻时刻的激光点云的相对位姿关系,以及时刻i与时刻j的激光点云的相对位姿关系,确定时刻i与时刻j的激光点云的相对位姿关系误差;
根据所述相对位姿关系误差,修正时刻i到时刻j中的各相邻时刻的激光点云的相对位姿关系;
根据修正后的时刻i到时刻j中的各相邻时刻的激光点云的相对位姿关系,修正时刻i到时刻j的激光点云数据。
在一些实施例中,所述处理器602,具体用于:
根据所各时刻i对应的相对位姿关系误差之和最小,修正各相邻时刻的激光点云的相对位姿关系。
在一些实施例中,修正的激光点云的位姿包括如下至少一种:高度信息和航向信息。
在一些实施例中,所述处理器602,具体用于:
根据相邻两时刻的激光点云数据,确定相邻两时刻的激光点云数据是否匹配;
若匹配,则根据相邻两时刻的激光点云数据,获得相邻两时刻的激光点云的相对位姿关系。
在一些实施例中,所述处理器602,具体用于:
根据各时刻的位姿信息,确定位姿与时刻i的位姿相距预设距离内的时刻为M个时刻,所述M为大于等于K的整数;
根据时刻i的激光点云数据和所述M个时刻的激光点云数据,确定M个时刻中激光点云数据分别与时刻i的激光点云数据匹配的K个时刻。
在一些实施例中,若两时刻的激光点云数据间的法向量的距离小于预设值,则该两时刻的激光点云数据匹配。
在一些实施例中,所述处理器602,具体用于:
获取两时刻的激光点云的估计相对位姿关系;
根据所述估计相对位姿关系和所述两时刻的激光点云数据,获得两时刻的激光点云的相对位姿关系。
在一些实施例中,所述处理器602,具体用于:获取所述第三探测装置在两时刻获取的vins位姿数据;根据两时刻的vins位姿数据,获取两时刻的激光点云的估计相对位姿关系。
在一些实施例中,所述处理器602,具体用于:
根据两时刻中前一时刻的激光点云数据和所述估计相对位姿关系,估计所述两时刻中后一时刻的激光点云数据;
根据估计的后一时刻的激光点云数据、所述第一探测装置获取的后一时刻的激光点云数据以及所述估计相对位姿关系,获得两时刻的激光点云的相对位姿关系。
在一些实施例中,所述处理器602,具体用于:
根据估计的后一时刻的激光点云数据和所述第一探测装置获取的后一时刻的激光点云数据,确定相对位姿关系偏差;
根据所述相对位姿关系偏差和所述估计相对位姿关系,获得两时刻的激光点云的相对位姿关系。
在一些实施例中,所述位姿信息包括GPS数据。
在一些实施例中,所述相对位姿关系包括以下至少一项:旋转矩阵和位移矩阵。
在一些实施例中,所述处理器602,还用于根据所述各时刻的激光点云的目标位姿,建立地图。
本实施例的位姿获取系统,可以用于执行图3及对应方法实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
FIG. 7 is a schematic structural diagram of a movable platform provided by an embodiment of the present application. As shown in FIG. 7, the movable platform 700 of this embodiment may include a first detection device 701, a second detection device 702 and a processor 703, which may be connected via a bus. Optionally, the movable platform 700 may further include a third detection device 704, which may be connected to the above components via the bus.
The first detection device 701 is configured to acquire laser point cloud data.
The second detection device 702 is configured to acquire pose information.
The processor 703 is configured to: acquire the laser point cloud data acquired by the first detection device 701 at each time and the pose information acquired by the second detection device 702 at each time while the movable platform 700 moves within a target area; obtain, according to the laser point cloud data at two adjacent times, the relative pose relationship between the laser point clouds at the two adjacent times; and correct the poses of the laser point clouds at the respective times according to the relative pose relationships between the laser point clouds at the adjacent times and the pose information at the respective times, so as to obtain the target poses of the laser point clouds at the respective times.
In some embodiments, the processor 703 is specifically configured to:
for any time i, determine, according to the pose information at the respective times, K times whose poses are within a preset distance of the pose at time i, where K and i are integers greater than or equal to 1;
obtain, according to the laser point cloud data at the K times and the laser point cloud data at time i, the relative pose relationship between the laser point cloud at time i and the laser point cloud at each of the K times;
correct the poses of the laser point clouds at the respective times according to the relative pose relationship between each time i and each of the K times and the relative pose relationships between the laser point clouds at the adjacent times.
In some embodiments, the processor 703 is specifically configured to:
correct the poses of the laser point clouds from time i to time j according to the relative pose relationship between the laser point clouds at time i and at time j among the K times, and the relative pose relationships between the laser point clouds at the adjacent times between time i and time j.
In some embodiments, the processor 703 is specifically configured to:
determine the relative pose relationship error between the laser point clouds at time i and time j according to the relative pose relationships between the laser point clouds at the adjacent times from time i to time j and the relative pose relationship between the laser point clouds at time i and time j;
correct, according to the relative pose relationship error, the relative pose relationships between the laser point clouds at the adjacent times from time i to time j;
correct the laser point cloud data from time i to time j according to the corrected relative pose relationships between the laser point clouds at the adjacent times from time i to time j.
In some embodiments, the processor 703 is specifically configured to:
correct the relative pose relationships between the laser point clouds at the adjacent times such that the sum of the relative pose relationship errors corresponding to the respective times i is minimized.
In some embodiments, the corrected pose of a laser point cloud includes at least one of the following: height information and heading information.
In some embodiments, the processor 703 is specifically configured to:
determine, according to the laser point cloud data at two adjacent times, whether the laser point cloud data at the two adjacent times match;
if they match, obtain the relative pose relationship between the laser point clouds at the two adjacent times according to the laser point cloud data at the two adjacent times.
In some embodiments, the processor 703 is specifically configured to:
determine, according to the pose information at the respective times, M times whose poses are within the preset distance of the pose at time i, where M is an integer greater than or equal to K;
determine, according to the laser point cloud data at time i and the laser point cloud data at the M times, K times among the M times at which the laser point cloud data match the laser point cloud data at time i.
In some embodiments, the laser point cloud data at two times match if the distance along the normal vectors between the laser point cloud data at the two times is smaller than a preset value.
In some embodiments, the processor 703 is specifically configured to:
obtain an estimated relative pose relationship between the laser point clouds at two times;
obtain the relative pose relationship between the laser point clouds at the two times according to the estimated relative pose relationship and the laser point cloud data at the two times.
In some embodiments, the third detection device 704 is configured to acquire VINS pose data;
the processor 703 is specifically configured to: acquire the VINS pose data acquired by the third detection device 704 at the two times; and obtain the estimated relative pose relationship between the laser point clouds at the two times according to the VINS pose data at the two times.
In some embodiments, the processor 703 is specifically configured to:
estimate the laser point cloud data at the later of the two times according to the laser point cloud data at the earlier of the two times and the estimated relative pose relationship;
obtain the relative pose relationship between the laser point clouds at the two times according to the estimated laser point cloud data at the later time, the laser point cloud data at the later time acquired by the first detection device, and the estimated relative pose relationship.
In some embodiments, the processor 703 is specifically configured to:
determine a relative pose relationship deviation according to the estimated laser point cloud data at the later time and the laser point cloud data at the later time acquired by the first detection device;
obtain the relative pose relationship between the laser point clouds at the two times according to the relative pose relationship deviation and the estimated relative pose relationship.
In some embodiments, the pose information includes GPS data.
In some embodiments, the relative pose relationship includes at least one of the following: a rotation matrix and a translation matrix.
In some embodiments, the processor 703 is further configured to build a map according to the target poses of the laser point clouds at the respective times.
Optionally, the movable platform 700 of this embodiment may further include a memory (not shown) for storing program code; when the program code is executed, the movable platform 700 can implement the technical solutions described above.
The movable platform of this embodiment can be used to execute the technical solutions of FIG. 3 and the corresponding method embodiments; its implementation principle and technical effect are similar and are not repeated here.
FIG. 8 is a schematic structural diagram of a movable platform provided by another embodiment of the present application. As shown in FIG. 8, the movable platform 800 of this embodiment may include a movable platform body 801 and a pose acquisition system 802.
The pose acquisition system 802 is mounted on the movable platform body 801. The pose acquisition system 802 may be a device independent of the movable platform body 801.
The pose acquisition system 802 may adopt the structure of the device embodiments shown in FIG. 5 or FIG. 6 and, correspondingly, may execute the technical solutions of FIG. 3 and the corresponding method embodiments; its implementation principle and technical effect are similar and are not repeated here.
Optionally, the movable platform 800 includes a handheld phone, a handheld gimbal, an unmanned aerial vehicle, an unmanned ground vehicle, an unmanned ship, a robot or an autonomous driving vehicle.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of the technical features therein may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (50)

  1. A pose acquisition method, applied to a movable platform, wherein the movable platform is provided with a first detection device configured to acquire laser point cloud data and a second detection device configured to acquire pose information, the method comprising:
    acquiring the laser point cloud data at each time and the pose information acquired while the movable platform moves within a target area;
    obtaining, according to the laser point cloud data at two adjacent times, a relative pose relationship between the laser point clouds at the two adjacent times;
    correcting the poses of the laser point clouds at the respective times according to the relative pose relationships between the laser point clouds at the adjacent times and the pose information at the respective times, so as to obtain target poses of the laser point clouds at the respective times.
  2. The method according to claim 1, wherein correcting the poses of the laser point clouds at the respective times according to the relative pose relationships between the laser point clouds at the adjacent times and the pose information at the respective times comprises:
    for any time i, determining, according to the pose information at the respective times, K times whose poses are within a preset distance of the pose at time i, where K and i are integers greater than or equal to 1;
    obtaining, according to the laser point cloud data at the K times and the laser point cloud data at time i, a relative pose relationship between the laser point cloud at time i and the laser point cloud at each of the K times;
    correcting the poses of the laser point clouds at the respective times according to the relative pose relationship between each time i and each of the K times and the relative pose relationships between the laser point clouds at the adjacent times.
  3. The method according to claim 2, wherein correcting the poses of the laser point clouds at the respective times according to the relative pose relationship between each time i and each of the K times and the relative pose relationships between the laser point clouds at the adjacent times comprises:
    correcting the poses of the laser point clouds from time i to time j according to the relative pose relationship between the laser point clouds at time i and at time j among the K times, and the relative pose relationships between the laser point clouds at the adjacent times between time i and time j.
  4. The method according to claim 3, wherein correcting the poses of the laser point clouds from time i to time j according to the relative pose relationship between time i and time j among the K times and the relative pose relationships between the laser point clouds at the adjacent times between time i and time j comprises:
    determining a relative pose relationship error between the laser point clouds at time i and time j according to the relative pose relationships between the laser point clouds at the adjacent times from time i to time j and the relative pose relationship between the laser point clouds at time i and time j;
    correcting, according to the relative pose relationship error, the relative pose relationships between the laser point clouds at the adjacent times from time i to time j;
    correcting the laser point cloud data from time i to time j according to the corrected relative pose relationships between the laser point clouds at the adjacent times from time i to time j.
  5. The method according to claim 4, wherein correcting, according to the relative pose relationship error, the relative pose relationships between the laser point clouds at the adjacent times from time i to time j comprises:
    correcting the relative pose relationships between the laser point clouds at the adjacent times such that the sum of the relative pose relationship errors corresponding to the respective times i is minimized.
  6. The method according to any one of claims 1-5, wherein the corrected pose of a laser point cloud comprises at least one of the following: height information and heading information.
  7. The method according to any one of claims 1-6, wherein obtaining, according to the laser point cloud data at two adjacent times, the relative pose relationship between the laser point clouds at the two adjacent times comprises:
    determining, according to the laser point cloud data at the two adjacent times, whether the laser point cloud data at the two adjacent times match;
    if they match, obtaining the relative pose relationship between the laser point clouds at the two adjacent times according to the laser point cloud data at the two adjacent times.
  8. The method according to claim 2, wherein determining, according to the pose information at the respective times, the K times whose poses are within the preset distance of the pose at time i comprises:
    determining, according to the pose information at the respective times, M times whose poses are within the preset distance of the pose at time i, where M is an integer greater than or equal to K;
    determining, according to the laser point cloud data at time i and the laser point cloud data at the M times, K times among the M times at which the laser point cloud data match the laser point cloud data at time i.
  9. The method according to claim 7 or 8, wherein the laser point cloud data at two times match if the distance along the normal vectors between the laser point cloud data at the two times is smaller than a preset value.
  10. The method according to any one of claims 1-9, wherein obtaining, according to the laser point cloud data at two times, the relative pose relationship between the laser point clouds at the two times comprises:
    obtaining an estimated relative pose relationship between the laser point clouds at the two times;
    obtaining the relative pose relationship between the laser point clouds at the two times according to the estimated relative pose relationship and the laser point cloud data at the two times.
  11. The method according to claim 10, wherein obtaining the estimated relative pose relationship between the laser point clouds at the two times comprises:
    acquiring VINS pose data acquired by the movable platform at the two times;
    obtaining the estimated relative pose relationship between the laser point clouds at the two times according to the VINS pose data at the two times.
  12. The method according to claim 10 or 11, wherein obtaining the relative pose relationship between the laser point clouds at the two times according to the estimated relative pose relationship and the laser point cloud data collected at the two times comprises:
    estimating the laser point cloud data at the later of the two times according to the laser point cloud data at the earlier of the two times and the estimated relative pose relationship;
    obtaining the relative pose relationship between the laser point clouds at the two times according to the estimated laser point cloud data at the later time, the laser point cloud data at the later time acquired by the first detection device, and the estimated relative pose relationship.
  13. The method according to claim 12, wherein obtaining the relative pose relationship between the laser point clouds at the two times according to the estimated laser point cloud data at the later time, the laser point cloud data at the later time acquired by the first detection device and the estimated relative pose relationship comprises:
    determining a relative pose relationship deviation according to the estimated laser point cloud data at the later time and the laser point cloud data at the later time acquired by the first detection device;
    obtaining the relative pose relationship between the laser point clouds at the two times according to the relative pose relationship deviation and the estimated relative pose relationship.
  14. The method according to any one of claims 1-13, wherein the pose information comprises GPS data.
  15. The method according to any one of claims 1-14, wherein the relative pose relationship comprises at least one of the following: a rotation matrix and a translation matrix.
  16. The method according to any one of claims 1-15, further comprising:
    building a map according to the target poses of the laser point clouds at the respective times.
  17. A pose acquisition system, applied to a movable platform, comprising:
    a first detection device configured to acquire laser point cloud data;
    a second detection device configured to acquire pose information;
    a processor configured to: acquire the laser point cloud data acquired by the first detection device at each time and the pose information acquired by the second detection device at each time while the movable platform moves within a target area; obtain, according to the laser point cloud data at two adjacent times, a relative pose relationship between the laser point clouds at the two adjacent times; and correct the poses of the laser point clouds at the respective times according to the relative pose relationships between the laser point clouds at the adjacent times and the pose information at the respective times, so as to obtain target poses of the laser point clouds at the respective times.
  18. The system according to claim 17, wherein the processor is specifically configured to:
    for any time i, determine, according to the pose information at the respective times, K times whose poses are within a preset distance of the pose at time i, where K and i are integers greater than or equal to 1;
    obtain, according to the laser point cloud data at the K times and the laser point cloud data at time i, the relative pose relationship between the laser point cloud at time i and the laser point cloud at each of the K times;
    correct the poses of the laser point clouds at the respective times according to the relative pose relationship between each time i and each of the K times and the relative pose relationships between the laser point clouds at the adjacent times.
  19. The system according to claim 18, wherein the processor is specifically configured to:
    correct the poses of the laser point clouds from time i to time j according to the relative pose relationship between the laser point clouds at time i and at time j among the K times, and the relative pose relationships between the laser point clouds at the adjacent times between time i and time j.
  20. The system according to claim 19, wherein the processor is specifically configured to:
    determine a relative pose relationship error between the laser point clouds at time i and time j according to the relative pose relationships between the laser point clouds at the adjacent times from time i to time j and the relative pose relationship between the laser point clouds at time i and time j;
    correct, according to the relative pose relationship error, the relative pose relationships between the laser point clouds at the adjacent times from time i to time j;
    correct the laser point cloud data from time i to time j according to the corrected relative pose relationships between the laser point clouds at the adjacent times from time i to time j.
  21. The system according to claim 20, wherein the processor is specifically configured to:
    correct the relative pose relationships between the laser point clouds at the adjacent times such that the sum of the relative pose relationship errors corresponding to the respective times i is minimized.
  22. The system according to any one of claims 17-21, wherein the corrected pose of a laser point cloud comprises at least one of the following: height information and heading information.
  23. The system according to any one of claims 17-22, wherein the processor is specifically configured to:
    determine, according to the laser point cloud data at two adjacent times, whether the laser point cloud data at the two adjacent times match;
    if they match, obtain the relative pose relationship between the laser point clouds at the two adjacent times according to the laser point cloud data at the two adjacent times.
  24. The system according to claim 18, wherein the processor is specifically configured to:
    determine, according to the pose information at the respective times, M times whose poses are within the preset distance of the pose at time i, where M is an integer greater than or equal to K;
    determine, according to the laser point cloud data at time i and the laser point cloud data at the M times, K times among the M times at which the laser point cloud data match the laser point cloud data at time i.
  25. The system according to claim 23 or 24, wherein the laser point cloud data at two times match if the distance along the normal vectors between the laser point cloud data at the two times is smaller than a preset value.
  26. The system according to any one of claims 17-25, wherein the processor is specifically configured to:
    obtain an estimated relative pose relationship between the laser point clouds at two times;
    obtain the relative pose relationship between the laser point clouds at the two times according to the estimated relative pose relationship and the laser point cloud data at the two times.
  27. The system according to claim 26, further comprising a third detection device;
    the third detection device is configured to acquire VINS pose data;
    the processor is specifically configured to: acquire the VINS pose data acquired by the third detection device at the two times; and obtain the estimated relative pose relationship between the laser point clouds at the two times according to the VINS pose data at the two times.
  28. The system according to claim 26 or 27, wherein the processor is specifically configured to:
    estimate the laser point cloud data at the later of the two times according to the laser point cloud data at the earlier of the two times and the estimated relative pose relationship;
    obtain the relative pose relationship between the laser point clouds at the two times according to the estimated laser point cloud data at the later time, the laser point cloud data at the later time acquired by the first detection device, and the estimated relative pose relationship.
  29. The system according to claim 28, wherein the processor is specifically configured to:
    determine a relative pose relationship deviation according to the estimated laser point cloud data at the later time and the laser point cloud data at the later time acquired by the first detection device;
    obtain the relative pose relationship between the laser point clouds at the two times according to the relative pose relationship deviation and the estimated relative pose relationship.
  30. The system according to any one of claims 17-29, wherein the pose information comprises GPS data.
  31. The system according to any one of claims 17-30, wherein the relative pose relationship comprises at least one of the following: a rotation matrix and a translation matrix.
  32. The system according to any one of claims 17-31, wherein the processor is further configured to build a map according to the target poses of the laser point clouds at the respective times.
  33. A movable platform, comprising:
    a first detection device configured to acquire laser point cloud data;
    a second detection device configured to acquire pose information;
    a processor configured to: acquire the laser point cloud data acquired by the first detection device at each time and the pose information acquired by the second detection device at each time while the movable platform moves within a target area; obtain, according to the laser point cloud data at two adjacent times, a relative pose relationship between the laser point clouds at the two adjacent times; and correct the poses of the laser point clouds at the respective times according to the relative pose relationships between the laser point clouds at the adjacent times and the pose information at the respective times, so as to obtain target poses of the laser point clouds at the respective times.
  34. The movable platform according to claim 33, wherein the processor is specifically configured to:
    for any time i, determine, according to the pose information at the respective times, K times whose poses are within a preset distance of the pose at time i, where K and i are integers greater than or equal to 1;
    obtain, according to the laser point cloud data at the K times and the laser point cloud data at time i, the relative pose relationship between the laser point cloud at time i and the laser point cloud at each of the K times;
    correct the poses of the laser point clouds at the respective times according to the relative pose relationship between each time i and each of the K times and the relative pose relationships between the laser point clouds at the adjacent times.
  35. The movable platform according to claim 34, wherein the processor is specifically configured to:
    correct the poses of the laser point clouds from time i to time j according to the relative pose relationship between the laser point clouds at time i and at time j among the K times, and the relative pose relationships between the laser point clouds at the adjacent times between time i and time j.
  36. The movable platform according to claim 35, wherein the processor is specifically configured to:
    determine a relative pose relationship error between the laser point clouds at time i and time j according to the relative pose relationships between the laser point clouds at the adjacent times from time i to time j and the relative pose relationship between the laser point clouds at time i and time j;
    correct, according to the relative pose relationship error, the relative pose relationships between the laser point clouds at the adjacent times from time i to time j;
    correct the laser point cloud data from time i to time j according to the corrected relative pose relationships between the laser point clouds at the adjacent times from time i to time j.
  37. The movable platform according to claim 36, wherein the processor is specifically configured to:
    correct the relative pose relationships between the laser point clouds at the adjacent times such that the sum of the relative pose relationship errors corresponding to the respective times i is minimized.
  38. The movable platform according to any one of claims 33-37, wherein the corrected pose of a laser point cloud comprises at least one of the following: height information and heading information.
  39. The movable platform according to any one of claims 33-38, wherein the processor is specifically configured to:
    determine, according to the laser point cloud data at two adjacent times, whether the laser point cloud data at the two adjacent times match;
    if they match, obtain the relative pose relationship between the laser point clouds at the two adjacent times according to the laser point cloud data at the two adjacent times.
  40. The movable platform according to claim 39, wherein the processor is specifically configured to:
    determine, according to the pose information at the respective times, M times whose poses are within the preset distance of the pose at time i, where M is an integer greater than or equal to K;
    determine, according to the laser point cloud data at time i and the laser point cloud data at the M times, K times among the M times at which the laser point cloud data match the laser point cloud data at time i.
  41. The movable platform according to claim 39 or 40, wherein the laser point cloud data at two times match if the distance along the normal vectors between the laser point cloud data at the two times is smaller than a preset value.
  42. The movable platform according to any one of claims 33-41, wherein the processor is specifically configured to:
    obtain an estimated relative pose relationship between the laser point clouds at two times;
    obtain the relative pose relationship between the laser point clouds at the two times according to the estimated relative pose relationship and the laser point cloud data at the two times.
  43. The movable platform according to claim 42, further comprising a third detection device;
    the third detection device is configured to acquire VINS pose data;
    the processor is specifically configured to: acquire the VINS pose data acquired by the third detection device at the two times; and obtain the estimated relative pose relationship between the laser point clouds at the two times according to the VINS pose data at the two times.
  44. The movable platform according to claim 42 or 43, wherein the processor is specifically configured to:
    estimate the laser point cloud data at the later of the two times according to the laser point cloud data at the earlier of the two times and the estimated relative pose relationship;
    obtain the relative pose relationship between the laser point clouds at the two times according to the estimated laser point cloud data at the later time, the laser point cloud data at the later time acquired by the first detection device, and the estimated relative pose relationship.
  45. The movable platform according to claim 44, wherein the processor is specifically configured to:
    determine a relative pose relationship deviation according to the estimated laser point cloud data at the later time and the laser point cloud data at the later time acquired by the first detection device;
    obtain the relative pose relationship between the laser point clouds at the two times according to the relative pose relationship deviation and the estimated relative pose relationship.
  46. The movable platform according to any one of claims 33-45, wherein the pose information comprises GPS data.
  47. The movable platform according to any one of claims 33-46, wherein the relative pose relationship comprises at least one of the following: a rotation matrix and a translation matrix.
  48. The movable platform according to any one of claims 33-47, wherein the processor is further configured to build a map according to the target poses of the laser point clouds at the respective times.
  49. The movable platform according to any one of claims 33-48, wherein the movable platform comprises an autonomous driving vehicle.
  50. A readable storage medium, wherein a computer program is stored on the readable storage medium; when the computer program is executed, the pose acquisition method according to any one of claims 1-16 is implemented.
PCT/CN2019/103871 2019-08-30 2019-08-30 位姿获取方法、系统和可移动平台 WO2021035748A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980034308.9A CN112204344A (zh) 2019-08-30 2019-08-30 位姿获取方法、系统和可移动平台
PCT/CN2019/103871 WO2021035748A1 (zh) 2019-08-30 2019-08-30 位姿获取方法、系统和可移动平台

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/103871 WO2021035748A1 (zh) 2019-08-30 2019-08-30 位姿获取方法、系统和可移动平台

Publications (1)

Publication Number Publication Date
WO2021035748A1 true WO2021035748A1 (zh) 2021-03-04

Family

ID=74004603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103871 WO2021035748A1 (zh) 2019-08-30 2019-08-30 位姿获取方法、系统和可移动平台

Country Status (2)

Country Link
CN (1) CN112204344A (zh)
WO (1) WO2021035748A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106382917A (zh) * 2015-08-07 2017-02-08 武汉海达数云技术有限公司 一种室内环境三维空间信息连续采集方法
WO2019006289A1 (en) * 2017-06-30 2019-01-03 Kaarta, Inc. SYSTEMS AND METHODS FOR SCAN ENHANCEMENT AND CORRECTION
CN109166140A (zh) * 2018-07-27 2019-01-08 长安大学 一种基于多线激光雷达的车辆运动轨迹估计方法及系统
CN109211236A (zh) * 2017-06-30 2019-01-15 沈阳新松机器人自动化股份有限公司 导航定位方法、装置及机器人
CN109709801A (zh) * 2018-12-11 2019-05-03 智灵飞(北京)科技有限公司 一种基于激光雷达的室内无人机定位系统及方法
CN109934920A (zh) * 2019-05-20 2019-06-25 奥特酷智能科技(南京)有限公司 基于低成本设备的高精度三维点云地图构建方法
CN109974712A (zh) * 2019-04-22 2019-07-05 广东亿嘉和科技有限公司 一种基于图优化的变电站巡检机器人建图方法


Also Published As

Publication number Publication date
CN112204344A (zh) 2021-01-08

Similar Documents

Publication Publication Date Title
US11346950B2 (en) System, device and method of generating a high resolution and high accuracy point cloud
CN109887057B (zh) 生成高精度地图的方法和装置
US10788830B2 (en) Systems and methods for determining a vehicle position
US11802769B2 (en) Lane line positioning method and apparatus, and storage medium thereof
US20220187843A1 (en) Systems and methods for calibrating an inertial measurement unit and a camera
CN110207714B (zh) 一种确定车辆位姿的方法、车载系统及车辆
EP4170282A1 (en) Method for calibrating mounting deviation angle between sensors, combined positioning system, and vehicle
US10228252B2 (en) Method and apparatus for using multiple filters for enhanced portable navigation
CN111308415B (zh) 一种基于时间延迟的在线估计位姿的方法和设备
CN115082549A (zh) 位姿估计方法及装置、以及相关设备和存储介质
US20230366680A1 (en) Initialization method, device, medium and electronic equipment of integrated navigation system
CN112362054B (zh) 一种标定方法、装置、电子设备及存储介质
WO2024027350A1 (zh) 车辆定位方法、装置、计算机设备、存储介质
CN109141411B (zh) 定位方法、定位装置、移动机器人及存储介质
CN113984044A (zh) 一种基于车载多感知融合的车辆位姿获取方法及装置
WO2018037653A1 (ja) 車両制御システム、自車位置算出装置、車両制御装置、自車位置算出プログラム及び車両制御プログラム
CN115164936A (zh) 高精地图制作中用于点云拼接的全局位姿修正方法及设备
CN114942025A (zh) 车辆导航定位方法、装置、电子设备及存储介质
CN112985391A (zh) 一种基于惯性和双目视觉的多无人机协同导航方法和装置
CN110155080B (zh) 传感器稳定控制方法、装置、稳定器和介质
WO2021035748A1 (zh) 位姿获取方法、系统和可移动平台
WO2022037370A1 (zh) 一种运动估计方法及装置
CN111207688B (zh) 在载运工具中测量目标对象距离的方法、装置和载运工具
EP3686556B1 (en) Method for position estimation of vehicle based on graph structure and vehicle using the same
WO2021196983A1 (zh) 一种自运动估计的方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19943624

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19943624

Country of ref document: EP

Kind code of ref document: A1