WO2021035748A1 - Pose acquisition method, system and movable platform - Google Patents

Pose acquisition method, system and movable platform

Info

Publication number
WO2021035748A1
Authority
WO
WIPO (PCT)
Prior art keywords
laser point
point cloud
time
moments
pose
Prior art date
Application number
PCT/CN2019/103871
Other languages
English (en)
Chinese (zh)
Inventor
朱振宇
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980034308.9A (published as CN112204344A)
Priority to PCT/CN2019/103871 (published as WO2021035748A1)
Publication of WO2021035748A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 21/10: Navigation by using measurements of speed or acceleration
    • G01C 21/20: Instruments for performing navigational calculations
    • G01C 21/26: Navigation specially adapted for navigation in a road network
    • G01C 21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30: Map- or contour-matching
    • G01C 21/32: Structuring or formatting of map data

Definitions

  • The embodiments of the present application relate to the technical field of unmanned aerial vehicles, and in particular to a pose acquisition method, system, and movable platform.
  • A navigation electronic map can be used to assist a human driver with navigation when driving a car.
  • the absolute coordinate accuracy of this kind of navigation electronic map is about 10 meters.
  • Driverless cars will become a travel trend, and a driverless car needs to know its position on the road accurately when driving; the margin relative to the lane is only about tens of centimeters, so a higher-precision map is required.
  • The absolute accuracy of such a map is generally at the sub-meter level, that is, within 1 meter, while the horizontal relative accuracy (for example, the relative positions of lane and lane, or of lane and lane line) is often higher.
  • Such a high-precision map not only has high-precision coordinates but also accurate road shapes, and the slope, curvature, heading, elevation, and roll data of each lane are also included, to ensure the safety and accuracy of the driverless car's driving on the road.
  • High-precision maps are very important for driverless cars.
  • High-precision maps are currently reconstructed based on data collected by the camera while the vehicle is driving; however, errors in the vehicle's own pose data affect the accuracy of the resulting high-precision map.
  • the embodiments of the present application provide a method, a system and a movable platform for acquiring a pose, which are used to improve the accuracy of acquiring a pose, so as to improve the accuracy of the established map.
  • In a first aspect, an embodiment of the present application provides a pose acquisition method applied to a movable platform, where the movable platform is provided with a first detection device for acquiring laser point cloud data and a second detection device for obtaining pose information; the method includes:
  • acquiring the laser point cloud data at each moment acquired by the first detection device and the pose information at each moment acquired by the second detection device while the movable platform moves in a target area; obtaining, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correcting, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment to obtain the target pose of the laser point cloud at each moment.
  • an embodiment of the present application provides a pose acquisition system, which is applied to a movable platform, and the system includes:
  • the first detection device is used to obtain laser point cloud data
  • the second detection device is used to obtain pose information
  • a processor configured to: acquire the laser point cloud data at each moment acquired by the first detection device and the pose information at each moment acquired by the second detection device while the movable platform moves in the target area; obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment to obtain the target pose of the laser point cloud at each moment.
  • an embodiment of the present application provides a movable platform, including:
  • the first detection device is used to obtain laser point cloud data
  • the second detection device is used to obtain pose information
  • a processor configured to: acquire the laser point cloud data at each moment acquired by the first detection device and the pose information at each moment acquired by the second detection device while the movable platform moves in the target area; obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment to obtain the target pose of the laser point cloud at each moment.
  • In a fourth aspect, an embodiment of the present application provides a readable storage medium with a computer program stored thereon; when the computer program is executed, it implements the pose acquisition method described in the first aspect of the embodiments of the present application.
  • In a fifth aspect, an embodiment of the present application provides a program product; the program product includes a computer program stored in a readable storage medium, at least one processor of a movable platform can read the computer program from the readable storage medium, and the at least one processor executes the computer program to enable the movable platform to implement the pose acquisition method described in the first aspect of the embodiments of the present application.
  • With the pose acquisition method, system, and movable platform provided by the embodiments of the present application, the laser point cloud data and pose information at each moment are obtained while the movable platform moves in the target area; the relative pose relationship of the laser point clouds at two adjacent moments is obtained according to the laser point cloud data at the two adjacent moments; and the pose of the laser point cloud at each moment is corrected according to the relative pose relationships of the laser point clouds at each two adjacent moments and the pose information at each moment, to obtain the target pose of the laser point cloud at each moment.
  • the target pose of the laser point cloud can more accurately reflect the actual pose of the movable platform, so that the accuracy of the map is improved when the map is established based on the target pose of the laser point cloud.
  • FIG. 1 is a schematic architecture diagram of an autonomous driving vehicle 100 according to an embodiment of the present application
  • Figure 2 is a schematic diagram of an application scenario provided by an embodiment of the application
  • FIG. 3 is a flowchart of a pose acquisition method provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of the relative pose relationship of the laser point cloud at each time according to an embodiment of the application
  • FIG. 5 is a schematic structural diagram of a pose acquisition system provided by an embodiment of this application.
  • FIG. 6 is a schematic structural diagram of a pose acquisition system provided by another embodiment of this application.
  • FIG. 7 is a schematic structural diagram of a movable platform provided by an embodiment of this application.
  • FIG. 8 is a schematic structural diagram of a movable platform provided by another embodiment of the application.
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component or an intervening component may also be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component or an intervening component may be present at the same time.
  • The embodiments of the present application provide a pose acquisition method, system, and movable platform, where the movable platform may be a handheld phone, a handheld gimbal (PTZ), a drone, an unmanned vehicle, an unmanned ship, a robot, an autonomous vehicle, or the like.
  • FIG. 1 is a schematic architecture diagram of an autonomous driving vehicle 100 according to an embodiment of the present application.
  • the autonomous vehicle 100 may include a sensing system 110, a control system 120, and a mechanical system 130.
  • the perception system 110 is used to measure the state information of the autonomous vehicle 100, that is, the perception data of the autonomous vehicle 100.
  • the perception data may represent position information and/or state information of the autonomous vehicle 100, for example, position, angle, speed, acceleration, angular velocity, and so on.
  • the perception system 110 may include at least one of sensors such as a visual sensor (for example, including multiple monocular or binocular vision devices), lidar, millimeter-wave radar, an inertial measurement unit (IMU), a global navigation satellite system, a gyroscope, an ultrasonic sensor, an electronic compass, and a barometer.
  • the global navigation satellite system may be the Global Positioning System (GPS).
  • After the sensing system 110 obtains the sensing data, it can transmit the sensing data to the control system 120.
  • the control system 120 is used to make decisions on how to control the autonomous driving vehicle 100 based on the perception data, for example: at what speed to travel, with what braking deceleration to brake, whether to change lanes, whether to turn left or right, and so on.
  • the control system 120 may include, for example, a computing platform, such as a vehicle-mounted super-computing platform, or at least one device having processing functions such as a central processing unit and a distributed processing unit.
  • the control system 120 may also include communication links for various data transmission on the vehicle.
  • the control system 120 may output one or more control commands to the mechanical system 130 according to the determined decision.
  • the mechanical system 130 is used to control the autonomous vehicle 100 in response to one or more control commands from the control system 120 to complete the above-mentioned decision.
  • For example, the mechanical system 130 can drive the wheels of the autonomous vehicle 100 to rotate so as to provide power for the autonomous vehicle 100 to travel, wherein the rotation speed of the wheels can affect the speed of the vehicle.
  • the mechanical system 130 may include, for example, at least one of a mechanical body, an engine/motor, a wire-controlled (drive-by-wire) system, and the like.
  • FIG 2 is a schematic diagram of an application scenario provided by an embodiment of this application.
  • The autonomous vehicle can drive on the ground in the target area and collect sensing data (for example, through the above-mentioned sensing system 110), where the sensing data may include laser point cloud data, pose information, VINS (visual-inertial navigation system) pose data, and so on; pose correction is then performed on the laser point cloud.
  • Fig. 3 is a flowchart of a pose acquisition method provided by an embodiment of the application. As shown in Fig. 3, the method of this embodiment can be applied to a movable platform, and can also be applied to an electronic device other than the movable platform. The method includes:
  • S301 Acquire laser point cloud data and pose information at each moment when the movable platform moves in the target area.
  • This embodiment takes application to a movable platform as an example.
  • The movable platform is equipped with a first detection device and a second detection device, wherein the first detection device is used to obtain laser point cloud data, and the second detection device is used to obtain pose information.
  • the pose information may be the pose information of the location where the second detection device is located.
  • the second detection device may be set on the movable platform, so the pose information may be the pose information of the movable platform.
  • During the movement of the movable platform in the target area, the first detection device acquires laser point cloud data at each moment, and the second detection device acquires pose information at each moment; correspondingly, the movable platform obtains the laser point cloud data at each moment acquired by the first detection device and the pose information at each moment acquired by the second detection device.
  • the first detection device is, for example, a laser sensor
  • the second detection device is, for example, a GPS.
  • If this embodiment is applied to an electronic device other than the movable platform, the electronic device can obtain, from the movable platform, the laser point cloud data and pose information at each moment acquired during the movement of the movable platform in the target area, for example, by receiving the above-mentioned information sent by the movable platform, or by reading the above-mentioned information from the storage device of the movable platform.
  • After the movable platform of this embodiment obtains the laser point cloud data and the pose information at each moment, it obtains the relative pose relationship of the laser point clouds at two adjacent moments according to the laser point cloud data at those two adjacent moments. In this way, the relative pose relationship of the laser point clouds at every two adjacent moments can be obtained. Taking moments 1 through 10 as an example, nine relative pose relationships of laser point clouds at adjacent moments are obtained accordingly.
  • the relative pose relationship includes at least one of the following: a rotation matrix and a displacement matrix.
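  • As an illustration only (the application specifies the relative pose relationship no further than a rotation matrix and/or a displacement matrix), such a pair is often packed into a 4x4 homogeneous transform so that relative poses at adjacent moments compose by matrix multiplication; the numbers below are invented:

```python
import numpy as np

def make_pose(R, t):
    """Pack a 3x3 rotation matrix R and a displacement vector t
    into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical relative pose of the laser point cloud at time 2 with
# respect to time 1: a small yaw rotation plus a forward displacement.
yaw = np.deg2rad(2.0)
R_12 = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                 [np.sin(yaw),  np.cos(yaw), 0.0],
                 [0.0,          0.0,         1.0]])
T_12 = make_pose(R_12, np.array([1.5, 0.0, 0.0]))
T_23 = make_pose(np.eye(3), np.array([1.4, 0.1, 0.0]))

# Relative poses at adjacent moments chain by multiplication: T_13 = T_12 @ T_23.
T_13 = T_12 @ T_23
```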
  • According to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment is corrected to obtain the target pose of the laser point cloud at each moment.
  • In the pose acquisition method of this embodiment, the laser point cloud data and pose information at each moment are obtained while the movable platform moves in the target area; the relative pose relationship of the laser point clouds at two adjacent moments is obtained according to the laser point cloud data at the two adjacent moments; and the pose of the laser point cloud at each moment is corrected according to the relative pose relationships of the laser point clouds at each two adjacent moments and the pose information at each moment, to obtain the target pose of the laser point cloud at each moment.
  • the target pose of the laser point cloud can more accurately reflect the actual pose of the movable platform, so that the accuracy of the map is improved when the map is established based on the target pose of the laser point cloud.
  • In some embodiments, a map is also built according to the target pose of the laser point cloud at each moment. Since the target pose of the laser point cloud at each moment is very close to the actual pose, the map built accordingly has higher accuracy; a sketch of this step follows.
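  • A minimal sketch of this map-building step, assuming (an assumption for illustration, not stated in the application) that each target pose is available as a 4x4 world-from-sensor transform:

```python
import numpy as np

def build_map(clouds, target_poses):
    """Transform each moment's laser point cloud (N_i x 3 array) by its
    target pose (4x4 world-from-sensor transform) and stack the results
    into one map cloud. A real pipeline would also downsample."""
    world_points = []
    for cloud, T in zip(clouds, target_poses):
        homog = np.hstack([cloud, np.ones((cloud.shape[0], 1))])  # N x 4
        world_points.append((T @ homog.T).T[:, :3])
    return np.vstack(world_points)
```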
  • a possible implementation manner of the foregoing S302 may include S3021 and S3022:
  • In S3021, the estimated relative pose relationship of the laser point clouds at two adjacent moments is acquired; in S3022, the relative pose relationship of the laser point clouds at the two adjacent moments is obtained according to this estimated relative pose relationship and the laser point cloud data at the two adjacent moments. Since the estimated relative pose relationship has a large error, it is corrected according to the laser point cloud data to obtain a more accurate relative pose relationship.
  • a possible implementation manner of the foregoing S3021 may include A1 and A2:
  • A1: obtain the VINS pose data at two adjacent moments acquired by the third detection device; A2: according to the VINS pose data at the two adjacent moments, obtain the estimated relative pose relationship of the laser point clouds at the two adjacent moments.
  • In this implementation, the movable platform is also provided with a third detection device, and the third detection device can obtain VINS pose data.
  • During movement, the third detection device obtains the VINS pose data at each moment, and correspondingly, the movable platform obtains the VINS pose data at each moment acquired by the third detection device; the movable platform then takes from it the VINS pose data at two adjacent moments.
  • Since the VINS pose data at each moment can represent the estimated pose of the laser point cloud at that moment, the estimated relative pose relationship of the laser point clouds at the two adjacent moments is obtained according to the VINS pose data at those two moments.
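  • One plausible reading of this step, assuming for illustration that the VINS pose at each moment is expressed as a 4x4 world-frame transform: the estimated relative pose relationship is the earlier pose's inverse composed with the later pose.

```python
import numpy as np

def estimated_relative_pose(T_w_a, T_w_b):
    """Estimated relative pose of moment b with respect to moment a,
    given world-frame VINS poses at the two adjacent moments:
    T_ab = inv(T_wa) @ T_wb."""
    return np.linalg.inv(T_w_a) @ T_w_b
```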
  • A possible implementation manner of the foregoing S3022 may include B1 and B2:
  • B1: estimate the laser point cloud data at the later of the two adjacent moments according to the laser point cloud data at the earlier moment and the estimated relative pose relationship; B2: obtain the relative pose relationship of the laser point clouds at the two adjacent moments according to the estimated laser point cloud data at the later moment, the laser point cloud data at the later moment acquired by the first detection device, and the estimated relative pose relationship.
  • For example, the laser point cloud data at time 3 is estimated according to the laser point cloud data at time 2 and the estimated relative pose relationship of the laser point clouds at time 2 and time 3; the estimated laser point cloud data at time 3 may differ from the laser point cloud data at time 3 actually acquired by the first detection device. Then, according to the estimated laser point cloud data at time 3, the acquired laser point cloud data at time 3, and the above-mentioned estimated relative pose relationship, the relative pose relationship of the laser point clouds at time 2 and time 3 is obtained.
  • A possible implementation of the foregoing B2 is: determine a relative pose relationship deviation based on the estimated laser point cloud data at the later moment and the laser point cloud data at the later moment acquired by the first detection device; then obtain the relative pose relationship of the laser point clouds at the two moments according to this deviation and the estimated relative pose relationship.
  • For example, the deviation between the estimated laser point cloud data at time 3 and the laser point cloud data at time 3 acquired by the first detection device is determined and is called the relative pose relationship deviation; based on this deviation and the estimated relative pose relationship of the laser point clouds at time 2 and time 3, the relative pose relationship of the laser point clouds at time 2 and time 3 is obtained.
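  • The application does not name a specific algorithm for this deviation-and-correction step; it behaves like iterative closest point (ICP) seeded with the estimated relative pose. The numpy/scipy sketch below is one plausible realization, with a Kabsch best-fit step standing in for the deviation computation:

```python
import numpy as np
from scipy.spatial import cKDTree

def kabsch(P, Q):
    """Best-fit rigid transform (4x4) mapping point set P onto Q."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, cQ - R @ cP
    return T

def refine_relative_pose(cloud_prev, cloud_next, T_est, iters=10):
    """Predict the later cloud from the earlier one using the estimated
    relative pose, measure the deviation against the actually acquired
    later cloud, and fold the correction back into the estimate."""
    tree = cKDTree(cloud_next)
    T = T_est.copy()
    for _ in range(iters):
        pred = (T[:3, :3] @ cloud_prev.T).T + T[:3, 3]  # estimated later cloud
        _, idx = tree.query(pred)                        # nearest acquired points
        T = kabsch(pred, cloud_next[idx]) @ T            # deviation -> correction
    return T
```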
  • In some embodiments, a possible implementation of obtaining the relative pose relationship of the laser point clouds at two adjacent moments according to the laser point cloud data at the two adjacent moments is: determine, according to the laser point cloud data at the two adjacent moments, whether the laser point cloud data at the two adjacent moments match; if they match, obtain the relative pose relationship of the laser point clouds at the two adjacent moments according to the laser point cloud data at the two adjacent moments.
  • That is, it is first determined whether the laser point cloud data at the two adjacent moments match. If they match, the above scheme of obtaining the relative pose relationship of the laser point clouds at the two adjacent moments from their laser point cloud data is executed; if they do not match, the relative pose relationship at the two adjacent moments is not obtained. In this way, relative pose relationships are obtained only for matched laser point clouds at adjacent moments; these relative pose relationships are more accurate, which facilitates the subsequent correction of the laser point cloud.
  • Whether the laser point cloud data at two adjacent moments match can be determined, for example, by obtaining the normal vector distance between the laser point cloud data at the two adjacent moments: if the normal vector distance is less than a preset value, the laser point cloud data at the two adjacent moments are determined to match; if the normal vector distance is greater than or equal to the preset value, they are determined not to match.
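  • The application does not define the normal vector distance further; one common interpretation, used for illustration here, is a mean point-to-plane residual. The sketch assumes per-point normals are available for one cloud, and the threshold value is an arbitrary placeholder:

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_vector_distance(cloud_a, cloud_b, normals_b):
    """Mean point-to-plane distance from cloud_a to cloud_b (both N x 3),
    one way the 'normal vector distance' between two laser point clouds
    could be scored."""
    tree = cKDTree(cloud_b)
    _, idx = tree.query(cloud_a)
    diffs = cloud_a - cloud_b[idx]
    return np.mean(np.abs(np.sum(diffs * normals_b[idx], axis=1)))

def clouds_match(cloud_a, cloud_b, normals_b, preset=0.2):
    # Match if the normal vector distance is below the preset value
    # (0.2 m is a placeholder, not a value from the application).
    return normal_vector_distance(cloud_a, cloud_b, normals_b) < preset
```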
  • a possible implementation manner of the foregoing S303 may include S3031-S3033:
  • S3031: according to the pose information at each moment, determine K moments whose poses are within a preset distance of the pose at moment i, where K and i are integers greater than or equal to 1. S3032: obtain the relative pose relationship between the laser point cloud at moment i and the laser point cloud at each of the K moments. S3033: correct the pose of the laser point cloud at each moment according to the relative pose relationships between each moment i and its K moments, and the relative pose relationships of the laser point clouds at each two adjacent moments.
  • Take any one of the moments, for example moment i (i being an integer greater than or equal to 1), as an example: according to the pose information at each moment, the K moments whose poses are within a preset distance of the pose at moment i are determined.
  • the pose information includes GPS data.
  • For example, according to the GPS data at each moment, the K moments whose positions are within a preset distance of the position at moment i are determined.
  • Then, the relative pose relationship between the laser point cloud at moment i and the laser point cloud at each of the K moments is obtained, so that such relative pose relationships are obtained for each moment.
  • Finally, the pose of the laser point cloud at each moment is corrected according to these relative pose relationships together with the relative pose relationships of the laser point clouds at each two adjacent moments obtained in S302; this improves the accuracy of the corrected laser point cloud poses.
  • A possible implementation manner of the foregoing S3033 may be C1: for any moment j among the K moments corresponding to moment i, correct the poses of the laser point clouds from moment i to moment j according to the relative pose relationship of the laser point clouds at moment i and moment j and the relative pose relationships of the laser point clouds at each two adjacent moments from moment i to moment j.
  • Suppose moment i is time 1 and moment j is time 4. As shown in Figure 4, according to the relative pose relationship of the laser point clouds at time 1 and time 4 (obtained through S3032), the relative pose relationship of the laser point clouds at time 1 and time 2, that at time 2 and time 3, and that at time 3 and time 4, the poses of the laser point clouds at time 1, time 2, time 3, and time 4 can be corrected.
  • a possible implementation manner of the foregoing C1 may include C11-C13:
  • Taking the relative pose relationship that includes at least one of a rotation matrix and a displacement matrix as an example: the relative pose relationship of the laser point clouds at time 1 and time 2, that at time 2 and time 3, and that at time 3 and time 4 are multiplied to obtain a calculated relative pose relationship of the laser point clouds at time 1 and time 4; then the difference between the relative pose relationship of the laser point clouds at time 1 and time 4 (obtained through S3032) and this calculated relative pose relationship is obtained, and this difference is called the relative pose relationship error of the laser point clouds at time 1 and time 4.
  • According to the relative pose relationship error of the laser point clouds at time 1 and time 4, the relative pose relationship of the laser point clouds at time 1 and time 2, that at time 2 and time 3, and that at time 3 and time 4 are corrected.
  • Then, according to the corrected relative pose relationship of the laser point clouds at time 1 and time 2, the corrected relationship at time 2 and time 3, and the corrected relationship at time 3 and time 4, the poses of the laser point clouds at time 1, time 2, time 3, and time 4 are corrected, so that when the corrected laser point cloud data are transformed according to the corrected relative pose relationships, they coincide with the corresponding laser point cloud data as closely as possible.
  • The relative pose relationship error of the laser point clouds at moment i and moment j is related both to the relative pose relationship of the laser point clouds at moment i and moment j and to the relative pose relationships of the laser point clouds at each two adjacent moments from moment i to moment j. Therefore, this error can be minimized to correct the relative pose relationships of the laser point cloud data at each two adjacent moments from moment i to moment j. Alternatively,
  • the sum of the relative pose relationship errors between moment i and each of its K moments can be minimized to correct the relative pose relationships of the laser point cloud data at each two adjacent moments involved. Alternatively,
  • the sum of the relative pose relationship errors of the laser point clouds at every moment i and its corresponding K moments may be minimized to correct the relative pose relationships of the laser point clouds at each two adjacent moments.
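  • Minimizing such sums of relative pose relationship errors is a pose-graph optimization. The sketch below illustrates the idea in the plane (x, y, yaw) with invented odometry and loop-closure measurements; a real system would work in SE(3), weight the residuals, and handle angle wrap-around:

```python
import numpy as np
from scipy.optimize import least_squares

# Edges: (i, j, measured relative pose [dx, dy, dyaw] in frame i).
# Odometry edges link adjacent moments; the last edge links time 1 to
# time 4, like the loop closure in the example above (values invented).
edges = [(0, 1, np.array([1.0, 0.0, 0.0])),
         (1, 2, np.array([1.0, 0.0, 0.0])),
         (2, 3, np.array([1.0, 0.0, 0.0])),
         (0, 3, np.array([2.9, 0.1, 0.0]))]  # loop-closure edge

def relative(xi, xj):
    """Relative pose of node j expressed in node i's frame (planar case)."""
    dx, dy = xj[:2] - xi[:2]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, xj[2] - xi[2]])

def residuals(flat):
    poses = flat.reshape(-1, 3)
    res = [relative(poses[i], poses[j]) - z for i, j, z in edges]
    res.append(poses[0])            # gauge constraint: pin the first pose
    return np.concatenate(res)

sol = least_squares(residuals, np.zeros(12))   # 4 poses x (x, y, yaw)
target_poses = sol.x.reshape(-1, 3)            # corrected (target) poses
```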
  • A possible implementation of S3031 is: according to the pose information at each moment, determine all moments whose poses are within the preset distance of the pose at moment i, and take all of them as the K moments; it should be noted that for different values of i, the value of K may differ. Alternatively, the value of K is preset, and K moments whose poses are within the preset distance of the pose at moment i are determined according to the pose information at each moment. For example, K is preset to 2; if more than 2 moments have poses within the preset distance of the pose at moment i, this embodiment takes 2 of them, to save processing resources.
  • In another possible implementation of S3031, all moments whose poses are within the preset distance of the pose at moment i are determined first, giving M moments.
  • Then, among the M moments, the K moments whose laser point cloud data match the laser point cloud data at moment i are determined.
  • In one case, all moments whose laser point cloud data match the laser point cloud data at moment i are taken as the K moments.
  • In another case, the value of K is preset, and K moments whose laser point cloud data match the laser point cloud data at moment i are selected from the M moments. For example, K is preset to 2; if among the M moments there are three whose laser point cloud data match that at moment i, this embodiment takes two of them, to save processing resources.
  • The way of determining whether the laser point cloud data at two moments match can refer to the way of determining whether the laser point cloud data at two adjacent moments match, which is not repeated here; a sketch of the candidate selection follows.
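  • Selecting candidate moments by position is a radius query over the pose information; a minimal sketch, assuming one position per moment and an illustrative K-truncation policy:

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_moments(positions, i, preset_distance, K=None):
    """Moments whose positions (e.g., from GPS data) lie within
    preset_distance of the position at moment i; optionally keep only
    K of them to save processing resources."""
    tree = cKDTree(positions)
    idx = [j for j in tree.query_ball_point(positions[i], preset_distance)
           if j != i]                        # the M candidate moments
    return idx if K is None else idx[:K]     # keep K of them if K is preset
```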
  • An embodiment of the present application also provides a computer storage medium.
  • the computer storage medium stores program instructions.
  • When the program instructions are executed, some or all of the steps of the pose acquisition method in FIG. 3 and its corresponding embodiments may be performed.
  • FIG. 5 is a schematic structural diagram of a pose acquisition system provided by an embodiment of this application.
  • the pose acquisition system 500 of this embodiment may include: a first detection device 501, a second detection device 502, and a processor 503. Wherein, the first detection device 501, the second detection device 502, and the processor 503 may be connected through a bus.
  • the pose acquisition system 500 may further include a third detection device 504, and the third detection device 504 may be connected to the above-mentioned components through a bus.
  • the first detection device 501 is used to obtain laser point cloud data.
  • the second detection device 502 is used to obtain pose information.
  • the processor 503 is configured to: obtain the laser point cloud data at each moment acquired by the first detection device and the pose information at each moment acquired by the second detection device while the movable platform moves in the target area; obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment to obtain the target pose of the laser point cloud at each moment.
  • the processor 503 is specifically configured to:
  • determine, according to the pose information at each moment, K moments whose poses are within a preset distance of the pose at moment i, where K and i are integers greater than or equal to 1;
  • obtain the relative pose relationship between the laser point cloud at moment i and the laser point cloud at each of the K moments, and correct the pose of the laser point cloud at each moment according to these relative pose relationships and the relative pose relationships of the laser point clouds at each two adjacent moments.
  • the processor 503 is specifically configured to:
  • the poses of the laser point clouds from moment i to moment j are corrected according to the corrected relative pose relationships of the laser point clouds at each two adjacent moments from moment i to moment j.
  • the processor 503 is specifically configured to:
  • minimize the sum of the relative pose relationship errors of the laser point clouds at each moment i and the corresponding K moments, and correct the relative pose relationships of the laser point clouds at each two adjacent moments.
  • the corrected pose of the laser point cloud includes at least one of the following: altitude information and heading information.
  • the processor 503 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two adjacent moments is obtained according to the laser point cloud data at two adjacent moments.
  • determine that the moments whose poses are within the preset distance of the pose at moment i are M moments, where M is an integer greater than or equal to K;
  • determine, according to the laser point cloud data at moment i and the laser point cloud data at the M moments, the K moments among the M moments whose laser point cloud data match the laser point cloud data at moment i.
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the third detection device 504 is used to obtain VINS pose data;
  • the processor 503 is specifically configured to: obtain the VINS pose data at two moments acquired by the third detection device 504; and obtain the estimated relative pose relationship of the laser point clouds at the two moments according to the VINS pose data at the two moments.
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the processor 503 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two moments is obtained.
  • the pose information includes GPS data.
  • the relative pose relationship includes at least one of the following: a rotation matrix and a displacement matrix.
  • the processor 503 is further configured to establish a map according to the target pose of the laser point cloud at each time.
  • In some embodiments, the pose acquisition system 500 of this embodiment may further include a memory (not shown in the figure) for storing program codes; when the program codes are executed, the pose acquisition system 500 can implement the above-mentioned technical solutions.
  • the pose acquisition system of this embodiment can be used to implement the technical solutions of FIG. 3 and the corresponding method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 6 is a schematic structural diagram of a pose acquisition system provided by another embodiment of this application.
  • the pose acquisition system 600 of this embodiment may include a memory 601 and a processor 602. Wherein, the memory 601 and the processor 602 may be connected through a bus.
  • the memory 601 is used to store program codes
  • the processor 602 is configured to, when the program code is invoked: acquire the laser point cloud data and pose information at each moment acquired while the movable platform moves in the target area; obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment to obtain the target pose of the laser point cloud at each moment.
  • the processor 602 is specifically configured to:
  • determine, according to the pose information at each moment, K moments whose poses are within a preset distance of the pose at moment i, where K and i are integers greater than or equal to 1;
  • obtain the relative pose relationship between the laser point cloud at moment i and the laser point cloud at each of the K moments, and correct the pose of the laser point cloud at each moment according to these relative pose relationships and the relative pose relationships of the laser point clouds at each two adjacent moments.
  • the processor 602 is specifically configured to:
  • the poses of the laser point clouds from moment i to moment j are corrected according to the corrected relative pose relationships of the laser point clouds at each two adjacent moments from moment i to moment j.
  • the processor 602 is specifically configured to:
  • minimize the sum of the relative pose relationship errors of the laser point clouds at each moment i and the corresponding K moments, and correct the relative pose relationships of the laser point clouds at each two adjacent moments.
  • the corrected pose of the laser point cloud includes at least one of the following: altitude information and heading information.
  • the processor 602 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two adjacent moments is obtained according to the laser point cloud data at two adjacent moments.
  • the processor 602 is specifically configured to:
  • determine that the moments whose poses are within the preset distance of the pose at moment i are M moments, where M is an integer greater than or equal to K;
  • determine, according to the laser point cloud data at moment i and the laser point cloud data at the M moments, the K moments among the M moments whose laser point cloud data match the laser point cloud data at moment i.
  • the laser point cloud data at the two moments match if the distance of the normal vector between the laser point cloud data at two moments is less than the preset value.
  • the processor 602 is specifically configured to:
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the processor 602 is specifically configured to: obtain the VINS pose data at two moments acquired by the third detection device; and obtain the estimated relative pose relationship of the laser point clouds at the two moments according to the VINS pose data at the two moments.
  • the processor 602 is specifically configured to:
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the processor 602 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two moments is obtained.
  • the pose information includes GPS data.
  • the relative pose relationship includes at least one of the following: a rotation matrix and a displacement matrix.
  • the processor 602 is further configured to build a map according to the target pose of the laser point cloud at each time.
  • the pose acquisition system of this embodiment can be used to implement the technical solutions of FIG. 3 and the corresponding method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 7 is a schematic structural diagram of a movable platform provided by an embodiment of this application.
  • the movable platform 700 of this embodiment may include: a first detection device 701, a second detection device 702, and a processor 703. Wherein, the first detection device 701, the second detection device 702 and the processor 703 may be connected by a bus.
  • the movable platform 700 may further include a third detection device 704, and the third detection device 704 may be connected to the foregoing components through a bus.
  • the first detection device 701 is used to obtain laser point cloud data.
  • the second detection device 702 is used to obtain pose information.
  • the processor 703 is configured to: obtain the laser point cloud data at each moment acquired by the first detection device 701 and the pose information at each moment acquired by the second detection device 702 while the movable platform 700 moves within the target area; obtain, according to the laser point cloud data at two adjacent moments, the relative pose relationship of the laser point clouds at the two adjacent moments; and correct, according to the relative pose relationship of the laser point clouds at each two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment to obtain the target pose of the laser point cloud at each moment.
  • the processor 703 is specifically configured to:
  • determine, according to the pose information at each moment, K moments whose poses are within a preset distance of the pose at moment i, where K and i are integers greater than or equal to 1;
  • obtain the relative pose relationship between the laser point cloud at moment i and the laser point cloud at each of the K moments, and correct the pose of the laser point cloud at each moment according to these relative pose relationships and the relative pose relationships of the laser point clouds at each two adjacent moments.
  • the processor 703 is specifically configured to:
  • the poses of the laser point clouds from moment i to moment j are corrected according to the corrected relative pose relationships of the laser point clouds at each two adjacent moments from moment i to moment j.
  • the processor 703 is specifically configured to:
  • minimize the sum of the relative pose relationship errors of the laser point clouds at each moment i and the corresponding K moments, and correct the relative pose relationships of the laser point clouds at each two adjacent moments.
  • the corrected pose of the laser point cloud includes at least one of the following: altitude information and heading information.
  • the processor 703 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two adjacent moments is obtained according to the laser point cloud data at two adjacent moments.
  • the processor 703 is specifically configured to:
  • determine that the moments whose poses are within the preset distance of the pose at moment i are M moments, where M is an integer greater than or equal to K;
  • determine, according to the laser point cloud data at moment i and the laser point cloud data at the M moments, the K moments among the M moments whose laser point cloud data match the laser point cloud data at moment i.
  • the laser point cloud data at the two moments match if the distance of the normal vector between the laser point cloud data at two moments is less than the preset value.
  • the processor 703 is specifically configured to:
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the third detection device 704 is used to obtain VINS pose data;
  • the processor 703 is specifically configured to: obtain the VINS pose data at two moments acquired by the third detection device 704; and obtain the estimated relative pose relationship of the laser point clouds at the two moments according to the VINS pose data at the two moments.
  • the processor 703 is specifically configured to:
  • the relative pose relationship of the laser point clouds at the two moments is obtained.
  • the processor 703 is specifically configured to:
  • the relative pose relationship of the laser point cloud at two moments is obtained.
  • the pose information includes GPS data.
  • the relative pose relationship includes at least one of the following: a rotation matrix and a displacement matrix.
  • the processor 703 is further configured to establish a map according to the target pose of the laser point cloud at each time.
  • In some embodiments, the movable platform 700 of this embodiment may further include a memory (not shown in the figure) for storing program codes; when the program codes are executed, the movable platform 700 can implement the above-mentioned technical solutions.
  • the movable platform of this embodiment can be used to implement the technical solutions of FIG. 3 and the corresponding method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 8 is a schematic structural diagram of a movable platform provided by another embodiment of this application.
  • the movable platform 800 of this embodiment may include: a movable platform body 801 and a pose acquisition system 802.
  • the pose acquisition system 802 is installed on the movable platform body 801.
  • the pose acquisition system 802 may be a device independent of the movable platform body 801.
  • the pose acquisition system 802 may adopt the structure of the device embodiment shown in FIG. 5 or FIG. 6, and correspondingly, it may implement the technical solutions of FIG. 3 and its corresponding method embodiments; the implementation principles and technical effects are similar and are not repeated here.
  • the movable platform 800 includes a handheld phone, a handheld gimbal (PTZ), a drone, an unmanned vehicle, an unmanned boat, a robot, or an autonomous vehicle.
  • a person of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be implemented by a program instructing relevant hardware.
  • The foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes various media that can store program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks, or optical disks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A pose acquisition method, a system (500, 600), and a movable platform (700, 800) are provided. The method includes: acquiring laser point cloud data and pose information at each moment, acquired during the movement of the movable platform (700, 800) in a target area (S301); obtaining, according to the laser point cloud data at two adjacent moments, a relative pose relationship between the laser point clouds at the two adjacent moments (S302); and correcting, according to the relative pose relationship between the laser point clouds at the two adjacent moments and the pose information at each moment, the pose of the laser point cloud at each moment, so as to obtain a target pose of the laser point cloud at each moment (S303). In this way, the target pose of the laser point cloud more accurately reflects the actual pose of the movable platform (700, 800), so that when a map is built according to the target pose of the laser point cloud, the accuracy of the map is improved.
PCT/CN2019/103871 2019-08-30 2019-08-30 Pose acquisition method, system and movable platform WO2021035748A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980034308.9A 2019-08-30 2019-08-30 位姿获取方法、系统和可移动平台 (Pose acquisition method, system and movable platform)
PCT/CN2019/103871 2019-08-30 2019-08-30 Pose acquisition method, system and movable platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/103871 WO2021035748A1 (fr) 2019-08-30 2019-08-30 Procédé d'acquisition d'orientation, système et plate-forme mobile

Publications (1)

Publication Number Publication Date
WO2021035748A1 (fr)

Family

Family ID: 74004603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103871 WO2021035748A1 (fr) 2019-08-30 2019-08-30 Pose acquisition method, system and movable platform

Country Status (2)

Country Link
CN (1) CN112204344A (fr)
WO (1) WO2021035748A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106382917A (zh) * 2015-08-07 2017-02-08 武汉海达数云技术有限公司 一种室内环境三维空间信息连续采集方法
WO2019006289A1 (fr) * 2017-06-30 2019-01-03 Kaarta, Inc. Systèmes et procédés d'améliorations de balayage et de mise en correspondance
CN109166140A (zh) * 2018-07-27 2019-01-08 长安大学 一种基于多线激光雷达的车辆运动轨迹估计方法及系统
CN109211236A (zh) * 2017-06-30 2019-01-15 沈阳新松机器人自动化股份有限公司 导航定位方法、装置及机器人
CN109709801A (zh) * 2018-12-11 2019-05-03 智灵飞(北京)科技有限公司 一种基于激光雷达的室内无人机定位系统及方法
CN109934920A (zh) * 2019-05-20 2019-06-25 奥特酷智能科技(南京)有限公司 基于低成本设备的高精度三维点云地图构建方法
CN109974712A (zh) * 2019-04-22 2019-07-05 广东亿嘉和科技有限公司 一种基于图优化的变电站巡检机器人建图方法


Also Published As

Publication number Publication date
CN112204344A (zh) 2021-01-08

Similar Documents

Publication Publication Date Title
US11346950B2 (en) System, device and method of generating a high resolution and high accuracy point cloud
CN109887057B (zh) 生成高精度地图的方法和装置
US10788830B2 (en) Systems and methods for determining a vehicle position
US11802769B2 (en) Lane line positioning method and apparatus, and storage medium thereof
US20220187843A1 (en) Systems and methods for calibrating an inertial measurement unit and a camera
CN110207714B (zh) 一种确定车辆位姿的方法、车载系统及车辆
US10228252B2 (en) Method and apparatus for using multiple filters for enhanced portable navigation
CN111308415B (zh) 一种基于时间延迟的在线估计位姿的方法和设备
CN111572526A (zh) 用于自动驾驶系统的定位方法和系统
JP2022140374A (ja) 姿勢推定方法及び装置、並びに関連デバイス及び記憶媒体
CN112362054B (zh) 一种标定方法、装置、电子设备及存储介质
WO2024027350A1 (fr) Procédé et appareil de positionnement de véhicule, dispositif informatique et support de stockage
CN113984044A (zh) 一种基于车载多感知融合的车辆位姿获取方法及装置
WO2018037653A1 (fr) Système de commande de véhicule, dispositif de calcul de position locale de véhicule, dispositif de commande de véhicule, programme de calcul de position locale de véhicule et programme de commande de véhicule
CN115164936A (zh) 高精地图制作中用于点云拼接的全局位姿修正方法及设备
CN109141411B (zh) 定位方法、定位装置、移动机器人及存储介质
CN114942025A (zh) 车辆导航定位方法、装置、电子设备及存储介质
CN112985391B (zh) 一种基于惯性和双目视觉的多无人机协同导航方法和装置
CN110155080B (zh) 传感器稳定控制方法、装置、稳定器和介质
WO2021035748A1 (fr) Procédé d'acquisition d'orientation, système et plate-forme mobile
WO2021143664A1 (fr) Procédé et appareil de mesure d'une distance d'un objet cible dans un véhicule, et véhicule associé
CN114076946A (zh) 一种运动估计方法及装置
EP3686556B1 (fr) Procédé pour l'estimation de la position d'un véhicule sur la base d'une structure de graphe et véhicule l'utilisant
WO2021196983A1 (fr) Procédé et appareil d'estimation de mouvement propre
US20220187432A1 (en) Systems and methods for calibrating a camera and a lidar

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19943624

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19943624

Country of ref document: EP

Kind code of ref document: A1