WO2021143286A1 - Vehicle positioning method, apparatus, controller, intelligent vehicle, and system - Google Patents

Vehicle positioning method, apparatus, controller, intelligent vehicle, and system

Info

Publication number
WO2021143286A1
WO2021143286A1 · PCT/CN2020/125761 · CN2020125761W
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
point cloud
cloud data
pose
coordinate system
Prior art date
Application number
PCT/CN2020/125761
Other languages
English (en)
French (fr)
Inventor
Pan Yangjie
Hu Weilong
Li Xupeng
Ding Tao
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP20914151.4A (EP4080248A4)
Publication of WO2021143286A1
Priority to US17/864,998 (US20220371602A1)

Classifications

    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/165: Dead reckoning by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments
    • B60W40/10: Estimation or calculation of non-directly measurable driving parameters related to vehicle motion
    • G01S13/865: Combination of radar systems with lidar systems
    • G01S13/867: Combination of radar systems with cameras
    • G01S13/931: Radar systems specially adapted for anti-collision purposes of land vehicles
    • G01S17/42: Simultaneous measurement of distance and other co-ordinates
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar
    • G01S17/87: Combinations of systems using electromagnetic waves other than radio waves
    • G01S17/875: Combinations of such systems for determining attitude
    • G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/4808: Evaluating distance, position or velocity data
    • G01S19/53: Determining attitude using satellite radio beacon positioning systems (e.g. GPS, GLONASS, GALILEO)
    • G01S5/163: Determination of attitude using electromagnetic waves other than radio waves
    • G06F18/25: Pattern recognition; fusion techniques
    • G06V20/56: Context of the image exterior to a vehicle, from vehicle-mounted sensors
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians
    • G06V20/64: Three-dimensional objects
    • B60W2420/403: Image sensing, e.g. optical camera
    • B60W2420/408: Radar; laser, e.g. lidar
    • B60W2554/4041: Dynamic objects; position
    • B60W2554/4049: Dynamic objects; relationship among other objects

Definitions

  • This application relates to the field of intelligent driving, and in particular to a vehicle positioning method, apparatus, controller, intelligent vehicle, and system.
  • The Global Positioning System (GPS) offers good performance, high precision, and broad applicability, and is the most widely used navigation and positioning system to date. GPS is already used extensively for vehicle positioning: in vehicles it supports navigation and positioning, safety control, intelligent transportation, and similar goals, and its use continues to grow.
  • GPS technology is widely used in vehicle positioning; for example, it can support smart cars for unmanned driving, driver assistance (ADAS), intelligent driving, connected driving, intelligent network driving, or car sharing.
  • The present application provides a vehicle positioning method, apparatus, and readable storage medium, used to position a vehicle when the GPS signal is weak or absent.
  • In a first aspect, an embodiment of the present application provides a vehicle positioning method.
  • The controller obtains the first relative pose between the first vehicle and the assisting object, and the global pose of the assisting object, and calculates the global pose of the first vehicle from these two quantities. The first relative pose indicates the position and attitude of the assisting object relative to the first vehicle, with the first vehicle as the reference, and is determined in the first coordinate system. The global pose is the global pose of the assisting object determined in the second coordinate system.
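  • As an illustration only (not the patent's exact formulation), the following Python sketch composes the assisting object's global pose with the inverse of the first relative pose to recover the first vehicle's global pose, treating poses as planar (x, y, yaw) homogeneous transforms; all values and names are hypothetical.

```python
import numpy as np

def pose_to_matrix(x, y, yaw):
    """Build a 3x3 homogeneous transform from a planar pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def matrix_to_pose(T):
    """Recover (x, y, yaw) from a 3x3 homogeneous transform."""
    return T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])

# Global pose of the assisting object (e.g., from its GPS), and the first
# relative pose: the assisting object expressed in the first vehicle's frame.
T_world_b = pose_to_matrix(100.0, 50.0, 0.30)
T_a_b = pose_to_matrix(8.0, -1.5, 0.05)

# world->A equals world->B composed with the inverse of A->B.
T_world_a = T_world_b @ np.linalg.inv(T_a_b)
print(matrix_to_pose(T_world_a))  # estimated global pose of the first vehicle
```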
  • In a possible implementation, the assisting object is a second vehicle, and the first relative pose indicates the position and attitude of the second vehicle relative to the first vehicle, for example the heading of the second vehicle with the heading of the first vehicle as the reference.
  • Acquiring the first relative pose between the first vehicle and the assisting object includes: acquiring the first point cloud data, obtained in the first coordinate system by the lidar of the first vehicle scanning objects around the first vehicle, and the second point cloud data, obtained in the second coordinate system by the lidar of the second vehicle scanning objects around the second vehicle. The two lidars have an overlapping scanning area, and for an obstacle in that area both the first and the second point cloud data include point cloud data corresponding to the obstacle. The first relative pose is calculated by matching the point cloud data corresponding to the obstacle in the first point cloud data against the point cloud data corresponding to the obstacle in the second point cloud data. Because the two lidars share an overlapping scanning area, point cloud matching on their scans can determine an accurate relative pose between the two vehicles.
  • Calculating the first relative pose from the point cloud data corresponding to the obstacle in the first and second point cloud data includes: converting the first point cloud data from the first coordinate system to the preset first reference coordinate system of the first vehicle to obtain the third point cloud data, where the first reference coordinate system is obtained by translating the origin of the first coordinate system to the preset first reference point; converting the second point cloud data from the second coordinate system to the preset second reference coordinate system of the second vehicle to obtain the fourth point cloud data, where the second reference coordinate system is obtained by translating the origin of the second coordinate system to the preset second reference point; and performing point cloud matching between the point cloud data corresponding to the obstacle in the third point cloud data and in the fourth point cloud data to obtain the first relative pose.
  • In this way, both the first point cloud data and the second point cloud data are converted to the reference coordinate system of their respective vehicle. The origin of each vehicle's reference coordinate system corresponds to a fixed position on that vehicle, for example the center of its rear axle, so the relative pose between the two vehicles can be determined more accurately.
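  • A minimal sketch of this conversion, assuming the reference frame is obtained from the lidar frame by translation only (axes parallel, as described above); the lidar offset below is a made-up extrinsic value:

```python
import numpy as np

def to_reference_frame(points, lidar_position_in_ref):
    """Translate lidar points so the origin becomes the reference point.

    points: (N, 3) array in the lidar coordinate system.
    lidar_position_in_ref: lidar location expressed in the reference frame,
    e.g., relative to the rear-axle center.
    """
    return points + np.asarray(lidar_position_in_ref)

first_cloud = np.array([[1.0, 2.0, 0.5],
                        [4.0, -1.0, 0.2]])
# Hypothetical mounting offset: lidar sits 1.2 m ahead of and 1.6 m above
# the rear-axle center.
third_cloud = to_reference_frame(first_cloud, [1.2, 0.0, 1.6])
```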
  • Matching the point cloud data corresponding to the obstacle in the third point cloud data against that in the fourth point cloud data to obtain the first relative pose includes: executing N iterations (N a positive integer) over the two point clouds to obtain the third transformation matrix output by the Nth iteration, and determining the first relative pose from that matrix. In the ith iteration (i a positive integer not greater than N), for each of the M points of the obstacle (M a positive integer): transform the point cloud data corresponding to the point in the third point cloud data by the third transformation matrix output by the (i-1)th iteration, then compute the difference between the transformed point cloud data and the point cloud data corresponding to the point in the fourth point cloud data to obtain the residual for that point. Sum the residuals over the M points. If the residual sum is not less than a preset residual threshold, update the third transformation matrix output by the (i-1)th iteration by a preset transformation amount, take the updated matrix as the third transformation matrix output by the ith iteration, and execute the next iteration; if the residual sum is less than the residual threshold, the iteration ends.
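  • The following sketch illustrates the iteration loop just described, under simplifying assumptions: point correspondences are taken as already known (a full ICP implementation would re-associate nearest neighbours each round), and the "preset transformation amount" is replaced by a translation-only least-squares step for brevity; nothing here is the patent's exact update rule.

```python
import numpy as np

def iterate_alignment(src, dst, T_init, residual_threshold=0.05, max_iter=50):
    """src, dst: (M, 3) matched obstacle points from the third and fourth
    point cloud data; T_init: 4x4 initial third transformation matrix."""
    T = T_init.copy()
    for _ in range(max_iter):
        src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
        moved = (T @ src_h.T).T[:, :3]                    # transform source
        residuals = np.linalg.norm(moved - dst, axis=1)   # per-point residual
        if residuals.sum() < residual_threshold:          # converged: stop
            break
        # Update step (rotation refinement omitted in this sketch).
        T[:3, 3] += (dst - moved).mean(axis=0)
    return T  # plays the role of the third transformation matrix
```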
  • For the first iteration, the third transformation matrix "output by the (i-1)th iteration" is determined as follows: from the global pose of the first vehicle at the previous moment, and the pose change of the first vehicle from the previous moment to the current moment computed from the inertial measurement unit (IMU), estimate the global pose of the first vehicle at the current moment; obtain the global pose of the second vehicle at the current moment from its GPS and IMU; from these two global poses determine the second relative pose, which indicates the position and attitude of the assisting object relative to the first vehicle, with the first vehicle as the reference, in the first coordinate system; and use the matrix expressing the second relative pose as the initial third transformation matrix.
  • This provides an initial value of the third transformation matrix for the N iterations. Because this initial value is determined from the estimated global pose of the first vehicle at the current moment and the global pose of the second vehicle at the current moment, it is relatively close to the third transformation matrix output by the Nth iteration; the number of iterations can therefore be reduced, and the time to find the optimal solution shortened.
  • In a possible implementation, the global pose of the first vehicle at the previous moment is obtained from the last GPS-based global pose of the first vehicle recorded by the system, combined with the IMU-computed pose of the first vehicle at the current moment relative to the first moment, where the first moment is the moment of the most recently recorded GPS-based global pose of the first vehicle.
  • In another possible implementation, the global pose of the first vehicle at the previous moment is obtained from the most recently calculated global pose of the first vehicle, combined with the IMU-computed pose of the first vehicle at the current moment relative to the second moment, where the second moment is the moment of the most recently calculated global pose of the first vehicle. This improves the flexibility of the scheme.
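  • A hypothetical dead-reckoning sketch of the propagation described above: the last known global pose is advanced with speed and IMU yaw-rate samples until the current moment (a simple planar kinematic model, not the patent's integration scheme):

```python
import numpy as np

def propagate(pose, speed, yaw_rate, dt):
    """pose: (x, y, yaw); advance one time step of length dt."""
    x, y, yaw = pose
    yaw += yaw_rate * dt
    x += speed * np.cos(yaw) * dt
    y += speed * np.sin(yaw) * dt
    return (x, y, yaw)

pose = (100.0, 50.0, 0.30)  # last recorded GPS-based global pose
for speed, yaw_rate in [(10.0, 0.01), (10.2, 0.02)]:  # IMU/odometry samples
    pose = propagate(pose, speed, yaw_rate, dt=0.1)
# `pose` is now the estimated global pose at the current moment.
```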
  • In one possible implementation, the third transformation matrix output by the Nth iteration is used directly as the mathematical expression of the first relative pose.
  • In another possible implementation, the initial third transformation matrix and the third transformation matrix output by the Nth iteration are weighted and fused to obtain a fourth transformation matrix, which is used as the mathematical expression of the first relative pose. The first relative pose obtained in this way can be more accurate, and in turn a more accurate global pose can be calculated.
  • In a possible implementation, calculating a second relative pose based on the first relative pose and the initial value of the relative pose includes: weighting and fusing the first relative pose with the initial value of the relative pose to obtain the second relative pose.
  • In a possible implementation, the method is triggered when the GPS-based global pose error of the first vehicle is greater than the error threshold, or when the GPS positioning device in the first vehicle is malfunctioning.
  • In a possible implementation, obtaining the global pose of the assisting object includes: sending a first request, carrying the identification of the first vehicle, to the assisting object, and receiving the global pose returned by the assisting object.
  • In a possible implementation, when there are multiple assisting objects, the method further includes: acquiring the global pose of the first vehicle corresponding to each of the multiple assisting objects, and performing weighted fusion on these global poses to obtain the target global pose of the first vehicle.
  • In a second aspect, a vehicle positioning apparatus includes modules for executing the vehicle positioning method in the first aspect or any one of its possible implementations.
  • A controller for vehicle positioning includes a processor and a memory. The memory stores computer-executable instructions, and the processor executes them, using the hardware resources of the controller, to perform the operation steps of the first aspect or any one of its possible implementations.
  • The present application further provides a smart car, which includes the controller described in the third aspect or any one of its possible implementations.
  • The present application also provides a vehicle positioning system, which includes a first vehicle and a cloud computing platform. The cloud computing platform executes the functions of the controller in the third aspect or any one of its possible implementations to position the vehicle.
  • The present application also provides another vehicle positioning system, which includes a first vehicle and an assisting object. The assisting object executes the functions of the controller in the third aspect or any one of its possible implementations.
  • The present application provides a computer-readable storage medium that stores instructions which, when run on a computer, cause the computer to execute the methods of the foregoing aspects.
  • The present application provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the methods of the foregoing aspects.
  • Figure 1 is a schematic diagram of an application scenario provided by this application;
  • Figure 2 is a schematic flowchart of a vehicle positioning method provided by this application when the controller is deployed in the first vehicle;
  • Figure 3 is a schematic diagram of coordinate system conversion, taking vehicle A and vehicle B shown in Figure 1 as an example;
  • Figure 4 is a schematic flowchart of a method for calculating the third transformation matrix provided by this application;
  • Figure 5 is a schematic diagram of the calculation process for predicting the global pose of a vehicle provided by this application;
  • Figure 6 is a schematic structural diagram of a vehicle positioning apparatus provided by this application;
  • Figure 7 is a schematic structural diagram of another vehicle positioning apparatus provided by this application.
  • The global pose, also called an absolute pose, refers to the position and attitude of an object in a reference coordinate system. The reference coordinate system can be a multi-dimensional coordinate system, such as a two-dimensional or three-dimensional coordinate system; specifically, it can be a geodetic coordinate system, the universal transverse Mercator (UTM) grid system, or the like.
  • The position of an object can be represented by its coordinate values along the axes of a coordinate system; the coordinate values of the same object may differ between coordinate systems. The position of the vehicle in the embodiments of this application may be represented by its coordinate values in the coordinate system where the vehicle is located or, for ease of calculation, in a reference coordinate system. When multiple objects use different coordinate systems, one coordinate system may be taken as the basis to determine the positions of all objects in that reference coordinate system.
  • The reference coordinate system, also called the base coordinate system, can be the coordinate system of any one of the objects or a coordinate system shared by a third party; this is not limited in the embodiments of this application.
  • The attitude of a vehicle can be understood as the orientation of the front of the vehicle, or of any position on the vehicle body. Specifically, it can be determined by the angle between the vector corresponding to the vehicle and the horizontal coordinate axis in the multidimensional coordinate system; a vector is a quantity that has both magnitude and direction. From the attitude of the vehicle, the direction of the front of the vehicle, and hence the direction of travel, can be determined.
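  • For instance, in a planar coordinate system the attitude angle can be computed from the heading vector as follows (illustrative values):

```python
import numpy as np

heading = np.array([0.96, 0.28])  # vector along the front of the vehicle
yaw = np.arctan2(heading[1], heading[0])  # angle to the horizontal axis, rad
```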
  • The relative pose refers to the pose of one of two objects relative to the other, for example the pose of object O2 relative to object O1 with O1 as the reference, or the pose of O1 relative to O2 with O2 as the reference. The first relative pose between the first vehicle (also referred to as the faulty vehicle) and the assisting object mentioned in the embodiments of this application may refer to the position and attitude of the assisting object relative to the first vehicle with the first vehicle as the reference, or of the first vehicle relative to the assisting object with the assisting object as the reference.
  • Point cloud data refers to a set of vectors in a three-dimensional coordinate system, usually expressed as X, Y, Z coordinates and generally used to represent the shape of an object's outer surface. In addition to the geometric position (X, Y, Z), point cloud data can also carry the RGB color, gray value, depth, segmentation result, and so on of each point. The point cloud data in the embodiments of this application may be obtained by scanning with lidar (Light Detection and Ranging, LiDAR), also referred to as a laser detection and ranging system.
  • The transformation matrix is a concept from linear algebra: linear transformations can be represented by transformation matrices. The most commonly used geometric transformations are linear transformations, including rotation, translation, scaling, shear, reflection, and orthographic projection. In the embodiments of this application, the linear transformation represented by the transformation matrix is illustrated as including rotation and translation.
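  • As a brief illustration, a rotation about the vertical axis combined with a translation can be packed into a single 4x4 homogeneous transformation matrix and applied in one multiplication (values are arbitrary):

```python
import numpy as np

def rigid_transform(yaw, t):
    """Rotation about Z by `yaw` plus translation `t`, as a 4x4 matrix."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

p = np.array([1.0, 0.0, 0.0, 1.0])  # a point in homogeneous coordinates
q = rigid_transform(np.pi / 2, [2.0, 0.0, 0.0]) @ p  # rotate, then translate
```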
  • Figure 1 is a schematic diagram of an application scenario provided by an embodiment of this application. The lane indication shown in Figure 1 includes 4 lanes: the two lanes on the left share one driving direction, and the two lanes on the right share the other. Any vehicle can be a smart car or a non-smart car; this is not limited in the embodiments of this application.
  • Each vehicle is provided with a sensor for detecting obstacles around the vehicle.
  • the sensors include lidar, millimeter wave radar, and camera.
  • Each vehicle can be equipped with one or more types of sensors, and there can be one or more of each sensor. The sensor can be installed on the top of the vehicle, for example at the middle of the roof. The embodiments of this application do not limit the installation position or number of sensors in each vehicle.
  • The vehicle may communicate with other objects based on vehicle-to-everything (V2X) wireless communication technology. Vehicle-to-vehicle communication can be realized based on vehicle-to-vehicle (V2V) wireless communication technology. Communication between the vehicle and other objects may also be based on wireless fidelity (Wi-Fi), the fifth-generation (5G) mobile communication technology, and the like; for example, communication between vehicles and smart devices (such as smartphones or other mobile devices that support positioning) can be realized based on 5G.
  • The application scenario shown in Figure 1 may also include a cloud computing platform, which may be implemented by a cloud server or a virtual machine.
  • The controller that determines the global pose of the first vehicle in the embodiments of this application may be located in the first vehicle, implemented by the controller of the in-vehicle positioning system, the controller for intelligent driving, or any other device with computing capability. The controller can also be located in the assisting object, as any device with computing capability in the assisting object; after determining the global pose of the first vehicle, it sends that global pose to the first vehicle. The controller can also be located on a cloud computing platform (in which case the controller's functions are implemented by a cloud virtual machine or server), which likewise sends the determined global pose to the first vehicle.
  • Vehicle A can search for wireless signals from other objects in a preset area through its communication module. The preset area of vehicle A is shown as the dashed circle in Figure 1, centered on vehicle A with the preset distance as radius; the preset distance may be a value not exceeding the radius of the coverage area of the wireless signal sent by vehicle A.
  • The assisting object used to help vehicle A position itself may be a vehicle, such as one or more of vehicle B, vehicle C, and vehicle D, or another object that has positioning capability or can obtain positioning information, such as roadside infrastructure equipped with communication modules: telegraph poles with communication modules, mobile base stations, mobile electronic devices carried by pedestrians (such as smartphones with positioning functions), cameras, and so on. The infrastructure shown in Figure 1 is a camera H within the preset area centered on vehicle A. Figure 1 also shows vehicle E and vehicle F; since they are outside the preset area centered on vehicle A, vehicle A does not rely on them for positioning.
  • After vehicle A finds other objects in the preset area, it may send them a first request asking them to assist vehicle A in positioning; the objects found include the assisting object mentioned above. Alternatively, vehicle A may first establish mutual trust with a found object, that is, vehicle A and the object first perform security authentication, and the first request is sent only after authentication succeeds.
  • A possible process of establishing mutual trust is as follows. After vehicle A has found other objects in the preset area, for each found object vehicle A can send first signaling through the communication module, requesting the establishment of a communication link with the object. The first signaling can carry an identifier of vehicle A, which includes information that uniquely identifies the vehicle worldwide, for example the engine number or the vehicle serial number. After the object's communication module receives the first signaling, it can verify vehicle A; the identity verification can be performed by the object against a preset mutual-trust list, or the object can ask a third-party system to verify the vehicle. If verification succeeds, second signaling is sent to vehicle A, indicating that the object agrees to establish mutual trust with vehicle A.
  • Figure 2 further explains the vehicle positioning method provided by the embodiments of this application, taking as an example the case where the first vehicle is a faulty vehicle and the controller is deployed in the first vehicle. As shown in Figure 2, the method includes:
  • The first vehicle sends a first request to the assisting object when it determines that its GPS-based global pose error is greater than the error threshold, or that its GPS positioning device is malfunctioning. The first request indicates that the first vehicle needs positioning assistance and may include the identification of the first vehicle, which includes information that uniquely identifies the first vehicle worldwide, for example its engine number or serial number. The assisting object receives the first request.
  • Step 311: The controller obtains the first relative pose between the first vehicle and the assisting object. The first relative pose indicates the position and attitude of the assisting object relative to the first vehicle, with the first vehicle as the reference, and is determined in the first coordinate system. The first coordinate system may be the coordinate system of the sensor of the first vehicle, for example a coordinate system established with the center of that sensor as the origin.
  • Step 312: The assisting object sends its global pose to the controller, and the controller receives it. The global pose is the global pose of the assisting object determined in the second coordinate system. The second coordinate system may be the coordinate system of the sensor of the second vehicle, for example a coordinate system established with the center of that sensor as the origin.
  • Step 313: The controller calculates the global pose of the first vehicle from the first relative pose and the global pose of the assisting object. The resulting global pose of the first vehicle is expressed in the aforementioned second coordinate system, which may be a two-dimensional, three-dimensional, or other multi-dimensional coordinate system, such as the geodetic coordinate system or the UTM grid system. In this way, the controller can calculate the global pose of the first vehicle from the global pose of the assisting object and the first relative pose between the assisting object and the first vehicle, which solves the problem of positioning a vehicle when the GPS signal is weak or absent; applied to unmanned driving, it solves vehicle positioning in unmanned-driving scenarios with weak or absent GPS signals.
  • The first vehicle may request assistance from one or more assisting objects. An assisting object can be a vehicle, which may then be called the assisting vehicle. When the first vehicle takes one vehicle as the assisting vehicle, it sends the request to that vehicle, determines the first relative pose between the first vehicle and the assisting vehicle, and determines the global pose of the first vehicle from the first relative pose and the global pose of the assisting vehicle.
  • The first vehicle may also select multiple assisting objects, say K of them, where K is a positive integer. For each of the K assisting objects, the controller calculates a global pose of the first vehicle from that object's global pose and the first relative pose between the first vehicle and that object, obtaining K candidate global poses of the first vehicle. The global pose of the first vehicle can then be calculated from these K candidates; the global pose determined from the candidates corresponding to multiple assisting objects is called the target global pose of the first vehicle.
  • Manner 1: Use one of the K candidate global poses as the global pose of the first vehicle. The controller can select any one of the K candidates, or select the candidate with the highest credibility according to the credibility of each of the K global poses. The credibility of a global pose can be related to one or more of the following parameter items (see the scoring sketch after this list):
  • Parameter item 1: The strength of the wireless signal sent by the assisting object corresponding to the global pose; the stronger the signal, the larger the value of this parameter item, and vice versa.
  • Parameter item 2: The distance between the assisting object corresponding to the global pose and the first vehicle; the smaller the distance, the larger the value of this parameter item, and vice versa.
  • Parameter item 3: The identity of the assisting object corresponding to the global pose (smart pole, smart camera, vehicle, and so on); a correspondence between identities and parameter values can be preset, for example 80 points when the assisting object is a vehicle, 60 points for a smart telephone pole, and 50 points for a smart camera. The values can be set manually according to the identity, hardware capability, and so on of the assisting object.
  • Parameter item 4: The sensor type used in calculating the global pose (lidar, millimeter wave radar, camera, and so on); a correspondence between sensor types and parameter values can be preset, for example 80 points for lidar, 60 points for millimeter wave radar, and 50 points for a camera.
  • When the credibility is related to multiple parameter items, a weight can be preset for each item, and the parameter values weighted and summed to obtain the credibility of the global pose.
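  • An illustrative scoring sketch for the parameter items above; the weights, score tables, and the distance-to-score mapping are invented for this example and would be preset in practice:

```python
IDENTITY_SCORE = {"vehicle": 80, "smart_pole": 60, "smart_camera": 50}
SENSOR_SCORE = {"lidar": 80, "millimeter_wave_radar": 60, "camera": 50}

def credibility(signal_strength, distance_m, identity, sensor,
                weights=(0.3, 0.3, 0.2, 0.2)):
    """Weighted sum of the four parameter items (all mapped to 0-100)."""
    items = (
        signal_strength,             # item 1: stronger signal, larger value
        max(0.0, 100 - distance_m),  # item 2: smaller distance, larger value
        IDENTITY_SCORE[identity],    # item 3: identity of assisting object
        SENSOR_SCORE[sensor],        # item 4: sensor type used
    )
    return sum(w * v for w, v in zip(weights, items))

score = credibility(signal_strength=75, distance_m=20,
                    identity="vehicle", sensor="lidar")
```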
  • Manner 2: Calculate the global pose of the first vehicle from the K candidate global poses. The controller obtains the K candidates corresponding to the K assisting objects and performs weighted fusion on them to obtain the target global pose of the first vehicle. The weighted fusion may specifically include determining a weight for each of the K global poses and then performing a weighted sum. The weight of a global pose is related to its credibility: the higher the credibility, the larger the weight; the lower the credibility, the smaller the weight. The weighted fusion in Manner 2 can use Kalman filter fusion, whose specific procedure can follow the traditional method or an improved variant; this is not limited in the embodiments of this application.
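  • A hedged sketch of Manner 2 as a credibility-weighted average (a simple stand-in for the Kalman filter fusion mentioned above); yaw angles are fused via sine/cosine so the average stays well defined near the angle wrap-around:

```python
import numpy as np

def fuse_poses(poses, weights):
    """poses: list of (x, y, yaw) candidates; weights: credibility-derived."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                 # normalize the weights
    xy = np.average([p[:2] for p in poses], axis=0, weights=w)
    sin = np.average([np.sin(p[2]) for p in poses], weights=w)
    cos = np.average([np.cos(p[2]) for p in poses], weights=w)
    return xy[0], xy[1], np.arctan2(sin, cos)

target = fuse_poses([(100.1, 50.0, 0.30), (99.8, 50.3, 0.28)],
                    weights=[0.7, 0.3])  # target global pose of first vehicle
```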
  • The solution of Manner 1 calculates the target global pose of the first vehicle from a single candidate global pose; the computation is simple and fast. The solution of Manner 2 calculates it from K candidates, which can improve the accuracy of the target global pose of the first vehicle.
  • The specific process of determining the first relative pose in step 311 is now described further, taking the assisting object to be a second vehicle and assuming lidar is installed on both the first vehicle and the second vehicle. The obstacle information detected by lidar is represented as point cloud data: the controller obtains the first point cloud data from the lidar scan of the first vehicle and the second point cloud data from the lidar scan of the second vehicle. The coordinate system of the first point cloud data is usually a multi-dimensional coordinate system whose origin is the location of the device that collected it, that is, the location of the lidar of the first vehicle; likewise, the second coordinate system usually has its origin at the location of the lidar of the second vehicle.
  • The point cloud data scanned by the lidar can be filtered in the embodiments of this application, keeping the point cloud data whose curvature is greater than a curvature threshold (such points can be understood as points on the surfaces of the objects scanned by the lidar). After filtering, the first point cloud data and the second point cloud data both still include the point cloud data corresponding to points on the surfaces of one or more obstacles in the overlapping scanning area.
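  • One common way to realize such a curvature filter (a LOAM-style proxy, offered here only as an assumption about how the threshold test might be implemented) estimates a point's curvature from how far it deviates from its neighbours along an ordered scan line:

```python
import numpy as np

def filter_by_curvature(scan_line, k=5, threshold=0.1):
    """scan_line: (N, 3) ordered points; keep points whose curvature proxy
    exceeds the threshold."""
    kept = []
    for i in range(k, len(scan_line) - k):
        window = scan_line[i - k:i + k + 1]
        # Sum of (neighbour - point) over the window; the point's own term
        # cancels, leaving a deviation-from-neighbours measure.
        c = np.linalg.norm(window.sum(axis=0) - (2 * k + 1) * scan_line[i])
        if c > threshold:
            kept.append(scan_line[i])
    return np.array(kept)
```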
  • A point of an obstacle in the overlapping scanning area corresponds to one set of vectors in the first coordinate system and another set in the second coordinate system. From these two sets of vectors of the same object in the two coordinate systems, the first relative pose between the first vehicle and the second vehicle is determined. The first relative pose indicates the position and attitude of the assisting object relative to the first vehicle, with the first vehicle as the reference, and is determined in the first coordinate system.
  • In the first manner, a reference coordinate system is selected for each of the first vehicle and the second vehicle, and the first point cloud data and the second point cloud data are converted into their corresponding reference coordinate systems. The reason is that the lidars of the two vehicles may be installed at different positions: for example, the lidar of the first vehicle near the front of the car and the lidar of the second vehicle near the rear. Because the point cloud coordinate system usually has its origin at the lidar itself, the origin of the first coordinate system would then be near the front of the first vehicle and the origin of the second coordinate system near the rear of the second vehicle, and the point cloud matching result would reflect the relative pose of the rear of the second vehicle with respect to the front of the first vehicle. Such a relative pose cannot accurately express the relative pose between the first vehicle and the second vehicle. A more accurate relative pose would be, for example, the relative pose of the front of the second vehicle with respect to the front of the first vehicle, or of the rear of the second vehicle with respect to the rear of the first vehicle.
  • The first reference coordinate system is obtained by translating the origin of the first coordinate system to the preset first reference point, so the origin of the first reference coordinate system is the first reference point; similarly, the origin of the second reference coordinate system is the second reference point. The corresponding coordinate axes of the two reference coordinate systems are parallel: for example, if both are X/Y/Z three-dimensional coordinate systems, the X axis of the first reference coordinate system is parallel to the X axis of the second, and likewise for the Y and Z axes.
  • The first reference point on the first vehicle and the second reference point on the second vehicle are the same part of the respective vehicle; they may be points on the bodies of the two vehicles. For example, the first reference point may be the center of the rear axle of the first vehicle and the second reference point the center of the rear axle of the second vehicle; or the centers of the front axles of the two vehicles; or the centers of the roofs of the two vehicles.
  • Taking the rear-axle centers as the reference points, the first point cloud data obtained by the lidar of the first vehicle is converted into the first reference coordinate system. The coordinate system conversion translates the origin of the coordinate system to the origin of the reference coordinate system: the origin of the first coordinate system is moved from the position of the lidar of the first vehicle to the center of the rear axle of the first vehicle, and the resulting point cloud data is called the third point cloud data. Likewise, the second point cloud data scanned by the lidar of the second vehicle is converted into the second reference coordinate system, moving the origin from the position of the lidar of the second vehicle to the center of the rear axle of the second vehicle; the result is called the fourth point cloud data.
  • The coordinate values of a point in different coordinate systems can be obtained by multiplying the point's coordinates by a transformation matrix. For example, if a point is expressed in coordinate system S1 (that is, by its coordinate values along the axes of S1) and needs to be expressed in coordinate system S2, the point can be multiplied by the transformation matrix. Thus the origin of S1, which is (0, 0, 0) in S1, is (x1, y1, z1) in S2, where (x1, y1, z1) is obtained by multiplying the origin of S1 by the transformation matrix.
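  • Concretely (with a hypothetical offset), the origin of S1 maps to (x1, y1, z1) in S2 under one matrix multiplication:

```python
import numpy as np

T_s1_to_s2 = np.eye(4)
T_s1_to_s2[:3, 3] = [3.0, -2.0, 0.5]        # assumed offset (x1, y1, z1)

origin_s1 = np.array([0.0, 0.0, 0.0, 1.0])  # origin of S1, homogeneous
origin_in_s2 = T_s1_to_s2 @ origin_s1       # -> [3.0, -2.0, 0.5, 1.0]
```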
  • Figure 3 shows the coordinate system conversion of vehicle A and vehicle B in Figure 1 as an example. The point cloud data scanned by vehicle A is called the first point cloud data; its coordinate origin is located at the lidar of vehicle A, and the coordinate system it belongs to is called the first coordinate system or the lidar coordinate system. The conversion of the first point cloud data moves the origin of the coordinate system to the center of the rear axle of vehicle A; the coordinate system centered on the rear axle (that is, the coordinate system of the third point cloud data) is called the first reference coordinate system. The process involves converting between the lidar coordinate system and the first reference coordinate system (see the coordinate conversion described above). Equivalently, for a point on the surface of an obstacle in the overlapping scanning area of the two lidars, the set of vectors corresponding to that point in the first point cloud data (which can be called the point cloud data corresponding to the point in the first point cloud data) is multiplied by the preset first transformation matrix to obtain the set of vectors corresponding to the point in the third point cloud data. Since the lidar of vehicle A is fixed at installation and the rear-axle center of the vehicle is determined, the first transformation matrix is also determined.
  • the point cloud data scanned by vehicle B is called the second point cloud data.
  • the coordinate origin of the second point cloud data is located on the lidar of vehicle B; that is, the second point cloud data is obtained in the second coordinate system of the second vehicle.
  • the second point cloud data undergoes coordinate system conversion: the origin of its coordinate system is moved to the center of the rear axle of vehicle B.
  • the specific conversion process is similar to the foregoing.
  • the transformation matrix used in the conversion between the second point cloud data and the fourth point cloud data is called the second transformation matrix.
  • the second transformation matrix is, taking the center of the rear axle of vehicle B as the reference, the transformation matrix corresponding to the linear transformation required from the center of the lidar of vehicle B to the center of the rear axle of vehicle B. Since the installation position of the lidar of vehicle A may differ from that of vehicle B, the first transformation matrix and the second transformation matrix may also differ.
  • a third transformation matrix between the second reference coordinate system and the first reference coordinate system is calculated based on the third point cloud data and the fourth point cloud data. How to calculate the transformation matrix based on the two point cloud data will be explained in detail later.
  • the first coordinate system is selected as the reference coordinate system for the coordinate conversion between the first point cloud data and the second point cloud data.
  • a prerequisite is required, that is, the installation position of the lidar of the first vehicle in the first vehicle corresponds to the installation position of the lidar of the second vehicle in the second vehicle.
  • the lidar of the first vehicle is installed in the middle of the roof of the first vehicle
  • the lidar of the second vehicle is installed in the middle of the roof of the second vehicle.
  • the lidar of the first vehicle is installed on the roof of the first vehicle near the front of the vehicle
  • the lidar of the second vehicle is installed on the roof of the second vehicle near the front of the vehicle.
  • the lidar of the first vehicle is installed on the roof of the first vehicle near the rear of the vehicle
  • the lidar of the second vehicle is installed on the roof of the second vehicle near the rear of the vehicle.
  • the distance between the installation position of the lidar of the first vehicle and the front of the first vehicle, and the distance between the installation position of the lidar of the second vehicle and the front of the second vehicle, can be calculated. If the difference between the two distances is less than a distance threshold, it can be determined that the installation position of the lidar of the first vehicle in the first vehicle corresponds to the installation position of the lidar of the second vehicle in the second vehicle.
  • the third transformation matrix between the second coordinate system and the first coordinate system can then be calculated from the first point cloud data and the second point cloud data, taking the first coordinate system as the reference.
  • the method of calculating the third transformation matrix based on the two point cloud data is further described below.
  • the calculation of the third transformation matrix based on the third point cloud data and the fourth point cloud data is taken as an example for description.
  • the process of performing point cloud matching on the third point cloud data and the fourth point cloud data can also be understood as: taking the first reference coordinate system to which the third point cloud data belongs as the reference, solving the transformation required to bring the fourth point cloud data into the first reference coordinate system.
  • the point cloud matching process is the process of solving the third transformation matrix.
  • the point cloud matching relationship between the third point cloud data and the fourth point cloud data can be expressed by the following formula (1):

    $P_{v_1} = \Delta T \cdot P_{v_2} \quad (1)$

where $v_1$ is the number of the first vehicle, $v_2$ is the number of the second vehicle, $P_{v_1}$ is the third point cloud data, $P_{v_2}$ is the fourth point cloud data, and $\Delta T$ is the third transformation matrix.
  • the third transformation matrix in the above formula (1) can be solved by the iterative closest point (ICP) algorithm.
  • the ICP algorithm solves the third transformation matrix through N iterations, where N is a positive integer. In each iteration, the third transformation matrix used in that iteration is obtained by applying a preset transformation amount to the third transformation matrix output by the previous iteration.
  • the residual between two point cloud data (which can also be understood as an error) is calculated. Since two point cloud data correspond to two sets of vectors, calculating the residual between the two point cloud data can be understood as calculating the Euclidean metric (also known as the Euclidean distance) between the two sets of vectors.
  • the residual sum of the M points can then be calculated. If the residual sum is less than a preset residual threshold, the result of the iteration meets the requirements.
  • in that case, the third transformation matrix used in this iteration is taken as the finally solved third transformation matrix. If the residual sum does not meet the requirements, the third transformation matrix is adjusted by the preset transformation amount, and the adjusted third transformation matrix is used as the third transformation matrix in the next iteration.
  • the iterative solution can be expressed as the following formula (2):

    $\Delta T^{*} = \arg\min_{\Delta T} \sum_{m=1}^{M} \bigl\| P_{v_1}^{(m)} - \Delta T \cdot P_{v_2}^{(m)} \bigr\|^{2} \quad (2)$

where $v_1$ is the number of the first vehicle, $v_2$ is the number of the second vehicle, $P_{v_1}$ is the third point cloud data, $P_{v_2}$ is the fourth point cloud data, and $\Delta T$ is the intermediate variable in the calculation process, i.e. the third transformation matrix used in each iteration.
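The residual in formula (2) is the Euclidean distance between corresponding vectors. A minimal sketch of the residual-sum stopping criterion (the function name and array shapes are illustrative):

    import numpy as np

    def residual_sum(transformed_pts, matched_pts):
        # transformed_pts: (M, 3) points of the third point cloud after
        # applying the current third transformation matrix.
        # matched_pts: (M, 3) corresponding points of the fourth point cloud.
        # Each point's residual is the Euclidean distance between the two
        # vectors; iteration stops when the sum drops below the threshold.
        return float(np.linalg.norm(transformed_pts - matched_pts, axis=1).sum())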
  • Fig. 4 is a schematic flow chart of solving the third transformation matrix in formula (2) by using the ICP algorithm according to an embodiment of the application, as shown in Fig. 4, including:
  • Step 301 start.
  • Step 302: initialize the third transformation matrix ΔT. ΔT is the third transformation matrix and corresponds to a linear transformation comprising rotation and translation, so the third transformation matrix can also be expressed as a rotation matrix R and a translation matrix t. Step 302 can therefore also be described as initializing the rotation matrix R and the translation matrix t.
  • initializing the third transformation matrix may specifically refer to assigning the initial value of the third transformation matrix mentioned in the following content to ΔT.
  • the initial value of the third transformation matrix can also be understood as the third transformation matrix used in the first iteration in the following content, that is, in the N iterations mentioned below, the third transformation matrix output by the (i-1)th iteration when i is 1.
  • initializing the third transformation matrix may alternatively refer to assigning a default value to ΔT.
  • for example, the rotation matrix R can be initialized to the identity matrix and the translation matrix t to the zero vector.
  • Steps 303 to 307 are the process of the i-th iteration, and i is a positive integer not greater than N:
  • Step 303: for each of the M points of the obstacle, perform a transformation on the point cloud data corresponding to the point in the third point cloud data according to the third transformation matrix output by the (i-1)th iteration, to obtain the transformed point cloud data corresponding to the point.
  • M is a positive integer.
  • in step 303, the point cloud data corresponding to the point in the third point cloud data is rotated and translated to obtain the transformed point cloud data corresponding to the point.
  • Step 304: according to the transformed point cloud data corresponding to each of the M points and the point cloud data corresponding to each point in the fourth point cloud data, obtain the residual corresponding to each of the M points.
  • in step 304, the point cloud data corresponding to a point in the fourth point cloud data can be determined in the following manner:
  • the point cloud data in the fourth point cloud data that satisfies the following condition is determined as the point cloud data corresponding to the point: the distance between the transformed point cloud data of the point and that candidate point cloud data is less than a preset distance threshold.
  • the distance between two point cloud data may be the Euclidean metric, which refers to the true distance between two points in a multi-dimensional (two-dimensional or higher) space; in two-dimensional and three-dimensional space, the Euclidean distance is the actual distance between the two points.
  • the distance between the transformed point cloud data corresponding to the point and the point cloud data corresponding to the point in the fourth point cloud data is then calculated and can be used as the residual corresponding to the point.
  • Step 305 Calculate the sum of the residuals according to the residuals corresponding to each of the M points of the obstacle.
  • step 306 it is determined whether the sum of residual errors is less than the residual error threshold, if not, step 307 is executed, and if yes, step 308 is executed.
  • the residual threshold may be a preset value, for example $10^{-10}$.
  • Step 307: when it is determined that the number of iterations is not greater than the iteration-count threshold, update the third transformation matrix output by the (i-1)th iteration by the preset transformation amount, and take the updated matrix as the third transformation matrix output by the i-th iteration.
  • then return to step 303 to enter the next iteration.
  • an iteration-count threshold is set; if the number of iterations is greater than the threshold, the process ends and no further iteration is performed.
  • the threshold of the number of iterations may be a preset value to prevent the problem of excessive number of iterations.
  • the preset transformation amount may include a preset rotation step size and a preset translation step size, which may be calculated by the Gauss-Newton method.
  • Step 308: if the residual sum is less than the residual threshold, the iteration ends, and the third transformation matrix output by the (i-1)th iteration is taken as the third transformation matrix output by the Nth iteration.
  • the third transformation matrix output by the Nth iteration may be used as the mathematical expression of the first relative pose in the embodiments of the present application.
  • the first relative pose refers to the position and attitude of the second vehicle relative to the first vehicle, with the first vehicle as the reference.
  • this position and attitude can be represented by the third transformation matrix output by the Nth iteration.
  • expressing the first relative pose by a transformation matrix (for example, the third transformation matrix output by the Nth iteration, or the fourth transformation matrix mentioned below) is only one possible implementation; in specific embodiments it can also be expressed in other mathematical forms, for example quaternions, axis-angle, Euler angles, etc.
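To make the above flow concrete, the following is a minimal Python/numpy sketch of a point-to-point ICP loop. It is a sketch rather than the exact procedure of Fig. 4: correspondences are found by brute-force nearest neighbour, and each update uses the closed-form SVD alignment instead of the Gauss-Newton step update of step 307; all names, the iteration cap and the tolerance are illustrative.

    import numpy as np

    def best_fit_transform(src, dst):
        # Closed-form (Kabsch/SVD) rigid transform mapping src onto dst.
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # keep a proper rotation (det = +1)
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, c_dst - R @ c_src
        return T

    def icp(source, target, T_init=None, max_iter=50, tol=1e-10):
        # Align `source` (e.g. the fourth point cloud) to `target` (the
        # third point cloud), starting from an initial transformation.
        T = np.eye(4) if T_init is None else T_init.copy()
        src = source @ T[:3, :3].T + T[:3, 3]
        for _ in range(max_iter):                 # iteration-count threshold
            # brute-force nearest-neighbour correspondences
            d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
            matched = target[d.argmin(axis=1)]
            # residual sum: total Euclidean distance of the matched pairs
            if np.linalg.norm(src - matched, axis=1).sum() < tol:
                break
            T_step = best_fit_transform(src, matched)
            src = src @ T_step[:3, :3].T + T_step[:3, 3]
            T = T_step @ T                        # accumulate the estimate
        return T

Starting from a good initial value (see the scheme below for determining the initial value of the third transformation matrix) shrinks the search neighbourhood and reduces the number of iterations.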
  • the third point cloud data and the fourth point cloud data used in the iterative process of the above ICP algorithm are obtained by coordinate system conversion of the first point cloud data and the second point cloud data, so the first point cloud data and the second point cloud data must be guaranteed to have been collected by the two lidars at the same time.
  • this requires the clocks of the two vehicles to be synchronized.
  • both the first point cloud data and the second point cloud data carry their own timestamps.
  • the timestamps can be represented in multiple ways.
  • a timestamp can be the elapsed time between a preset time and the current time.
  • the preset time can be, for example, 0:00:00 on December 31, 2020.
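Both scans should carry timestamps against a common epoch so that simultaneity can be checked. A minimal sketch, assuming the epoch from the example above and an illustrative 50 ms tolerance:

    from datetime import datetime

    EPOCH = datetime(2020, 12, 31, 0, 0, 0)   # preset time from the example

    def timestamp(t: datetime) -> float:
        # Timestamp = elapsed seconds between the preset time and t.
        return (t - EPOCH).total_seconds()

    def same_moment(ts_a: float, ts_b: float, max_skew: float = 0.05) -> bool:
        # Treat two lidar scans as simultaneous if their (clock-synchronized)
        # timestamps differ by less than the tolerance.
        return abs(ts_a - ts_b) < max_skew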
  • an initial value of the third transformation matrix can be provided for the ICP algorithm iteration, which can also be called an initial value for point cloud matching.
  • the initial value of the third transformation matrix can also be described as the third transformation matrix output by the (i-1)th iteration when i is 1 in the above content; that is, the initial value can also be called the third transformation matrix output by the 0th iteration.
  • the following introduces a scheme for determining the initial value of the third transformation matrix.
  • the initial value of the third transformation matrix provided in the embodiments of this application may be determined based on the estimated global pose of the first vehicle and the global pose of the second vehicle.
  • the GPS of the second vehicle is not damaged, so the global pose of the second vehicle is relatively accurate, while the GPS of the first vehicle is malfunctioning.
  • the embodiments of this application therefore estimate the global pose of the first vehicle by other means (how to calculate the estimated global pose of the first vehicle is introduced below); because this value is estimated, its accuracy is not as high as that of the global pose of the second vehicle.
  • the second relative pose is used to indicate the position and attitude of the helped object relative to the first vehicle, with the first vehicle as the reference; the second relative pose is determined in the first coordinate system.
  • the matrix used to express the second relative pose is taken as the third transformation matrix output by the (i-1)th iteration; it can also be said that this matrix is taken as the initial value of the third transformation matrix.
  • although the calculated initial value of the third transformation matrix is not very accurate, it is close to the third matrix output by the Nth iteration (which can be called the optimal solution); therefore, iterating the ICP algorithm from this initial value is like searching for the optimal solution in a neighbourhood of the optimal solution.
  • the search range for finding the optimal solution can thus be obviously reduced, the number of iterations can be significantly reduced, and the speed and accuracy of algorithm convergence are improved.
  • the optimal solution mentioned in the embodiments of this application is one of the basic concepts of mathematical programming.
  • the feasible solution that makes the objective function take the minimum value is called the minimum solution
  • the feasible solution that makes the objective function take the maximum value is called the maximum solution.
  • Minimal or maximal solutions are called optimal solutions.
  • the minimum or maximum value of the objective function is called the optimal value.
  • the optimal solution and the optimal value are also called the optimal solution of the corresponding mathematical programming problem.
  • the objective function here refers to the above formula (2), and the process of solving the formula (2) can be understood as solving the optimal solution of the third matrix.
  • the estimated global pose of the first vehicle at the current moment can be calculated based on the global pose of the first vehicle at the previous moment and, calculated based on the IMU, the pose of the first vehicle at the current moment relative to the previous moment. The details are described below.
  • the global pose of the first vehicle at the previous moment may be the last valid GPS-based global pose recorded by the system (the last valid GPS-based global pose of the first vehicle may also be referred to as the GPS-based global pose of the first vehicle in the previous frame).
  • the moment of the last valid GPS-based global pose recorded by the system is called the first moment.
  • the estimated global pose of the first vehicle at the current moment is obtained based on: the GPS-based global pose of the first vehicle at the first moment, and the relative pose between the pose of the first vehicle at the current moment and its pose at the first moment.
  • a data pre-integration method can be used to calculate the relative pose accumulated over a period of time.
  • the following example uses the IMU method.
  • the pose change of the first vehicle from the first moment to the current moment (also called the relative pose between the pose of the first vehicle at the current moment and its pose at the first moment) can be calculated based on the IMU. Combining the GPS-based global pose of the first vehicle at the first moment with the IMU-based relative pose from the first moment to the current moment, the global pose of the first vehicle at the current moment can be obtained.
  • the estimated global pose of the first vehicle at the current moment can be determined by the following formula (3):

    $\hat{T}_{v_1}^{t} = T_{v_1}^{t_1} \cdot \Delta T_{t_1 \to t} \quad (3)$

where $v_1$ is the number of the first vehicle and $v_2$ is the number of the second vehicle; $T_{v_1}^{t_1}$ is the GPS-based global pose of the first vehicle at the first moment $t_1$; $\Delta T_{t_1 \to t}$ is the pose change of the current moment $t$ relative to the first moment calculated based on the IMU, which can also be called the relative pose between the pose of the first vehicle at the current moment and its pose at the first moment; and $\hat{T}_{v_1}^{t}$ is the estimated global pose of the first vehicle at the current moment. Formula (3) can thus be understood as: the IMU-based relative pose $\Delta T_{t_1 \to t}$ is composed with the GPS-based global pose $T_{v_1}^{t_1}$ of the first vehicle at the first moment to calculate the estimated global pose $\hat{T}_{v_1}^{t}$ of the first vehicle at the current moment.
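Assuming each pose is represented as a 4x4 homogeneous transformation matrix (an illustrative convention, not mandated by the text), formula (3) reduces to a single matrix product:

    import numpy as np

    def estimated_global_pose(T_v1_t1: np.ndarray, dT_imu: np.ndarray) -> np.ndarray:
        # T_v1_t1: last valid GPS-based global pose of the first vehicle at
        # the first moment; dT_imu: IMU-integrated relative pose from the
        # first moment to the current moment.
        return T_v1_t1 @ dT_imu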
  • the global pose of the second vehicle at the current moment is obtained based on GPS and the IMU. The GPS of the second vehicle has not failed, but the global pose of the second vehicle learned from GPS has a certain delay. Therefore, when positioning the second vehicle based on GPS, the information obtained from GPS can be fused with information obtained by other technologies (such as the IMU), and the fusion result is used as the global pose of the second vehicle at the current moment. This method of fusing GPS-based information with information obtained by other technologies (such as the IMU) to achieve positioning can be called combined positioning.
  • the initial value of the third transformation matrix of the first vehicle and the second vehicle can be calculated by the following formula (4):

    $\Delta T_{0} = \bigl(\hat{T}_{v_1}^{t}\bigr)^{-1} \cdot T_{v_2}^{t} \quad (4)$

where $v_1$ is the number of the first vehicle, $v_2$ is the number of the second vehicle, $\hat{T}_{v_1}^{t}$ is the estimated global pose of the first vehicle at the current moment, and $T_{v_2}^{t}$ is the global pose of the second vehicle at the current moment.
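Under the same matrix convention, one plausible reading of formula (4) expresses the second vehicle's current global pose in the first vehicle's estimated frame, which is exactly the second relative pose used as the initial value:

    import numpy as np

    def initial_third_matrix(T_v1_est: np.ndarray, T_v2: np.ndarray) -> np.ndarray:
        # Second relative pose: pose of the second vehicle with the
        # (estimated) first vehicle as the reference.
        return np.linalg.inv(T_v1_est) @ T_v2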
  • FIG. 5 exemplarily shows the calculation diagram of the estimated global pose of the vehicle A at the current moment in FIG. 1.
  • the estimated global pose may be determined according to the global pose of vehicle A at the first moment and the relative pose of vehicle A from the first moment to the current moment.
  • the global pose of vehicle A at the first moment corresponds to $T_{v_1}^{t_1}$ in formula (3) above.
  • the relative pose of vehicle A from the first moment to the current moment corresponds to $\Delta T_{t_1 \to t}$ in formula (3) above.
  • alternatively, the global pose of the first vehicle at the previous moment may be the global pose of the first vehicle obtained most recently using the method provided in Fig. 2 above.
  • in this case, the global pose of the first vehicle has been determined at least once according to the solution provided in steps 310 to 313 above.
  • the moment of the most recently calculated global pose of the first vehicle is called the second moment.
  • the estimated global pose of the first vehicle at the current moment is then obtained based on: the most recently calculated global pose of the first vehicle (that is, the global pose of the first vehicle at the second moment), and the pose change of the first vehicle at the current moment relative to the second moment, which can also be called the relative pose of the first vehicle between the second moment and the current moment.
  • the way in which the second vehicle obtains the global pose at the current moment is the same as that in way a1, and will not be repeated.
  • the initial value of the third transformation matrix of the first vehicle and the second vehicle can be calculated using the above formula (4), which will not be repeated here.
  • Fig. 5 exemplarily shows the calculation diagram of the estimated global pose of vehicle A at the current moment in Fig. 1.
  • the global pose may be determined according to the global pose of the vehicle A at the second moment and the relative pose of the vehicle A from the second moment to the current moment.
  • the global pose of the vehicle A at the second moment may be the global pose of the first vehicle once determined according to the solution provided in the above steps 311 to 313.
  • the relative pose of vehicle A during the period from the second moment to the current moment may be obtained based on the most recently calculated global pose of the first vehicle and the IMU.
  • the embodiments of this application provide a possible implementation: the first time the global pose of the first vehicle is calculated after the GPS of the first vehicle fails,
  • the solution provided in Fig. 2 above uses method a1 described above to calculate the initial value of the third transformation matrix of the first vehicle and the second vehicle;
  • in subsequent calculations, method a2 is used to calculate the initial value of the third transformation matrix of the first vehicle and the second vehicle.
  • the third transformation matrix output by the Nth iteration is used as the mathematical expression of the first relative pose.
  • the initial value of the third transformation matrix and the third transformation matrix output by the Nth iteration are weighted and fused to obtain the fourth transformation matrix, and the fourth transformation matrix is used as the first relative pose Mathematical expression form.
  • the weighted fusion can use Kalman filter fusion.
  • weights corresponding to the initial value of the third transformation matrix and to the third transformation matrix output by the Nth iteration are determined respectively.
  • the initial value of the third transformation matrix and the third transformation matrix output by the Nth iteration are then weighted and added according to these weights to obtain the fourth transformation matrix.
  • the fourth transformation matrix is used as the mathematical expression of the first relative pose.
  • the weight of the third transformation matrix output by the Nth iteration is related to its credibility, and the weight of the initial value of the third transformation matrix is related to its credibility.
  • the first relative pose can be set as the observation model of the Kalman filter fusion (the observation model has higher credibility), and the initial value of the third transformation matrix can be used as the prediction model of the Kalman filter fusion (the prediction model has lower credibility).
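A minimal sketch of such a credibility-weighted fusion, standing in for full Kalman filter fusion (the weight and the SO(3) re-projection are illustrative choices, not from the original text):

    import numpy as np

    def fuse_transforms(T_pred: np.ndarray, T_obs: np.ndarray, w_obs: float = 0.8) -> np.ndarray:
        # T_pred: initial value of the third transformation matrix (the
        # prediction model, lower credibility); T_obs: third transformation
        # matrix output by the Nth iteration (the observation model, higher
        # credibility). Element-wise weighted addition.
        T = (1.0 - w_obs) * T_pred + w_obs * T_obs
        # A weighted sum of rotation matrices is not itself a rotation, so
        # project the rotation block back onto SO(3) via SVD.
        U, _, Vt = np.linalg.svd(T[:3, :3])
        if np.linalg.det(U @ Vt) < 0:
            U[:, -1] *= -1
        T[:3, :3] = U @ Vt
        return T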
  • the global pose of the first vehicle can be determined by the following formula (5):

    $T_{v_1} = T_{v_2}^{t} \cdot \Delta T^{-1} \quad (5)$

where $v_1$ is the number of the first vehicle, $v_2$ is the number of the second vehicle, $T_{v_1}$ is the global pose of the first vehicle, $T_{v_2}^{t}$ is the global pose of the second vehicle at the current moment, and $\Delta T$ is the first relative pose.
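Under the homogeneous-matrix convention used above, formula (5) is one composition; as a quick check, if $\Delta T = T_{v_1}^{-1} T_{v_2}$, then $T_{v_2}\,\Delta T^{-1} = T_{v_1}$:

    import numpy as np

    def global_pose_first_vehicle(T_v2: np.ndarray, dT: np.ndarray) -> np.ndarray:
        # T_v2: global pose of the second vehicle at the current moment;
        # dT: first relative pose (pose of the second vehicle with the
        # first vehicle as the reference).
        return T_v2 @ np.linalg.inv(dT)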
  • the above content introduces the solution in which the helped object is the second vehicle and the lidars installed on the first vehicle and the second vehicle are used to assist in determining the first relative pose between the first vehicle and the second vehicle.
  • the following introduces several other schemes for determining the first relative pose between the first vehicle and the helped object.
  • in addition to being deployed in the first vehicle, the controller for vehicle positioning can also be deployed on the helped object.
  • the difference from the solution shown in Fig. 2 above is that the controller can directly obtain the global pose of the helped object, and does not need to obtain it through the above step 312.
  • after the controller determines the global pose of the first vehicle, it also needs to send the global pose of the first vehicle to the first vehicle.
  • since the computation is deployed on the helped object, the calculation amount of the first vehicle can be reduced, that is, the calculation burden of the faulty vehicle can be reduced.
  • the controller that implements vehicle positioning can also be deployed in the cloud.
  • the first vehicle may send the first request to the controller in the cloud; the controller deployed in the cloud forwards the first request to the helped object, and the helped object then sends its global pose to the controller in the cloud.
  • after the controller calculates the global pose of the first vehicle, it sends the global pose of the first vehicle to the first vehicle.
  • since the computation is deployed in the cloud, the calculation amount of the first vehicle or the second vehicle can be reduced, lightening the burden of the first vehicle or the second vehicle.
  • the first vehicle and the second vehicle may also be equipped with cameras
  • the camera sensor may be a panoramic camera sensor.
  • the first vehicle can take two consecutive frames of the first image through the camera sensor; each image is in a two-dimensional coordinate system (such as an X/Y coordinate system), and a feature point in the image corresponds to X-axis and Y-axis coordinates. Combining the two frames of the first image, the depth information of the feature point in the first image, i.e. the Z-axis coordinate, can be determined (see the triangulation sketch after this passage).
  • the first vehicle can thus obtain the X-axis, Y-axis and Z-axis coordinates of a feature point of an obstacle through the camera sensor, and these coordinates form a set of vectors.
  • in this way, the point cloud data corresponding to the point is obtained.
  • the point cloud data corresponding to the first image can therefore be obtained through the camera sensor of the first vehicle.
  • similarly, the point cloud data corresponding to the second image is acquired through the camera sensor of the second vehicle.
  • the camera sensor of the first vehicle and the camera sensor of the second vehicle have overlapping shooting areas; point cloud matching can be performed on the point cloud data corresponding to the feature points of obstacles in the overlapping shooting area in the point cloud data of the first image and the point cloud data corresponding to those feature points in the point cloud data of the second image, thereby determining the first relative pose of the first vehicle and the second vehicle.
  • the global pose of the first vehicle is then determined according to the first relative pose and the global pose of the second vehicle.
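The depth recovery from two frames described above can be illustrated with linear (DLT) triangulation. The projection matrices and pixel coordinates are assumed inputs; this is a generic two-view method, not a procedure specified by the text:

    import numpy as np

    def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
        # P1, P2: 3x4 camera projection matrices of the two frames
        # (intrinsics and relative motion assumed known).
        # x1, x2: pixel coordinates (x, y) of the same feature point.
        # Returns the 3D point (X, Y, Z), i.e. including the depth (Z).
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]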
  • the helped object can also be infrastructure equipped with a communication module, such as the street lamp equipped with a communication module mentioned above, a base station on the roadside, or a camera equipped with a communication module.
  • the following takes the case where the helped object is a basic facility such as a street lamp or base station equipped with a communication module, using a street lamp as the example.
  • the first vehicle can receive the signal sent by the street lamp, and the signal strength can be used to determine the relative position between the street lamp and the first vehicle.
  • the street lamp may send its global position to the vehicle, and the first vehicle may then determine its own position based on this relative position and the position of the street lamp.
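One common way to turn received signal strength into a range estimate is a log-distance path-loss model; the reference power and path-loss exponent below are assumed, illustrative values:

    def rssi_to_distance(rssi_dbm: float, rssi_at_1m: float = -40.0,
                         path_loss_exp: float = 2.0) -> float:
        # Log-distance path-loss model: estimate the lamp-to-vehicle range
        # (in meters) from the received signal strength (in dBm).
        return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))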
  • the first vehicle can estimate its own attitude, for example: according to the attitude of the first vehicle at the previous moment and, calculated based on the IMU, the attitude of the first vehicle at the current moment relative to the previous moment (with the attitude at the previous moment as the reference), the attitude of the first vehicle at the current moment is calculated.
  • the camera can take two frames of the third image including the first vehicle.
  • Each image is an image in a two-dimensional coordinate system (such as an X/Y coordinate system).
  • a feature point of the first vehicle corresponds to X-axis and Y-axis coordinates.
  • Combining the two frames of the third image can determine the depth information of the feature point of the first vehicle in the third image, that is, the Z-axis coordinate.
  • the X-axis, Y-axis and Z-axis coordinates of the feature point of the first vehicle can be used as one way of representing the global pose information of the first vehicle.
  • FIG. 6 is a schematic diagram of the structure of a vehicle positioning device provided by an embodiment of the application. As shown in the figure, the device 1501 includes an acquisition unit 1502 and a calculation unit 1503.
  • the obtaining unit 1502 is used to obtain the first relative pose of the first vehicle and the helped object, and obtain the global pose of the helped object;
  • the first relative pose is used to indicate that the first vehicle is used as a reference, and the helped object is relative to The position and posture of the first vehicle;
  • the first relative posture is the posture of the assisted object determined in the first coordinate system;
  • the global posture is the global posture of the assisted object determined in the second coordinate system;
  • the calculation unit 1503 is configured to calculate the global pose of the first vehicle according to the first relative pose and the global pose. In this way, when the GPS signal of the first vehicle is weak or absent, the global pose of the first vehicle can be determined based on the global pose of the helped object and the first relative pose between the first vehicle and the helped object.
  • the device 1501 of the embodiment of the present application may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD); the above-mentioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
  • the vehicle positioning method shown in FIGS. 2 to 5 can also be implemented by software
  • the device 1501 and its various modules can also be software modules.
  • the helped object may be the second vehicle.
  • the first relative pose is used to indicate the position and attitude of the second vehicle relative to the first vehicle, with the first vehicle as the reference.
  • the attitude is used to indicate the heading of the second vehicle, with the heading of the first vehicle as the reference.
  • the acquiring unit 1502 is further configured to: acquire the first point cloud data in the first coordinate system obtained by the lidar of the first vehicle scanning the objects around the first vehicle; and acquire the second point cloud data in the second coordinate system obtained by the lidar of the second vehicle scanning the objects around the second vehicle; where the lidar of the first vehicle and the lidar of the second vehicle have an overlapping scanning area, and for an obstacle in the overlapping scanning area, the first point cloud data includes the point cloud data corresponding to the obstacle and the second point cloud data includes the point cloud data corresponding to the obstacle; and calculate the first relative pose according to the point cloud data corresponding to the obstacle in the first point cloud data and the point cloud data corresponding to the obstacle in the second point cloud data. In this way, point cloud matching can be performed based on the point cloud data scanned by the lidars, so as to determine a more accurate relative pose between the two vehicles.
  • the acquiring unit 1502 is further configured to: convert the first point cloud data in the first coordinate system to the preset first reference coordinate system of the first vehicle to obtain the third point cloud data, where the first reference coordinate system is the coordinate system obtained by translating the origin of the first coordinate system to the preset first reference point; convert the second point cloud data in the second coordinate system to the preset second reference coordinate system of the second vehicle to obtain the fourth point cloud data, where the second reference coordinate system is the coordinate system obtained by translating the origin of the second coordinate system to the preset second reference point; and perform point cloud matching on the point cloud data corresponding to the obstacle in the third point cloud data and the point cloud data corresponding to the obstacle in the fourth point cloud data to obtain the first relative pose.
  • in this way, both the first point cloud data and the second point cloud data can be converted to the reference coordinate systems of the respective vehicles.
  • the origin of the reference coordinate system of each vehicle corresponds to the same position on the respective vehicle; for example, the origin of each reference coordinate system is located at the center of the rear axle of the respective vehicle, so that the relative pose between the two vehicles can be determined more accurately.
  • the acquiring unit 1502 is further configured to: perform N iterations according to the point cloud data corresponding to the obstacle in the third point cloud data and the point cloud data corresponding to the obstacle in the fourth point cloud data, to obtain the third transformation matrix output by the Nth iteration, N being a positive integer; and determine the first relative pose according to the third transformation matrix output by the Nth iteration. For the i-th of the N iterations, i being a positive integer not greater than N: for each of the M points of the obstacle, perform a transformation on the point cloud data corresponding to the point in the third point cloud data according to the third transformation matrix output by the (i-1)th iteration to obtain the transformed point cloud data corresponding to the point, and calculate the difference between the transformed point cloud data corresponding to the point and the point cloud data corresponding to the point in the fourth point cloud data to obtain the residual corresponding to the point, M being a positive integer; calculate the residual sum according to the residuals corresponding to the M points; if the residual sum is not less than the preset residual threshold, update the third transformation matrix output by the (i-1)th iteration by the preset transformation amount, take the updated matrix as the third transformation matrix output by the i-th iteration, and execute the next iteration; if the residual sum is less than the residual threshold, end the iteration and take the third transformation matrix output by the (i-1)th iteration as the third transformation matrix output by the Nth iteration.
  • the acquiring unit 1502 is further configured to determine, when i is 1, the third transformation matrix output by the (i-1)th iteration as follows: according to the global pose of the first vehicle at the previous moment and, calculated based on the IMU, the pose of the first vehicle at the current moment relative to the previous moment (with the global pose of the first vehicle at the previous moment as the reference), calculate the estimated global pose of the first vehicle at the current moment; obtain the global pose of the second vehicle at the current moment based on the global positioning system GPS and the IMU; determine the second relative pose based on the estimated global pose of the first vehicle at the current moment and the global pose of the second vehicle at the current moment, the second relative pose being used to indicate the position and attitude of the helped object relative to the first vehicle with the first vehicle as the reference, and being the pose of the helped object determined in the first coordinate system; and take the matrix used to express the second relative pose as the third transformation matrix output by the (i-1)th iteration.
  • the acquiring unit 1502 is further configured to take the third transformation matrix output by the Nth iteration as the mathematical expression of the first relative pose.
  • the acquiring unit 1502 is further configured to: perform weighted fusion on the third transformation matrix output from the i-1th iteration and the third transformation matrix output from the Nth iteration to obtain the fourth transformation Matrix, the fourth transformation matrix is used as the mathematical expression of the first relative pose.
  • the device 1501 further includes a determining unit 1504, configured to determine that the GPS-based global pose error of the first vehicle is greater than the error threshold, or that a fault occurs in the GPS positioning device of the first vehicle.
  • the obtaining unit 1502 is further configured to obtain the global pose of the first vehicle corresponding to each of multiple helped objects; the calculating unit 1503 is further configured to perform weighted fusion on the global poses of the first vehicle corresponding to the multiple helped objects to obtain the target global pose of the first vehicle.
  • the device 1501 further includes a sending unit 1505 and a receiving unit 1506.
  • the sending unit 1505 is used to send a first request to the helper; the first request carries the identification of the first vehicle.
  • the receiving unit 1506 is configured to receive the global pose of the helper sent by the helper.
  • the device 1501 may correspond to the method described in the embodiment of the present application, and the above-mentioned and other operations and/or functions of each unit in the device 1501 are used to implement the respective methods in FIGS. 2 to 5 respectively. For the sake of brevity, the corresponding process will not be repeated here.
  • FIG. 7 is a schematic diagram of a controller structure provided by an embodiment of the application.
  • the controller 1301 includes a processor 1302, a memory 1304, and a communication interface 1303.
  • the processor 1302, the memory 1304, and the communication interface 1303 may communicate through the bus 1305, or may communicate through other means such as wireless transmission.
  • the memory 1304 is used to store instructions, and the processor 1302 is used to execute instructions stored in the memory 1304.
  • the memory 1304 stores program codes, and the processor 1302 can call the program codes stored in the memory 1304 to perform the following operations:
  • the processor 1302 is used to obtain the first relative pose of the first vehicle and the sought-after object, and obtain the global pose of the sought-after object; the first relative pose is used to indicate that the first vehicle is used as a reference, and the sought-after object is relative to The position and posture of the first vehicle; the first relative pose is the pose of the requested object determined in the first coordinate system; the global pose is the global pose of the requested object determined in the second coordinate system; the processor 1302, used to calculate the global pose of the first vehicle according to the first relative pose and the global pose. In this way, in the case that the GPS signal of the first vehicle is weak or there is no GPS signal, the first vehicle can be determined based on the global pose of the sought-after object and the first relative pose between the first vehicle and the sought-after object The global pose.
  • the processor 1302 may be a central processing unit (CPU); the processor 1302 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or any conventional processor.
  • the memory 1304 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1302.
  • the memory 1304 may also include a non-volatile random access memory.
  • the memory 1304 may also store device type information.
  • the memory 1304 may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), and electrically available Erase programmable read-only memory (electrically EPROM, EEPROM) or flash memory.
  • the volatile memory may be random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus RAM (DR RAM).
  • bus 1305 may also include a power bus, a control bus, and a status signal bus. However, for clear description, various buses are marked as bus 1305 in the figure.
  • the vehicle positioning controller 1301 may correspond to the device 1501 of the vehicle positioning apparatus in the embodiment of the present application, and may correspond to the corresponding main body performing the method according to the embodiment of the present application, and The above-mentioned and other operations and/or functions of each module in the controller 1301 are used to implement the corresponding procedures of the methods in FIGS. 2 to 5, and are not repeated here for brevity.
  • the application also provides a vehicle, which includes a controller 1301 as shown in FIG. 7.
  • the vehicle may correspond to the first vehicle in the method shown in FIG. 2 and is used to implement the operation steps of each method in FIG. 2 to FIG. 5. For the sake of brevity, it will not be repeated here.
  • the application also provides a vehicle, which includes a controller 1301 as shown in FIG. 7.
  • the vehicle may be a helping vehicle, used to implement, in the operation steps of the methods shown in Figs. 2 to 5, the process of assisting the positioning of the vehicle whose GPS is faulty.
  • the present application also provides a system, which includes a first vehicle and a cloud computing platform; the cloud computing platform is used to implement, in the above methods, the process of the cloud computing platform assisting the positioning of the first vehicle whose GPS is faulty.
  • the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented by software, they can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • Computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • computer instructions can be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (such as coaxial cable, optical fiber or digital subscriber line (DSL)) or wirelessly (such as infrared, radio or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium can be a magnetic medium (for example, a floppy disk, a hard disk or a magnetic tape), an optical medium (for example, a high-density digital video disc (DVD)), or a semiconductor medium (for example, a solid state disc (SSD)), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A vehicle positioning method, an apparatus (1501), a controller (1301), an intelligent vehicle and a system, used to solve the problem of vehicle positioning when the GPS signal is weak or there is no GPS signal. The vehicle positioning method comprises: obtaining a first relative pose between a first vehicle (A) and a helped object (B) (311) and the global pose of the helped object (B), and calculating the global pose of the first vehicle (A) according to the first relative pose and the global pose (313).

Description

Vehicle positioning method, apparatus, controller, intelligent vehicle and system
This application claims priority to the Chinese patent application No. 202010038272.6, filed with the China National Intellectual Property Administration on January 14, 2020 and entitled "Method, apparatus and system for positioning an intelligent vehicle, and intelligent vehicle", and to the Chinese patent application No. 202010177804.4, filed with the China National Intellectual Property Administration on March 13, 2020 and entitled "Vehicle positioning method, apparatus, controller, intelligent vehicle and system", both of which are incorporated herein by reference in their entirety.
Technical Field
This application relates to the field of intelligent driving, and in particular to a vehicle positioning method, apparatus, controller, intelligent vehicle and system.
Background
The Global Positioning System (GPS) offers good performance, high accuracy and wide applicability, and is so far the most commonly used navigation and positioning system. GPS is already widely applied in the field of vehicle positioning. Applying GPS to vehicles enables goals such as navigation and positioning, safety control and intelligent transportation, and shows a good development trend.
GPS technology is widely used for vehicle positioning, for example in intelligent vehicles that support unmanned driving, driver assistance (ADAS), intelligent driving, connected driving, intelligent network driving or car sharing. In practice, weak or even absent GPS signals may occur due to hardware and/or communication problems; in that case, continuing to position the vehicle by GPS would raise serious safety issues.
Summary
This application provides a vehicle positioning method, apparatus and readable storage medium, used to solve the problem of vehicle positioning when the GPS signal is weak or there is no GPS signal.
In a first aspect, an embodiment of this application provides a vehicle positioning method in which a controller obtains a first relative pose between a first vehicle and a helped object and the global pose of the helped object, and calculates the global pose of the first vehicle according to the first relative pose and the global pose. The first relative pose is used to indicate the position and attitude of the helped object relative to the first vehicle, with the first vehicle as the reference; the first relative pose is the pose of the helped object determined in a first coordinate system; the global pose is the global pose of the helped object determined in a second coordinate system. In this way, when the GPS signal of the first vehicle is weak or absent, the global pose of the first vehicle can be determined based on the global pose of the helped object and the first relative pose between the first vehicle and the helped object.
In a possible implementation, the helped object is a second vehicle; the first relative pose indicates the position and attitude of the second vehicle relative to the first vehicle, with the first vehicle as the reference, and the attitude indicates the heading of the second vehicle relative to the heading of the first vehicle. Thus, when the GPS signal of the first vehicle is weak or absent, the first vehicle can ask other vehicles for help and can determine its own heading based on their information, which in the intelligent vehicle field can assist automated driving.
In another possible implementation, obtaining the first relative pose between the first vehicle and the helped object includes: obtaining first point cloud data, in the first coordinate system, obtained by the lidar of the first vehicle scanning the objects around the first vehicle; obtaining second point cloud data, in the second coordinate system, obtained by the lidar of the second vehicle scanning the objects around the second vehicle; where the lidar of the first vehicle and the lidar of the second vehicle have an overlapping scanning area, and for an obstacle in the overlapping scanning area, the first point cloud data includes point cloud data corresponding to the obstacle and the second point cloud data includes point cloud data corresponding to the obstacle; and calculating the first relative pose according to the point cloud data corresponding to the obstacle in the first point cloud data and the point cloud data corresponding to the obstacle in the second point cloud data. Since the lidars deployed on the two vehicles have an overlapping scanning area, point cloud matching can be performed on the point cloud data they scan, so that a fairly accurate relative pose between the two vehicles can be determined.
In another possible implementation, calculating the first relative pose according to the point cloud data corresponding to the obstacle in the first point cloud data and the point cloud data corresponding to the obstacle in the second point cloud data includes: converting the first point cloud data in the first coordinate system to a preset first reference coordinate system of the first vehicle to obtain third point cloud data, where the first reference coordinate system is the coordinate system obtained by translating the origin of the first coordinate system to a preset first reference point; converting the second point cloud data in the second coordinate system to a preset second reference coordinate system of the second vehicle to obtain fourth point cloud data, where the second reference coordinate system is the coordinate system obtained by translating the origin of the second coordinate system to a preset second reference point; and performing point cloud matching on the point cloud data corresponding to the obstacle in the third point cloud data and the point cloud data corresponding to the obstacle in the fourth point cloud data to obtain the first relative pose. In this way, both the first and second point cloud data can be converted to the reference coordinate systems of the respective vehicles, whose origins correspond to the same position on each vehicle, for example the center of the rear axle of each vehicle, so that the relative pose between the two vehicles can be determined more accurately.
In another possible implementation, performing point cloud matching on the point cloud data corresponding to the obstacle in the third point cloud data and the point cloud data corresponding to the obstacle in the fourth point cloud data to obtain the first relative pose includes: performing N iterations according to the point cloud data corresponding to the obstacle in the third point cloud data and the point cloud data corresponding to the obstacle in the fourth point cloud data to obtain the third transformation matrix output by the Nth iteration, N being a positive integer; and determining the first relative pose according to the third transformation matrix output by the Nth iteration. For the i-th of the N iterations, i being a positive integer not greater than N: for each of the M points of the obstacle, perform a transformation on the point cloud data corresponding to the point in the third point cloud data according to the third transformation matrix output by the (i-1)th iteration to obtain the transformed point cloud data corresponding to the point, and calculate the difference between the transformed point cloud data corresponding to the point and the point cloud data corresponding to the point in the fourth point cloud data to obtain the residual corresponding to the point, M being a positive integer; calculate the residual sum according to the residuals corresponding to the M points of the obstacle; if the residual sum is not less than a preset residual threshold, update the third transformation matrix output by the (i-1)th iteration by a preset transformation amount, take the updated matrix as the third transformation matrix output by the i-th iteration, and execute the next iteration; if the residual sum is less than the residual threshold, end the iteration and take the third transformation matrix output by the (i-1)th iteration as the third transformation matrix output by the Nth iteration. In this way, a fairly accurate third transformation matrix can be solved through N iterations, so that a more accurate first relative pose can be determined.
In another possible implementation, when i is 1, the third transformation matrix output by the (i-1)th iteration is determined as follows: according to the global pose of the first vehicle at the previous moment and, calculated based on the IMU, the pose of the first vehicle at the current moment relative to the previous moment (with the global pose of the first vehicle at the previous moment as the reference), calculate the estimated global pose of the first vehicle at the current moment; obtain the global pose of the second vehicle at the current moment based on the global positioning system GPS and the IMU; determine the second relative pose according to the estimated global pose of the first vehicle at the current moment and the global pose of the second vehicle at the current moment, the second relative pose being used to indicate the position and attitude of the helped object relative to the first vehicle with the first vehicle as the reference, and being the pose of the helped object determined in the first coordinate system; and take the matrix used to express the second relative pose as the third transformation matrix output by the (i-1)th iteration. In this way, an initial value of the third matrix can be provided for the N iterations; since this initial value is determined based on the estimated global pose of the first vehicle at the current moment and the global pose of the second vehicle at the current moment, it is fairly close to the third transformation matrix output by the Nth iteration, which in turn can reduce the number of iterations and shorten the time needed to find the optimal solution.
In another possible implementation, the global pose of the first vehicle at the previous moment can be obtained based on: the last valid GPS-based global pose of the first vehicle recorded by the system, and, calculated based on the IMU, the pose of the first vehicle at the current moment relative to the first moment (with the global pose of the first vehicle at the first moment as the reference); the first moment is the moment of the last valid GPS-based global pose of the first vehicle recorded by the system.
In another possible implementation, the global pose of the first vehicle at the previous moment can be obtained based on: the most recently calculated global pose of the first vehicle, and, calculated based on the IMU, the pose of the first vehicle at the current moment relative to the second moment (with the global pose of the first vehicle at the second moment as the reference); the second moment is the moment of the most recently calculated global pose of the first vehicle. In this way, the flexibility of the solution can be improved.
To improve the flexibility of the solution, this application can include multiple schemes for determining the first relative pose. In another possible implementation, the third transformation matrix output by the Nth iteration is taken as the mathematical expression of the first relative pose. In another possible implementation, the third transformation matrix output by the (i-1)th iteration and the third transformation matrix output by the Nth iteration are weighted and fused to obtain a fourth transformation matrix, and the fourth transformation matrix is taken as the mathematical expression of the first relative pose. The first relative pose obtained through this possible implementation can be more accurate, and a more accurate global pose can then be calculated.
In another possible implementation, calculating a second relative pose according to the first relative pose and the initial value of the relative pose includes: performing weighted fusion on the first relative pose and the initial value of the relative pose to obtain the second relative pose.
In another possible implementation, before obtaining the first relative pose between the first vehicle and the helped object, the method further includes: determining that the GPS-based global pose error of the first vehicle is greater than an error threshold; or that the GPS positioning device of the first vehicle has failed.
When the controller is deployed on the first vehicle, obtaining the global pose of the helped object includes: sending a first request to the helped object, the first request carrying the identification of the first vehicle; and receiving the global pose of the helped object sent by the helped object.
In another possible implementation, after determining the global pose of the first vehicle according to the first relative pose and the global pose of the helped object, the method further includes: obtaining the global pose of the first vehicle corresponding to each of multiple helped objects; and performing weighted fusion on the global poses of the first vehicle corresponding to the multiple helped objects to obtain the target global pose of the first vehicle.
In a second aspect, a vehicle positioning apparatus is provided, the apparatus including modules for executing the vehicle positioning method in the first aspect or any possible implementation of the first aspect.
In a third aspect, a controller for vehicle positioning is provided, the controller including a processor and a memory. The memory is used to store computer-executable instructions; when the controller runs, the processor executes the computer-executable instructions in the memory to use the hardware resources in the controller to perform the operation steps of the method in the first aspect or any possible implementation of the first aspect.
In a fourth aspect, this application provides an intelligent vehicle including the controller of the third aspect or any possible implementation of the third aspect.
In a fifth aspect, this application further provides a vehicle positioning system including a first vehicle and a cloud computing platform, the cloud computing platform being used to perform the functions of the controller in the third aspect or any possible implementation of the third aspect.
In a sixth aspect, this application further provides a vehicle positioning system including a first vehicle and a helped object, the helped object being used to perform the functions of the controller in the third aspect or any possible implementation of the third aspect.
In a seventh aspect, this application provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the methods of the above aspects.
In an eighth aspect, this application provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the methods of the above aspects.
On the basis of the implementations provided in the above aspects, this application can be further combined to provide more implementations.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of an application scenario provided by this application;
Fig. 2 is a schematic flow chart of a vehicle positioning method provided by this application for the case where the controller is deployed in the first vehicle;
Fig. 3 is a schematic diagram of coordinate system conversion provided by this application, taking vehicle A and vehicle B shown in Fig. 1 as an example;
Fig. 4 is a schematic flow chart of a method for calculating the third transformation matrix provided by this application;
Fig. 5 is a schematic diagram of the calculation process of the estimated global pose of a vehicle provided by this application;
Fig. 6 is a schematic structural diagram of a vehicle positioning apparatus provided by this application;
Fig. 7 is a schematic structural diagram of another vehicle positioning apparatus provided by this application.
Detailed Description
For ease of understanding, the specific concepts and terms involved in the embodiments of this application are explained first.
(1) Global pose
The global pose, which may also be called the absolute pose, refers to the position and attitude of an object in a reference coordinate system. The reference coordinate system may be a multi-dimensional coordinate system, that is, a coordinate system with multiple dimensions such as a two-dimensional or three-dimensional coordinate system; specifically it may be the geodetic coordinate system, the Universal Transverse Mercator (UTM) grid system, and so on.
The position of an object can be represented by the coordinate values of the coordinate axes in a coordinate system. The same object may have different coordinate values in different coordinate systems. In the embodiments of this application, the position of a vehicle can be represented by the coordinate values of the axes of the coordinate system the vehicle is in; for ease of calculation, it can also be represented by the coordinate values of the axes of one reference coordinate system. When multiple objects identify their positions in multiple coordinate systems, one coordinate system can be taken as the standard to determine the positions of all objects, and the positions of the other objects in that reference coordinate system can then be further determined. The coordinate system taken as the standard is also called the reference coordinate system; it may be the coordinate system of any one object or a third-party public coordinate system, which is not limited in the embodiments of this application.
For a vehicle, the attitude of the object can be understood as the heading of the head of the vehicle, or the orientation of any position of the vehicle body. Specifically it can be determined by the angle between the vector corresponding to the vehicle and the horizontal coordinate axes of the multi-dimensional coordinate system. A vector is a quantity that has both magnitude and direction; from the attitude of the vehicle, both the heading and the direction of travel of the vehicle can be determined.
(2) Relative pose
The relative pose refers to the pose of one of two objects relative to the other. For example, with object O1 as the reference, the pose of object O2 relative to object O1; or, with object O2 as the reference, the pose of object O1 relative to object O2. The first relative pose between the first vehicle (which may also be called the faulty vehicle) and the helped object mentioned in the embodiments of this application may refer to the pose of the helped object relative to the first vehicle with the first vehicle as the reference; or the position of the first vehicle relative to the helped object with the helped object as the reference.
(3) Point cloud data
Point cloud data refers to a set of vectors in a three-dimensional coordinate system. These vectors are usually expressed in the form of X, Y and Z three-dimensional coordinates, and are generally mainly used to represent the shape of the outer surface of an object. Moreover, in addition to the geometric position information represented by (X, Y, Z), point cloud data can also represent the RGB color, gray value, depth and segmentation result of a point, and so on.
The point cloud data in the embodiments of this application can be obtained by laser radar scanning; laser radar may also be called Light Detection and Ranging (LiDAR).
(4) Transformation matrix
The transformation matrix is a concept in linear algebra. In linear algebra, a linear transformation can be represented by a transformation matrix. The most commonly used geometric transformations are all linear transformations, including rotation, translation, scaling, shearing, reflection and orthographic projection. In the embodiments of this application, the linear transformation is illustrated as including rotation and translation; that is, the linear transformations that the transformation matrix can represent include rotation and translation.
The vehicle positioning method provided by the embodiments of this application is further introduced below with reference to the drawings. Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of this application. The lane driving directions indicated in Fig. 1 include four lanes, of which the two left lanes share one driving direction and the two right lanes share one driving direction. Any vehicle may be an intelligent vehicle or a non-intelligent vehicle, which is not limited in the embodiments of this application. Each vehicle is provided with sensors for detecting obstacles around the vehicle, the sensors including lidar, millimeter-wave radar and cameras. Each vehicle may be provided with one or more types of sensor, and one or more sensors of each type. The sensors may be installed on the top of the vehicle, specifically in the middle of the roof; the embodiments of this application do not limit the installation position or number of the sensors on each vehicle. In the embodiments of this application, a vehicle can communicate with other objects based on vehicle-to-everything wireless communication technology (for example, vehicle to everything (V2X)). For example, vehicle-to-vehicle communication can be realized based on inter-vehicle wireless communication technology (for example, vehicle to vehicle (V2V)). Communication between a vehicle and other objects can be based on wireless fidelity (Wi-Fi), fifth-generation (5G) mobile communication technology, and so on; for example, communication between a vehicle and a smart device (such as a smartphone or mobile device supporting a positioning function) can be realized based on 5G.
Optionally, the application scenario shown in Fig. 1 may further include a cloud computing platform, which may be implemented by a cloud server or virtual machine.
In the embodiments of this application, the controller that determines the global pose of the first vehicle may be located in the first vehicle, and may specifically be implemented by the controller of the positioning system in the vehicle, the controller of intelligent driving, or any other device with computing capability. The controller may also be located in the helped object; in this case, it may likewise be any device or component with computing capability in the helped object, and after determining the global pose of the first vehicle it sends that global pose to the first vehicle. In addition, the controller may also be located on a cloud computing platform (this case can also be described as the function of the controller being implemented by a cloud virtual machine or server); in this case, after determining the global pose of the first vehicle, it sends the global pose of the first vehicle to the first vehicle.
Next, taking the scenario shown in Fig. 1 with lidar as the sensor, and assuming that the first vehicle is vehicle A whose GPS becomes abnormal while driving on the road, how vehicle A uses other objects for positioning is further introduced. Vehicle A and the other objects are all equipped with communication modules, and vehicle A can search, through its communication module, for wireless signals sent by other objects within a preset area. The preset area of vehicle A is shown by the dashed circle in Fig. 1: an area centered on vehicle A with a preset distance as its radius. The preset distance may be a value less than or equal to the radius of the coverage of the wireless signal sent by vehicle A.
In the embodiments of this application, the helped object used to help vehicle A with positioning may be a vehicle, such as one or more of vehicle B, vehicle C and vehicle D, or another object that has positioning capability or can learn positioning information, such as roadside infrastructure equipped with communication modules, for example utility poles equipped with communication modules, mobile base stations, mobile electronic devices carried by pedestrians (such as smartphones with a positioning function), cameras, and so on. The infrastructure exemplarily shown in Fig. 1 is camera H within the preset area centered on vehicle A. As shown in Fig. 1, vehicles E and F are also shown; since vehicles E and F are outside the preset area centered on vehicle A, vehicle A will not use vehicle E or vehicle F for positioning.
In an optional implementation, after vehicle A finds other objects in the preset area, it may send them a first request, used to request the other objects to assist vehicle A in positioning. The other objects found by vehicle A include the helped object mentioned above.
In another optional implementation, after vehicle A finds other objects in the preset area, it may first establish mutual trust with them; that is, vehicle A and the found objects first perform security authentication, and the first request is sent only after successful authentication. The objects that pass authentication include the helped object mentioned above. One possible process of establishing mutual trust is as follows: after vehicle A finds other objects in the preset area, for each found object, vehicle A may send the object a first signaling message through the communication module, used to request the establishment of a communication link with the object. The first signaling may carry an identification indicating vehicle A, which includes information that can globally and uniquely identify the vehicle, such as the engine number or the serial number of the vehicle. After receiving the first signaling, the communication module of the object may verify vehicle A; the specific identity verification may be confirmed by the object according to a preset mutual-trust list, or the object may request a third-party system to verify the identity of vehicle A. After successful verification, the object sends vehicle A a second signaling message, used to indicate that the object agrees to establish mutual trust with vehicle A.
Fig. 2 is a schematic flow chart, provided by an embodiment of this application, further explaining the vehicle positioning method of the embodiments of this application, taking as an example the case where the first vehicle is the faulty vehicle and the controller is deployed in the first vehicle. As shown in Fig. 2, the method includes:
Step 310: when the first vehicle determines that its GPS-based global pose error is greater than an error threshold, or determines that its GPS positioning device has failed, it sends a first request to the helped object.
The first request is used to indicate that the first vehicle needs to request assistance with positioning. The first request may include the identification of the first vehicle, which includes information that can globally and uniquely identify the first vehicle, such as the engine number of the first vehicle or the serial number of the first vehicle.
Correspondingly, the helped object receives the first request.
Step 311: the controller obtains the first relative pose between the first vehicle and the helped object. The first relative pose is used to indicate the position and attitude of the helped object relative to the first vehicle, with the first vehicle as the reference; the first relative pose is the pose of the helped object determined in the first coordinate system. The first coordinate system may be the coordinate system of the sensor of the first vehicle; for example, the first coordinate system may be a coordinate system whose origin is the center position of the sensor of the first vehicle.
Step 312: the helped object sends its global pose to the controller.
The global pose is the global pose of the helped object determined in the second coordinate system. The second coordinate system may be the coordinate system of the sensor of the second vehicle; for example, the second coordinate system may be a coordinate system whose origin is the center position of the sensor of the second vehicle.
Correspondingly, the controller receives the global pose of the helped object sent by the helped object.
Step 313: the controller calculates the global pose of the first vehicle according to the first relative pose and the global pose of the helped object.
The global pose of the first vehicle is also the global pose of the first vehicle in the above second coordinate system. The second coordinate system may be a multi-dimensional coordinate system such as a two-dimensional or three-dimensional coordinate system, for example the geodetic coordinate system or the UTM grid system mentioned above.
From the solutions provided in steps 311 to 313, it can be seen that if a weak or absent GPS signal occurs while the vehicle is driving, the controller can calculate the global pose of the first vehicle based on the global pose of the helped object and the first relative pose between the helped object and the first vehicle, thereby solving the vehicle positioning problem when the GPS signal is weak or absent. Applied to the unmanned driving field, this can solve the vehicle positioning problem in unmanned driving scenarios when the GPS signal is weak or absent.
在一种可能地实施方式中,第一车辆可以向一个或多个被求助物请求协助。被求助物可以是车辆,这种情况下,也可以称该车辆为被求助车。第一车辆将一个车辆作为被求助车后,向该被求助车发送请求,并以该被求助车为准确定第一车辆的第一相对位姿,并基于该第一相对位姿和该被求助车的全局位姿确定第一车辆的全局位姿。可选地,第一车辆还可以选择多个被求助物请求协助。比如,第一车辆可以向K个被求助物请求协助,K为正整数,当K为大于1的正整数时,针对K个被求助物中的每个被求助物,控制器可以根据该被求助物的全局位姿,以及第一车辆与该被求助物的第一相对位姿,计算该被求助物对应的第一车辆的全局位姿。最终可以得到K个被求助物对应的K个第一车辆的全局位姿。进一步,可以根据K个被求助物对应的K个第一车辆的全局位姿,计算出该第一车辆的全局位姿,由多个被求助物对应的第一车辆的全局位姿确定出的一个第一车辆的全局位姿称为该第一车辆的目标全局位姿。
根据K个第一车辆的全局位姿计算第一车辆的全局位姿的方案可以有多种,具体实施中可以采用以下方式中任意一种:
方式一,将K个第一车辆的全局位姿中的一个作为第一车辆的全局位姿。
控制器可以从K个第一车辆的K个全局位姿中选择任意一个作为第一车辆的全局位姿。也可以根据K个全局位姿中每个全局位姿的可信度,选择K个全局位姿中可信度最高的一个全局位姿作为第一车辆的全局位姿。
The credibility of a global pose may be related to one or more of the following parameter items:
Parameter item 1: the strength of the wireless signal sent by the helper object corresponding to the global pose; the stronger the signal, the larger this item's value, and conversely the smaller;
Parameter item 2: the distance between the helper object corresponding to the global pose and the first vehicle; the smaller the distance, the larger this item's value, and conversely the smaller;
Parameter item 3: the identity of the helper object corresponding to the global pose (smart utility pole, smart camera, vehicle, etc.); a correspondence between helper-object identities and this item's values may be preset. For example, when the helper object is a vehicle, this item's value is 80 points; when it is a smart utility pole, 60 points; when it is a smart camera, 50 points. In a possible implementation, the value may be set manually according to the helper object's identity, hardware capability, and so on;
Parameter item 4: the sensor type used in computing the global pose (lidar, millimeter-wave radar, camera, etc.); a correspondence between sensor types and this item's values may be preset. For example, when the sensor type is lidar, this item's value is 80 points; millimeter-wave radar, 60 points; camera, 50 points.
When the credibility of a global pose is related to multiple parameter items, a weight can be preset for each parameter item and the item values combined by weighted addition to obtain the credibility of the global pose, as sketched below.
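The following sketch illustrates this weighted credibility score; the per-item scores, the distance mapping, and the weights are illustrative assumptions rather than values fixed by this application.

```python
# Sketch of the credibility of one candidate global pose as a weighted sum
# over the four parameter items above. All scores and weights are assumed.

IDENTITY_SCORE = {"vehicle": 80, "smart_pole": 60, "smart_camera": 50}
SENSOR_SCORE = {"lidar": 80, "mmwave_radar": 60, "camera": 50}

def credibility(signal_strength_score: float, distance_m: float,
                identity: str, sensor: str,
                weights=(0.3, 0.3, 0.2, 0.2)) -> float:
    # Parameter item 2: a smaller distance gives a larger value
    # (a simple inverse mapping, chosen here for illustration).
    distance_score = 100.0 / (1.0 + distance_m)
    items = (signal_strength_score, distance_score,
             IDENTITY_SCORE[identity], SENSOR_SCORE[sensor])
    return sum(w * s for w, s in zip(weights, items))

print(credibility(70.0, 12.0, "vehicle", "lidar"))
```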
Approach 2: Compute the first vehicle's global pose from the K global poses of the first vehicle.
The controller obtains the K global poses of the first vehicle corresponding to the K helper objects, and then performs weighted fusion of the K global poses to obtain the target global pose of the first vehicle. The weighted fusion of the K global poses may specifically include: determining a weight for each of the K global poses, then performing weighted addition of the global poses.
The weight of a global pose is related to its credibility: the higher the credibility of the global pose, the larger its weight may be; the lower the credibility, the smaller the weight. It should be noted that the weighted fusion used in Approach 2 may adopt Kalman filter fusion; the specific Kalman filtering procedure may use the conventional method or an improved version of that method, and the embodiments of this application place no limitation on this.
The scheme of Approach 1 computes the target global pose of the first vehicle from a single global pose; the computation is relatively simple and fast. The scheme of Approach 2 computes the target global pose from the K global poses, which can improve the accuracy of the first vehicle's target global pose, as in the sketch below.
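A sketch of Approach 2 follows, with each global pose simplified to (x, y, yaw) and the weights taken as normalized credibility scores; this simple weighted average stands in for the Kalman filter fusion mentioned above.

```python
import numpy as np

# Weighted fusion of K candidate global poses of the first vehicle.
# Yaw is averaged through sin/cos to avoid wrap-around at +/- pi.

def fuse_poses(poses: np.ndarray, credibilities: np.ndarray) -> np.ndarray:
    w = credibilities / credibilities.sum()        # normalized weights
    xy = (w[:, None] * poses[:, :2]).sum(axis=0)   # weighted position
    yaw = np.arctan2((w * np.sin(poses[:, 2])).sum(),
                     (w * np.cos(poses[:, 2])).sum())
    return np.array([xy[0], xy[1], yaw])

poses = np.array([[10.0, 5.0, 0.10], [10.2, 5.1, 0.12], [9.9, 4.9, 0.08]])
cred = np.array([80.0, 60.0, 50.0])
print(fuse_poses(poses, cred))   # target global pose of the first vehicle
```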
Next, taking the helper object as a second vehicle, with the first and second vehicles each equipped with a lidar, the determination of the first relative pose in step 311 is further described. The obstacle information detected by a lidar is represented as point cloud data: the controller obtains first point cloud data scanned by the first vehicle's lidar and second point cloud data scanned by the second vehicle's lidar. The coordinate system of the first point cloud data is usually a multi-dimensional coordinate system whose origin is the position of the device that collected the first point cloud data, that is, the position of the first vehicle's lidar. Similarly, the second coordinate system is usually a multi-dimensional coordinate system whose origin is the position of the device that collected the second point cloud data, that is, the position of the second vehicle's lidar.
For the first and second point cloud data, the embodiments of this application may filter the point cloud data scanned by the lidar, for example selecting the point cloud data whose curvature is greater than a curvature threshold (such data can also be understood as the point cloud data corresponding to points on the surfaces of the objects scanned by the lidar). Because the first vehicle's lidar and the second vehicle's lidar have an overlapping scanning region, both the first and second point cloud data include point cloud data corresponding to one or more obstacles in that overlapping region (after filtering, both include point cloud data corresponding to points on the surfaces of one or more obstacles in the overlapping region). This can also be understood as follows: a point of an obstacle in the overlapping region corresponds to one set of vectors in the first coordinate system and also to one set of vectors in the second coordinate system; further, from the obstacle's two sets of vectors in the two different coordinate systems, the first relative pose between the first and second vehicles can be determined. The first relative pose indicates the position and attitude of the helper object relative to the first vehicle, with the first vehicle as reference; the first relative pose is the pose of the helper object determined in the first coordinate system.
The process of determining the first relative pose is described in detail below.
Approach 1: Select a reference coordinate system for the first vehicle and one for the second vehicle, then convert the first and second point cloud data into their respective reference coordinate systems.
Because the lidars of the two vehicles may be installed at different positions on each vehicle, for example the first vehicle's lidar installed near the front and the second vehicle's lidar installed near the rear, and because the point cloud data scanned by a lidar usually uses the lidar's position as the origin of its multi-dimensional coordinate system, the origin of the first coordinate system is near the front of the first vehicle and the origin of the second coordinate system is near the rear of the second vehicle. In this case, if the second point cloud data is matched against the first point cloud data directly with the first coordinate system as the reference, the matching result reflects the relative pose of the second vehicle's rear with respect to the first vehicle's front. Clearly, such a relative pose cannot accurately express the relative pose between the first and second vehicles. A more accurate relative pose between the two vehicles would be, for example: with the first vehicle's front as reference, the relative pose of the second vehicle's front with respect to the first vehicle's front; or, with the first vehicle's rear as reference, the relative pose of the second vehicle's rear with respect to the first vehicle's rear.
To determine the first relative pose between the two vehicles more accurately, two reference coordinate systems can be selected, one per vehicle: a first reference coordinate system on the first vehicle and a second reference coordinate system on the second vehicle. The first reference coordinate system is obtained by translating the origin of the first coordinate system to a preset first reference point, that is, the origin of the first reference coordinate system is the first reference point. The second reference coordinate system is obtained by translating the origin of the second coordinate system to a preset second reference point, that is, its origin is the second reference point. The corresponding axes of the two reference coordinate systems are parallel (for example, if both are X/Y/Z three-dimensional coordinate systems, the X axes are parallel, the Y axes are parallel, and the Z axes are parallel). The first point cloud data is then converted into the first reference coordinate system, the second point cloud data into the second reference coordinate system, and point cloud matching is performed on the point cloud data converted into the two reference coordinate systems to obtain the point cloud matching result.
The position of the first reference point on the first vehicle and the position of the second reference point on the second vehicle correspond to the same part of a vehicle. Specifically, the first and second reference points may be points on the two vehicle bodies: for example, the first reference point may be the rear-axle center of the first vehicle and the second reference point the rear-axle center of the second vehicle; or the front-axle center of the first vehicle and the front-axle center of the second vehicle; or the roof center of the first vehicle and the roof center of the second vehicle.
Take as an example the first reference point being the first vehicle's rear-axle center and the second reference point the second vehicle's rear-axle center. First, the first point cloud data scanned by the first vehicle's lidar undergoes a coordinate-system conversion into the first reference coordinate system. The conversion may translate the origin of one coordinate system to the origin of the reference coordinate system: for example, the origin of the first coordinate system is moved from the position of the first vehicle's lidar to the first vehicle's rear-axle center. For ease of description, the first point cloud data whose coordinate origin has been moved to the first vehicle's rear-axle center is called the third point cloud data. Similarly to the first vehicle's conversion, the second point cloud data scanned by the second vehicle's lidar must also undergo a coordinate-system conversion into the second reference coordinate system, moving the origin of the second coordinate system from the position of the second vehicle's lidar to the second vehicle's rear-axle center. For ease of description, the second point cloud data whose coordinate origin has been moved to the second vehicle's rear-axle center is called the fourth point cloud data.
After translating the origins of the different coordinate systems, the coordinate values in the different coordinate systems must also be converted, which can be done by multiplying point coordinates by a transformation matrix. For example, if a point is expressed in coordinate system S1 (that is, using coordinate values along the axes of S1) and needs to be converted into coordinate system S2 (that is, expressed in S2), the point can be multiplied by the transformation matrix. For instance, the origin of S1 has coordinates (0, 0, 0) in S1 and (x1, y1, z1) in S2; (x1, y1, z1) is the value obtained by multiplying the origin of S1 by the transformation matrix, as in the sketch below.
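The sketch below illustrates this conversion with a 4x4 homogeneous transformation matrix; the assumed lidar offset (1.5 m ahead of and 1.2 m above the rear-axle center) is an example, not a value from this application.

```python
import numpy as np

# Re-express points given in lidar frame S1 in reference frame S2 by
# multiplying with a homogeneous transformation matrix: p_S2 = T @ p_S1.

def transform(points_s1: np.ndarray, T_s2_s1: np.ndarray) -> np.ndarray:
    homo = np.hstack([points_s1, np.ones((len(points_s1), 1))])
    return (T_s2_s1 @ homo.T).T[:, :3]

# Pure translation: move the origin from the lidar to the rear-axle center.
T = np.eye(4)
T[:3, 3] = [1.5, 0.0, 1.2]   # assumed lidar position in the rear-axle frame

first_point_cloud = np.array([[2.0, 0.5, 0.0]])       # in the lidar frame
third_point_cloud = transform(first_point_cloud, T)   # in the reference frame
print(third_point_cloud)
```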
Figure 3 exemplarily shows the coordinate-system conversion of vehicles A and B in Figure 1. As shown in Figure 3, with vehicle A as the first vehicle and vehicle B as the second vehicle, the point cloud data scanned by vehicle A is called the first point cloud data; its coordinate origin is at vehicle A's lidar, and the coordinate system it belongs to is called the first coordinate system or the lidar coordinate system. The conversion of the first point cloud data specifically moves its coordinate origin to vehicle A's rear-axle center; the coordinate system centered on the vehicle's rear-axle center (that is, the coordinate system to which the third point cloud data belongs) is called the first reference coordinate system. The process thus involves the conversion between the lidar coordinate system and the first reference coordinate system (see the preceding description of coordinate-system conversion). Equivalently, for a point on the surface of an obstacle in the overlapping scanning region of the two vehicles' lidars, the set of vectors corresponding to that point in the first point cloud data (which can be called the point's corresponding point cloud data in the first point cloud data) is multiplied by a preset first transformation matrix to obtain the set of vectors corresponding to that point in the third point cloud data (the point's corresponding point cloud data in the third point cloud data). Because vehicle A's lidar position is fixed at installation and the rear-axle center is also determined, the first transformation matrix can also be determined.
Similarly, the point cloud data scanned by vehicle B is called the second point cloud data; its coordinate origin is at vehicle B's lidar, that is, the second point cloud data is data obtained in the second vehicle's second coordinate system. The second point cloud data is converted by moving its coordinate origin to vehicle B's rear-axle center; the specific conversion is similar to the above. It should be noted that the transformation matrix used in converting between the second and fourth point cloud data is called the second transformation matrix: it corresponds to the linear transformation needed to map from the center of vehicle B's lidar to vehicle B's rear-axle center, with vehicle B's rear-axle center as reference. Because the installation position of vehicle A's lidar may differ from that of vehicle B's lidar, the first and second transformation matrices may also differ.
Further, with the first reference coordinate system as the reference, a third transformation matrix between the second reference coordinate system and the first reference coordinate system is computed from the third and fourth point cloud data. How to compute a transformation matrix from two point clouds is described in detail later.
As a possible implementation, instead of setting two reference coordinate systems for converting the first and second point cloud data, a single reference coordinate system may be chosen (for example, the geodetic coordinate system, the UTM grid system, or the second coordinate system mentioned above); the first and second point cloud data are each converted into that reference coordinate system, point cloud matching is performed on the converted data, and the third transformation matrix is determined from the point cloud matching result. The specific conversion is similar to the method of obtaining the third and fourth point cloud data when separate reference coordinate systems are chosen for the first and second vehicles, and for brevity it is not repeated here.
Approach 2: Use the first coordinate system as the reference coordinate system for the coordinate conversion of the first and second point cloud data.
Approach 2 has a precondition: the installation position of the first vehicle's lidar on the first vehicle corresponds to the installation position of the second vehicle's lidar on the second vehicle. For example, the first vehicle's lidar is installed at the middle of the first vehicle's roof and the second vehicle's lidar at the middle of the second vehicle's roof; or both lidars are installed on the respective roofs near the front; or both on the respective roofs near the rear.
In a possible implementation, the distance between the first vehicle's lidar installation position and the first vehicle's front, and the distance between the second vehicle's lidar installation position and the second vehicle's front, can be computed; if both distances are less than a distance threshold, it can be determined that the installation position of the first vehicle's lidar on the first vehicle corresponds to that of the second vehicle's lidar on the second vehicle.
Thus, in Approach 2, with the first coordinate system as the reference, the third transformation matrix between the second and first coordinate systems can be computed from the first and second point cloud data.
The following further describes the method of computing the third transformation matrix from two point clouds, using the computation from the third and fourth point cloud data as the example. Point cloud matching of the third and fourth point cloud data can also be understood as: with the first reference coordinate system, to which the third point cloud data belongs, as the reference, solving for the third transformation matrix corresponding to the linear transformation that converts the fourth point cloud data from the second reference coordinate system into the first reference coordinate system. In other words, the point cloud matching process is the process of solving for the third transformation matrix.
The point cloud matching relationship between the third and fourth point cloud data can be expressed by formula (1):

$$P^{v_1} = T^{v_1}_{v_2} \cdot P^{v_2} \tag{1}$$

In formula (1), the first vehicle is numbered $v_1$ and the second vehicle is numbered $v_2$; $T^{v_1}_{v_2}$ is the third transformation matrix and a representation of the first relative pose between the first and second vehicles; $P^{v_1}$ is the third point cloud data; and $P^{v_2}$ is the fourth point cloud data.
The third transformation matrix in formula (1) can be solved with the iterative closest point (ICP) algorithm. ICP solves for the third transformation matrix through N iterations, where N is a positive integer. In each iteration, the third transformation matrix used is obtained by multiplying the third transformation matrix output by the previous iteration by a preset transformation increment. For each of M points of an obstacle within the overlapping scanning region of the first vehicle's lidar and the second vehicle's lidar (the M points may be points on the obstacle's surface scanned by both lidars; M is a positive integer), with that point as reference, the point's corresponding point cloud data in the fourth point cloud data is multiplied by the third transformation matrix to obtain the point's transformed point cloud data, called the fifth point cloud data; then the residual between the point's corresponding data in the fifth point cloud data and the point's corresponding data in the third point cloud data is computed. As for the residual (which can also be understood as the error) between two point cloud data: since two point cloud data correspond to two sets of vectors, computing the residual between them can be understood as solving for the residual between the two sets of vectors, which in turn can be understood as solving for the Euclidean metric (also called the Euclidean distance) between the two sets of vectors.
For the M residuals corresponding to the obstacle's M points, the residual sum over the M points can further be computed. If the residual sum is less than a preset residual threshold, the result of that iteration meets the requirement, and the third transformation matrix used in that iteration can be taken as the finally solved third transformation matrix. If the residual sum does not meet the requirement, the third transformation matrix must be adjusted by the preset transformation increment, and the adjusted matrix is used as the third transformation matrix of the next iteration.
The process of point cloud matching with the ICP algorithm can be expressed by formula (2):

$$T^{v_1}_{v_2} = \operatorname*{arg\,min}_{\Delta T} \sum_{i=1}^{M} \left\| p_i^{v_1} - \Delta T \, p_i^{v_2} \right\|^2 \tag{2}$$

In formula (2), the first vehicle is numbered $v_1$ and the second vehicle is numbered $v_2$; $T^{v_1}_{v_2}$ is the finally solved third transformation matrix; $p_i^{v_1}$ denotes the point cloud data of the $i$-th matched point in the third point cloud data and $p_i^{v_2}$ its point cloud data in the fourth point cloud data; and $\Delta T$ is the intermediate variable in the computation of $T^{v_1}_{v_2}$, that is, the third transformation matrix in each iteration.
Figure 4 is a schematic flowchart of solving the third transformation matrix in formula (2) with the ICP algorithm according to an embodiment of this application. As shown in Figure 4, the flow includes:
Step 301: Start.
Step 302: Initialize the third transformation matrix ΔT. ΔT is the third transformation matrix; its corresponding linear transformation is a rotation and a translation, so the third transformation matrix can also be expressed as a rotation matrix R and a translation matrix t. Step 302 can thus also be described as initializing the rotation matrix R and the translation matrix t.
In step 302, in a possible implementation, initializing the third transformation matrix may specifically mean: assigning to ΔT the initial value of the third transformation matrix mentioned in the content below.
This initial value of the third transformation matrix can also be understood as the third transformation matrix used in the first iteration below, that is, among the N iterations below, the third transformation matrix output by iteration i-1 when i is 1.
In another possible implementation, initializing the third transformation matrix may specifically mean assigning a value to ΔT. In one possible example, the rotation matrix R and the translation matrix t take the following values:

$$R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad t = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
The N iterations then begin, starting from the first iteration. Steps 303 to 307 constitute the i-th iteration, where i is a positive integer not greater than N:
Step 303: For each of the obstacle's M points, perform: according to the third transformation matrix output by iteration i-1, apply one transformation to the point's corresponding point cloud data in the third point cloud data, obtaining the point's transformed point cloud data. M is a positive integer.
For each point $x_i$ of the obstacle's M points, compute $R x_i + t$, which can also be written as $x_i \cdot \Delta T$. In step 303, the point's corresponding data in the third point cloud data is rotated and translated to obtain the point's transformed point cloud data.
Step 304: From the transformed point cloud data of each of the M points and each point's corresponding point cloud data in the fourth point cloud data, obtain each point's residual.
In step 304, the point's corresponding point cloud data in the fourth point cloud data can be determined as follows:
For a point cloud datum of the fourth point cloud data, if that datum and the point's transformed point cloud data satisfy the following condition, that datum is determined to be the point's corresponding point cloud data in the fourth point cloud data. The condition is that the distance between the point's transformed point cloud data and that datum is less than a preset distance threshold. The distance between the two point cloud data may be the Euclidean metric, that is, the true distance between two points in a multi-dimensional (two or more dimensions) space; in two- and three-dimensional space the Euclidean distance is the actual distance between the two points.
For each of the M points, the distance between the point's transformed point cloud data and the point's corresponding point cloud data in the fourth point cloud data can be computed and taken as the point's residual.
Step 305: Compute the residual sum from the residuals of the obstacle's M points.
Step 306: Check whether the residual sum is less than the residual threshold; if not, go to step 307; if so, go to step 308.
In step 306, the residual threshold may be a preset value, for example $10^{-10}$.
Step 307: If the iteration count does not exceed the iteration-count threshold, update the third transformation matrix output by iteration i-1 by the preset transformation increment, take the updated matrix as the third transformation matrix output by iteration i, and return to step 303 to enter the next iteration.
Step 307 uses an iteration-count threshold: if the iteration count exceeds it, the process ends and no further iterations are performed. The threshold may be a preset value, used to prevent an excessive number of iterations.
In step 307, the preset transformation increment may include a preset rotation step and a preset translation step, and may be computed with the Gauss-Newton method.
Step 308: If the residual sum is less than the residual threshold, end the iterations and take the third transformation matrix output by iteration i-1 as the third transformation matrix output by iteration N. A compact sketch of this loop follows.
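The sketch below substitutes the closed-form SVD (Kabsch) update for the fixed-step Gauss-Newton update of step 307 and uses a k-d tree for the nearest-neighbor search of step 304; the synthetic data and thresholds are assumed for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Least-squares rigid transform mapping points A onto B (Kabsch/SVD)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, cB - R @ cA
    return T

def icp(P4: np.ndarray, P3: np.ndarray, T_init: np.ndarray = np.eye(4),
        max_iter: int = 50, res_thresh: float = 1e-10):
    """Align the fourth point cloud P4 onto the third point cloud P3,
    starting from T_init (the initial third transformation matrix)."""
    tree = cKDTree(P3)
    src = (T_init[:3, :3] @ P4.T).T + T_init[:3, 3]
    T = T_init.copy()
    for _ in range(max_iter):                    # the N iterations
        dist, idx = tree.query(src)              # correspondences (step 304)
        residual_sum = float((dist ** 2).sum())  # step 305
        if residual_sum < res_thresh:            # step 306 -> step 308
            break
        dT = best_fit_transform(src, P3[idx])    # update (step 307)
        src = (dT[:3, :3] @ src.T).T + dT[:3, 3]
        T = dT @ T
    return T, residual_sum

rng = np.random.default_rng(0)
P3 = rng.uniform(-5.0, 5.0, (200, 3))
P4 = P3 - np.array([0.3, -0.2, 0.0])    # same scene seen from the second frame
T, res = icp(P4, P3)
print(np.round(T[:3, 3], 3))            # approximately [0.3, -0.2, 0.0]
```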
In a possible implementation, the third transformation matrix output by the N-th iteration is used as the mathematical expression of the first relative pose. That is, it can serve as one mathematical expression of the first relative pose in the embodiments of this application. In the embodiments of this application, the first relative pose is the position and attitude of the second vehicle relative to the first vehicle, with the first vehicle as reference; this position and attitude can be represented by the third transformation matrix output by the N-th iteration. Expressing the first relative pose as a transformation matrix (for example, the matrix output by the N-th iteration, or the fourth transformation matrix mentioned below) is only one possible implementation; in specific embodiments, other mathematical expressions may also be used, such as quaternions, angle-axis, or Euler angles.
The third and fourth point cloud data used in the ICP iterations above are obtained by coordinate-system conversion from the first and second point cloud data, so the first and second point cloud data must be collected by the two lidars at the same instant. To guarantee this, the second vehicle's clock can be adjusted with the first vehicle's wireless clock source as the reference, so that the clocks of the first and second vehicles are synchronized. The first and second point cloud data each carry a timestamp, which can be represented in several ways; for example, the timestamp may be the increment between the current time and a preset time, such as the specified instant 00:00:00 on December 31, 2020. Frames can then be paired by timestamp, as in the sketch below.
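A small sketch of pairing frames by timestamp after clock synchronization follows; the 1 ms tolerance is an assumed value.

```python
# For each frame of the first vehicle, pick the second vehicle's frame with
# the closest timestamp, accepting the pair only within a tolerance.

def pair_frames(stamps_v1, stamps_v2, tol_s: float = 0.001):
    pairs = []
    for i, t1 in enumerate(stamps_v1):
        j = min(range(len(stamps_v2)), key=lambda k: abs(stamps_v2[k] - t1))
        if abs(stamps_v2[j] - t1) <= tol_s:
            pairs.append((i, j))
    return pairs

print(pair_frames([0.100, 0.200], [0.1002, 0.150, 0.1999]))  # [(0, 0), (1, 2)]
```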
In specific implementations, when the first and second vehicles are far apart, the ICP algorithm above may require too many iterations and can easily fall into a local extremum. To address this, the embodiments of this application may provide an initial value of the third transformation matrix for the ICP iterations, which can also be described as providing an initial value for the point cloud matching. This initial value is the third transformation matrix output by iteration i-1 when i is 1 in the content above, that is, the initial value can also be called the third transformation matrix output by the 0th iteration; the iterations above start from the first iteration.
A scheme for determining the initial value of the third transformation matrix is described below.
The initial value of the third transformation matrix provided in the embodiments of this application can be determined from the estimated global pose of the first vehicle and the global pose of the second vehicle. The second vehicle's GPS is intact, so its global pose is relatively accurate; the first vehicle's GPS has failed, so the embodiments of this application estimate the first vehicle's global pose by other means (how to compute the first vehicle's estimated global pose is described below). Because that value is an estimate, it is less accurate than the second vehicle's global pose. From the first vehicle's estimated global pose at the current time and the second vehicle's global pose at the current time, a second relative pose is determined. The second relative pose indicates the position and attitude of the helper object relative to the first vehicle, with the first vehicle as reference; it is the pose of the helper object determined in the first coordinate system. The matrix expressing the second relative pose is taken as the third transformation matrix output by iteration i-1, which can also be stated as: taking the matrix expressing the second relative pose as the initial value of the third transformation matrix. Although the computed initial value is not highly accurate, it is relatively close to the third transformation matrix output by the N-th iteration (which can be called the optimal solution). Running the ICP iterations from this initial value is thus like searching for the optimal solution from a point near it: the search range clearly shrinks, the number of iterations drops markedly, and the convergence speed and accuracy of the algorithm can be greatly improved.
Regarding the optimal solution mentioned in the embodiments of this application: the optimal solution is a basic concept of mathematical programming. It refers to a feasible solution that minimizes the objective function in a mathematical programming problem (or maximizes it, for a maximization problem). A feasible solution minimizing the objective is called a minimal solution; one maximizing it is called a maximal solution; either is called an optimal solution, and correspondingly the minimum or maximum of the objective function is called the optimal value. Sometimes the optimal solution and the optimal value together are called the optimal solution of the corresponding mathematical programming problem. The objective function here refers to formula (2) above, and solving formula (2) can be understood as solving for the optimal third transformation matrix.
The first vehicle's estimated global pose at the current time can be computed from the first vehicle's global pose at a previous time and, computed from the IMU with that previous global pose as reference, the first vehicle's pose at the current time relative to the previous time. Details follow.
Approach a1: The first vehicle's global pose at the previous time may be the most recent valid GPS-based global pose recorded by the system.
In a possible implementation, when GPS fails (for example, the GPS-based global pose error exceeds the error threshold, or the GPS positioning apparatus malfunctions), the system has recorded the most recent valid GPS-based global pose (the first vehicle's most recent valid GPS-based global pose can also be called its previous-frame GPS-based global pose). In approach a1, the time of that most recent valid recorded GPS-based global pose is called the first time, and the first vehicle's estimated global pose at the current time is obtained from: the first vehicle's previous-frame GPS-based global pose, and the relative pose between the first vehicle's pose at the current time and its pose at the first time.
There are several methods to compute the relative pose a vehicle accumulates over a time interval, for example based on a wheel odometer, a visual odometer, or an inertial measurement unit; data pre-integration is usually used to compute the relative pose over an interval. For convenience of description, the IMU method is used as the example below.
In approach a1, the first vehicle's pose change over the interval from the first time to the current time (which can also be called the relative pose between the first vehicle's poses at the current time and at the first time) can be computed from the IMU. Combining the first vehicle's GPS-based global pose at the first time with the IMU-computed relative pose from the first time to the current time yields the first vehicle's global pose at the current time.
In approach a1, the first vehicle's estimated global pose at the current time can be determined by formula (3):

$$T^{v_1}_{now} = T^{v_1}_{t_1} \cdot \Delta T^{v_1}_{t_1 \to now} \tag{3}$$

In formula (3), the first vehicle is numbered $v_1$ and the second vehicle is numbered $v_2$; $T^{v_1}_{t_1}$ is the first vehicle's GPS-based global pose at the first time; $\Delta T^{v_1}_{t_1 \to now}$ is the pose change of the current time relative to the first time, computed from the IMU with the first time as reference, which can also be called the relative pose between the first vehicle's pose at the current time and its pose at the first time; and $T^{v_1}_{now}$ is the first vehicle's estimated global pose at the current time.
Formula (3) can also be understood as follows: the relative pose $\Delta T^{v_1}_{t_1 \to now}$ between the first vehicle's pose at the current time and its global pose at the first time is computed from the IMU; then, from the first vehicle's GPS-based global pose $T^{v_1}_{t_1}$ at the first time and this relative pose, the first vehicle's estimated global pose $T^{v_1}_{now}$ at the current time can be computed.
In approach a1, the second vehicle obtains its global pose at the current time from GPS and the IMU. The second vehicle's GPS has not failed, but because the GPS-derived global pose itself has some latency, when positioning the second vehicle with GPS, the GPS-based information can be fused with information obtained from other techniques (such as the IMU), and the fused result taken as the second vehicle's global pose at the current time. This fusion of GPS-based information with information from other techniques (such as the IMU) for positioning can be called integrated positioning. Specifically, the time of the latest received GPS-based global pose of the second vehicle is first determined; then the distance the second vehicle may have traveled from that time to the current time is computed from the IMU; combining the two determines the second vehicle's global pose at the current time.
In approach a1, the initial value of the third transformation matrix between the first and second vehicles can be computed by formula (4):

$$T^{v_1}_{v_2}(0) = \left( T^{v_1}_{now} \right)^{-1} \cdot T^{v_2}_{now} \tag{4}$$

In formula (4), the first vehicle is numbered $v_1$ and the second vehicle is numbered $v_2$; $T^{v_1}_{now}$ is the first vehicle's estimated global pose at the current time; $T^{v_2}_{now}$ is the second vehicle's global pose at the current time; and $T^{v_1}_{v_2}(0)$ is the initial value of the third transformation matrix, which can also be described as the initial value of the pose change of the second vehicle relative to the first vehicle, with the first vehicle as reference. A sketch of formulas (3) and (4) follows.
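The sketch below expresses formulas (3) and (4) with 4x4 homogeneous pose matrices, under the assumed convention that a global pose maps vehicle-frame coordinates into the global frame, so that the relative pose of the second vehicle with respect to the first is inv(T_v1) @ T_v2.

```python
import numpy as np

def predict_global_pose(T_v1_t1: np.ndarray, dT_imu: np.ndarray) -> np.ndarray:
    """Formula (3): compose the GPS-based pose at the first time with the
    IMU-integrated relative pose from the first time to the current time."""
    return T_v1_t1 @ dT_imu

def initial_relative_pose(T_v1_now: np.ndarray, T_v2_now: np.ndarray) -> np.ndarray:
    """Formula (4): initial value of the third transformation matrix from the
    estimated pose of the first vehicle and the pose of the second vehicle."""
    return np.linalg.inv(T_v1_now) @ T_v2_now

T_v1_t1 = np.eye(4)                           # GPS pose at the first time
dT = np.eye(4); dT[:3, 3] = [2.0, 0.5, 0.0]   # assumed IMU-integrated motion
T_v1_now = predict_global_pose(T_v1_t1, dT)   # estimated current global pose
```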
Figure 5 exemplarily shows the computation of the current estimated global pose of vehicle A in Figure 1. As shown in Figure 5, with vehicle A as the first vehicle, vehicle A's estimated global pose at the current time can be determined from its global pose at the first time and its relative pose over the interval from the first time to the current time. Vehicle A's global pose at the first time corresponds to $T^{v_1}_{t_1}$ in formula (3), and its relative pose over the interval from the first time to the current time corresponds to $\Delta T^{v_1}_{t_1 \to now}$ in formula (3).
Approach a2: The first vehicle's global pose at the previous time may be the most recent global pose of the first vehicle obtained with the method of Figure 2 above.
In approach a2, in a possible implementation, before the present computation of the first vehicle's global pose, a global pose of the first vehicle has already been determined at least once according to the scheme of steps 310 to 313 above. With the time of the most recently computed global pose of the first vehicle called the second time, in an optional implementation the first vehicle's estimated global pose at the current time is obtained from: the most recently computed global pose of the first vehicle (that is, its global pose at the second time) as reference, and the pose change of the first vehicle at the current time relative to the second time, which can also be called the relative pose of the first vehicle between the second time and the current time.
In approach a2, the second vehicle obtains its global pose at the current time in the same way as in approach a1, which is not repeated here. The initial value of the third transformation matrix between the two vehicles can be computed from the first vehicle's current estimated global pose and the second vehicle's current global pose using formula (4) above, which is also not repeated.
Figure 5 likewise illustrates this computation: with vehicle A as the first vehicle, vehicle A's estimated global pose at the current time can be determined from its global pose at the second time and its relative pose over the interval from the second time to the current time. As shown in Figure 5, vehicle A's global pose at the second time may be a global pose of the first vehicle determined according to the scheme of steps 311 to 313 above, and its relative pose from the second time to the current time can be obtained from that previously computed global pose of the first vehicle together with the IMU.
Regarding when to select approach a1 versus approach a2: they can be chosen flexibly. The embodiments of this application provide one possible scheme: the first time the scheme of Figure 2 is used to compute the first vehicle's global pose after its GPS fails, approach a1 is used to compute the initial value of the third transformation matrix between the two vehicles; from the second time onward, approach a2 is used.
In a possible implementation, the third transformation matrix output by the N-th iteration is used as the mathematical expression of the first relative pose. In another possible implementation, the initial value of the third transformation matrix and the matrix output by the N-th iteration are combined by weighted fusion to obtain a fourth transformation matrix, which is used as the mathematical expression of the first relative pose. The weighted fusion may use Kalman filter fusion.
In the Kalman filter fusion of the N-th-iteration output and the initial value of the third transformation matrix, a weight is determined for each of the two, and the two are combined by weighted addition to obtain the fourth transformation matrix, which is taken as the mathematical expression of the first relative pose. The weight of the N-th-iteration output relates to its credibility, as does the weight of the initial value. Because the initial value is determined from the first vehicle's estimated global pose, its credibility is relatively low, while the credibility of the N-th-iteration output is relatively high. Accordingly, the first relative pose can be set as the observation model of the Kalman filter fusion (the observation model has higher credibility) and the initial value of the third transformation matrix as the prediction model (the prediction model has lower credibility).
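A sketch of this fusion follows. Instead of a full Kalman filter, it blends the two transforms directly, translation linearly and rotation by spherical interpolation; the 0.9 observation weight is an assumed value reflecting the higher credibility of the ICP output.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def fuse_transforms(T_pred: np.ndarray, T_obs: np.ndarray,
                    w_obs: float = 0.9) -> np.ndarray:
    """Weighted fusion of the predicted transform (initial value, low
    credibility) and the observed transform (ICP output, high credibility)."""
    rots = Rotation.from_matrix([T_pred[:3, :3], T_obs[:3, :3]])
    r = Slerp([0.0, 1.0], rots)(w_obs)                   # rotation blend
    t = (1.0 - w_obs) * T_pred[:3, 3] + w_obs * T_obs[:3, 3]
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = r.as_matrix(), t
    return T                                             # fourth transformation matrix
```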
The first vehicle's global pose can be determined by formula (5):

$$T^{v_1} = T^{v_2} \cdot \left( T^{v_1}_{v_2} \right)^{-1} \tag{5}$$

In formula (5), the first vehicle is numbered $v_1$ and the second vehicle is numbered $v_2$; $T^{v_1}$ is the first vehicle's global pose; $T^{v_2}$ is the second vehicle's global pose at the current time; and $T^{v_1}_{v_2}$ is the first relative pose.
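Under the same assumed convention as above (first relative pose = inv(T_v1) @ T_v2), formula (5) can be sketched as:

```python
import numpy as np

def first_vehicle_global_pose(T_v2_global: np.ndarray,
                              T_rel: np.ndarray) -> np.ndarray:
    """Formula (5): recover the first vehicle's global pose from the second
    vehicle's global pose and the fused first relative pose."""
    return T_v2_global @ np.linalg.inv(T_rel)
```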
The above describes the scheme where the helper object is a second vehicle and the lidars installed on the first and second vehicles assist in determining the first relative pose between them. Several other schemes for determining the first relative pose between the first vehicle and a helper object are described below.
As a possible embodiment, besides deploying the controller that performs vehicle positioning in the first vehicle, it can also be deployed on the helper object. In that case, the difference from the scheme of Figure 2 is that the controller can obtain the helper object's global pose directly, without step 312 above. On the other hand, after the controller determines the first vehicle's global pose, it must additionally send that global pose to the first vehicle. In this implementation, because the controller is deployed on the helper object, the computation load of the first vehicle, that is, the faulty vehicle, is reduced.
As another possible embodiment, the controller performing vehicle positioning may be deployed in the cloud. In that case, the difference from the scheme of Figure 2 is that the helper object may send its global pose to the cloud controller after receiving the first request of step 310. In another possible implementation, the first vehicle may send the first request to the cloud controller, which forwards the first request to the helper object; the helper object then sends its global pose to the cloud controller. In this implementation, after the controller computes the first vehicle's global pose, it must send that global pose to the first vehicle. Because the controller is deployed in the cloud, the computation load of the first vehicle or the second vehicle is reduced, lightening their burden.
In another possible implementation, besides the case in the method of Figure 2 where the helper object may be a second vehicle with a lidar, the first and second vehicles may also be equipped with cameras; the camera sensor may be a panoramic camera sensor. The first vehicle can capture two consecutive frames of a first image with its camera sensor; each image is in a two-dimensional coordinate system (for example, an X/Y coordinate system), and the feature points in the image have X-axis and Y-axis coordinates. Combining the two frames of the first image, the depth information of the feature points in the first image, that is, the Z-axis coordinate, can be determined. In other words, the first vehicle can obtain the X-, Y-, and Z-axis coordinates of an obstacle's feature point through the camera sensor; these coordinates form a set of vectors, in which case the point's corresponding point cloud data can be said to have been obtained. It can be seen that point cloud data corresponding to the first image can also be obtained through the first vehicle's camera sensor; similarly, point cloud data corresponding to a second image is obtained through the second vehicle's camera sensor. The two camera sensors have an overlapping shooting region, so point cloud matching can be performed on the point cloud data of a feature point of an obstacle in that overlapping region in the first image's point cloud data and in the second image's point cloud data, thereby determining the first relative pose between the first and second vehicles. For the scheme of matching two point clouds to determine the first relative pose, see the content above, which is not repeated here. Further, the first vehicle's global pose is determined from the first relative pose and the second vehicle's global pose. A triangulation sketch follows.
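The sketch below recovers feature-point depth from two frames using OpenCV's cv2.triangulatePoints; the projection matrices of the two camera poses and the matched pixel coordinates are assumed to be known (in practice they come from calibration and feature matching).

```python
import numpy as np
import cv2

def features_to_point_cloud(P1, P2, pts1, pts2):
    """Triangulate matched feature points from two frames.
    P1, P2: 3x4 projection matrices; pts1, pts2: (N, 2) pixel coordinates."""
    X = cv2.triangulatePoints(P1, P2,
                              np.asarray(pts1, float).T,
                              np.asarray(pts2, float).T)  # 4xN homogeneous
    return (X[:3] / X[3]).T                               # Nx3 point cloud

# Example with two assumed camera poses one meter apart along X:
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(features_to_point_cloud(P1, P2, [[0.0, 0.0]], [[-0.2, 0.0]]))  # ~[0,0,5]
```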
In another possible implementation, the helper object may be infrastructure equipped with a communication module, for example the streetlight with a communication module mentioned above, a roadside base station, or a camera equipped with a communication module.
When the helper object is infrastructure such as a communication-equipped streetlight or a base station, take the streetlight as the example: the first vehicle can receive the signal sent by the streetlight and judge the relative position between the streetlight and the first vehicle from the signal strength. The streetlight can send its own global position to the vehicle, and the first vehicle can then determine its own position from the relative position and the streetlight's position. Separately, the first vehicle can estimate its own attitude, for example: from the first vehicle's attitude at the previous time, and the attitude of the first vehicle at the current time relative to the previous time computed from the IMU with the previous attitude as reference, compute the first vehicle's attitude at the current time.
When the helper object is a camera equipped with a communication module, the camera can capture two frames of a third image containing the first vehicle; each image is in a two-dimensional coordinate system (for example, an X/Y coordinate system), and the first vehicle's feature points in the image have X-axis and Y-axis coordinates. Combining the two frames of the third image, the depth information of the first vehicle's feature points in the third image, that is, the Z-axis coordinate, can be determined. The X-, Y-, and Z-axis coordinates of the first vehicle's feature points can serve as one representation of the first vehicle's global attitude information.
The vehicle positioning method provided according to the embodiments of this application has been described in detail above with reference to Figures 1 to 5. The vehicle positioning controller and system provided according to the embodiments of this application are described below with reference to Figures 6 and 7.
Figure 6 is a schematic structural diagram of a vehicle positioning apparatus according to an embodiment of this application. As shown, the apparatus 1501 includes an obtaining unit 1502 and a computing unit 1503.
The obtaining unit 1502 is configured to obtain a first relative pose between the first vehicle and the helper object, and to obtain the helper object's global pose. The first relative pose indicates the position and attitude of the helper object relative to the first vehicle, with the first vehicle as reference; the first relative pose is the pose of the helper object determined in the first coordinate system; the global pose is the helper object's global pose determined in the second coordinate system.
The computing unit 1503 is configured to compute the first vehicle's global pose according to the first relative pose and the global pose. Thus, when the first vehicle's GPS signal is weak or absent, the first vehicle's global pose can be determined from the helper object's global pose and the first relative pose between the first vehicle and the helper object.
It should be understood that the apparatus 1501 of the embodiments of this application may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD); the PLD may be a complex programmable logical device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the vehicle positioning methods shown in Figures 2 to 5 are implemented by software, the apparatus 1501 and its modules may also be software modules.
In a possible implementation, the helper object is a second vehicle, the first relative pose indicates the position and attitude of the second vehicle relative to the first vehicle with the first vehicle as reference, and the attitude indicates the heading of the second vehicle's front with the first vehicle's front heading as reference. Thus, when the first vehicle's GPS signal is weak or absent, it can ask other vehicles for help, and the first vehicle's heading can be determined from the other vehicles' information, which in the intelligent-vehicle field can assist in implementing autonomous driving.
In a possible implementation, the obtaining unit 1502 is further configured to: obtain first point cloud data in the first coordinate system obtained by the first vehicle's lidar scanning objects around the first vehicle; obtain second point cloud data in the second coordinate system obtained by the second vehicle's lidar scanning objects around the second vehicle, where the two lidars have an overlapping scanning region and, for an obstacle in that region, both the first and second point cloud data include the obstacle's corresponding point cloud data; and compute the first relative pose from the obstacle's corresponding point cloud data in the first point cloud data and in the second point cloud data. Thus, point cloud matching on the point clouds scanned by the lidars can determine a fairly accurate relative pose between the two vehicles.
In a possible implementation, the obtaining unit 1502 is further configured to: convert the first point cloud data from the first coordinate system into the preset first reference coordinate system of the first vehicle to obtain third point cloud data, the first reference coordinate system being obtained by translating the origin of the first coordinate system to the preset first reference point; convert the second point cloud data from the second coordinate system into the preset second reference coordinate system of the second vehicle to obtain fourth point cloud data, the second reference coordinate system being obtained by translating the origin of the second coordinate system to the preset second reference point; and perform point cloud matching on the obstacle's corresponding point cloud data in the third and fourth point cloud data to obtain the first relative pose. Thus both point clouds are converted into each vehicle's own reference coordinate system, whose origins correspond to the same position on each vehicle (for example, each vehicle's rear-axle center), so the relative pose between the two vehicles can be determined fairly accurately.
In a possible implementation, the obtaining unit 1502 is further configured to: perform N iterations from the obstacle's corresponding point cloud data in the third and fourth point cloud data, obtaining the third transformation matrix output by the N-th iteration, N being a positive integer; and determine the first relative pose from the third transformation matrix output by the N-th iteration. For the i-th of the N iterations, i being a positive integer not greater than N: for each of the obstacle's M points, apply one transformation to the point's corresponding data in the third point cloud data according to the third transformation matrix output by iteration i-1 to obtain the point's transformed point cloud data, and compute the difference between the transformed data and the point's corresponding data in the fourth point cloud data to obtain the point's residual, M being a positive integer; compute the residual sum from the residuals of the obstacle's M points; if the residual sum is not less than the preset residual threshold, update the third transformation matrix output by iteration i-1 by the preset transformation increment, take the updated matrix as the third transformation matrix output by iteration i, and perform the next iteration; if the residual sum is less than the residual threshold, end the iterations and take the third transformation matrix output by iteration i-1 as the third transformation matrix output by iteration N.
In a possible implementation, the obtaining unit 1502 is further configured to, when i is 1, determine the third transformation matrix output by iteration i-1 as follows: from the first vehicle's global pose at the previous time and, computed from the IMU with the previous global pose as reference, the first vehicle's pose at the current time relative to the previous time, compute the first vehicle's estimated global pose at the current time; obtain the second vehicle's global pose at the current time based on the global positioning system GPS and the IMU; from the first vehicle's current estimated global pose and the second vehicle's current global pose, determine a second relative pose indicating the position and attitude of the helper object relative to the first vehicle with the first vehicle as reference, the second relative pose being the helper object's pose determined in the first coordinate system; and take the matrix expressing the second relative pose as the third transformation matrix output by iteration i-1.
In a possible implementation, the obtaining unit 1502 is further configured to take the third transformation matrix output by the N-th iteration as the mathematical expression of the first relative pose. In another possible implementation, the obtaining unit 1502 is further configured to perform weighted fusion of the third transformation matrix output by iteration i-1 and the third transformation matrix output by the N-th iteration to obtain a fourth transformation matrix, and take the fourth transformation matrix as the mathematical expression of the first relative pose.
In a possible implementation, the apparatus 1501 further includes a determining unit 1504, configured to determine that the error of the first vehicle's GPS-based global pose is greater than the error threshold, or that the first vehicle's GPS positioning apparatus has failed.
In a possible implementation, the obtaining unit 1502 is further configured to obtain the first vehicle's global pose corresponding to each of multiple helper objects, and the computing unit 1503 is further configured to perform weighted fusion of the multiple global poses of the first vehicle corresponding to the multiple helper objects to obtain the first vehicle's target global pose.
The apparatus 1501 further includes a sending unit 1505 and a receiving unit 1506. The sending unit 1505 is configured to send the first request to the helper object, the first request carrying the first vehicle's identifier. The receiving unit 1506 is configured to receive the helper object's global pose sent by the helper object.
The apparatus 1501 according to the embodiments of this application may correspondingly perform the methods described in the embodiments of this application, and the above and other operations and/or functions of the units in the apparatus 1501 respectively implement the corresponding flows of the methods in Figures 2 to 5; for brevity, details are not repeated here.
Figure 7 is a schematic structural diagram of a controller according to an embodiment of this application. As shown, the controller 1301 includes a processor 1302, a memory 1304, and a communication interface 1303. The processor 1302, the memory 1304, and the communication interface 1303 may communicate over a bus 1305, or may communicate by other means such as wireless transmission. The memory 1304 is configured to store instructions, and the processor 1302 is configured to execute the instructions stored in the memory 1304. The memory 1304 stores program code, and the processor 1302 can invoke the program code stored in the memory 1304 to perform the following operations:
The processor 1302 is configured to obtain the first relative pose between the first vehicle and the helper object and to obtain the helper object's global pose; the first relative pose indicates the position and attitude of the helper object relative to the first vehicle with the first vehicle as reference and is the helper object's pose determined in the first coordinate system; the global pose is the helper object's global pose determined in the second coordinate system. The processor 1302 is configured to compute the first vehicle's global pose according to the first relative pose and the global pose. Thus, when the first vehicle's GPS signal is weak or absent, the first vehicle's global pose can be determined from the helper object's global pose and the first relative pose between the first vehicle and the helper object.
It should be understood that in the embodiments of this application the processor 1302 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 1304 may include read-only memory and random access memory, and provides instructions and data to the processor 1302. The memory 1304 may also include non-volatile random access memory; for example, the memory 1304 may also store device-type information.
The memory 1304 may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
Besides a data bus, the bus 1305 may also include a power bus, a control bus, a status-signal bus, and so on; for clarity of description, the various buses are all labeled as bus 1305 in the figure.
It should be understood that the vehicle positioning controller 1301 according to the embodiments of this application may correspond to the vehicle positioning apparatus 1501 in the embodiments of this application and may correspond to the corresponding body performing the methods according to the embodiments of this application, and the above and other operations and/or functions of the modules in the controller 1301 respectively implement the corresponding flows of the methods in Figures 2 to 5; for brevity, details are not repeated here.
This application further provides a vehicle that includes the controller 1301 shown in Figure 7. The vehicle may correspond to the first vehicle in the method shown in Figure 2 and is used to implement the operation steps of the methods in Figures 2 to 5; for brevity, details are not repeated here.
This application further provides a vehicle that includes the controller 1301 shown in Figure 7. In this case, the vehicle may be a helper vehicle, used to implement, by means of the operation steps of the methods shown in Figures 2 to 5, the process of assisting the positioning of a vehicle with a GPS fault.
This application further provides a system that includes a first vehicle and a cloud computing platform, the cloud computing platform being used to implement the process in the above methods in which the cloud computing platform assists the positioning of a first vehicle whose GPS has failed.
The above embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of this application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disc (SSD)).
The foregoing is merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or replacements within the technical scope disclosed in this application, and these modifications or replacements shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (17)

  1. A vehicle positioning method, comprising:
    obtaining a first relative pose between a first vehicle and a helper object, wherein the first relative pose indicates a position and attitude of the helper object relative to the first vehicle with the first vehicle as reference, and the first relative pose is a pose of the helper object determined in a first coordinate system;
    obtaining a global pose of the helper object, wherein the global pose is a global pose of the helper object determined in a second coordinate system; and
    computing a global pose of the first vehicle according to the first relative pose and the global pose.
  2. The method according to claim 1, wherein the helper object comprises a second vehicle, the first relative pose indicates a position and attitude of the second vehicle relative to the first vehicle with the first vehicle as reference, and the attitude indicates a heading of the second vehicle's front with the first vehicle's front heading as reference.
  3. The method according to claim 2, wherein the obtaining a first relative pose between a first vehicle and a helper object comprises:
    obtaining first point cloud data in the first coordinate system obtained by a lidar of the first vehicle scanning objects around the first vehicle;
    obtaining second point cloud data in the second coordinate system obtained by a lidar of the second vehicle scanning objects around the second vehicle, wherein the lidar of the first vehicle and the lidar of the second vehicle have an overlapping scanning region, and, for an obstacle in the overlapping scanning region, the first point cloud data comprises point cloud data corresponding to the obstacle and the second point cloud data comprises point cloud data corresponding to the obstacle; and
    computing the first relative pose according to the point cloud data corresponding to the obstacle in the first point cloud data and the point cloud data corresponding to the obstacle in the second point cloud data.
  4. The method according to claim 3, wherein computing the first relative pose according to the point cloud data corresponding to the obstacle in the first point cloud data and the point cloud data corresponding to the obstacle in the second point cloud data comprises:
    converting the first point cloud data from the first coordinate system into a preset first reference coordinate system of the first vehicle to obtain third point cloud data, wherein the first reference coordinate system is a coordinate system obtained by translating the origin of the first coordinate system to a preset first reference point;
    converting the second point cloud data from the second coordinate system into a preset second reference coordinate system of the second vehicle to obtain fourth point cloud data, wherein the second reference coordinate system is a coordinate system obtained by translating the origin of the second coordinate system to a preset second reference point; and
    performing point cloud matching on the point cloud data corresponding to the obstacle in the third point cloud data and the point cloud data corresponding to the obstacle in the fourth point cloud data to obtain the first relative pose.
  5. The method according to claim 4, wherein performing point cloud matching on the point cloud data corresponding to the obstacle in the third point cloud data and the point cloud data corresponding to the obstacle in the fourth point cloud data to obtain the first relative pose comprises:
    performing N iterations according to the point cloud data corresponding to the obstacle in the third point cloud data and the point cloud data corresponding to the obstacle in the fourth point cloud data, to obtain a third transformation matrix output by the N-th iteration, N being a positive integer; and
    determining the first relative pose according to the third transformation matrix output by the N-th iteration;
    wherein, for the i-th iteration of the N iterations, i being a positive integer not greater than N:
    for each of M points of the obstacle: performing one transformation on the point cloud data corresponding to the point in the third point cloud data according to the third transformation matrix output by the (i-1)-th iteration, to obtain transformed point cloud data corresponding to the point; and computing the difference between the transformed point cloud data corresponding to the point and the point cloud data corresponding to the point in the fourth point cloud data, to obtain a residual corresponding to the point, M being a positive integer;
    computing a residual sum according to the residuals corresponding to the M points of the obstacle;
    if the residual sum is not less than a preset residual threshold, updating the third transformation matrix output by the (i-1)-th iteration by a preset transformation increment, taking the updated third transformation matrix output by the (i-1)-th iteration as the third transformation matrix output by the i-th iteration, and performing the next iteration;
    if the residual sum is less than the residual threshold, ending the iterations and taking the third transformation matrix output by the (i-1)-th iteration as the third transformation matrix output by the N-th iteration.
  6. The method according to claim 5, wherein when i is 1, the third transformation matrix output by the (i-1)-th iteration is determined as follows:
    computing an estimated global pose of the first vehicle at the current time according to the global pose of the first vehicle at a previous time and, computed based on an IMU with the global pose of the first vehicle at the previous time as reference, the pose of the first vehicle at the current time relative to the previous time;
    obtaining the global pose of the second vehicle at the current time based on the global positioning system GPS and the IMU;
    determining a second relative pose according to the estimated global pose of the first vehicle at the current time and the global pose of the second vehicle at the current time, wherein the second relative pose indicates a position and attitude of the helper object relative to the first vehicle with the first vehicle as reference, and the second relative pose is a pose of the helper object determined in the first coordinate system; and
    taking a matrix expressing the second relative pose as the third transformation matrix output by the (i-1)-th iteration.
  7. The method according to claim 5 or 6, wherein determining the first relative pose according to the third transformation matrix output by the N-th iteration further comprises:
    taking the third transformation matrix output by the N-th iteration as the mathematical expression of the first relative pose;
    or
    performing weighted fusion of the third transformation matrix output by the (i-1)-th iteration and the third transformation matrix output by the N-th iteration to obtain a fourth transformation matrix, and taking the fourth transformation matrix as the mathematical expression of the first relative pose.
  8. A vehicle positioning apparatus, comprising:
    an obtaining unit, configured to obtain a first relative pose between a first vehicle and a helper object, and obtain a global pose of the helper object, wherein the first relative pose indicates a position and attitude of the helper object relative to the first vehicle with the first vehicle as reference; the first relative pose is a pose of the helper object determined in a first coordinate system; and the global pose is a global pose of the helper object determined in a second coordinate system; and
    a computing unit, configured to compute a global pose of the first vehicle according to the first relative pose and the global pose.
  9. The apparatus according to claim 8, wherein the helper object comprises a second vehicle, the first relative pose indicates a position and attitude of the second vehicle relative to the first vehicle with the first vehicle as reference, and the attitude indicates a heading of the second vehicle's front with the first vehicle's front heading as reference.
  10. The apparatus according to claim 9, wherein the obtaining unit is further configured to:
    obtain first point cloud data in the first coordinate system obtained by a lidar of the first vehicle scanning objects around the first vehicle;
    obtain second point cloud data in the second coordinate system obtained by a lidar of the second vehicle scanning objects around the second vehicle, wherein the lidar of the first vehicle and the lidar of the second vehicle have an overlapping scanning region, and, for an obstacle in the overlapping scanning region, the first point cloud data comprises point cloud data corresponding to the obstacle and the second point cloud data comprises point cloud data corresponding to the obstacle; and
    compute the first relative pose according to the point cloud data corresponding to the obstacle in the first point cloud data and the point cloud data corresponding to the obstacle in the second point cloud data.
  11. The apparatus according to claim 10, wherein the obtaining unit is further configured to:
    convert the first point cloud data from the first coordinate system into a preset first reference coordinate system of the first vehicle to obtain third point cloud data, wherein the first reference coordinate system is a coordinate system obtained by translating the origin of the first coordinate system to a preset first reference point;
    convert the second point cloud data from the second coordinate system into a preset second reference coordinate system of the second vehicle to obtain fourth point cloud data, wherein the second reference coordinate system is a coordinate system obtained by translating the origin of the second coordinate system to a preset second reference point; and
    perform point cloud matching on the point cloud data corresponding to the obstacle in the third point cloud data and the point cloud data corresponding to the obstacle in the fourth point cloud data to obtain the first relative pose.
  12. The apparatus according to claim 11, wherein the obtaining unit is further configured to:
    perform N iterations according to the point cloud data corresponding to the obstacle in the third point cloud data and the point cloud data corresponding to the obstacle in the fourth point cloud data, to obtain a third transformation matrix output by the N-th iteration, N being a positive integer; and
    determine the first relative pose according to the third transformation matrix output by the N-th iteration;
    wherein, for the i-th iteration of the N iterations, i being a positive integer not greater than N:
    for each of M points of the obstacle: perform one transformation on the point cloud data corresponding to the point in the third point cloud data according to the third transformation matrix output by the (i-1)-th iteration, to obtain transformed point cloud data corresponding to the point; and compute the difference between the transformed point cloud data corresponding to the point and the point cloud data corresponding to the point in the fourth point cloud data, to obtain a residual corresponding to the point, M being a positive integer;
    compute a residual sum according to the residuals corresponding to the M points of the obstacle;
    if the residual sum is not less than a preset residual threshold, update the third transformation matrix output by the (i-1)-th iteration by a preset transformation increment, take the updated third transformation matrix output by the (i-1)-th iteration as the third transformation matrix output by the i-th iteration, and perform the next iteration;
    if the residual sum is less than the residual threshold, end the iterations and take the third transformation matrix output by the (i-1)-th iteration as the third transformation matrix output by the N-th iteration.
  13. The apparatus according to claim 12, wherein the obtaining unit is specifically configured to:
    when i is 1, determine the third transformation matrix output by the (i-1)-th iteration as follows:
    compute an estimated global pose of the first vehicle at the current time according to the global pose of the first vehicle at a previous time and, computed based on an IMU with the global pose of the first vehicle at the previous time as reference, the pose of the first vehicle at the current time relative to the previous time;
    obtain the global pose of the second vehicle at the current time based on the global positioning system GPS and the IMU;
    determine a second relative pose according to the estimated global pose of the first vehicle at the current time and the global pose of the second vehicle at the current time, wherein the second relative pose indicates a position and attitude of the helper object relative to the first vehicle with the first vehicle as reference, and the second relative pose is a pose of the helper object determined in the first coordinate system; and
    take a matrix expressing the second relative pose as the third transformation matrix output by the (i-1)-th iteration.
  14. The apparatus according to claim 12 or 13, wherein the obtaining unit is further configured to:
    take the third transformation matrix output by the N-th iteration as the mathematical expression of the first relative pose;
    or
    perform weighted fusion of the third transformation matrix output by the (i-1)-th iteration and the third transformation matrix output by the N-th iteration to obtain a fourth transformation matrix, and take the fourth transformation matrix as the mathematical expression of the first relative pose.
  15. A vehicle positioning controller, comprising a processor and a memory, wherein the memory is configured to store computer-executable instructions, and when the controller runs, the processor executes the computer-executable instructions in the memory to use the hardware resources in the controller to perform the operation steps of the method according to any one of claims 1 to 7.
  16. An intelligent vehicle, comprising the vehicle positioning controller of claim 15.
  17. A system, comprising a first vehicle and a helper object, wherein the first vehicle is configured to perform the operation steps performed by the first vehicle in the method according to any one of claims 1 to 7, and the helper object is configured to perform the operation steps performed by the second vehicle in the method according to any one of claims 1 to 7.
PCT/CN2020/125761 2020-01-14 2020-11-02 Vehicle positioning method and apparatus, controller, intelligent vehicle, and system WO2021143286A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20914151.4A EP4080248A4 (en) 2020-01-14 2020-11-02 VEHICLE POSITIONING METHOD AND APPARATUS, CONTROLLER, SMART CAR AND SYSTEM
US17/864,998 US20220371602A1 (en) 2020-01-14 2022-07-14 Vehicle positioning method, apparatus, and controller, intelligent vehicle, and system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010038272.6 2020-01-14
CN202010038272 2020-01-14
CN202010177804.4 2020-03-13
CN202010177804.4A CN111413721B (zh) 2020-01-14 2020-03-13 Vehicle positioning method and apparatus, controller, intelligent vehicle, and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/864,998 Continuation US20220371602A1 (en) 2020-01-14 2022-07-14 Vehicle positioning method, apparatus, and controller, intelligent vehicle, and system

Publications (1)

Publication Number Publication Date
WO2021143286A1 true WO2021143286A1 (zh) 2021-07-22

Family

ID=71491015

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125761 WO2021143286A1 (zh) 2020-01-14 2020-11-02 车辆定位的方法、装置、控制器、智能车和系统

Country Status (4)

Country Link
US (1) US20220371602A1 (zh)
EP (1) EP4080248A4 (zh)
CN (1) CN111413721B (zh)
WO (1) WO2021143286A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114061573A (zh) * 2021-11-16 2022-02-18 中国人民解放军陆军工程大学 Ground unmanned vehicle formation positioning apparatus and method
CN114563795A (zh) * 2022-02-25 2022-05-31 湖南大学无锡智能控制研究院 Positioning and tracking method and system based on laser odometer and tag fusion algorithm
CN116559928A (zh) * 2023-07-11 2023-08-08 新石器慧通(北京)科技有限公司 Method, apparatus, device, and storage medium for determining pose information of a lidar
WO2024065173A1 (en) * 2022-09-27 2024-04-04 Apollo Intelligent Driving Technology (Beijing) Co., Ltd. Cloud based scanning for detection of sensors malfunction for autonomous vehicles

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111413721B (zh) * 2020-01-14 2022-07-19 华为技术有限公司 Vehicle positioning method and apparatus, controller, intelligent vehicle, and system
CN112541475B (zh) * 2020-12-24 2024-01-19 北京百度网讯科技有限公司 Perception data detection method and apparatus
CN113359167A (zh) * 2021-04-16 2021-09-07 电子科技大学 Method for fused GPS and lidar positioning via inertial measurement parameters
CN113758491B (zh) * 2021-08-05 2024-02-23 重庆长安汽车股份有限公司 Relative positioning method and system for unmanned vehicles based on multi-sensor fusion, and vehicle
CN113675923B (zh) * 2021-08-23 2023-08-08 追觅创新科技(苏州)有限公司 Charging method, charging apparatus, and robot
CN113899363B (zh) 2021-09-29 2022-10-21 北京百度网讯科技有限公司 Vehicle positioning method and apparatus, and autonomous driving vehicle
US20230134107A1 (en) * 2021-11-03 2023-05-04 Toyota Research Institute, Inc. Systems and methods for improving localization accuracy by sharing mutual localization information
US20230213633A1 (en) * 2022-01-06 2023-07-06 GM Global Technology Operations LLC Aggregation-based lidar data alignment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070208507A1 (en) * 2006-03-03 2007-09-06 Denso Corporation Current position sensing system, map display system and current position sensing method
CN106842271A (zh) * 2015-12-03 2017-06-13 宁波芯路通讯科技有限公司 Navigation positioning method and apparatus
CN108917762A (zh) * 2018-05-16 2018-11-30 珠海格力电器股份有限公司 Method, system, storage medium, and home system for positioning electrical appliances
CN109932741A (zh) * 2017-12-19 2019-06-25 阿里巴巴集团控股有限公司 Positioning method, positioning device, positioning system, computing device, and storage medium
CN110609290A (zh) * 2019-09-19 2019-12-24 北京智行者科技有限公司 Lidar matching positioning method and apparatus
CN111413721A (zh) * 2020-01-14 2020-07-14 华为技术有限公司 Vehicle positioning method and apparatus, controller, intelligent vehicle, and system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10404962B2 (en) * 2015-09-24 2019-09-03 Intel Corporation Drift correction for camera tracking
JP2019527832A (ja) * 2016-08-09 2019-10-03 ナウト, インコーポレイテッドNauto, Inc. 正確な位置特定およびマッピングのためのシステムおよび方法
CN109118794A (zh) * 2017-06-22 2019-01-01 中兴通讯股份有限公司 Vehicle positioning method, apparatus, and terminal device
CN109523581B (zh) * 2017-09-19 2021-02-23 华为技术有限公司 Three-dimensional point cloud alignment method and apparatus
CN108345020B (zh) * 2018-02-09 2020-08-18 长沙智能驾驶研究院有限公司 Vehicle positioning method and system, and computer-readable storage medium
US10551477B2 (en) * 2018-03-28 2020-02-04 Qualcomm Incorporated Method and apparatus for V2X assisted positioning determination using positioning reference signal signals
CN110333524A (zh) * 2018-03-30 2019-10-15 北京百度网讯科技有限公司 Vehicle positioning method, apparatus, and device
US11294060B2 (en) * 2018-04-18 2022-04-05 Faraday & Future Inc. System and method for lidar-based vehicular localization relating to autonomous navigation
CN109059902B (zh) * 2018-09-07 2021-05-28 百度在线网络技术(北京)有限公司 Relative pose determination method, apparatus, device, and medium
CN109540148B (zh) * 2018-12-04 2020-10-16 广州小鹏汽车科技有限公司 SLAM-map-based positioning method and system
CN110007300B (zh) * 2019-03-28 2021-08-06 东软睿驰汽车技术(沈阳)有限公司 Method and apparatus for obtaining point cloud data
CN110221276B (zh) * 2019-05-31 2023-09-29 文远知行有限公司 Lidar calibration method and apparatus, computer device, and storage medium
CN110398729B (zh) * 2019-07-16 2022-03-15 启迪云控(北京)科技有限公司 Vehicle positioning method and system based on Internet of Vehicles

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070208507A1 (en) * 2006-03-03 2007-09-06 Denso Corporation Current position sensing system, map display system and current position sensing method
CN106842271A (zh) * 2015-12-03 2017-06-13 Navigation positioning method and apparatus
CN109932741A (zh) * 2017-12-19 2019-06-25 Positioning method, positioning device, positioning system, computing device, and storage medium
CN108917762A (zh) * 2018-05-16 2018-11-30 Method, system, storage medium, and home system for positioning electrical appliances
CN110609290A (zh) * 2019-09-19 2019-12-24 Lidar matching positioning method and apparatus
CN111413721A (zh) * 2020-01-14 2020-07-14 Vehicle positioning method and apparatus, controller, intelligent vehicle, and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4080248A4

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114061573A (zh) * 2021-11-16 2022-02-18 中国人民解放军陆军工程大学 Ground unmanned vehicle formation positioning apparatus and method
CN114061573B (zh) * 2021-11-16 2024-03-22 中国人民解放军陆军工程大学 Ground unmanned vehicle formation positioning apparatus and method
CN114563795A (zh) * 2022-02-25 2022-05-31 湖南大学无锡智能控制研究院 Positioning and tracking method and system based on laser odometer and tag fusion algorithm
CN114563795B (zh) * 2022-02-25 2023-01-17 湖南大学无锡智能控制研究院 Positioning and tracking method and system based on laser odometer and tag fusion algorithm
WO2024065173A1 (en) * 2022-09-27 2024-04-04 Apollo Intelligent Driving Technology (Beijing) Co., Ltd. Cloud based scanning for detection of sensors malfunction for autonomous vehicles
CN116559928A (zh) * 2023-07-11 2023-08-08 新石器慧通(北京)科技有限公司 Method, apparatus, device, and storage medium for determining pose information of a lidar
CN116559928B (zh) * 2023-07-11 2023-09-22 新石器慧通(北京)科技有限公司 Method, apparatus, device, and storage medium for determining pose information of a lidar

Also Published As

Publication number Publication date
CN111413721A (zh) 2020-07-14
EP4080248A1 (en) 2022-10-26
CN111413721B (zh) 2022-07-19
US20220371602A1 (en) 2022-11-24
EP4080248A4 (en) 2023-06-21

Similar Documents

Publication Publication Date Title
WO2021143286A1 (zh) Vehicle positioning method and apparatus, controller, intelligent vehicle, and system
CN110148185B (zh) Method, apparatus, and electronic device for determining coordinate-system conversion parameters of an imaging device
CN110146869B (zh) Method, apparatus, electronic device, and storage medium for determining coordinate-system conversion parameters
EP3627180B1 (en) Sensor calibration method and device, computer device, medium, and vehicle
US11915099B2 (en) Information processing method, information processing apparatus, and recording medium for selecting sensing data serving as learning data
CN108828527B (zh) Multi-sensor data fusion method and apparatus, in-vehicle device, and storage medium
CN107636679B (zh) Obstacle detection method and apparatus
CN110033489B (zh) Method, apparatus, and device for evaluating vehicle positioning accuracy
US20150142248A1 (en) Apparatus and method for providing location and heading information of autonomous driving vehicle on road within housing complex
US11427218B2 (en) Control apparatus, control method, program, and moving body
WO2020232648A1 (zh) Lane line detection method, electronic device, and storage medium
WO2017057042A1 (ja) Signal processing apparatus, signal processing method, program, and object detection system
CN109300143B (zh) Method, apparatus, device, storage medium, and vehicle for determining a motion vector field
CN109849930B (zh) Method and apparatus for computing the speed of vehicles adjacent to an autonomous vehicle
KR101880185B1 (ko) Electronic device for moving-body pose estimation and its moving-body pose estimation method
CN114111774B (zh) Vehicle positioning method, system, and device, and computer-readable storage medium
TWI604980B (zh) Vehicle control system and vehicle control method
CN110766761B (zh) Method, apparatus, device, and storage medium for camera calibration
CN111563450A (zh) Data processing method, apparatus, device, and storage medium
CN116449392B (zh) Map construction method and apparatus, computer device, and storage medium
WO2023226155A1 (zh) Multi-source data fusion positioning method, apparatus, device, and computer storage medium
CN113295159B (zh) Device-cloud-fused positioning method and apparatus, and computer-readable storage medium
CN112689234A (zh) Indoor vehicle positioning method and apparatus, computer device, and storage medium
CN113312403B (zh) Map acquisition method and apparatus, electronic device, and storage medium
WO2022037370A1 (zh) Motion estimation method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20914151

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020914151

Country of ref document: EP

Effective date: 20220722

NENP Non-entry into the national phase

Ref country code: DE