CN115127561A - Object positioning method, device, automatic driving vehicle and edge computing platform - Google Patents

Object positioning method, device, automatic driving vehicle and edge computing platform

Info

Publication number
CN115127561A
CN115127561A
Authority
CN
China
Prior art keywords
data
target object
determining
adjusted
positioning
Prior art date
Legal status
Pending
Application number
CN202210763841.2A
Other languages
Chinese (zh)
Inventor
刘文杰
邱笑晨
程风
蔡仁澜
万国伟
彭亮
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210763841.2A priority Critical patent/CN115127561A/en
Publication of CN115127561A publication Critical patent/CN115127561A/en

Classifications

    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/165: Dead reckoning by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments
    • G01C21/1652: Inertial navigation combined with ranging devices, e.g. LIDAR or RADAR
    • G01C21/1656: Inertial navigation combined with passive imaging devices, e.g. cameras
    • G01C21/20: Instruments for performing navigational calculations
    • G01S17/06: Systems determining position data of a target
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S19/45: Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47: Determining position with GNSS, the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The disclosure provides an object positioning method, and relates to the field of artificial intelligence, in particular to the fields of automatic driving, driver assistance, positioning, and high-precision maps. The implementation scheme is as follows: obtaining second positioning data of a target object in a second coordinate system according to attitude information of the target object and first positioning data of the target object in a first coordinate system; determining information to be processed of the target object in the second coordinate system according to the second positioning data; filtering the information to be processed to obtain target positioning data; and determining the position of the target object according to the target positioning data. The present disclosure also provides an object positioning apparatus, an autonomous vehicle, and an edge computing platform.

Description

Object positioning method, device, automatic driving vehicle and edge computing platform
Technical Field
The present disclosure relates to the field of artificial intelligence technology, and more particularly to the fields of automatic driving, driver assistance, positioning technology, and high-precision map technology. Specifically, the present disclosure provides an object positioning method, an apparatus, an electronic device, a storage medium, a computer program product, an edge computing platform, and an autonomous vehicle.
Background
With the development of artificial intelligence technology and high-precision map technology, the application scenarios of automatic driving technology and driver-assistance technology are increasing. In automatic driving mode or driver-assistance mode, a vehicle may be controlled to travel according to its position.
Disclosure of Invention
The present disclosure provides an object positioning method, apparatus, electronic device, storage medium, computer program product, edge computing platform and autonomous vehicle.
According to an aspect of the present disclosure, there is provided an object positioning method, including: obtaining second positioning data of the target object in a second coordinate system according to the attitude information of the target object and the first positioning data of the target object in the first coordinate system; determining information to be processed of the target object in a second coordinate system according to the second positioning data; filtering the information to be processed to obtain target positioning data; and determining the position of the target object according to the target positioning data.
According to another aspect of the present disclosure, there is provided an object positioning apparatus, including: the obtaining module is used for obtaining second positioning data of the target object in a second coordinate system according to the attitude information of the target object and the first positioning data of the target object in the first coordinate system; the first determining module is used for determining to-be-processed information of the target object in a second coordinate system according to the second positioning data; the filtering processing module is used for filtering the information to be processed to obtain target positioning data; and the second determining module is used for determining the position of the target object according to the target positioning data.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform methods provided in accordance with the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform a method provided according to the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements a method provided according to the present disclosure.
According to another aspect of the present disclosure, an edge computing platform is provided, comprising a plurality of edge computing units, the edge computing units comprising the electronic device provided by the present disclosure.
According to another aspect of the present disclosure, an autonomous vehicle is provided that includes an electronic device provided by the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of an exemplary system architecture to which the object location method and apparatus may be applied, according to one embodiment of the present disclosure;
FIG. 2 is a flow diagram of an object location method according to one embodiment of the present disclosure;
FIG. 3 is a flow diagram of obtaining second positioning data, according to one embodiment of the present disclosure;
FIG. 4 is a flow diagram of obtaining target positioning data according to one embodiment of the present disclosure;
FIG. 5A is an exemplary scenario diagram according to one embodiment of the present disclosure;
FIG. 5B is an exemplary diagram of a location of a target object according to one embodiment of the present disclosure;
FIG. 6 is an exemplary diagram of a location of a target object according to another embodiment of the present disclosure;
FIG. 7 is a block diagram of an object locating device according to one embodiment of the present disclosure; and
FIG. 8 is a block diagram of an electronic device to which an object positioning method may be applied according to one embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In scenarios such as automatic driving or driver assistance, a positioning system is required to output continuous, high-frequency, and accurate positioning results in real time to ensure the normal operation of modules such as path planning and perception. The positioning system may be a multi-sensor fusion system that obtains a high-precision fused positioning result by fully exploiting the absolute and relative positioning results of sensors such as a GNSS (Global Navigation Satellite System) receiver, LiDAR (Light Detection and Ranging), a camera, and an IMU (Inertial Measurement Unit). The fusion positioning method adopted by the positioning system may be based on Kalman filtering.
In some embodiments, a fusion localization method may include: predicting (by recursion) the position, velocity, attitude, and corresponding covariance using data collected by the IMU to obtain a prediction result; forming measurements from data collected by sensors such as the GNSS receiver, LiDAR, and camera to obtain a measurement result; and updating with the measurement result to correct the error of the prediction result, so that the positioning system can always output a high-precision positioning result in real time.
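As an illustrative aid, the following is a minimal sketch of the predict/update cycle described above, assuming a generic linear(ized) Kalman filter; the matrices F, Q, H, R and the state layout are assumptions for illustration, not the exact design of this disclosure:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """IMU-driven recursion: propagate the state estimate and its covariance."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Correct the prediction with a sensor measurement (GNSS/LiDAR/camera)."""
    innovation = z - H @ x              # measurement residual
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ innovation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```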
The fusion localization method may involve two coordinate systems. One is the navigation coordinate system, i.e., the ENU (East-North-Up) coordinate system. The other is the carrier coordinate system, which may take the lateral and longitudinal directions of an object (e.g., a vehicle) as its X and Y axes, respectively; the Z axis completes a right-handed rectangular coordinate system with the X and Y axes. For example, the direction from the rear of the vehicle to the front of the vehicle may be the positive direction of the Y axis.
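For illustration, one possible construction of the carrier-to-navigation rotation $C_b^n$ from yaw, pitch, and roll is sketched below; the rotation order and sign conventions are assumptions (they vary between implementations) and are not fixed by this disclosure:

```python
import numpy as np

def rot_z(a):
    """Rotation about the Up axis (heading)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    """Rotation about the lateral axis (pitch)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    """Rotation about the longitudinal axis (roll)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def C_b_to_n(yaw, pitch, roll):
    """Carrier -> navigation (ENU) rotation under one common z-x-y convention."""
    return rot_z(yaw) @ rot_x(pitch) @ rot_y(roll)
```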
In order to obtain a high-precision positioning result in the global frame, the prediction and measurement of the fusion positioning method are generally performed in the navigation coordinate system. For special positioning scenarios such as tunnels or roads with high roadside barriers, the data collected by LiDAR may be used for positioning. Special positioning scenarios may also include scenarios in which positioning is performed against lane lines, and the like.
In these special positioning scenarios, the measurement accuracy is tied to the lateral and longitudinal directions of the carrier coordinate system: the lateral constraint is strong and accurate, while the longitudinal constraint is weak and inaccurate. If such a measurement is used for the update in the navigation coordinate system, the lateral and longitudinal measurements are projected onto the east and north directions and influence each other, so the high-precision lateral measurement cannot be fully exploited; as a result, the positioning result is biased and the positioning accuracy decreases.
In some embodiments, in the special positioning scenarios described above, the fusion positioning method may be adjusted. The adjusted fusion localization method may include, for example: converting the first positioning data in the navigation coordinate system into intermediate positioning data in the carrier coordinate system.
For example, the first positioning data may include a first innovation vector $\delta z^n$, which may be determined by Formula 1:
$\delta z^n = \begin{bmatrix} d_{lon} & d_{lat} & d_h \end{bmatrix}^T = \mathrm{pos}_{sins} - \mathrm{pos}_{sensor} + (D^n)^{-1} C_b^n l^b \quad \text{(Formula 1)}$
where $\mathrm{pos}_{sins}$ is the position information derived by IMU recursion, comprising first longitude-latitude data and first elevation data, and $\mathrm{pos}_{sensor}$ is the position information determined by sensors such as GNSS, LiDAR, and camera, comprising second longitude-latitude data and second elevation data. The first and second longitude-latitude data are in radians; the first and second elevation data are in meters. $d_{lon}$, $d_{lat}$, and $d_h$ are the differences in the longitude, latitude, and elevation data, respectively. $(D^n)^{-1}$ is the inverse of $D^n$, the conversion parameter matrix that converts meters into radians in the navigation coordinate system. $C_b^n$ is the rotation matrix from the carrier coordinate system to the navigation coordinate system, and $l^b$ is the lever-arm value.
The conversion parameter matrix $D^n$ in the navigation coordinate system may be determined by Formula 2:
$D^n = \mathrm{diag}\!\left( \dfrac{1}{R_N \cos B},\; \dfrac{1}{R_M},\; 1 \right) \quad \text{(Formula 2)}$
where $R_M$ is the radius of curvature of the meridian, $R_N$ is the radius of curvature of the prime vertical, and $B$ is the latitude in radians.
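A minimal sketch of computing $R_M$, $R_N$, and $D^n$ follows; the WGS-84 ellipsoid constants are an assumption for illustration, since the disclosure does not fix a reference ellipsoid:

```python
import numpy as np

A = 6378137.0            # WGS-84 semi-major axis, meters (assumed ellipsoid)
E2 = 6.69437999014e-3    # WGS-84 first eccentricity squared (assumed)

def curvature_radii(lat_rad):
    """Meridian radius R_M and prime-vertical radius R_N at latitude B."""
    s2 = np.sin(lat_rad) ** 2
    R_N = A / np.sqrt(1.0 - E2 * s2)
    R_M = A * (1.0 - E2) / (1.0 - E2 * s2) ** 1.5
    return R_M, R_N

def D_n(lat_rad):
    """Conversion matrix (meters -> radians) in the navigation frame, per Formula 2."""
    R_M, R_N = curvature_radii(lat_rad)
    return np.diag([1.0 / (R_N * np.cos(lat_rad)), 1.0 / R_M, 1.0])
```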
For another example, the intermediate positioning data may include an intermediate innovation vector $\delta \tilde z^b$, and the first innovation vector may be converted into the intermediate innovation vector in the carrier coordinate system by Formula 3:
$\delta \tilde z^b = \begin{bmatrix} d_x & d_y & d_z \end{bmatrix}^T = C_n^b (D^n)^{-1} \delta z^n \quad \text{(Formula 3)}$
where $C_n^b$ is the rotation matrix from the navigation coordinate system to the carrier coordinate system, and $d_x$, $d_y$, and $d_z$ are the differences in the lateral, longitudinal, and height data, respectively.
As another example, the first positioning data may include a first covariance matrix $P^n$ corresponding to the first innovation vector. In one example, one column of $P^n$ may include: the variance of $d_{lon}$, the covariance between $d_{lon}$ and $d_{lat}$, and the covariance between $d_{lon}$ and $d_h$.
The intermediate covariance matrix $\tilde P^b$ of the intermediate positioning data may be determined by Formula 4:
$\tilde P^b = C_n^b (D^n)^{-1} P^n \big( (D^n)^{-1} \big)^T (C_n^b)^T \quad \text{(Formula 4)}$
In some embodiments, the adjusted fusion localization method may further include: adjusting the intermediate positioning data with preset values to obtain adjusted intermediate positioning data.
For example, the longitudinal component of the intermediate innovation vector may be set to a preset longitudinal value (e.g., 0), so that the adjusted intermediate innovation vector $\delta \tilde z^b_{adj}$ may be:
$\delta \tilde z^b_{adj} = \begin{bmatrix} d_x & 0 & d_z \end{bmatrix}^T$
as another example, the intermediate covariance matrix may be adjusted using a preset standard deviation (e.g., 0.05 x 0.05) and a preset covariance (e.g., 0). The adjusted intermediate covariance matrix may be:
Figure BDA0003720413230000058
same indicates before and after adjustment.
In some embodiments, the adjusted fusion localization method may further include, for example: converting the adjusted intermediate positioning data in the carrier coordinate system back into adjusted first positioning data in the navigation coordinate system.
For example, the adjusted first innovation vector $\delta z^n_{adj}$ may be determined by Formula 5:
$\delta z^n_{adj} = D^n C_b^n\, \delta \tilde z^b_{adj} \quad \text{(Formula 5)}$
For example, the adjusted first covariance matrix $P^n_{adj}$ may be determined by Formula 6:
$P^n_{adj} = D^n C_b^n\, \tilde P^b_{adj}\, (C_b^n)^T (D^n)^T \quad \text{(Formula 6)}$
However, the radii of curvature experienced by the target object differ between coordinate systems. Formulas 5 and 6 ignore this difference during data conversion, so the adjusted fusion positioning method cannot completely avoid the influence of the low-precision longitudinal data.
In addition, when the adjusted intermediate covariance matrix is determined, a preset standard deviation is used, which easily produces positioning drift. The reason is that the adjusted fusion positioning method cannot eliminate the influence of the low-precision longitudinal data on the positioning result, and the data generated by IMU recursion carry certain errors, so the longitudinal component of the positioning result may converge to an erroneous state. Moreover, the preset standard deviation is difficult to set accurately: too small a value causes the Kalman filter to over-converge, while too large a value provides no effective constraint.
FIG. 1 is a schematic diagram of an exemplary system architecture to which the object location method and apparatus may be applied, according to one embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include sensors 101, 102, 103, a network 120, a server 130, and a roadside unit 140. Network 120 is used to provide a medium for communication links between sensors 101, 102, 103 and server 130. Network 120 may include various connection types, such as wired and/or wireless communication links, and so forth.
The sensors 101, 102, 103 may interact with the server 130 over the network 120 to receive or send messages, etc.
The sensors 101, 102, 103 may be functional elements integrated on the vehicle 110, such as infrared sensors, ultrasonic sensors, millimeter-wave radar, information acquisition devices, and the like. The sensors 101, 102, 103 may be used to collect status data of sensing objects (e.g., pedestrians, vehicles, obstacles, etc.) around the vehicle 110 as well as surrounding road data.
The vehicle 110 may communicate with a roadside unit (RSU) 140 to receive information from, or transmit information to, the roadside unit 140.
The server 130 may be disposed at a remote end capable of establishing communication with the vehicle-mounted terminal, and may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server.
The server 130 may be a server that provides various services. For example, a map application, a data processing application, and the like may be installed on the server 130. Taking the server 130 running a data processing application as an example: it receives, via the network 120, the obstacle state data and the road data transmitted from the sensors 101, 102, 103; takes one or more of the obstacle state data and the road data as the data to be processed; and processes the data to be processed to obtain target data.
It should be noted that the object location method provided by the embodiment of the present disclosure may be generally executed by the server 130. Accordingly, the object locating device provided by the embodiment of the present disclosure may also be disposed in the server 130. But is not limited thereto. The object localization methods provided by embodiments of the present disclosure may also be generally performed by the sensors 101, 102, or 103. Accordingly, the object locating device provided by the embodiment of the present disclosure may also be disposed in the sensor 101, 102 or 103.
It is understood that the number of sensors, networks, and servers in fig. 1 is merely illustrative. There may be any number of sensors, networks, and servers, as desired for implementation.
It should be noted that the sequence numbers of the respective operations in the following methods are merely used as a representation of the operations for description, and should not be construed as representing the execution order of the respective operations. The method need not be performed in the exact order shown, unless explicitly stated.
Fig. 2 is a flow diagram of an object location method according to one embodiment of the present disclosure.
As shown in fig. 2, the method 200 may include operations S210 to S240.
In operation S210, second positioning data of the target object in a second coordinate system is obtained according to the posture information of the target object and the first positioning data of the target object in the first coordinate system.
For example, the target object may be a vehicle, a pedestrian holding a mobile terminal, or the like.
For example, the first coordinate system may be the navigation coordinate system described above, and the second coordinate system may be the carrier coordinate system described above.
For example, the attitude information of the target object may include yaw (heading angle), roll (roll angle), and pitch (pitch angle) of the target object.
For example, the first positioning data may be related to the longitude data, latitude data, and elevation data of the target object. For another example, the second positioning data may be related to the lateral data, longitudinal data, and height data of the target object.
For example, during data conversion, a conversion parameter set may be determined using the attitude information, and data conversion between the different coordinate systems may be performed using that conversion parameter set.
In operation S220, according to the second positioning data, information to be processed of the target object in the second coordinate system is determined.
For example, as described above, the second positioning data may include lateral data, longitudinal data, and elevation data of the target object. The second positioning data may indicate a position of the target object in a second coordinate system. In one example, the second positioning data may be taken as the information to be processed.
In operation S230, the information to be processed is filtered to obtain target positioning data.
For example, a Kalman filter may be used to filter the information to be processed and obtain the target positioning data. It will be appreciated that other filters may also be used for the filtering process.
In operation S240, the position of the target object is determined according to the target positioning data.
For example, various operations may be performed using the target positioning data to determine the position of the target object, which is not limited by the present disclosure.
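To make the data flow of operations S210 to S240 concrete, a minimal sketch is given below; every name is illustrative, and the helper functions are defined in the sketches that accompany the detailed description later on:

```python
def locate_object(innov_n, P_n, C_n_b, D_n_mat, D_b_mat, H_b, X_pred, P_pred):
    """Illustrative S210-S240 pipeline built from the sketches below."""
    # S210: second positioning data in the carrier frame
    innov_b, P_b = to_carrier_frame(innov_n, P_n, C_n_b, D_n_mat, D_b_mat)
    # S220: information to be processed (adjusted innovation/covariance/Jacobian)
    innov_adj, P_adj, H_adj = build_info_to_process(innov_b, P_b, H_b)
    # S230: filtering to obtain the target positioning data
    X_k, P_k = kalman_update(innov_adj, P_adj, H_adj, X_pred, P_pred)
    # S240: the position occupies known rows of the target state (rows 0-2 here)
    return X_k[0:3]
```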
Through the embodiments of the present disclosure, the attitude information of the target object is used when converting positioning data between different coordinate systems, so that the difference in the radii of curvature of the target object between the coordinate systems is taken into account, the influence of the longitudinal data on the lateral data is reduced, and the position of the target object can be determined more accurately.
In some embodiments, the first positioning data is associated with first position information of the target object in the first coordinate system, the first position information including longitude data, latitude data, and elevation data; the second positioning data is associated with second position information of the target object in the second coordinate system, the second position information including lateral data, longitudinal data, and height data. The first positioning data includes a first innovation vector and a first covariance matrix, and the second positioning data includes a second innovation vector and a second covariance matrix.
For example, the first innovation vector $\delta z^n$ may be determined by Formula 7, which is similar to Formula 1:
$\delta z^n = \begin{bmatrix} d_{lon} & d_{lat} & d_h \end{bmatrix}^T = \mathrm{pos}_{sins} - \mathrm{pos}_{sensor} + (D^n)^{-1} C_b^n l^b \quad \text{(Formula 7)}$
where $\mathrm{pos}_{sins}$ is the position information derived by IMU recursion, comprising first longitude-latitude data and first elevation data, and $\mathrm{pos}_{sensor}$ is the position information determined by sensors such as GNSS, LiDAR, and camera, comprising second longitude-latitude data and second elevation data. The first and second longitude-latitude data are in radians; the first and second elevation data are in meters. $d_{lon}$, $d_{lat}$, and $d_h$ are the differences in the longitude, latitude, and elevation data, respectively. $(D^n)^{-1}$ is the inverse of $D^n$, the conversion parameter matrix that converts meters into radians in the navigation coordinate system. $C_b^n$ is the rotation matrix from the carrier coordinate system to the navigation coordinate system, and $l^b$ is the lever-arm value. For a detailed description of $D^n$, reference may be made to Formula 2 above, which is not repeated here.
For example, the first covariance matrix $P^n$ corresponds to the first innovation vector. In one example, one column of $P^n$ may include: the variance of $d_{lon}$, the covariance between $d_{lon}$ and $d_{lat}$, and the covariance between $d_{lon}$ and $d_h$.
Operation S210 described above will be described in further detail below with reference to fig. 3.
Fig. 3 is a flow chart of obtaining second positioning data according to one embodiment of the present disclosure.
As shown in fig. 3, the method 310 may obtain second positioning data of the target object in the second coordinate system according to the posture information of the target object and the first positioning data of the target object in the first coordinate system, which will be described in detail with reference to operations S311 to S313.
In operation S311, a first radius of curvature of the target object in the second coordinate system and a second radius of curvature of the target object in the second coordinate system are determined according to the posture information of the target object.
For example, the first radius of curvature $R_X$ and the second radius of curvature $R_Y$ may be determined by Formulas 8 and 9:
$\dfrac{1}{R_X} = \dfrac{\sin^2(yaw)}{R_M} + \dfrac{\cos^2(yaw)}{R_N} \quad \text{(Formula 8)}$
$\dfrac{1}{R_Y} = \dfrac{\cos^2(yaw)}{R_M} + \dfrac{\sin^2(yaw)}{R_N} \quad \text{(Formula 9)}$
where $R_M$ is the radius of curvature of the meridian, $R_N$ is the radius of curvature of the prime vertical, and $yaw$ is the heading angle, in radians, of the second coordinate system relative to the first coordinate system.
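A minimal sketch of Formulas 8 and 9 (Euler's theorem for the radius of curvature of a normal section), reusing curvature_radii from the sketch after Formula 2; names are illustrative:

```python
import numpy as np

def carrier_curvature_radii(lat_rad, yaw_rad):
    """Radii of curvature along the carrier X (lateral) and Y (longitudinal) axes."""
    R_M, R_N = curvature_radii(lat_rad)   # meridian and prime-vertical radii
    s2, c2 = np.sin(yaw_rad) ** 2, np.cos(yaw_rad) ** 2
    R_X = 1.0 / (s2 / R_M + c2 / R_N)     # Formula 8
    R_Y = 1.0 / (c2 / R_M + s2 / R_N)     # Formula 9
    return R_X, R_Y
```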
In operation S312, a set of transformation parameters of the target object in the second coordinate system is determined according to the first radius of curvature and the second radius of curvature.
For example, the conversion parameter set of the target object in the second coordinate system may be implemented as a conversion parameter matrix $D^b$, which may be determined by Formula 10:
$D^b = \mathrm{diag}\!\left( \dfrac{1}{R_X},\; \dfrac{1}{R_Y},\; 1 \right) \quad \text{(Formula 10)}$
In operation S313, second positioning data is obtained according to the conversion parameter set and the first positioning data.
For example, the first positioning data may be converted to obtain intermediate positioning data of the target object in the second coordinate system.
For example, the intermediate positioning data may include an intermediate innovation vector and an intermediate covariance matrix.
In one example, the first innovation vector $\delta z^n$ may be converted using Formula 3 above to obtain the intermediate innovation vector $\delta \tilde z^b$. In one example, the first covariance matrix $P^n$ may be converted using Formula 4 above to obtain the intermediate covariance matrix $\tilde P^b$.
For example, the second positioning data is obtained according to the conversion parameter set and the intermediate positioning data.
For example, the second innovation vector $\delta z^b$ may be obtained from the conversion parameter matrix $D^b$ and the intermediate innovation vector by Formula 11:
$\delta z^b = D^b\, \delta \tilde z^b = D^b C_n^b (D^n)^{-1} \delta z^n \quad \text{(Formula 11)}$
where $C_n^b$ is the rotation matrix from the navigation coordinate system to the carrier coordinate system.
For example, the second covariance matrix $P^b$ may be obtained from the conversion parameter matrix $D^b$ and the intermediate covariance matrix by Formula 12:
$P^b = D^b\, \tilde P^b\, (D^b)^T \quad \text{(Formula 12)}$
where $(D^b)^T$ is the transpose of $D^b$. The lateral and longitudinal components of $\delta z^b$ and of $P^b$ are in radians.
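A minimal sketch of this conversion chain (Formulas 3, 4, 11, and 12), assuming the matrices $C_n^b$, $D^n$, and $D^b$ are supplied (see the earlier sketches); all names are illustrative:

```python
import numpy as np

def to_carrier_frame(innov_n, P_n, C_n_b, D_n_mat, D_b_mat):
    """Convert the navigation-frame innovation and covariance into the carrier
    frame (Formulas 3 and 4), then rescale with D^b (Formulas 11 and 12)."""
    D_n_inv = np.linalg.inv(D_n_mat)
    innov_mid = C_n_b @ D_n_inv @ innov_n                   # Formula 3
    P_mid = C_n_b @ D_n_inv @ P_n @ D_n_inv.T @ C_n_b.T     # Formula 4
    innov_b = D_b_mat @ innov_mid                           # Formula 11
    P_b = D_b_mat @ P_mid @ D_b_mat.T                       # Formula 12
    return innov_b, P_b
```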
It is understood that the above description uses the heading angle in the attitude information as an example, and the embodiment of obtaining the second positioning data is explained in detail. In the embodiment of the present disclosure, the second positioning data may also be obtained according to a roll angle or a pitch angle, which is not described herein again.
It is to be appreciated that some embodiments of obtaining second positioning data are described in detail above. The determination of the information to be processed of the target object in the second coordinate system will be described in detail below.
In some embodiments, determining the information to be processed of the target object in the second coordinate system according to the second positioning data includes: determining filtering parameter data according to the second innovation vector; and determining the information to be processed according to the second positioning data and the filtering parameter data.
In embodiments of the present disclosure, partial derivatives may be taken of the second innovation vector, and the resulting partial-derivative matrix is used as the filtering parameter data.
For example, the partial derivatives of the second innovation vector are calculated with respect to the position information, the velocity information, the attitude information, the gyroscope parameter information, and the accelerometer parameter information, and the resulting partial-derivative matrix is used as the filtering parameter data.
In one example, in conjunction with the formulas described above, the partial-derivative matrix $H^b$ may be, for example:
$H^b = \begin{bmatrix} \dfrac{\partial\, \delta z^b}{\partial\, p} & 0_{3\times 3} & \dfrac{\partial\, \delta z^b}{\partial\, \phi} & 0_{3\times 3} & 0_{3\times 3} \end{bmatrix}$
$H^b$ may be a $3 \times 15$ matrix, where $p$ denotes the position, $\phi$ denotes the attitude, and $0_{3\times 3}$ denotes a $3 \times 3$ block of zeros.
In embodiments of the present disclosure, the filtering parameter data includes a plurality of first columns of data; at least one first column of data among them is the rate of change with respect to the position information of the target object, and at least one first column of data among them is the rate of change with respect to the attitude information of the target object.
For example, in $H^b$, the 1st to 3rd first columns of data may be the rate of change with respect to the position information of the target object. For another example, the 7th to 9th first columns of data may be the rate of change with respect to the attitude information of the target object.
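A minimal sketch of assembling this Jacobian, assuming the 15-dimensional state order [position, velocity, attitude, gyroscope bias, accelerometer bias]; the two $3 \times 3$ partial-derivative blocks are inputs:

```python
import numpy as np

def build_H(dz_dpos, dz_datt):
    """3x15 measurement Jacobian H^b: columns 1-3 hold the position partials,
    columns 7-9 the attitude partials, and the remaining blocks are zero."""
    H = np.zeros((3, 15))
    H[:, 0:3] = dz_dpos   # d(innovation)/d(position), 3x3
    H[:, 6:9] = dz_datt   # d(innovation)/d(attitude), 3x3
    return H
```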
In some embodiments, the filter parameter data and the second positioning data may be adjusted to determine the information to be processed.
In embodiments of the present disclosure, the filtering parameter data further includes a plurality of first rows of data. Determining the information to be processed according to the second positioning data and the filtering parameter data may include: determining adjusted filtering parameter data according to at least one first row of data among the plurality of first rows of data; and determining the information to be processed according to the second positioning data and the adjusted filtering parameter data.
For example, the plurality of first rows of data includes: a first row of data related to the lateral data of the target object, a first row of data related to the longitudinal data of the target object, and a first row of data related to the height data of the target object.
For another example, the at least one first row of data is related to at least one of the lateral data of the target object and the height data of the target object. In one example, the adjusted filtering parameter data may be derived from the first row of data related to the lateral data and the first row of data related to the height data of the target object. Through this embodiment, when the target object travels on a road whose height varies, the influence of the longitudinal data on the lateral data can be reduced.
In one example, as described above, the partial-derivative matrix $H^b$ may be a $3 \times 15$ matrix comprising 3 first rows of data. In $H^b$, the 1st first row of data may be related to the lateral data of the target object, the 2nd first row of data to the longitudinal data, and the 3rd first row of data to the height data. The adjusted partial-derivative matrix $H^b_{adj}$ may be obtained from the 1st and 3rd first rows of $H^b$. In one example:
$H^b_{adj} = H^b(1,3\,;\,:)$
where $(1,3\,;\,:)$ denotes keeping the 1st and 3rd rows of the matrix. $H^b_{adj}$ may be a $2 \times 15$ matrix.
Further, in embodiments of the present disclosure, the second covariance matrix includes a plurality of second rows of data and a plurality of second columns of data. Determining the information to be processed according to the second positioning data and the adjusted filtering parameter data includes: obtaining an adjusted second covariance matrix according to at least one second row of data among the plurality of second rows of data and at least one second column of data among the plurality of second columns of data; and determining the information to be processed according to the adjusted second covariance matrix and the adjusted filtering parameter data.
For example, the plurality of second rows of data includes: a second row of data related to the lateral data of the target object, a second row of data related to the longitudinal data, and a second row of data related to the height data. For another example, the plurality of second columns of data includes: a second column of data related to the lateral data, a second column of data related to the longitudinal data, and a second column of data related to the height data.
For example, the at least one second row of data is related to at least one of the lateral data of the target object and the height data of the target object, and the at least one second column of data is likewise related to at least one of the lateral data and the height data. In one example, the adjusted second covariance matrix may be derived from the second rows of data related to the lateral and height data and the second columns of data related to the lateral and height data.
In one example, the second covariance matrix $P^b$ may be a $3 \times 3$ matrix comprising 3 second rows of data and 3 second columns of data. In $P^b$, the 1st second row of data may be related to the lateral data of the target object, the 2nd second row of data to the longitudinal data, and the 3rd second row of data to the height data; the second columns of data are arranged likewise. The adjusted second covariance matrix $P^b_{adj}$ may be obtained from the 1st and 3rd second rows and the 1st and 3rd second columns of $P^b$. In one example:
$P^b_{adj} = P^b(1,3\,;\,1,3)$
where $(1,3\,;\,1,3)$ denotes keeping the 1st and 3rd rows of the matrix and, within those rows, the 1st and 3rd columns. $P^b_{adj}$ may be a $2 \times 2$ matrix.
Further, in embodiments of the present disclosure, the second innovation vector includes a plurality of third rows of data, and determining the information to be processed according to the adjusted second covariance matrix and the adjusted filtering parameter data includes: determining an adjusted second innovation vector according to at least one third row of data among the plurality of third rows of data; and determining the adjusted second innovation vector, the adjusted second covariance matrix, and the adjusted filtering parameter data as the information to be processed.
For example, the plurality of third rows of data includes: a third row of data related to the lateral data of the target object, a third row of data related to the longitudinal data, and a third row of data related to the height data. For another example, the at least one third row of data is related to at least one of the lateral data of the target object and the height data of the target object.
In one example, the adjusted second innovation vector may be derived from the third row of data related to the lateral data and the third row of data related to the height data of the target object.
In one example, the second innovation vector $\delta z^b$ may be a $3 \times 1$ matrix comprising 3 third rows of data. In $\delta z^b$, the 1st third row of data may be related to the lateral data of the target object, the 2nd third row of data to the longitudinal data, and the 3rd third row of data to the height data. The adjusted second innovation vector $\delta z^b_{adj}$ may be obtained from the 1st and 3rd third rows of $\delta z^b$. In one example:
$\delta z^b_{adj} = \delta z^b(1,3)$
$\delta z^b_{adj}$ may be a $2 \times 1$ matrix.
For example, the adjusted second innovation vector $\delta z^b_{adj}$, the adjusted partial-derivative matrix $H^b_{adj}$, and the adjusted second covariance matrix $P^b_{adj}$ may be taken as the information to be processed.
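A minimal sketch of assembling the information to be processed by row/column selection, assuming the component order [lateral, longitudinal, height]; for the lateral-only variant described next, KEEP would be [0]:

```python
import numpy as np

KEEP = [0, 2]   # keep lateral (1st) and height (3rd) rows; drop longitudinal

def build_info_to_process(innov_b, P_b, H_b):
    """Drop the weak longitudinal components rather than overwriting them:
    the adjusted innovation, covariance, and Jacobian are the info to process."""
    innov_adj = innov_b[KEEP]            # 2x1 adjusted second innovation
    P_adj = P_b[np.ix_(KEEP, KEEP)]      # 2x2 adjusted second covariance
    H_adj = H_b[KEEP, :]                 # 2x15 adjusted Jacobian
    return innov_adj, P_adj, H_adj
```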
In other embodiments, the at least one first row of data is related to the lateral data of the target object; the at least one second row of data and the at least one second column of data are related to the lateral data of the target object; and the at least one third row of data is related to the lateral data of the target object.
For example, the adjusted partial-derivative matrix $H^b_{adj}$ may be obtained from the 1st first row of $H^b$, the adjusted second covariance matrix $P^b_{adj}$ from the 1st second row and the 1st second column of $P^b$, and the adjusted second innovation vector $\delta z^b_{adj}$ from the 1st third row of $\delta z^b$.
Through this embodiment, when the target object travels on a road of constant height, the influence of the longitudinal data on the lateral data can be reduced.
After the information to be processed is obtained, the information to be processed may be subjected to a filtering process, which will be described in detail below.
Fig. 4 is a flow chart of obtaining target positioning data according to one embodiment of the present disclosure.
As shown in fig. 4, the method 430 may perform filtering processing on the information to be processed to obtain the target positioning data, which will be described in detail with reference to operations S431 to S434.
In operation S431, a target gain matrix is obtained according to the adjusted second covariance matrix and the adjusted filter parameter data.
For example, the target gain matrix $K_k$ may be determined by:
$K_k = P_{k/k-1} (H^b_{adj})^T \left( H^b_{adj}\, P_{k/k-1}\, (H^b_{adj})^T + P^b_{adj} \right)^{-1}$
where $K_k$ is the gain matrix at time $k$ and may be a $15 \times 2$ matrix, and $P_{k/k-1}$ is the covariance matrix of the Kalman filter state prediction at time $k$, propagated by IMU recursion from the state at time $k-1$. $P_{k/k-1}$ may be a $15 \times 15$ matrix.
In operation S432, target state data is obtained according to the target gain matrix, the adjusted second innovation vector, and the adjusted filtering parameter data.
For example, the target state data $X_k$ may be obtained by:
$X_k = X_{k/k-1} + K_k \left( \delta z^b_{adj} - H^b_{adj} X_{k/k-1} \right)$
where $X_{k/k-1}$ is the Kalman filter state prediction at time $k$, propagated by IMU recursion from the state at time $k-1$. $X_k$ and $X_{k/k-1}$ may each be $15 \times 1$ matrices.
In operation S433, a target covariance matrix is obtained according to the target gain matrix and the adjusted filter parameter data.
For example, the target covariance matrix $P_k$ may be obtained by:
$P_k = \left( I - K_k H^b_{adj} \right) P_{k/k-1}$
where $I$ is the identity matrix. $P_k$ may be a $15 \times 15$ matrix.
In operation S434, object location data is determined according to the object state data and the object covariance matrix.
For example, the target positioning data may be determined from the target covariance matrix $P_k$ and the target state data $X_k$.
In some embodiments, the target state data $X_k$ may be a $15 \times 1$ matrix including rows of data related to the position of the target object, from which the position of the target object can be determined.
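A minimal sketch of operations S431 to S434 with the adjusted two-row quantities, consistent with the formulas above; names are illustrative:

```python
import numpy as np

def kalman_update(innov_adj, P_adj, H_adj, X_pred, P_pred):
    """Gain (15x2), state (15x1), and covariance (15x15) update."""
    S = H_adj @ P_pred @ H_adj.T + P_adj            # 2x2 innovation covariance
    K = P_pred @ H_adj.T @ np.linalg.inv(S)         # S431: target gain matrix
    X = X_pred + K @ (innov_adj - H_adj @ X_pred)   # S432: target state data
    P = (np.eye(15) - K @ H_adj) @ P_pred           # S433: target covariance
    return X, P                                     # S434: target positioning data
```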
Fig. 5A is an exemplary scene schematic according to one embodiment of the disclosure.
As shown in fig. 5A, in scenario 500, a vehicle 510 may travel into a tunnel 520 and, after exiting the tunnel 520, travel through an intersection 530. Inside the tunnel 520, it is difficult to obtain accurate position information from the data collected by sensors such as the GNSS receiver, LiDAR, and camera of the vehicle 510.
Fig. 5B is an exemplary schematic diagram of a location of a target object according to one embodiment of the present disclosure.
As shown in fig. 5B, at the 1st instant of travel within the tunnel, using the object positioning method provided by the present disclosure, it may be determined that the vehicle 510 is at position 510_1. At the 2nd instant, it may be determined that the vehicle 510 is at position 510_2. At the 3rd instant, it may be determined that the vehicle 510 is at position 510_3. It is to be understood that the wireframes representing positions in fig. 5B (e.g., position 510_1) are merely illustrative.
Fig. 6 is an exemplary schematic diagram of a position of an adjusted target object according to another embodiment of the disclosure.
As shown in fig. 6, a vehicle 610 may travel in a scenario 600. The detailed description of the scenario 500 above also applies to the scenario 600 and is not repeated here.
As shown in fig. 6, at the 1st instant of travel within the tunnel of the scenario 600, using the adjusted fusion positioning method, it may be determined that the vehicle 610 is at position 610_1'. At the 2nd instant, it may be determined that the vehicle 610 is at position 610_2'. At the 3rd instant, it may be determined that the vehicle 610 is at position 610_3'. As shown in fig. 6, when the vehicle travels in the tunnel, the adjusted fusion positioning method cannot accurately determine the position of the vehicle 610 and exhibits position drift, which may cause the automatic driving system of the vehicle 610 to issue erroneous commands and change lanes back and forth in the tunnel.
It is to be understood that the wireframe representing the position in fig. 6 (e.g., position 610_1') is merely illustrative.
As shown in fig. 5B and fig. 6, according to embodiments of the present disclosure, the position of the vehicle 510 can be accurately determined using the object positioning method provided by the present disclosure (e.g., the method 200), thereby alleviating the position drift phenomenon.
FIG. 7 is a block diagram of an object locating device according to one embodiment of the present disclosure.
As shown in fig. 7, the apparatus 700 may include an obtaining module 710, a first determining module 720, a filter processing module 730, and a second determining module 740.
The obtaining module 710 is configured to obtain second positioning data of the target object in a second coordinate system according to attitude information of the target object and first positioning data of the target object in a first coordinate system.
The first determining module 720 is configured to determine, according to the second positioning data, information to be processed of the target object in the second coordinate system.
The filter processing module 730 is configured to perform filtering processing on the information to be processed to obtain target positioning data.
The second determining module 740 is configured to determine a position of the target object according to the target positioning data.
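For orientation only, the four modules can be pictured as a simple pipeline. The sketch below is a structural illustration in Python; the class name and callable signatures are assumptions, and only the module numbering and data flow come from the description above.

    # Structural sketch of the apparatus 700; signatures are illustrative.
    class ObjectPositioningDevice:
        def __init__(self, obtaining, first_determining,
                     filter_processing, second_determining):
            self.obtaining = obtaining                    # module 710
            self.first_determining = first_determining    # module 720
            self.filter_processing = filter_processing    # module 730
            self.second_determining = second_determining  # module 740

        def locate(self, attitude_info, first_positioning_data):
            second_positioning_data = self.obtaining(attitude_info,
                                                     first_positioning_data)
            info_to_be_processed = self.first_determining(second_positioning_data)
            target_positioning_data = self.filter_processing(info_to_be_processed)
            return self.second_determining(target_positioning_data)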
In some embodiments, the first positioning data is associated with first position information of the target object in the first coordinate system, the first position information including longitude data, latitude data, and elevation data. The second positioning data is associated with second position information of the target object in the second coordinate system, the second position information including lateral data, longitudinal data, and height data. The first positioning data includes a first innovation vector and a first covariance matrix, and the second positioning data includes a second innovation vector and a second covariance matrix.
In some embodiments, the obtaining module includes: a first determining submodule configured to determine a first curvature radius of the target object in the second coordinate system and a second curvature radius of the target object in the second coordinate system according to the attitude information of the target object; a second determining submodule configured to determine a conversion parameter set of the target object in the second coordinate system according to the first curvature radius and the second curvature radius; and a first obtaining submodule configured to obtain the second positioning data according to the conversion parameter set and the first positioning data.
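One plausible reading of the two curvature radii, offered purely as an assumption rather than the disclosed construction, is the meridian radius and the prime-vertical radius of the WGS-84 ellipsoid, which scale latitude/longitude increments into longitudinal/lateral distances:

    import math

    # Assumption: "first curvature radius" = meridian radius R_M and
    # "second curvature radius" = prime-vertical radius R_N (WGS-84).
    A = 6378137.0          # semi-major axis (m)
    E2 = 6.69437999014e-3  # first eccentricity squared

    def curvature_radii(lat_rad):
        s2 = math.sin(lat_rad) ** 2
        r_m = A * (1.0 - E2) / (1.0 - E2 * s2) ** 1.5  # meridian radius
        r_n = A / math.sqrt(1.0 - E2 * s2)             # prime-vertical radius
        return r_m, r_n

    def to_local(d_lat_rad, d_lon_rad, lat_rad):
        # Convert small latitude/longitude increments into local distances (m).
        r_m, r_n = curvature_radii(lat_rad)
        longitudinal = r_m * d_lat_rad                 # north-south distance
        lateral = r_n * math.cos(lat_rad) * d_lon_rad  # east-west distance
        return lateral, longitudinal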
In some embodiments, the first determining module includes: a third determining submodule configured to determine filtering parameter data according to the second innovation vector, where the filtering parameter data includes a plurality of first columns of data, at least one first column of data among the plurality of first columns of data being a change rate of the position information of the target object, and at least one first column of data among the plurality of first columns of data being a change rate of the attitude information of the target object; and a fourth determining submodule configured to determine the information to be processed according to the second positioning data and the filtering parameter data.
In some embodiments, the filtering parameter data further includes a plurality of first lines of data, and the fourth determining submodule includes: a first determining unit configured to determine adjusted filtering parameter data according to at least one first line of data among the plurality of first lines of data, where the at least one first line of data is related to at least one of lateral data of the target object and height data of the target object; and a second determining unit configured to determine the information to be processed according to the second positioning data and the adjusted filtering parameter data.
In some embodiments, the second covariance matrix includes a plurality of second rows of data and a plurality of second columns of data, and the second determining unit includes: a first determining subunit configured to determine an adjusted second covariance matrix according to at least one second row of data among the plurality of second rows of data and at least one second column of data among the plurality of second columns of data, where the at least one second row of data is related to at least one of the lateral data of the target object and the height data of the target object, and the at least one second column of data is related to at least one of the lateral data of the target object and the height data of the target object; and a second determining subunit configured to determine the information to be processed according to the adjusted second covariance matrix and the adjusted filtering parameter data.
In some embodiments, the second innovation vector includes a plurality of third lines of data, and the second determining subunit is further configured to: determine an adjusted second innovation vector according to at least one third line of data among the plurality of third lines of data, where the at least one third line of data is related to at least one of the lateral data of the target object and the height data of the target object; and determine the adjusted second innovation vector, the adjusted second covariance matrix, and the adjusted filtering parameter data as the information to be processed.
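To make the adjustment of rows and columns concrete, the sketch below restricts an innovation vector, a covariance matrix, and a filter parameter matrix to selected rows (and matching columns of the covariance matrix). Treating the filtering parameter data as an observation matrix H and the second covariance matrix as a measurement covariance R, together with the specific row indices, are assumptions for illustration only.

    import numpy as np

    KEEP = np.array([0, 2])  # hypothetical rows tied to lateral and height data

    def adjust(v, R, H):
        # v: second innovation vector (m x 1), R: second covariance matrix
        # (m x m), H: filtering parameter data (m x n); keep selected rows.
        v_adj = v[KEEP, :]             # adjusted second innovation vector
        R_adj = R[np.ix_(KEEP, KEEP)]  # adjusted second covariance matrix
        H_adj = H[KEEP, :]             # adjusted filtering parameter data
        return v_adj, R_adj, H_adj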
In some embodiments, the filter processing module includes: a second obtaining submodule configured to obtain a target gain matrix according to the adjusted second covariance matrix and the adjusted filtering parameter data; a third obtaining submodule configured to obtain target state data according to the target gain matrix, the adjusted second innovation vector, and the adjusted filtering parameter data; a fourth obtaining submodule configured to obtain a target covariance matrix according to the target gain matrix and the adjusted filtering parameter data; and a fifth obtaining submodule configured to obtain the target positioning data according to the target state data and the target covariance matrix.
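Read as a conventional Kalman measurement update, which is one natural instantiation of this chain of submodules (identifying the submodules with these exact equations is an inference, not language from the disclosure), the gain, state, and covariance steps look as follows:

    import numpy as np

    def filter_processing(P, H, R, v):
        # P: predicted state covariance (n x n), H: adjusted filtering
        # parameter data (m x n), R: adjusted second covariance matrix
        # (m x m), v: adjusted second innovation vector (m x 1).
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # target gain matrix
        X_k = K @ v                             # target state data
        P_k = (np.eye(P.shape[0]) - K @ H) @ P  # target covariance matrix
        return X_k, P_k                         # basis of target positioning data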
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the personal information of related users comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium, and a computer program product.
FIG. 8 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or a computer program loaded from a storage unit 808 into a random access memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to one another by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 801 performs the various methods and processes described above, such as the object positioning method. For example, in some embodiments, the object positioning method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the object positioning method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the object positioning method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to an embodiment of the present disclosure, the present disclosure also provides an autonomous vehicle, which may include the electronic device provided by the present disclosure. For example, the autonomous vehicle may include the electronic device 800 described above.
According to an embodiment of the present disclosure, the present disclosure also provides an edge computing platform including a plurality of edge computing units, each of which may include the electronic device provided by the present disclosure. For example, an edge computing unit may include the electronic device 800 described above.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (21)

1. An object localization method, comprising:
obtaining second positioning data of the target object in a second coordinate system according to the attitude information of the target object and the first positioning data of the target object in the first coordinate system;
determining information to be processed of the target object in the second coordinate system according to the second positioning data;
filtering the information to be processed to obtain target positioning data; and
determining the position of the target object according to the target positioning data.
2. The method of claim 1, wherein the first positioning data relates to first position information of the target object in the first coordinate system, the first position information including longitude data, latitude data, and elevation data,
the second positioning data is related to second position information of the target object in the second coordinate system, and the second position information comprises transverse data, longitudinal data and height data;
the first positioning data includes a first innovation vector and a first covariance matrix, and the second positioning data includes a second innovation vector and a second covariance matrix.
3. The method of claim 1, wherein the obtaining second positioning data of the target object in a second coordinate system according to the attitude information of the target object and the first positioning data of the target object in the first coordinate system comprises:
determining a first curvature radius of the target object in the second coordinate system and a second curvature radius of the target object in the second coordinate system according to the attitude information of the target object;
determining a conversion parameter set of the target object under the second coordinate system according to the first curvature radius and the second curvature radius; and
obtaining the second positioning data according to the conversion parameter set and the first positioning data.
4. The method according to claim 2 or 3, wherein the determining the information to be processed of the target object in the second coordinate system according to the second positioning data comprises:
determining filtering parameter data according to the second innovation vector, wherein the filtering parameter data comprise a plurality of first columns of data, at least one first column of data in the plurality of first columns of data is a change rate of the position information of the target object, and at least one first column of data in the plurality of first columns of data is a change rate of the posture information of the target object; and
determining the information to be processed according to the second positioning data and the filtering parameter data.
5. The method of claim 4, wherein the filtering parameter data further comprises a plurality of first lines of data,
the determining the information to be processed according to the second positioning data and the filtering parameter data includes:
determining adjusted filtering parameter data from at least one of the plurality of first lines of data, wherein the at least one first line of data is related to at least one of lateral data of the target object and height data of the target object; and
determining the information to be processed according to the second positioning data and the adjusted filtering parameter data.
6. The method of claim 5, wherein the second covariance matrix comprises a plurality of second rows of data and a plurality of second columns of data,
the determining, according to the second positioning data and the adjusted filtering parameter data, the to-be-processed information includes:
determining an adjusted second covariance matrix according to at least one second row of data in the plurality of second rows of data and at least one second column of data in the plurality of second columns of data, wherein the at least one second row of data is associated with at least one of lateral data of the target object and height data of the target object, and the at least one second column of data is associated with at least one of lateral data of the target object and height data of the target object; and
determining the information to be processed according to the adjusted second covariance matrix and the adjusted filtering parameter data.
7. The method of claim 6, wherein the second innovation vector includes a plurality of third lines of data,
the determining information to be processed according to the adjusted second covariance matrix and the adjusted filtering parameter data includes:
determining an adjusted second innovation vector according to at least one third line of data in the plurality of third lines of data, wherein the at least one third line of data is related to at least one of lateral data of the target object and height data of the target object; and
determining the adjusted second innovation vector, the adjusted second covariance matrix and the adjusted filtering parameter data as the information to be processed.
8. The method of claim 7, wherein the filtering the information to be processed to obtain the target positioning data comprises:
obtaining a target gain matrix according to the adjusted second covariance matrix and the adjusted filtering parameter data;
obtaining target state data according to the target gain matrix, the adjusted second innovation vector and the adjusted filtering parameter data;
obtaining a target covariance matrix according to the target gain matrix and the adjusted filtering parameter data; and
obtaining the target positioning data according to the target state data and the target covariance matrix.
9. An object positioning device comprising:
the obtaining module is used for obtaining second positioning data of the target object in a second coordinate system according to the attitude information of the target object and the first positioning data of the target object in the first coordinate system;
the first determining module is used for determining information to be processed of the target object in the second coordinate system according to the second positioning data;
the filtering processing module is used for filtering the information to be processed to obtain target positioning data; and
the second determining module is used for determining the position of the target object according to the target positioning data.
10. The apparatus of claim 9, wherein the first positioning data relates to first position information of the target object in the first coordinate system, the first position information including longitude data, latitude data, and elevation data,
the second positioning data is related to second position information of the target object in the second coordinate system, and the second position information comprises transverse data, longitudinal data and height data;
the first positioning data includes a first innovation vector and a first covariance matrix, and the second positioning data includes a second innovation vector and a second covariance matrix.
11. The apparatus of claim 9, wherein the obtaining module comprises:
the first determining submodule is used for determining a first curvature radius of the target object in the second coordinate system and a second curvature radius of the target object in the second coordinate system according to the attitude information of the target object;
the second determining submodule is used for determining a conversion parameter set of the target object under the second coordinate system according to the first curvature radius and the second curvature radius; and
the first obtaining submodule is used for obtaining the second positioning data according to the conversion parameter set and the first positioning data.
12. The apparatus of claim 10 or 11, wherein the first determining module comprises:
a third determining submodule, configured to determine filtering parameter data according to the second innovation vector, where the filtering parameter data includes multiple first columns of data, at least one of the multiple first columns of data is a change rate of the position information of the target object, and at least one of the multiple first columns of data is a change rate of the posture information of the target object; and
the fourth determining submodule is used for determining the information to be processed according to the second positioning data and the filtering parameter data.
13. The apparatus of claim 12, wherein the filtering parameter data further comprises a plurality of first lines of data,
the fourth determination submodule includes:
a first determining unit, configured to determine adjusted filter parameter data according to at least one first line data of the plurality of first line data, wherein the at least one first line data is related to at least one of lateral data of the target object and height data of the target object; and
the second determining unit is used for determining the information to be processed according to the second positioning data and the adjusted filtering parameter data.
14. The apparatus of claim 13, wherein the second covariance matrix comprises a plurality of second rows of data and a plurality of second columns of data,
the second determination unit includes:
a first determining subunit, configured to determine an adjusted second covariance matrix according to at least one second row of data in the plurality of second rows of data and at least one second column of data in the plurality of second columns of data, wherein the at least one second row of data is related to at least one of the lateral data of the target object and the height data of the target object, and the at least one second column of data is related to at least one of the lateral data of the target object and the height data of the target object; and
the second determining subunit is used for determining the information to be processed according to the adjusted second covariance matrix and the adjusted filtering parameter data.
15. The apparatus of claim 14, wherein the second innovation vector includes a plurality of third lines of data,
the second determining subunit is further configured to:
determining an adjusted second innovation vector according to at least one third line of data in the plurality of third lines of data, wherein the at least one third line of data is related to at least one of lateral data of the target object and height data of the target object; and
determining the adjusted second innovation vector, the adjusted second covariance matrix and the adjusted filtering parameter data as the information to be processed.
16. The apparatus of claim 15, wherein the filtering processing module comprises:
the second obtaining submodule is used for obtaining a target gain matrix according to the adjusted second covariance matrix and the adjusted filtering parameter data;
a third obtaining submodule, configured to obtain target state data according to the target gain matrix, the adjusted second innovation vector, and the adjusted filtering parameter data;
a fourth obtaining submodule, configured to obtain a target covariance matrix according to the target gain matrix and the adjusted filter parameter data; and
the fifth obtaining submodule is used for obtaining the target positioning data according to the target state data and the target covariance matrix.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 8.
20. An edge computing platform comprising a plurality of edge computing units, the edge computing units comprising the electronic device of claim 17.
21. An autonomous vehicle comprising the electronic device of claim 17.
CN202210763841.2A 2022-06-29 2022-06-29 Object positioning method, device, automatic driving vehicle and edge computing platform Pending CN115127561A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210763841.2A CN115127561A (en) 2022-06-29 2022-06-29 Object positioning method, device, automatic driving vehicle and edge computing platform

Publications (1)

Publication Number Publication Date
CN115127561A true CN115127561A (en) 2022-09-30

Family

ID=83382940

Country Status (1)

Country Link
CN (1) CN115127561A (en)

Similar Documents

Publication Publication Date Title
JP7299261B2 (en) Vehicle dead reckoning method, apparatus, device, storage medium, and program
US11802769B2 (en) Lane line positioning method and apparatus, and storage medium thereof
KR20210111180A (en) Method, apparatus, computing device and computer-readable storage medium for positioning
CN114018274B (en) Vehicle positioning method and device and electronic equipment
CN113933818A (en) Method, device, storage medium and program product for calibrating laser radar external parameter
CN112800159B (en) Map data processing method and device
CN112835085B (en) Method and device for determining vehicle position
CN114179825B (en) Method for obtaining confidence of measurement value through multi-sensor fusion and automatic driving vehicle
KR20220052312A (en) Vehicle positioning method, apparatus and autonomous driving vehicle
CN113984044A (en) Vehicle pose acquisition method and device based on vehicle-mounted multi-perception fusion
CN114323033A (en) Positioning method and device based on lane lines and feature points and automatic driving vehicle
CN111552757B (en) Method, device and equipment for generating electronic map and storage medium
CN115236714A (en) Multi-source data fusion positioning method, device and equipment and computer storage medium
CN112214014A (en) Automatic driving control method and system for agricultural machinery
CN115164936A (en) Global pose correction method and device for point cloud splicing in high-precision map manufacturing
CN111469781A (en) Method and apparatus for outputting information
CN115683170B (en) Calibration method based on radar point cloud data fusion error
CN113218380B (en) Electronic compass correction method and device, electronic equipment and storage medium
CN115792985A (en) Vehicle positioning method and device, electronic equipment, storage medium and vehicle
CN115727871A (en) Track quality detection method and device, electronic equipment and storage medium
CN115127561A (en) Object positioning method, device, automatic driving vehicle and edge computing platform
CN112649823A (en) Unmanned aerial vehicle navigation positioning method and device
CN114170297A (en) Vehicle pose processing method and device, electronic equipment and automatic driving vehicle
CN115658833A (en) High-precision map generation method and device, electronic equipment and storage medium
CN114117257A (en) Processing method, device and equipment for generating positioning information of high-precision map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination