CN117168470A - Positioning information determining method and device, electronic equipment and storage medium - Google Patents

Positioning information determining method and device, electronic equipment and storage medium

Info

Publication number
CN117168470A
Authority
CN
China
Prior art keywords
information
determining
positioning
vehicle
imu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310618457.8A
Other languages
Chinese (zh)
Inventor
邢春上
张天奇
陈博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Faw Nanjing Technology Development Co ltd
FAW Group Corp
Original Assignee
Faw Nanjing Technology Development Co ltd
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Faw Nanjing Technology Development Co ltd, FAW Group Corp filed Critical Faw Nanjing Technology Development Co ltd
Priority to CN202310618457.8A
Publication of CN117168470A
Legal status: Pending

Abstract

The invention discloses a method, a device, electronic equipment and a storage medium for determining positioning information, applied to a vehicle. The method comprises: acquiring data information, wherein the data information comprises depth vision synchronous positioning and mapping SLAM information, laser radar positioning information and inertial measurement unit IMU information; determining global map information according to the depth vision SLAM information; and fusing the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information to obtain positioning information. By fusing the depth vision SLAM information, the laser radar positioning information and the IMU information to obtain the positioning information, the invention solves the problem of inaccurate positioning when a single sensor is used in the vehicle parking positioning process, improves the robustness and accuracy of the AVP, realizes accurate parking, and improves the user experience.

Description

Positioning information determining method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of parking positioning technologies of vehicles, and in particular, to a method and apparatus for determining positioning information, an electronic device, and a storage medium.
Background
With the rapid development of the automobile industry in recent years, advanced driver assistance systems (Advanced Driving Assistance System, ADAS) will continue to develop steadily for some time to come. Solving the "last kilometer" parking need for a vast number of users makes automated valet parking (Automated Valet Parking, AVP) a key core field of ADAS.
Currently, AVP in the prior art mainly uses a single sensor. For example, calculating the position and attitude of a vehicle based only on an inertial measurement unit (Inertial Measurement Unit, IMU) suffers from large errors. As a data source for simultaneous localization and mapping (Simultaneous Localization and Mapping, SLAM), a monocular camera has problems such as mandatory initialization, scale drift and scale ambiguity, while a binocular camera has problems such as a heavy computational load, a smaller field of view and a small scene dynamic range. In addition, visual SLAM has low robustness and is easily disturbed by the external environment (illumination, occlusion, etc.). All of this reduces the robustness and accuracy of the AVP and degrades the user experience.
Disclosure of Invention
The invention provides a method, a device, electronic equipment and a storage medium for determining positioning information, which solve the problem of inaccurate positioning of a vehicle by adopting a single sensor in the parking positioning process, improve the robustness and the accuracy of AVP and realize accurate parking.
According to an aspect of the present invention, there is provided a method for determining positioning information, applied to a vehicle, the method comprising:
acquiring data information, wherein the data information comprises depth vision synchronous positioning and mapping SLAM information, laser radar positioning information and inertial measurement unit IMU information;
determining global map information according to the depth vision SLAM information;
and fusing the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information to obtain positioning information.
Optionally, obtaining depth vision SLAM information includes: acquiring environment sensing information through a SLAM sensor; determining transformed image information according to a first preprocessing algorithm and the environment sensing information; and determining depth vision SLAM information from the transformed image information.
Optionally, determining depth vision SLAM information according to the transformed image information includes: determining first initial information based on a segmentation algorithm according to the transformed image information; and correcting the first initial information according to a loop monitoring algorithm, and determining depth vision SLAM information.
Optionally, obtaining the laser radar positioning information includes: acquiring point cloud data information through a laser radar sensor; and determining laser radar positioning information according to the second preprocessing algorithm and the point cloud data information.
Optionally, acquiring IMU information includes: acquiring vehicle data information through an IMU sensor, wherein the vehicle data information comprises vehicle acceleration and vehicle speed, or comprises vehicle acceleration and a rotation matrix; and determining IMU information according to the vehicle data information.
Optionally, determining IMU information according to the vehicle data information includes: if the vehicle data information includes vehicle acceleration and vehicle speed, the IMU information is determined from v_t and a_t, where v_t is the vehicle speed at time t, a_t is the vehicle acceleration at time t, v_t = v_(t-1) + a_t·dt, and v_(t-1) is the vehicle speed at time t-1; if the vehicle data information includes a rotation matrix and vehicle acceleration, the IMU information is determined from R_t, Δp_t, Δp_(t-1), a_t and g, where R_t is the rotation matrix at time t, Δp_t is the position coordinate change value at time t, Δp_(t-1) is the position coordinate change value at time t-1, a_t is the vehicle acceleration at time t, and g is the gravitational acceleration.
Optionally, fusing global map information, depth vision SLAM information, laser radar positioning information and IMU information to obtain positioning information, including: determining target positioning data based on a filtering algorithm according to global map information, laser radar positioning information, depth vision SLAM information and IMU information; and determining positioning information based on a fusion algorithm according to the target positioning data.
According to another aspect of the present invention, there is also provided a positioning information determining apparatus applied to a vehicle, the apparatus including:
the information acquisition module is used for acquiring data information, wherein the data information comprises depth vision synchronous positioning and mapping SLAM information, laser radar positioning information and inertial measurement unit IMU information;
the information determining module is used for determining global map information according to the depth vision SLAM information;
and the information fusion module is used for fusing the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information to obtain positioning information.
According to another aspect of the present invention, there is also provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of determining positioning information of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is also provided a computer readable storage medium storing computer instructions for causing a processor to execute the method for determining positioning information according to any of the embodiments of the present invention.
According to the technical scheme of the invention, applied to a vehicle, data information is acquired, wherein the data information comprises depth vision synchronous positioning and mapping SLAM information, laser radar positioning information and inertial measurement unit IMU information; global map information is determined according to the depth vision SLAM information; and the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information are fused to obtain positioning information. By fusing the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information to obtain the positioning information, the invention solves the problem of inaccurate positioning when a single sensor is used in the vehicle parking positioning process, improves the robustness and accuracy of the AVP, realizes accurate parking, and improves the user experience.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for determining positioning information provided in the first embodiment;
fig. 2 is a flowchart of a method for determining positioning information provided in the second embodiment;
fig. 3 is a schematic structural diagram of a positioning information determining apparatus provided in the third embodiment;
fig. 4 is a schematic structural diagram of an electronic device provided in the fourth embodiment.
Detailed Description
In order that those skilled in the art will better understand the present invention, a more complete description of the same will be rendered by reference to the appended drawings, wherein it is to be understood that the illustrated embodiments are merely exemplary of some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a method for determining positioning information provided in the first embodiment. The method is applicable to parking positioning situations and may be performed by a positioning information determining device, which may be implemented in hardware and/or software; in a specific embodiment, the positioning information determining device may be configured in an electronic device, and the electronic device may be a vehicle. As shown in fig. 1, the method of this embodiment specifically includes the following steps:
s101, acquiring data information.
The data information comprises depth vision synchronous positioning and mapping SLAM information, laser radar positioning information and inertial measurement unit IMU information.
Depth vision simultaneous localization and mapping (Simultaneous Localization And Mapping, SLAM) information is obtained via a SLAM sensor; typical SLAM sensors include a monocular camera, a binocular camera, a depth camera, etc. A monocular camera is a single camera used to obtain measurement information. A binocular camera is generally composed of two horizontally placed cameras, a left-eye camera and a right-eye camera; each can be regarded as a pinhole camera, and the distance between the two camera apertures is called the baseline of the binocular camera. Depth information can be measured from the two views, and the larger the baseline, the farther the distance that can be measured. A depth camera measures object depth information by physical means such as structured light or time of flight (Time of Flight, ToF), which this embodiment does not limit.
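By way of illustration, the baseline relation described above can be written as Z = f·B/d for a rectified binocular pair, where Z is depth, f is the focal length in pixels, B is the baseline and d is the disparity in pixels. The short Python sketch below is illustrative only; the function name and numbers are placeholders, not values from the patent:

# Stereo depth from disparity: Z = f * B / d.
# A larger baseline B resolves farther depths for the same disparity step.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, B = 0.12 m, d = 7 px -> Z = 12.0 m
print(stereo_depth(700.0, 0.12, 7.0))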
The laser radar sensor is also called an optical radar sensor, is short for a laser detection and ranging system, and analyzes information such as the reflected energy, the amplitude, the frequency and the phase of a reflection spectrum and the like on the surface of a target object by measuring the propagation distance between a sensor emitter and the target object, so that accurate three-dimensional structural information of the target object is presented, and the target object can be a vehicle and the like; the driving method of the lidar sensor includes a mechanical type, a phased array type, a floodlight array type, and the like, and this embodiment is not limited thereto.
The inertial measurement unit IMU information is directly acquired by an inertial measurement unit (Inertial Measurement Unit, IMU) sensor, which detects and measures acceleration and rotational motion; an IMU sensor includes an accelerometer, a gyroscope (angular rate sensor), and the like, which this embodiment does not limit.
Specifically, the depth vision SLAM information is obtained through the SLAM sensor, the laser radar positioning information is directly obtained through the laser radar sensor, and the IMU information is directly obtained through the IMU sensor.
S102, determining global map information according to the depth vision SLAM information.
Specifically, after the depth vision SLAM information is determined, feature points are extracted from it by a feature point method; global matching is then performed directly according to field-of-view overlap and the feature descriptors of the feature points, forming the global map information.
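For illustration, a minimal Python sketch of feature-point matching between overlapping views follows. The patent names only "a feature point method", so the choice of ORB features with brute-force Hamming matching here is an assumption rather than the patent's prescribed detector:

import cv2

def match_overlapping_frames(img_a, img_b, max_matches=100):
    # Detect and describe feature points in both frames (assumed detector: ORB).
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    # Match descriptors across the field-of-view overlap; keep the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches[:max_matches]

The matched pairs would then feed the global matching that stitches local views into the global map.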
And S103, fusing the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information to obtain positioning information.
Specifically, after global map information is determined, depth vision SLAM information, laser radar positioning information and IMU information are fused and positioned by combining the global map information, and positioning information is obtained by adopting extended Kalman filtering (Extended Kalman Filter, EKF).
Illustratively, the depth vision SLAM information, the laser radar positioning information and the IMU information are each put through error-state estimation and fusion by the EKF, and each follows the standard mathematical process below. Taking the laser radar positioning information as an example, the state equation is shown in formula (1) and the measurement equation in formula (2).
x(k)=A(k)x(k-1)+w(k) (1)
z(i,k)=c(i,k)x(k)+v(i,k) (2)
In formula (1), x(k) denotes the state vector of the laser radar sensor at time k, x(k-1) the state vector at time k-1, A(k) the state transition matrix, and w(k) the laser radar sensor noise. In formula (2), i = 1, 2, 3, ..., N, where N indexes the different sensors; z(i,k) denotes the observation vector of the i-th sensor, c(i,k) the state observation matrix, and w(k) and v(i,k) the sensor noise and the observation noise, respectively. Assuming that x(k), w(k) and v(i,k) are mutually independent, the purpose of data fusion is to combine the observation data of each sensor into an optimal estimate of the system state. Assuming further that the EKF-predicted values of the different sensors are uncorrelated, the optimal data fusion result and the EKF prediction process can be described by the following model: from formula (1), the predicted value of each sensor can be inferred as shown in formula (3).
x̂_j(k, k-1) = F_k · x̂_j(k-1, k-1) (3)
The positioning information of the laser radar sensor is determined through formula (3), where F_k denotes the state transition matrix and j indexes the different sensors.
The corresponding covariance matrices satisfy P(k,k) ≤ P_i(k,k); that is, the covariance matrix P(k,k) of the fused estimate is no larger than the covariance matrix P_i(k,k) of any single sensor.
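As a hedged illustration of this covariance property, the Python sketch below combines per-sensor EKF estimates by information weighting, a standard track-to-track fusion form under the independence assumption stated above; since the patent's formula for the fused estimate appears only as an image, this is a sketch consistent with the surrounding text, not the patent's exact computation:

import numpy as np

def fuse_estimates(states, covariances):
    # states: list of (n,) state vectors x_i; covariances: list of (n, n) SPD matrices P_i.
    info = sum(np.linalg.inv(P) for P in covariances)  # sum of P_i^{-1}
    P_fused = np.linalg.inv(info)                      # fused covariance, no larger than any P_i
    x_fused = P_fused @ sum(np.linalg.inv(P) @ x for x, P in zip(states, covariances))
    return x_fused, P_fused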
In a specific embodiment, the above parking positioning method may be applied to a campus or a multi-story parking garage scene, where the carrying device is a Nuvo-6108GC industrial personal computer and the operating system environment is Ubuntu 16.04, which is not limited in this embodiment.
According to the technical scheme of this embodiment, data information is acquired, wherein the data information comprises depth vision synchronous positioning and mapping SLAM information, laser radar positioning information and inertial measurement unit IMU information; global map information is determined according to the depth vision SLAM information; and the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information are fused to obtain positioning information. In the embodiment of the invention, fusing the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information to obtain the positioning information solves the problem of inaccurate positioning when a single sensor is used in the vehicle parking positioning process, improves the robustness and accuracy of the AVP, realizes accurate parking, and improves the user experience.
Example two
Fig. 2 is a flowchart of a method for determining positioning information provided in the second embodiment. This embodiment is applicable to parking positioning situations, and the method may be performed by a positioning information determining device, which may be implemented in the form of hardware and/or software; in a specific embodiment, the positioning information determining device may be configured in an electronic device, and the electronic device may be a vehicle. Based on the above embodiment, the method in this embodiment specifically includes the following steps:
s201, acquiring environment sensing information through a SLAM sensor.
The environment sensing information is used to describe environment information around the vehicle, and is generally information such as other vehicles, obstacles, road conditions, etc. in the environment where the vehicle is located, which is not limited in this embodiment.
Specifically, the SLAM sensor acquires the environment sensing information through four camera channels: one camera is arranged directly in front of the vehicle body, one directly behind the vehicle, and the other two on the left and right sides of the vehicle body, so that environment sensing information around the entire vehicle is acquired.
By way of example, Hikvision fisheye cameras are generally adopted for the four channels. Such a camera has a short focal length and a large viewing angle, is very practical for shooting a large-scale environment at short range, easily obtains pictures with strong visual impact, and can effectively capture the perception information of the environment.
S202, determining transformed image information according to a first preprocessing algorithm and the environment sensing information.
The first preprocessing algorithm is an algorithm for preprocessing the environment sensing information, such as the inverse perspective mapping (Inverse Perspective Mapping, IPM) method. During the automatic or assisted driving of the vehicle, the images acquired by the SLAM sensor exhibit a perspective effect (lines that are parallel in reality appear to intersect in the image), so the IPM method is used to eliminate this perspective effect.
Specifically, according to the first preprocessing algorithm, the IPM method, an inverse perspective transformation is performed on the environment sensing information, eliminating the perspective effect whereby things that are parallel in reality intersect in the image, thereby determining the transformed image information.
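A minimal OpenCV sketch of such an inverse perspective transformation is given below; the four ground-plane corner points are placeholders that would in practice come from the camera's calibration, and the function is illustrative rather than the patent's implementation:

import cv2
import numpy as np

def inverse_perspective(img, src_pts, dst_size=(400, 600)):
    # src_pts: pixel coordinates of a ground-plane rectangle, ordered TL, TR, BR, BL.
    w, h = dst_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Homography mapping the ground rectangle onto a bird's-eye grid.
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(img, H, dst_size)

In the bird's-eye output, lane lines that converged in the camera image run parallel again.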
S203, determining depth vision SLAM information according to the transformed image information.
Specifically, after the transformed image information is determined, the transformed image information is input into a segmentation algorithm, thereby determining the depth vision SLAM information.
On the basis of the above embodiment, optionally, determining depth vision SLAM information according to the transformed image information includes determining first initial information based on a segmentation algorithm according to the transformed image information; and correcting the first initial information according to a loop monitoring algorithm, and determining depth vision SLAM information.
The segmentation algorithm adopts an improved U-Net deep neural network to extract semantic features from the transformed image information. The improved U-Net replaces the original 3×3 convolution operations with residual blocks, forming a residual network structure that is used for feature extraction and effectively prevents the network from overfitting.
The first initial information refers to running environment information of the vehicle during running.
A loop monitoring algorithm, also known as loop closure detection, refers to the ability to close the loop of the map upon recognizing that the vehicle has returned to a previously visited scene. In the SLAM mapping process, the visual odometer only considers key frames at adjacent times, and the errors generated along the way gradually accumulate into a drift; the result of such long-term estimation becomes unreliable. Therefore, potential loops are found through loop detection, and by correcting drift errors with them a globally consistent trajectory and map can be constructed.
Specifically, the transformed image information is input into the residual network structure, which performs segmentation and feature extraction on it, obtaining semantic information of the transformed image such as parking spaces, arrows, lane lines, deceleration strips, sidewalks, posts and obstacles; during feature extraction the semantic classes of the input are kept balanced so that the network remains convergent. After feature extraction, local map construction is performed on the segmented semantic information based on a visual odometer, and the first initial information is determined. After the first initial information is determined, its movement error is corrected based on the loop monitoring algorithm, thereby determining the depth vision SLAM information.
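For illustration, a minimal PyTorch sketch of a residual block of the kind described above (an identity shortcut around two 3×3 convolutions, standing in for a plain 3×3 convolution in a U-Net stage) follows; the channel layout and normalization choices are assumptions, since the patent specifies only the substitution itself:

import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity shortcut: the block learns a residual, which eases training
        # and helps prevent the overfitting/degradation noted above.
        return self.act(self.body(x) + x)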
S204, acquiring point cloud data information through a laser radar sensor.
Specifically, the laser radar sensor is arranged on the vehicle, so that the point cloud data information of the surrounding environment of the vehicle is directly obtained by the laser radar sensor.
For example, five lidar sensors may be deployed as the positioning sources for acquiring point cloud data information: one lidar sensor is arranged at the center of the roof, two at 45° positions at the front-left and front-right of the vehicle body, and the other two at 45° positions at the rear-left and rear-right of the vehicle body. After the five lidar sensors are deployed, a full view of the 360° environment around the vehicle can be formed, and the stitched 360° radar point cloud data is used as the point cloud data information.
The advantage of this arrangement is that, by forming a full view of the 360° environment around the vehicle and using the stitched 360° radar point cloud data as the point cloud data information, the accuracy of the point cloud data information is improved and parking accuracy is enhanced; the stitching step is sketched below.
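A minimal Python sketch of the stitching step follows: each sensor's points are mapped into the vehicle frame by its calibrated rigid transform (R, t) and the five clouds are concatenated. The calibration transforms are assumed inputs, not values from the patent:

import numpy as np

def stitch_clouds(clouds, extrinsics):
    # clouds: list of (N_i, 3) point arrays, one per lidar sensor.
    # extrinsics: list of (R, t) pairs, R a 3x3 rotation, t a 3-vector,
    # mapping each sensor frame into the common vehicle frame.
    merged = [pts @ R.T + t for pts, (R, t) in zip(clouds, extrinsics)]
    return np.vstack(merged)  # the stitched 360-degree point cloud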
S205, determining laser radar positioning information according to the second preprocessing algorithm and the point cloud data information.
The second preprocessing algorithm is an algorithm for preprocessing the point cloud data information, specifically, filtering the point cloud data information to make the point cloud sparse.
Specifically, the point cloud data information is filtered by the second preprocessing algorithm to thin the point cloud; the thinned point cloud data is then processed to determine the point cloud motion between two consecutive frames and hence the corresponding pose transfer relationship, yielding a corresponding odometer at a frequency of 10 Hz. Using this odometer, radar motion distortion is corrected under a uniform-motion assumption, and the laser radar positioning information is determined.
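By way of example, the sparsification step can be sketched as a voxel-grid filter that keeps one centroid per occupied voxel; the patent does not name the exact filter, so the implementation and voxel size below are assumptions:

import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float = 0.2) -> np.ndarray:
    # points: (N, 3) array. Returns one centroid per occupied voxel.
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)   # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1)      # count points per voxel
    return sums / counts[:, None]

The thinned cloud is then registered frame-to-frame to produce the 10 Hz odometer described above.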
S206, acquiring vehicle data information through the IMU sensor.
Wherein the vehicle data information includes vehicle acceleration and vehicle speed, or includes vehicle acceleration and a rotation matrix.
Specifically, the vehicle acceleration and the vehicle speed are acquired by the IMU sensor when the vehicle is in a non-rotating state, and the vehicle acceleration and the rotation matrix are acquired by the IMU sensor when the vehicle is in a rotating state.
S207, determining IMU information according to the vehicle data information.
Specifically, when the vehicle acceleration and the vehicle speed are acquired, the vehicle's self-positioning is estimated by dead reckoning (DR) from the vehicle acceleration and the vehicle speed, and the IMU information is thereby determined. Alternatively, when the vehicle acceleration and the rotation matrix are acquired, the pose transformation of the vehicle under high-speed rotation is determined through the rotation matrix, thereby determining the IMU information.
On the basis of the above embodiment, optionally: if the vehicle data information includes vehicle acceleration and vehicle speed, the IMU information is determined from v_t and a_t, where v_t is the vehicle speed at time t, a_t is the vehicle acceleration at time t, v_t = v_(t-1) + a_t·dt, and v_(t-1) is the vehicle speed at time t-1; if the vehicle data information includes a rotation matrix and vehicle acceleration, the IMU information is determined from R_t, Δp_t, Δp_(t-1), a_t and g, where R_t is the rotation matrix at time t, Δp_t is the position coordinate change value at time t, Δp_(t-1) is the position coordinate change value at time t-1, a_t is the vehicle acceleration at time t, and g is the gravitational acceleration.
The position coordinate change value is used to determine the change in the vehicle's position: Δp_t and Δp_(t-1) represent the changes of the vehicle's position in the world coordinate system at time t (the current time) and time t-1 (the previous time), respectively, and can be obtained by multiplying the computed speed by time.
Specifically, if the vehicle data information includes vehicle acceleration and vehicle speed, it is determined that the vehicle has no rotational motion, and the IMU information is computed from v_t and a_t as above; if the vehicle data information includes a rotation matrix and vehicle acceleration, it is determined that the vehicle has rotational motion, and the IMU information is computed from R_t, Δp_t, Δp_(t-1), a_t and g.
The advantage of this arrangement is that the IMU information is determined differently according to the motion state of the vehicle, with and without rotational motion respectively, which improves the robustness and accuracy of the AVP.
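A hedged Python sketch of the two branches follows. Because the patent's closed-form expressions appear only as images in the original, the position updates implemented here are standard dead-reckoning forms consistent with the stated variables (v_t, a_t, R_t, Δp, g) and should be read as assumptions, not the patent's exact formulas:

import numpy as np

G = np.array([0.0, 0.0, 9.81])  # gravitational acceleration, assumed z-up

def update_without_rotation(v_prev, a_t, dt):
    # No rotational motion: v_t = v_(t-1) + a_t * dt; displacement over the step is v_t * dt.
    v_t = v_prev + a_t * dt
    delta_p_t = v_t * dt
    return v_t, delta_p_t

def update_with_rotation(R_t, delta_p_prev, a_t, dt):
    # Rotational motion: rotate the previous displacement by R_t and add the
    # gravity-compensated acceleration contribution (assumed form).
    return R_t @ delta_p_prev + (a_t - G) * dt ** 2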
In the embodiment of the invention, there is no required execution order among S201-S203, S204-S205 and S206-S207: the three groups of steps may be executed in any order relative to one another, which is not limited in this embodiment.
S208, determining global map information according to the depth vision SLAM information.
Specifically, after the depth vision SLAM information is determined, feature points are extracted from it by a feature point method; global matching is then performed directly according to field-of-view overlap and the feature descriptors of the feature points, forming the global map information.
S209, determining target positioning data based on a filtering algorithm according to the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information.
The filtering algorithm is used to filter the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information; here the filtering algorithm is the EKF, which filters the target positioning information determined by fusing the depth vision SLAM information, the laser radar positioning information and the IMU information. The EKF handles nonlinearity through local linearization: the nonlinear prediction equation and observation equation are differentiated and linearized by tangent approximation.
Specifically, after the depth vision SLAM information, the laser radar positioning information and the IMU information are determined, clock synchronization is further performed on them, because they are acquired by different sensors. The clock synchronization is absolute time synchronization. Concretely, hardware synchronization and protocol synchronization were compared for each sensor, and hardware synchronization was finally adopted: the input and output interfaces of each sensor are connected to a domain controller, and when the sensors receive a rising edge by way of a trigger pulse signal or the like, the data of all sensors are collected at the same moment, finally realizing clock synchronization across the sensors. Then, the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information are filtered based on the filtering algorithm to determine the target positioning data.
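For illustration, once the sensors share hardware-triggered timestamps, aligning the streams reduces to nearest-timestamp association. The Python sketch below (tolerance value assumed) shows one way to pick, for each trigger time, the closest sample from a stream:

import bisect

def align(trigger_ts, stream_ts, tol=0.005):
    # trigger_ts, stream_ts: sorted lists of timestamps in seconds.
    # Returns, per trigger, the index of the nearest stream sample within tol, else None.
    out = []
    for t in trigger_ts:
        i = bisect.bisect_left(stream_ts, t)
        cands = [j for j in (i - 1, i) if 0 <= j < len(stream_ts)]
        j = min(cands, key=lambda k: abs(stream_ts[k] - t), default=None)
        out.append(j if j is not None and abs(stream_ts[j] - t) <= tol else None)
    return out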
S210, determining positioning information based on a fusion algorithm according to the target positioning data.
The target positioning data comprise first data after depth vision SLAM information passes through an EKF, second data after laser radar positioning information passes through the EKF and third data after IMU information passes through the EKF. The fusion algorithm is an algorithm for performing fusion processing on the first data, the second data, and the third data, such as convolution processing, and the embodiment is not limited thereto.
Specifically, after the target positioning data are determined (the first data corresponding to the depth vision SLAM information, the second data corresponding to the laser radar positioning information, and the third data corresponding to the IMU information), the three are fused, for example by convolution processing, thereby determining the positioning information.
According to the technical scheme of this embodiment, environment sensing information is obtained through the SLAM sensor; transformed image information is determined according to the first preprocessing algorithm and the environment sensing information; depth vision SLAM information is determined according to the transformed image information; point cloud data information is acquired through the laser radar sensor; laser radar positioning information is determined according to the second preprocessing algorithm and the point cloud data information; vehicle data information is acquired through the IMU sensor; IMU information is determined according to the vehicle data information; global map information is determined according to the depth vision SLAM information; target positioning data are determined based on the filtering algorithm according to the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information; and positioning information is determined based on the fusion algorithm according to the target positioning data. In the embodiment of the invention, the SLAM sensor, the laser radar sensor and the IMU sensor are used to acquire the depth vision SLAM information, the laser radar positioning information and the IMU information respectively; the target positioning data are determined based on the filtering algorithm, and the positioning information is determined based on the fusion algorithm according to the target positioning data. This solves the problem of inaccurate positioning when a single sensor is used in the parking positioning process, improves the robustness and accuracy of the AVP, and improves the user experience.
Example III
Fig. 3 is a schematic structural diagram of a positioning information determining apparatus provided in the third embodiment, applied to a vehicle, and including: an information acquisition module 301, an information determination module 302 and an information fusion module 303. Wherein,
the information acquisition module 301 is configured to acquire data information, where the data information includes depth vision synchronous positioning and mapping SLAM information, laser radar positioning information, and inertial measurement unit IMU information.
The information determining module 302 is configured to determine global map information according to the depth vision SLAM information.
The information fusion module 303 is configured to fuse global map information, depth vision SLAM information, laser radar positioning information and IMU information to obtain positioning information.
Optionally, the information obtaining module 301 is specifically configured to: acquire environment sensing information through a SLAM sensor; determine transformed image information according to a first preprocessing algorithm and the environment sensing information; and determine depth vision SLAM information from the transformed image information.
Optionally, the information obtaining module 301 is specifically configured to: determining first initial information based on a segmentation algorithm according to the transformed image information; and correcting the first initial information according to a loop monitoring algorithm, and determining depth vision SLAM information.
Optionally, the information obtaining module 301 is specifically configured to: acquiring point cloud data information through a laser radar sensor; and determining laser radar positioning information according to the second preprocessing algorithm and the point cloud data information.
Optionally, the information obtaining module 301 is specifically configured to: acquire vehicle data information through an IMU sensor, wherein the vehicle data information comprises vehicle acceleration and vehicle speed, or comprises vehicle acceleration and a rotation matrix; and determine IMU information according to the vehicle data information.
Optionally, the information obtaining module 301 is specifically configured to: if the vehicle data information includes vehicle acceleration and vehicle speed, determine the IMU information from v_t and a_t, where v_t is the vehicle speed at time t, a_t is the vehicle acceleration at time t, v_t = v_(t-1) + a_t·dt, and v_(t-1) is the vehicle speed at time t-1; if the vehicle data information includes a rotation matrix and vehicle acceleration, determine the IMU information from R_t, Δp_t, Δp_(t-1), a_t and g, where R_t is the rotation matrix at time t, Δp_t is the position coordinate change value at time t, Δp_(t-1) is the position coordinate change value at time t-1, a_t is the vehicle acceleration at time t, and g is the gravitational acceleration.
Optionally, the information fusion module 303 is specifically configured to: determining target positioning data based on a filtering algorithm according to global map information, depth vision SLAM information, laser radar positioning information and IMU information; and determining positioning information based on a fusion algorithm according to the target positioning data.
The positioning information determining device provided by the embodiment can execute the positioning information determining method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Example IV
Fig. 4 is a schematic diagram of the structure of an electronic device provided in the fourth embodiment, which is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the determination of positioning information.
In some embodiments, the method of determining positioning information may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the above-described method of determining positioning information may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the method of determining the positioning information in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described here above can be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special-purpose or general-purpose programmable processor and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of determining positioning information, characterized by being applied to a vehicle, the method comprising:
acquiring data information, wherein the data information comprises depth vision synchronous positioning and mapping SLAM information, laser radar positioning information and inertial measurement unit IMU information;
determining global map information according to the depth vision SLAM information;
and fusing the global map information, the laser radar positioning information, the depth vision SLAM information and the IMU information to obtain positioning information.
2. The method for determining positioning information according to claim 1, wherein the acquiring depth vision SLAM information includes:
acquiring environment sensing information through a SLAM sensor;
determining transformed image information according to a first preprocessing algorithm and the environment sensing information;
and determining the depth vision SLAM information according to the transformed image information.
3. The method for determining positioning information according to claim 2, wherein the determining the depth vision SLAM information from the transformed image information includes:
determining first initial information based on a segmentation algorithm according to the transformed image information;
and correcting the first initial information according to a loop monitoring algorithm, and determining the depth vision SLAM information.
4. The method for determining positioning information according to claim 1, wherein the acquiring laser radar positioning information includes:
acquiring point cloud data information through a laser radar sensor;
and determining the laser radar positioning information according to a second preprocessing algorithm and the point cloud data information.
5. The method for determining positioning information according to claim 1, wherein the acquiring IMU information includes:
acquiring vehicle data information through an IMU sensor, wherein the vehicle data information comprises vehicle acceleration and vehicle speed, or comprises vehicle acceleration and a rotation matrix;
and determining the IMU information according to the vehicle data information.
6. The method of determining positioning information according to claim 5, wherein the determining the IMU information from the vehicle data information includes:
if the vehicle data information includes the vehicle acceleration and the vehicle speed, the IMU information is determined from v_t and a_t, wherein v_t is the vehicle speed at time t, a_t is the vehicle acceleration at time t, v_t = v_(t-1) + a_t·dt, and v_(t-1) is the vehicle speed at time t-1;
if the vehicle data information includes the rotation matrix and the vehicle acceleration, the IMU information is determined from R_t, Δp_t, Δp_(t-1), a_t and g, wherein R_t is the rotation matrix at time t, Δp_t is the position coordinate change value at time t, Δp_(t-1) is the position coordinate change value at time t-1, a_t is the vehicle acceleration at time t, and g is the gravitational acceleration.
7. The method for determining positioning information according to claim 1, wherein the fusing the global map information, the laser radar positioning information, the depth vision SLAM information and the IMU information to obtain positioning information includes:
determining target positioning data based on a filtering algorithm according to the global map information, the laser radar positioning information, the depth vision SLAM information and the IMU information;
and determining the positioning information based on the fusion algorithm according to the target positioning data.
8. A positioning information determining apparatus, characterized by being applied to a vehicle, comprising:
the information acquisition module is used for acquiring data information, wherein the data information comprises depth vision synchronous positioning and mapping SLAM information, laser radar positioning information and inertial measurement unit IMU information;
the information determining module is used for determining global map information according to the depth vision SLAM information;
and the information fusion module is used for fusing the global map information, the depth vision SLAM information, the laser radar positioning information and the IMU information to obtain positioning information.
9. An electronic device, the electronic device comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of determining positioning information according to any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the method of determining positioning information according to any one of claims 1-7.
CN202310618457.8A 2023-05-29 2023-05-29 Positioning information determining method and device, electronic equipment and storage medium Pending CN117168470A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310618457.8A CN117168470A (en) 2023-05-29 2023-05-29 Positioning information determining method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310618457.8A CN117168470A (en) 2023-05-29 2023-05-29 Positioning information determining method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117168470A true CN117168470A (en) 2023-12-05

Family

ID=88941962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310618457.8A Pending CN117168470A (en) 2023-05-29 2023-05-29 Positioning information determining method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117168470A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination