CN115388906A - Pose determination method and device, electronic equipment and storage medium - Google Patents

Pose determination method and device, electronic equipment and storage medium

Info

Publication number
CN115388906A
CN115388906A (application CN202211028797.7A)
Authority
CN
China
Prior art keywords
particle
state
lane line
pose
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211028797.7A
Other languages
Chinese (zh)
Inventor
林敏捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Anting Horizon Intelligent Transportation Technology Co ltd
Original Assignee
Shanghai Anting Horizon Intelligent Transportation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Anting Horizon Intelligent Transportation Technology Co ltd
Priority to CN202211028797.7A
Publication of CN115388906A
Priority to PCT/CN2023/113598 (WO2024041447A1)
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/3446 - Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 - Map- or contour-matching

Abstract

The embodiment of the disclosure discloses a pose determination method, a pose determination device, electronic equipment and a storage medium, wherein the method comprises the following steps: determining a first transverse state and a first course angle state corresponding to each first particle at the current moment based on the first particle poses of m first particles corresponding to the movable equipment; determining a first matching score and a second matching score corresponding to each first particle respectively based on the perception lane line information; updating the first transverse weight of the corresponding first particle based on each first matching score to obtain a second transverse weight; updating the first course angle weight of the corresponding first particle based on each second matching score to obtain a second course angle weight; and determining the current positioning pose of the movable equipment based on the second transverse weights, the second course angle weights, the first transverse states and the first course angle states. The method takes both the transverse positioning precision and the course angle positioning precision into account and effectively improves the positioning performance.

Description

Pose determination method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to positioning technologies, and in particular, to a method and an apparatus for determining a pose, an electronic device, and a storage medium.
Background
The vehicle-mounted positioning system based on visual perception and high-precision maps realizes real-time high-precision positioning of the vehicle end through a system state estimation technology. The system state estimation technology based on particle filtering is a common state estimation method in vehicle-mounted positioning systems due to its low computation cost, high stability and other characteristics. Differences in how the system state is set and in how the state is observed and updated directly affect the estimation result of the filter, i.e., the vehicle positioning performance. In the visual positioning system of the related art, the state estimation generally sets the position and the posture of the vehicle as the system state for real-time prediction and update, generally matches the perceived lane line with the lane line in the high-precision map, calculates the associated matching score between the perceived lane line and the map, then updates the state of the filter, and finally determines the positioning result of the vehicle based on the state of the filter. However, it is difficult for such a conventional filter to achieve both the lateral positioning accuracy and the course angle positioning accuracy.
Disclosure of Invention
The disclosure is provided for solving the technical problem that the transverse positioning precision and the course angle positioning precision are difficult to be considered simultaneously. The embodiment of the disclosure provides a pose determination method and device, electronic equipment and a storage medium.
According to an aspect of the embodiments of the present disclosure, there is provided a pose determination method, including: determining first particle poses of m first particles corresponding to the movable device, wherein m is an integer larger than 1, and the first particle poses are obtained at the previous moment and correspond to the first particles; determining a first transverse state and a first course angle state respectively corresponding to each first particle at the current moment based on the pose of each first particle; determining a first matching score and a second matching score corresponding to each first particle respectively based on the perception lane line information; updating the first transverse weight of the corresponding first particle based on each first matching score to obtain a second transverse weight corresponding to each first particle; updating the first course angle weight of the corresponding first particle based on each second matching score to obtain a second course angle weight corresponding to each first particle; and determining the current positioning pose of the movable equipment based on the second transverse weight, the second course angle weight, the first transverse state and the first course angle state respectively corresponding to the first particles.
According to another aspect of the embodiments of the present disclosure, there is provided an apparatus for determining a pose, including: a first determining module, configured to determine first particle poses of m first particles corresponding to the movable device, where m is an integer greater than 1, and the first particle poses are obtained at previous time instants and correspond to the first particles; the first processing module is used for determining a first transverse state and a first course angle state respectively corresponding to the current moment of each first particle based on the pose of each first particle; the second processing module is used for determining a first matching score and a second matching score which correspond to each first particle respectively based on the perception lane line information; a third processing module, configured to update first lateral weights of the corresponding first particles based on the first matching scores, respectively, so as to obtain second lateral weights corresponding to the first particles, respectively; the fourth processing module is used for updating the first course angle weight of the corresponding first particle based on each second matching score to obtain a second course angle weight corresponding to each first particle; and the fifth processing module is used for determining the current positioning pose of the movable equipment based on the second transverse weight, the second course angle weight, the first transverse state and the first course angle state which are respectively corresponding to the first particles.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the pose determination method according to any one of the above-described embodiments of the present disclosure.
According to still another aspect of an embodiment of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; the processor is configured to read the executable instructions from the memory and execute the instructions to implement the pose determination method according to any of the above embodiments of the present disclosure.
Based on the pose determination method, the pose determination device, the electronic equipment and the storage medium provided by the embodiments of the disclosure, the lateral state and the heading angle state of the particles are separated and different weights are adopted for them in the positioning process, so that both the lateral positioning precision and the heading angle positioning precision are taken into account and the positioning performance is effectively improved.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally indicate like parts or steps.
Fig. 1 is an exemplary application scenario of the pose determination method provided by the present disclosure;
fig. 2 is a flowchart illustrating a pose determination method according to an exemplary embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a pose determination method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a flowchart of step 203 provided by an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic flow chart of step 2022a provided by one exemplary embodiment of this disclosure;
FIG. 6 is a flowchart of step 202 provided by an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a first grid coordinate region provided by an exemplary embodiment of the present disclosure;
FIG. 8 is a second particle mapping flow diagram provided by an exemplary embodiment of the present disclosure;
FIG. 9 is a flowchart of step 202 provided by another exemplary embodiment of the present disclosure;
fig. 10 is a flowchart illustrating a pose determination method according to still another exemplary embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of a pose determination apparatus provided by an exemplary embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a second processing module 503 according to an exemplary embodiment of the disclosure;
fig. 13 is a schematic structural diagram of a fifth processing module 506 according to an exemplary embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of a first processing module 502 provided in an exemplary embodiment of the present disclosure;
fig. 15 is a schematic structural diagram of a first processing module 502 according to another exemplary embodiment of the disclosure;
fig. 16 is a schematic structural diagram of an application example of the electronic device of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some of the embodiments of the present disclosure, and not all of the embodiments of the present disclosure, and it is to be understood that the present disclosure is not limited by the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning or any necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the embodiments in the present disclosure emphasizes the differences between the embodiments, and the same or similar parts may be referred to each other, and are not repeated for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Embodiments of the disclosure may be implemented in electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices, such as terminal devices, computer systems, servers, and the like, include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set top boxes, programmable consumer electronics, network pcs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the disclosure
In the process of implementing the present disclosure, the inventor finds that the vehicle-mounted positioning system based on visual perception and high-precision maps realizes real-time high-precision positioning of the vehicle end through a system state estimation technology. The system state estimation technology based on particle filtering has become a common state estimation method in vehicle-mounted positioning systems due to its low computation cost, high stability and other characteristics. Differences in how the system state is set and in how the state is observed and updated directly affect the estimation result of the filter, i.e., the vehicle positioning performance. In the visual positioning system of the related art, the state estimation generally sets the position and the posture of the vehicle as the system state for real-time prediction and update, generally matches the perceived lane line with the lane line in the high-precision map, calculates the associated matching score between the perceived lane line and the map, then updates the state of the filter, and finally determines the positioning result of the vehicle based on the state of the filter. However, it is difficult for such a conventional filter to achieve both the lateral positioning accuracy and the course angle positioning accuracy.
Brief description of the drawings
Fig. 1 is an exemplary application scenario of the pose determination method provided by the present disclosure.
For a high-precision positioning scene of a movable device, taking a vehicle as an example, when the vehicle needs to be positioned with high precision, the initial pose of the vehicle in a navigation map needs to be determined first, and high-precision positioning is then carried out during the subsequent movement of the vehicle based on the initial pose. In fig. 1, x and y represent the x-axis and y-axis of the vehicle coordinate system, respectively; the y direction is the lateral direction and the x direction is the longitudinal direction. When the vehicle is positioned, the observation result of the lane line needs to be combined, where the observation result refers to the matching result between the lane line recognized from the environment image acquired by the camera (namely, the perceived lane line) and the lane line in the map. The method of the present disclosure determines, based on the perceived lane line, a first matching score and a second matching score that are respectively used to update the lateral weight and the heading angle weight of each particle in the particle filter, so that the lateral state and the heading angle state are updated with different weights, and the pose of the movable device is then determined based on the updated states. In this way, both the lateral positioning and the heading angle positioning achieve higher precision, which solves the problem that the prior art cannot take the lateral positioning precision and the heading angle positioning precision into account at the same time. In the embodiment of the present disclosure, the pose includes three degrees of freedom, i.e., a lateral coordinate component Y (lat), a longitudinal coordinate component X (lon), and a heading angle θ (yaw).
Exemplary method
Fig. 2 is a flowchart illustrating a pose determination method according to an exemplary embodiment of the present disclosure. The embodiment can be applied to an electronic device, specifically, for example, a vehicle-mounted computing platform, as shown in fig. 2, and includes the following steps:
step 201, determining first particle poses of m first particles corresponding to the movable device, where m is an integer greater than 1, and the first particle poses are obtained at previous time.
The movable device may be a vehicle, a robot, or the like, and is not limited specifically. The first particle pose of the first particle is the pose of the particle in the particle filter determined in the positioning process at the previous time, and the specific first particle number m may be set according to actual requirements, which is not limited in the present disclosure. The previous time may be any time before the current time, such as the time before the current time, or a certain time spaced from the current time by a certain time, and may be specifically determined according to an actual requirement.
Step 202, determining a first transverse state and a first course angle state respectively corresponding to the current time of each first particle based on the pose of each first particle.
Wherein the first lateral state and the first heading angle state of the first particle at the current time are the lateral state and the heading angle state of the predicted pose of the first particle at the current time relative to the predicted pose of the movable device at the current time. The predicted particle pose of the first particle at the current moment can be obtained by adopting any practicable prediction mode. Similarly, the predicted pose of the mobile device at the current moment can also be obtained by adopting any practicable prediction mode. For example, based on the pose of the first particle and the odometer information of the movable device, a corresponding motion model is used for prediction. The Motion Model may be set according to actual requirements, such as an odometer-based Motion Model (odometer Sample Motion Model) or other implementable Motion models, which is not limited in the present disclosure.
Step 203, determining a first matching score and a second matching score corresponding to each first particle based on the information of the perceived lane line.
The perceived lane line information is determined based on the acquired environment image in the perception stage, and may include perceived lane line parameters corresponding to at least one lane line, such as c0, c1, c2 and c3, the coefficients of the cubic curve equation that parameterizes the fitted lane line. The specific perceived lane line information can be set according to actual requirements. The first matching score and the second matching score are matching scores determined for the lateral state and the heading angle state respectively: the first matching score corresponds to the lateral state, and the second matching score corresponds to the heading angle state. Corresponding rules may be set for the two scores, for example a first preset rule for determining the first matching score and a second preset rule for determining the second matching score. The first preset rule and the second preset rule can be set according to actual requirements. The guiding principle is that, relative to the movable device, perceived lane line sampling points at a close distance contribute more to the first matching score while sampling points at middle and long distances contribute less, so that the lateral positioning accuracy is higher; conversely, perceived lane line sampling points at long distances contribute more to the second matching score while sampling points at close distances contribute less, so that the heading angle positioning accuracy is higher.
In the prior art, the lateral state and the heading angle state are updated with the same weight, and that weight is determined based on a single total matching score of the lane lines. Because the geometry of the lane line obtained by visual perception differs somewhat from that of the lane line in the map, the matching score used when updating the particle state has the following characteristics: when the matching scores of close-range perceived lane line sampling points contribute more to the total matching score, the lateral positioning accuracy is higher, and when the matching scores of middle- and long-range sampling points contribute more, the heading angle positioning accuracy is higher. In this trade-off, it is therefore difficult to take the lateral (lat) positioning accuracy and the heading angle (yaw) positioning accuracy of the vehicle into account at the same time. The present method updates the lateral state and the heading angle state with different matching scores respectively, so that both the lateral positioning accuracy and the heading angle positioning accuracy can be taken into account, which solves this problem of the prior art.
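As a minimal illustration of this principle (not the disclosure's own formulas), the sketch below pairs a near-favoring distance weight for the lateral score with the far-favoring weight G(x) = x^0.3 that appears later for the heading-angle score; the near-favoring form, the clipping constant and the function names are assumptions.

```python
import numpy as np

def lateral_score_weight(x):
    # Assumed near-favoring weight: larger for sampling points close to the
    # vehicle (small longitudinal x), smaller for mid/far points, so that the
    # lateral score is dominated by nearby points.
    return 1.0 / np.power(np.maximum(x, 1e-3), 0.3)

def heading_score_weight(x):
    # Far-favoring weight G(x) = x^0.3 (given later in the disclosure): larger
    # for mid/far sampling points, so that the heading-angle score is dominated
    # by distant points.
    return np.power(np.maximum(x, 0.0), 0.3)
```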
Step 204, updating the first transverse weights of the corresponding first particles respectively based on the first matching scores to obtain second transverse weights corresponding to the first particles respectively.
Wherein the first lateral weight of the first particle is a weight corresponding to the first lateral state, and the first lateral weight may be determined and stored in a localization process at a previous time. Updating the first lateral weight based on the first matching score means adjusting the weight of the first lateral state based on the matching condition at the current time, and the specific updating rule may be set according to actual requirements. For example, when the matching effect is better, the first lateral weight of the first particle may be increased to obtain a corresponding updated second lateral weight. When the matching effect is poor, the first lateral weight of the first particle may be reduced, and details are not repeated.
Step 205, based on each second matching score, updating the first heading angle weight of the corresponding first particle, respectively, to obtain a second heading angle weight corresponding to each first particle, respectively.
Wherein the first heading angle weight of the first particle is a weight corresponding to the first heading angle state, and the first heading angle weight is also determined and stored in the positioning process at the previous moment. The update principle of the first course angle weight is similar to that of the first lateral weight, and is not described herein again.
There is no fixed execution order between step 204 and step 205.
Step 206, determining the current positioning pose of the movable equipment based on the second transverse weight, the second course angle weight, the first transverse state and the first course angle state respectively corresponding to each first particle.
Because each first particle simulates the pose of the movable equipment, after the second transverse weight and the second course angle weight corresponding to each first particle and the first transverse state and the first course angle state of each first particle at the current moment are determined, the current positioning pose of the movable equipment can be determined based on the second transverse weight, the second course angle weight, the first transverse state and the first course angle state corresponding to each first particle. Because the pose includes three components, i.e., the lateral component, the longitudinal component and the heading angle, the longitudinal state at the current time and its corresponding weight can be determined by any implementable filtering method, for example a histogram filter, which is not limited in the disclosure.
In the method for determining the pose provided by this embodiment, different matching scores are respectively determined for the lateral state and the heading angle state of each particle in the positioning process, and are respectively used for updating the lateral weight and the heading angle weight, so that the lateral state and the heading angle state of the particle have different weights respectively, and the lateral state and the heading angle state are separated, thereby achieving the consideration of the lateral positioning accuracy and the heading angle positioning accuracy, and effectively improving the positioning performance.
Fig. 3 is a flowchart illustrating a pose determination method according to an exemplary embodiment of the present disclosure.
In an optional example, the determining, in step 203, a first matching score and a second matching score corresponding to each first particle based on the perceived lane line information may specifically include the following steps:
step 2031, based on the sensed lane line information, determining lane line sampling points in the vehicle coordinate system.
Wherein, the perceived lane line information is the lane line information determined based on the collected environment image in the perception stage, and may include perceived lane line parameters such as c0, c1, c2 and c3, the coefficients of the cubic curve equation that parameterizes the fitted lane line. The specific perceived lane line information can be set according to actual requirements. The vehicle coordinate system is the vehicle coordinate system with the center of the vehicle rear axle as the origin, and details are not repeated. The lane line sampling points under the vehicle coordinate system are obtained by sampling the perceived lane lines under the vehicle coordinate system based on the perceived lane line information. The specific sampling mode can be set according to actual requirements. For example, under an image coordinate system (with the upper left corner of the image as the origin, the u-axis pointing right and the v-axis pointing down), sampling is performed along the v direction at a preset sampling interval, starting from the center of the perception image corresponding to the perceived lane line information, to obtain N original sampling points. The N original sampling points are projected onto the x-axis of the vehicle coordinate system through the camera intrinsic and extrinsic parameters and the mapping relationship between the image coordinate system and the vehicle coordinate system, giving N x-axis coordinates, where the ith x-axis coordinate is expressed as x_i^t. The lane line sampling points under the vehicle coordinate system corresponding to the N original sampling points can then be determined based on the perceived lane line information, with the ith (i = 1, 2, ..., N) lane line sampling point represented as:

P_i^t = (x_i^t, c0 + c1*x_i^t + c2*(x_i^t)^2 + c3*(x_i^t)^3)

where t denotes the current time and c0, c1, c2, c3 represent the perceived lane line parameters, i.e. the coefficients of the cubic curve equation of the lane line as described above.
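A minimal sketch of this sampling step, assuming the x-axis coordinates of the N original sampling points have already been obtained through the camera projection described above (the function name and array layout are illustrative):

```python
import numpy as np

def sample_lane_line(c0, c1, c2, c3, x_coords):
    """Lane line sampling points (x, y) in the vehicle coordinate system from
    the cubic lane-line model y = c0 + c1*x + c2*x^2 + c3*x^3."""
    x = np.asarray(x_coords, dtype=float)
    y = c0 + c1 * x + c2 * x**2 + c3 * x**3
    return np.stack([x, y], axis=1)  # shape (N, 2): the points P_i^t
```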
Step 2032, respectively using each first particle as a movable device, and transforming the first map lane line to the vehicle coordinate system to obtain a second map lane line in the vehicle coordinate system corresponding to each first particle.
The first map lane line is a lane line in the high-precision map, may include related information of at least one lane line, and may be specifically set according to actual requirements. For example, a local map of a certain range around a vehicle positioning pose may be obtained according to the positioning pose of the vehicle at a previous time, and a lane line in the local map may be obtained as a first map lane line, where a first particle is taken as a movable device to represent that the first particle is a movable device, so that a predicted particle pose (which may be referred to as a first predicted particle pose) at the current time of the first particle is taken as a pose of the movable device to establish a vehicle coordinate system of the movable device, specifically, a position component (X, Y) in the first predicted particle pose is taken as an origin of the vehicle coordinate system of the movable device, and an X-axis direction and a Y-axis direction of the vehicle coordinate system are determined according to a heading angle (pose component) in the first predicted particle pose. And converting the first map lane line into the vehicle coordinate system according to the mapping relation between the vehicle coordinate system and the map coordinate system which is obtained in advance, and obtaining second map lane lines under the vehicle coordinate system corresponding to the first particles respectively.
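A hedged sketch of this transformation, assuming the first map lane line is given as 2D points in the map frame and the first predicted particle pose is (x, y, yaw) in that same frame; the second map lane line is then obtained by the inverse rigid transform:

```python
import numpy as np

def map_lane_to_vehicle(map_points, particle_pose):
    """Transform map-frame lane points of shape (N, 2) into the vehicle
    coordinate system defined by particle_pose = (px, py, yaw)."""
    px, py, yaw = particle_pose
    c, s = np.cos(yaw), np.sin(yaw)
    d = np.asarray(map_points, dtype=float) - np.array([px, py])
    # Rotate the offsets by -yaw (row-vector form of R(yaw)^T applied to d).
    return d @ np.array([[c, -s], [s, c]])
```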
Step 2033, determining first matching scores respectively corresponding to the first particles based on the lane line sampling points, the second map lane lines respectively corresponding to the first particles, and the first preset rule.
The first preset rule is a matching score determining rule of the lane line sampling points corresponding to the transverse state, and can be set according to actual requirements, and the principle is that the matching score of the lane line sampling points at a close distance is relatively high, and the matching score of the lane line sampling points at a medium and long distance is relatively low, so that the transverse positioning precision is improved.
For each first particle, the minimum transverse distance between each lane line sampling point and the second map lane line can be determined based on each lane line sampling point and the second map lane line, the score of the first sampling point of each lane line sampling point is determined based on the minimum transverse distance, the preset maximum transverse distance threshold value and the first preset rule, and the first matching score corresponding to the first particle is determined based on the score of the first sampling point corresponding to each lane line sampling point. The minimum transverse distance between the lane line sampling point and the second map lane line is the transverse distance between the lane line sampling point and the nearest lane line in the second map lane line. The first matching score corresponding to the first particle may be a sum of the first sampling point scores of the lane line sampling points, and may be specifically set according to actual requirements.
Step 2034, determining second matching scores respectively corresponding to the first particles based on the lane line sampling points, the second map lane lines respectively corresponding to the first particles and a second preset rule.
The second preset rule is a matching score determination rule of the lane line sampling points corresponding to the course angle state, and can be set according to actual requirements, and the principle is that the matching score of the lane line sampling points at a short distance is relatively low, and the matching score of the lane line sampling points at a medium and long distance is relatively high, so that the course angle positioning accuracy is improved.
For each first particle, the minimum transverse distance between each lane line sampling point and the second map lane line can be determined based on each lane line sampling point and the second map lane line, the score of the second sampling point of each lane line sampling point is determined based on the minimum transverse distance, the preset maximum transverse distance threshold value and the second preset rule, and the second matching score corresponding to the first particle is determined based on the score of the second sampling point corresponding to each lane line sampling point. The minimum transverse distance between the lane line sampling point and the second map lane line is the transverse distance between the lane line sampling point and the nearest lane line in the second map lane line. The second matching score corresponding to the first particle may be a sum of the second sampling point scores of the lane line sampling points, and may be specifically set according to actual requirements.
Fig. 4 is a schematic flowchart of step 203 provided by an exemplary embodiment of the present disclosure.
In an optional example, the determining, in step 2033, first matching scores respectively corresponding to the first particles based on the lane line sampling point, the second map lane lines respectively corresponding to the first particles, and the first preset rule includes:
step 20331, for each first particle, based on the lane line sampling points and the second map lane line, determining the minimum lateral distances between the lane line sampling points and the second map lane line, respectively.
The minimum transverse distance between the lane line sampling point and the second map lane line is the transverse distance between the lane line sampling point and the nearest lane line in the second map lane line.
Step 20332, determining the scores of the first sampling points corresponding to the sampling points of the lane lines respectively based on the minimum transverse distances, the preset maximum transverse distance threshold and the first preset rule.
The preset maximum transverse distance threshold value can be set according to actual requirements, and the method is not limited in the disclosure. The first sampling point score refers to a matching score at the lane line sampling point for the lateral state.
Step 20333, based on the scores of the first sampling points corresponding to the lane line sampling points, determining the total score of the first sampling points of the lane line sampling points as the first matching score corresponding to the first particle.
For example, the first preset rule may be specified by a first matching score weight F(x), where x represents the x-axis coordinate in the vehicle coordinate system. F(x) is chosen so that the first matching score weight is greater when the lane line sampling point is closer to the origin of the vehicle coordinate system and smaller when the lane line sampling point is farther from the origin. The first matching score weight of the ith lane line sampling point P_i^t is F(x_i^t), and based on this weight a first sampling point score for the ith lane line sampling point may be determined.

For each first particle, the minimum lateral distance between each lane line sampling point and the second map lane line can be determined based on each lane line sampling point and the second map lane line; the first sampling point score of each lane line sampling point is then determined based on the minimum lateral distance, the preset maximum lateral distance threshold and the first matching score weight; and the first matching score corresponding to the first particle is determined based on the first sampling point scores corresponding to the lane line sampling points. The first matching score corresponding to the first particle may be the sum of the first sampling point scores of the lane line sampling points, and may be specifically set according to actual requirements.

Exemplarily, the first sampling point score of the ith lane line sampling point P_i^t for the l-th first particle is determined from F(x_i^t), the minimum lateral distance d_{i,l}^t between the sampling point and the second map lane line, and the preset maximum lateral distance threshold g, where t represents the current time, l (l = 1, 2, ..., m) represents the l-th first particle, m represents the number of first particles, and m is a positive integer.

For the l-th first particle, summing the first sampling point scores of the N lane line sampling points gives the first matching score M_{lat,l}^t corresponding to that first particle.
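A hedged sketch of the first matching score for one first particle. The per-point score below (a weighted, clipped function of the minimum lateral distance and the threshold g) and the assumption that each map polyline is sampled with increasing x are illustrative choices, not the disclosure's exact formula; lateral_score_weight is the near-favoring weight F from the earlier sketch.

```python
import numpy as np

def first_matching_score(sample_points, map_lane_polylines, g, lateral_score_weight):
    """First (lateral) matching score of one first particle.

    sample_points: (N, 2) perceived lane-line sampling points, vehicle frame.
    map_lane_polylines: list of (K, 2) map lane lines already transformed into
        this particle's vehicle frame, each sorted by increasing x.
    g: preset maximum lateral distance threshold.
    """
    total = 0.0
    for x, y in sample_points:
        # Minimum lateral distance to any second map lane line at this x.
        d = min(abs(y - np.interp(x, line[:, 0], line[:, 1]))
                for line in map_lane_polylines)
        # Assumed per-point score, weighted toward near-range points.
        total += lateral_score_weight(x) * max(0.0, g - min(d, g)) / g
    return total
```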
In an optional example, the determining, in step 2034, a second matching score corresponding to each first particle based on the lane line sampling point, the second map lane line corresponding to each first particle, and the second preset rule includes:
step 20341, for each first particle, based on the lane line sampling points and the second map lane line, determining the minimum lateral distances between the lane line sampling points and the second map lane line, respectively.
This step is consistent with step 20331 described above and is not described further herein; in practical applications, this step may share the result of step 20331.
Step 20342, determining scores of second sampling points corresponding to the sampling points of the lane lines respectively based on the minimum transverse distances, the preset maximum transverse distance threshold and a second preset rule.
And the second sampling point score refers to the matching score at the lane line sampling point for the course angle state.
Step 20343, determining the total score of the second sampling points of the lane line sampling points as the second matching score corresponding to the first particle based on the scores of the second sampling points corresponding to the lane line sampling points.
For example, the second preset rule may be set as:

G(x) = x^0.3

where x represents the x-axis coordinate in the vehicle coordinate system and G(x) may be referred to as the second matching score weight. It can be seen that the second matching score weight is smaller when the lane line sampling point is closer to the origin of the vehicle coordinate system and greater when the lane line sampling point is farther from the origin. The second matching score weight of the ith lane line sampling point P_i^t is G(x_i^t), and based on this weight a second sampling point score for the ith lane line sampling point may be determined.

For each first particle, the minimum lateral distance between each lane line sampling point and the second map lane line can be determined based on each lane line sampling point and the second map lane line; the second sampling point score of each lane line sampling point is then determined based on the minimum lateral distance, the preset maximum lateral distance threshold and the second matching score weight; and the second matching score corresponding to the first particle is determined based on the second sampling point scores corresponding to the lane line sampling points. The minimum lateral distance between a lane line sampling point and the second map lane line refers to the lateral distance between the lane line sampling point and the nearest lane line in the second map lane line. The second matching score corresponding to the first particle may be the sum of the second sampling point scores of the lane line sampling points, and may be specifically set according to actual requirements.

Exemplarily, the second sampling point score of the ith lane line sampling point P_i^t for the l-th first particle is determined from G(x_i^t), the minimum lateral distance d_{i,l}^t between the sampling point and the second map lane line, and the preset maximum lateral distance threshold g, where t represents the current time, l (l = 1, 2, ..., m) represents the l-th first particle, m represents the number of first particles, and m is a positive integer.

For the l-th first particle, summing the second sampling point scores of the N lane line sampling points gives the second matching score M_{yaw,l}^t corresponding to that first particle.
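By symmetry, a hedged sketch of the second matching score for one first particle, reusing the same assumed clipped per-point score but weighting it with G(x) = x^0.3 so that middle- and long-range sampling points dominate:

```python
import numpy as np

def second_matching_score(sample_points, map_lane_polylines, g):
    """Second (heading-angle) matching score of one first particle; the
    per-point score form is assumed, the weight G(x) = x^0.3 is as above."""
    total = 0.0
    for x, y in sample_points:
        d = min(abs(y - np.interp(x, line[:, 0], line[:, 1]))
                for line in map_lane_polylines)
        total += (max(x, 0.0) ** 0.3) * max(0.0, g - min(d, g)) / g
    return total
```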
The first lateral weight of the l-th first particle is expressed as w_{lat,l}^{t-1} and the first heading angle weight as w_{yaw,l}^{t-1}. They are initialized as follows:

w_{lat,l}^{t-1} = w_l^{t-1}
w_{yaw,l}^{t-1} = w_l^{t-1}

where w_l^{t-1} represents the weight of the first particle determined by the localization procedure at the previous time.

After the first matching score M_{lat,l}^t and the second matching score M_{yaw,l}^t corresponding to the first particle are obtained, the first lateral weight w_{lat,l}^{t-1} and the first heading angle weight w_{yaw,l}^{t-1} of the first particle may be updated to obtain the second lateral weight w_{lat,l}^t and the second heading angle weight w_{yaw,l}^t corresponding to the first particle: the second lateral weight is obtained by updating the first lateral weight with the first matching score M_{lat,l}^t, and the second heading angle weight is obtained by updating the first heading angle weight with the second matching score M_{yaw,l}^t together with an observation score M_G of a GNSS sensor.

M_G is determined from GNSS_yaw, the heading angle output by the GNSS sensor, and the first heading angle state corresponding to the first particle. The values 0.001 and 10 used in M_G are preset values and can be adjusted according to actual requirements.
In this way, the method updates the particle lateral weight and the heading angle weight with different matching scores: short-range lane line sampling points contribute more to the lateral weight, while middle- and long-range lane line sampling points contribute more to the heading angle weight, thereby ensuring both the lateral positioning accuracy and the heading angle positioning accuracy.
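A hedged sketch of the two separate weight updates. The multiplicative form and the exponential expression for the GNSS observation score M_G are assumptions made for illustration; only the use of the first score for the lateral weight, the second score plus M_G for the heading-angle weight, and the preset values 0.001 and 10 come from the description above.

```python
import numpy as np

def update_particle_weights(w_lat_prev, w_yaw_prev, m_lat, m_yaw,
                            gnss_yaw, heading_state):
    """Second lateral weight and second heading-angle weight of one particle."""
    # Assumed GNSS observation score: close to 1 for a small heading residual,
    # floored at the preset value 0.001; 10 is used as a preset scale factor.
    residual = np.arctan2(np.sin(gnss_yaw - heading_state),
                          np.cos(gnss_yaw - heading_state))
    m_g = max(0.001, float(np.exp(-10.0 * abs(residual))))

    w_lat = w_lat_prev * m_lat          # update with the first matching score
    w_yaw = w_yaw_prev * m_yaw * m_g    # update with the second score and M_G
    return w_lat, w_yaw
```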
In an optional example, the determining 206 a current positioning pose of the movable device based on the second lateral weight, the second heading angle weight, the first lateral state, and the first heading angle state respectively corresponding to the first particles includes:
step 2061, clustering each first particle to obtain a first number of clusters.
The clustering may adopt any practicable clustering algorithm, for example, a K-means clustering algorithm, and is not particularly limited. The first number may be set according to actual requirements, or determined according to a specific clustering algorithm, which is not limited specifically. Each cluster includes at least one first particle therein.
Step 2062, for each cluster, determining a second lateral state, a second heading angle state and a third lateral weight corresponding to the cluster based on the second lateral weight, the second heading angle weight, the first lateral state and the first heading angle state corresponding to the first particles in the cluster.
The second lateral state corresponding to the cluster is obtained by weighted averaging the first lateral states of the first particles in the cluster according to their second lateral weights. Similarly, the second heading angle state corresponding to the cluster is obtained by weighted averaging the first heading angle states of the first particles in the cluster according to their second heading angle weights. The third lateral weight corresponding to a cluster may be the sum of the second lateral weights of the first particles in the cluster.
Illustratively, suppose the s-th (s = 1, 2, ..., S, where S denotes the number of clusters) cluster contains F first particles, and the f-th (f = 1, 2, ..., F) first particle in this cluster has a first lateral state Y_{s,f}^t, a first heading angle state θ_{s,f}^t, a second lateral weight w_{lat,s,f}^t and a second heading angle weight w_{yaw,s,f}^t. The second lateral state Y_s^t, the second heading angle state θ_s^t and the third lateral weight W_{lat,s}^t corresponding to the cluster are respectively:

Y_s^t = ( Σ_{f=1}^{F} w_{lat,s,f}^t * Y_{s,f}^t ) / Σ_{f=1}^{F} w_{lat,s,f}^t
θ_s^t = ( Σ_{f=1}^{F} w_{yaw,s,f}^t * θ_{s,f}^t ) / Σ_{f=1}^{F} w_{yaw,s,f}^t
W_{lat,s}^t = Σ_{f=1}^{F} w_{lat,s,f}^t

where the first lateral state, the first heading angle state, the second lateral weight and the second heading angle weight of each first particle in the cluster are the same quantities obtained above for the corresponding first particle; only the symbols are re-indexed per cluster. For example, if a first particle corresponds to the f-th first particle in the s-th cluster, its second lateral weight w_{lat,s,f}^t, first lateral state, first heading angle state and second heading angle weight are identical to those obtained for that first particle, and details are not repeated.
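A minimal sketch of this per-cluster aggregation, assuming the first particles have already been grouped (for example by K-means, which is only one implementable choice named above):

```python
import numpy as np

def aggregate_cluster(lat_states, yaw_states, w_lat, w_yaw):
    """Second lateral state, second heading angle state and third lateral
    weight of one cluster, from its first particles' states and weights."""
    lat_states = np.asarray(lat_states, dtype=float)
    yaw_states = np.asarray(yaw_states, dtype=float)
    w_lat = np.asarray(w_lat, dtype=float)
    w_yaw = np.asarray(w_yaw, dtype=float)
    second_lat = np.sum(w_lat * lat_states) / np.sum(w_lat)  # weighted average
    second_yaw = np.sum(w_yaw * yaw_states) / np.sum(w_yaw)  # weighted average
    third_lat_weight = np.sum(w_lat)                         # cluster weight
    return second_lat, second_yaw, third_lat_weight
```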
Step 2063, the second lateral state and the second heading angle state of the cluster with the largest third lateral weight are respectively taken as the target lateral state and the target heading angle state.
For each cluster, if the third transverse weight of one cluster is the largest, the first particles of the cluster are closer to the real pose of the movable equipment, so that the second transverse state and the second course angle state of the cluster are used as the target transverse state and the target course angle state for determining the pose of the movable equipment, and the positioning accuracy of the pose of the movable equipment is ensured.
Exemplarily, if the third lateral weight of the s-th (e.g. s = 3) cluster is the largest, its second lateral state Y_s^t is taken as the target lateral state Y_target^t, and its second heading angle state θ_s^t is taken as the target heading angle state θ_target^t.
Step 2064, the target longitudinal state determined based on the first histogram filter is obtained.
The first histogram filter may be implemented in any way, and the disclosure is not limited thereto. The present disclosure only uses the longitudinal state determined by the first histogram filter and does not consider the lateral state and the heading angle state.
Step 2065, determining the current positioning pose of the movable equipment based on the longitudinal state of the target, the transverse state of the target, the heading angle state of the target and the first prediction pose of the movable equipment at the current moment.
Wherein the first predicted pose of the mobile device at the current time may be determined based on the positioning pose of the mobile device at the previous time, odometry information for the mobile device, and a motion model. The detailed prediction principle is not described in detail.
Illustratively, the target lateral state at the current time is represented as Y_target^t, the target heading angle state as θ_target^t, the target longitudinal state as X_target^t, and the first predicted pose of the movable device at the current time as PredPose^t. The current positioning pose of the movable device is then

Pose^t = PredPose^t ⊕ [X_target^t, Y_target^t, θ_target^t]^T

where T represents a transposition operation and ⊕ represents the addition of SE2 poses, which is specifically defined as follows:

Define c = a ⊕ b, where a, b and c are SE2 poses [x, y, θ]^T, and
c[0]=a[0]+b[0]*cos(a[2])-b[1]*sin(a[2])
c[1]=a[1]+b[0]*sin(a[2])+b[1]*cos(a[2])
c[2]=a[2]+b[2]
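A sketch of the SE2 pose addition defined above, together with the final composition; the order in which the target longitudinal, lateral and heading-angle states are assembled into the SE2 increment is an assumption based on the description above.

```python
import numpy as np

def se2_plus(a, b):
    """SE2 pose addition c = a (+) b, following the c[0..2] definition above."""
    return np.array([a[0] + b[0] * np.cos(a[2]) - b[1] * np.sin(a[2]),
                     a[1] + b[0] * np.sin(a[2]) + b[1] * np.cos(a[2]),
                     a[2] + b[2]])

def current_positioning_pose(first_pred_pose, target_lon, target_lat, target_yaw):
    # Compose the first predicted pose with the target states, assumed to be
    # ordered as (longitudinal x, lateral y, heading angle theta).
    return se2_plus(np.asarray(first_pred_pose, dtype=float),
                    np.array([target_lon, target_lat, target_yaw]))
```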
In an optional example, the determining, based on the pose of each first particle, a first lateral state and a first heading angle state respectively corresponding to each first particle at the current time in step 202 includes:
step 2021a, determining, based on the first particle poses, first predicted particle poses corresponding to the first particles, respectively.
The first predicted particle pose is obtained by predicting the pose of the first particle at the current time based on the movement condition (such as odometer information) of the movable device in the time period from the previous time to the current time, and the specific prediction mode can adopt any practicable mode, such as prediction based on a motion model.
For example, for a vehicle, the first predicted particle pose of the first particle at the current time may be predicted based on the odometer relative increment ΔOdom calculated from the vehicle chassis CAN information:

PredPose_l^t = Pose_l^{t-1} ⊕ (ΔOdom + W_l)

where t denotes the current time, t-1 denotes the previous time, l (l = 1, 2, ..., m) denotes the l-th first particle, m denotes the number of first particles and is a positive integer, Pose_l^{t-1} represents the first particle pose of the l-th first particle, PredPose_l^t represents the first predicted particle pose of the l-th first particle, ΔOdom represents the odometer relative increment, ⊕ denotes the addition of SE2 poses defined as described above, and W_l represents the Gaussian white noise corresponding to the l-th first particle; the Gaussian white noise of different first particles can be different and can be specifically set according to actual requirements.
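A hedged sketch of this prediction step, assuming the Gaussian white noise W_l is added to the odometry increment before the SE2 composition (the composition itself follows the c[0..2] definition given earlier):

```python
import numpy as np

def predict_particle_pose(pose_prev, delta_odom, noise_std, rng=None):
    """First predicted particle pose at the current time.

    pose_prev: first particle pose (x, y, yaw) at the previous time.
    delta_odom: odometer relative increment (dx, dy, dyaw).
    noise_std: per-component standard deviations of the Gaussian white noise.
    """
    rng = rng if rng is not None else np.random.default_rng()
    dx, dy, dyaw = np.asarray(delta_odom, dtype=float) + rng.normal(0.0, noise_std, size=3)
    x, y, yaw = pose_prev
    # SE2 pose addition, matching the earlier c[0..2] definition.
    return np.array([x + dx * np.cos(yaw) - dy * np.sin(yaw),
                     y + dx * np.sin(yaw) + dy * np.cos(yaw),
                     yaw + dyaw])
```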
Step 2022a, determining a first transverse state and a first heading angle state corresponding to each first particle based on the pose of each first predicted particle.
Wherein the first lateral state and the first heading angle state of the first particle at the current time are the lateral state and the heading angle state of the predicted pose of the first particle at the current time relative to the predicted pose of the movable device at the current time. Thus, a first lateral state and a first heading angle state may be determined for each first particle based on each first predicted particle pose and the first positioning pose of the movable device.
In an alternative example, fig. 5 is a flowchart of step 2022a provided by an exemplary embodiment of the present disclosure. In this example, the determining, in step 2022a, of the first lateral state and the first heading angle state respectively corresponding to each first particle based on the first predicted particle poses includes:
step 2022a1, determine a first predicted pose of the movable device at the current time based on the first positioning pose of the movable device determined at the previous time.
Wherein the first positioning pose is the positioning pose of the movable device determined in the positioning procedure at the previous time (t-1) and is expressed as Pose^{t-1}. The first predicted pose PredPose^t of the movable device at the current time is determined based on the first positioning pose and the odometer relative increment ΔOdom, expressed as follows:

PredPose^t = Pose^{t-1} ⊕ ΔOdom

where ⊕ denotes the addition of SE2 poses defined as above, which is not described again.
Step 2022a2, determining a first lateral state and a first heading angle state corresponding to each first particle based on each first predicted particle pose and the first predicted pose.
Illustratively, the first lateral state of the l-th first particle is denoted lat_t^l and its first heading angle state is denoted yaw_t^l. Then:
lat_t^l = (X̃_t^l ⊖ X̃_t)[1]
yaw_t^l = (X̃_t^l ⊖ X̃_t)[2]
where t represents the current time, X̃_t^l represents the first predicted particle pose of the l-th first particle, X̃_t represents the first predicted pose of the movable device, and ⊖ is an operator defined as follows: for c = a ⊖ b,
c[0] = (a[0]-b[0])*cos(b[2]) + (a[1]-b[1])*sin(b[2])
c[1] = (a[1]-b[1])*cos(b[2]) - (a[0]-b[0])*sin(b[2])
c[2] = a[2] - b[2]
That is, taking X̃_t^l as a and X̃_t as b, the above operation yields c; c[1] is taken as the first lateral state lat_t^l, and c[2] is taken as the first heading angle state yaw_t^l.
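A matching Python sketch of the ⊖ operator and of the extraction of the first lateral and heading angle states; as above, poses are [x, y, yaw] triples and the function names are illustrative.

import math

def se2_minus(a, b):
    # c = a (-) b: pose a expressed relative to pose b (the inverse of the SE2 addition above)
    dx, dy = a[0] - b[0], a[1] - b[1]
    return [dx * math.cos(b[2]) + dy * math.sin(b[2]),
            dy * math.cos(b[2]) - dx * math.sin(b[2]),
            a[2] - b[2]]

def lateral_and_heading_state(predicted_particle_pose, predicted_device_pose):
    # First lateral state = c[1], first heading angle state = c[2]
    c = se2_minus(predicted_particle_pose, predicted_device_pose)
    return c[1], c[2]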
In an alternative example, fig. 6 is a flowchart of step 202 provided by an exemplary embodiment of the present disclosure. In this example, the determining, based on the pose of each first particle, the first lateral state and the first heading angle state respectively corresponding to the current time of each first particle in step 202 includes:
step 2021b, for each first particle pose, generating second predicted particle poses corresponding to the n second particles, respectively, based on the first particle pose.
Wherein the n second particles may be generated based on the odometry information and n different white gaussian noises.
Illustratively, for the l-th first particle, based on the first particle pose X_{t-1}^l, the second predicted particle poses X̃_t^{l,j} corresponding to the n second particles are generated as follows:
X̃_t^{l,j} = X_{t-1}^l ⊕ ΔOdom ⊕ W_{lj}, j = 1,2,…,n
where j represents the j-th second particle, ΔOdom represents the odometer relative increment of the movable device, and W_{lj} represents the Gaussian white noise corresponding to the j-th second particle.
Step 2022b, determining a third lateral state and a third heading angle state corresponding to each second particle based on the second predicted particle pose corresponding to each second particle.
For each second particle, the determination principle of the third transverse state and the third heading angle state is similar to that of the first transverse state and the first heading angle state of the first particle, and is not described herein again.
Illustratively, taking the l-th first particle as an example, for the j-th second particle, the third lateral state lat_t^{l,j} and the third heading angle state yaw_t^{l,j} are represented as follows:
lat_t^{l,j} = (X̃_t^{l,j} ⊖ X̃_t)[1]
yaw_t^{l,j} = (X̃_t^{l,j} ⊖ X̃_t)[2]
where X̃_t^{l,j}, X̃_t and the operator ⊖ are defined as described above and are not described again here.
The weights (including the lateral weight and the heading angle weight) of each second particle corresponding to a first particle are set to be the same as the first lateral weight and the first heading angle weight of that first particle. The first lateral weight and the first heading angle weight of the first particle are determined and stored at the previous time; the first lateral weight of the l-th first particle is expressed as w_{t-1}^l, and before the weights are updated at the current time, the first heading angle weight is the same as the first lateral weight, i.e. also w_{t-1}^l. After each positioning process is finished, the determined weight of the lateral state of the first particle is used as the first lateral weight and the first heading angle weight of that first particle at the next time; therefore, at the current time, the weight w_{t-1}^l of the lateral state of the first particle determined at the previous time is used as the first lateral weight and the first heading angle weight. Correspondingly, the fourth lateral weight w_lat,t^{l,j} and the fourth heading angle weight w_yaw,t^{l,j} of the j-th second particle are respectively expressed as:
w_lat,t^{l,j} = w_{t-1}^l
w_yaw,t^{l,j} = w_{t-1}^l
step 2023b, based on the third transverse state and the third heading angle state respectively corresponding to the m × n second particles, mapping each second particle into the first grid coordinate region, and obtaining a cell to which each second particle belongs.
Wherein the first grid coordinate region comprises m cells, m = m_yaw * m_lat, where m_yaw and m_lat respectively represent the numbers of cells of the first grid coordinate region in the heading angle direction and the lateral direction. The first grid coordinate region is a region under a pre-established grid coordinate system; the number of cells included in the first grid coordinate region is the same as the number m of first particles, and m_yaw and m_lat can be set according to actual requirements. The first grid coordinate region can be established according to a preset lateral state threshold, a preset heading angle state threshold, a preset lateral state step length and a preset heading angle state step length; each preset threshold can be determined by combining the third lateral state and the third heading angle state respectively corresponding to each second particle, so as to ensure that all or most of the second particles can be projected into the first grid coordinate region, and can be specifically set according to actual requirements.
Illustratively, fig. 7 is a schematic diagram of the first grid coordinate region, in which the abscissa corresponds to the heading angle state (yaw) and the ordinate corresponds to the lateral state (lat); d_yaw and d_lat respectively represent the heading angle state step length and the lateral state step length. Depending on the number of cells included in the first grid coordinate region and the numbers of cells corresponding to the abscissa and the ordinate, the position of each cell in the first grid coordinate region relative to the coordinate origin differs and can be set according to actual requirements. A cell is indexed as [b, d], which indicates its position relative to the origin, and the following cases hold:
when m is yaw And m lat When all are even numbers, then
Figure BDA0003813895660000131
Figure BDA0003813895660000132
When m is yaw And m lat Are all odd, then
Figure BDA0003813895660000133
Figure BDA0003813895660000134
If m yaw And m lat One is odd and the other is even, then m and n are determined according to the two situations, which are not described in detail again.
Accordingly, the coordinate range of each cell in the first grid coordinate region may be determined based on the cell position [b, d], m_yaw, m_lat, and the step lengths d_yaw and d_lat of each cell in the yaw and lat directions, and a detailed description is omitted.
For example, when m_yaw and m_lat are both even and cell [b, d] = [1, 2], the coordinate ranges of the cell [b, d] are:
yaw axis: [d_yaw*(b-1), d_yaw*b] = [0, d_yaw].
lat axis: [d_lat*(d-1), d_lat*d] = [d_lat, 2*d_lat].
In the first grid coordinate region, one or more second particles may be mapped to each cell.
In practical applications, the abscissa may instead correspond to the lateral state (lat) and the ordinate to the heading angle state (yaw), which is not specifically limited.
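Following the coordinate-range convention of the example above (cell [b, d] covering yaw in [(b-1)*d_yaw, b*d_yaw] and lat in [(d-1)*d_lat, d*d_lat]), one possible way to map a second particle to its cell is sketched below; the handling of particles outside the preset thresholds (clamping or discarding) is an assumption left to the implementation.

import math

def cell_of(yaw_state, lat_state, d_yaw, d_lat):
    # Cell [b, d] covers yaw in [(b-1)*d_yaw, b*d_yaw] and lat in [(d-1)*d_lat, d*d_lat]
    b = math.floor(yaw_state / d_yaw) + 1
    d = math.floor(lat_state / d_lat) + 1
    return b, d

# Example: a second particle with yaw state 0.3*d_yaw and lat state 1.4*d_lat falls in cell [1, 2]
print(cell_of(0.3 * 0.02, 1.4 * 0.1, 0.02, 0.1))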
Step 2024b, determining third particles corresponding to the cells, third predicted particle poses corresponding to the third particles, and fourth lateral states and fourth heading angle states corresponding to the third particles, based on the cells to which the second particles belong.
And the third particle corresponding to the cell is a particle corresponding to the cell obtained by particlizing the cell based on each second particle mapped into the cell. The third predicted particle pose for each third particle is determined by a weighted average of the second predicted particle poses of all second particles within the cell to which the third particle corresponds. Similarly, the fourth lateral state and the fourth heading angle state corresponding to each third particle pose are respectively obtained by weighted average of the third lateral state and the third heading angle state of all the second particles in the cell corresponding to the third particle.
Illustratively, suppose M second particles are mapped into the k-th (k = 1,2,…,m) cell, and the second predicted particle pose corresponding to the i-th (i = 1,2,…,M) of these second particles is denoted X̃_t^{k,i}, obtained from the X̃_t^{l,j} described above. For example, if the j-th second particle generated by the l-th first particle is mapped into the k-th cell as its i-th second particle, then X̃_t^{k,i} = X̃_t^{l,j}; details are not repeated. Similarly, the third lateral state and the third heading angle state corresponding to this second particle are denoted lat_t^{k,i} and yaw_t^{k,i}. The third particle corresponding to the k-th cell is called the k-th third particle; the third predicted particle pose corresponding to this third particle is denoted X̃_t^k, the fourth lateral state is denoted lat_t^k, and the fourth heading angle state is denoted yaw_t^k. Specifically:
X̃_t^k = ( Σ_{i=1}^{M} w_t^{k,i} · X̃_t^{k,i} ) / ( Σ_{i=1}^{M} w_t^{k,i} )
lat_t^k = ( Σ_{i=1}^{M} w_t^{k,i} · lat_t^{k,i} ) / ( Σ_{i=1}^{M} w_t^{k,i} )
yaw_t^k = ( Σ_{i=1}^{M} w_t^{k,i} · yaw_t^{k,i} ) / ( Σ_{i=1}^{M} w_t^{k,i} )
where w_t^{k,i} represents the particle weight corresponding to the i-th second particle in the k-th cell, i.e. the weight determined at the previous time for the first particle that generated this second particle; for example, for a second particle generated by the l-th first particle, w_t^{k,i} = w_{t-1}^l. The correspondence can be determined according to the actual mapping situation and is not described in detail here.
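A minimal sketch of this per-cell aggregation; the straightforward weighted averaging of the yaw components assumes a small angular spread within a cell, and the data layout is illustrative.

def aggregate_cell(poses, lats, yaws, weights):
    # poses: [x, y, yaw] second predicted particle poses mapped into one cell
    # lats / yaws: third lateral / third heading angle states of those second particles
    # weights: weights inherited from the first particles that generated them
    total = sum(weights)
    third_pose = [sum(w * p[i] for w, p in zip(weights, poses)) / total for i in range(3)]
    fourth_lat = sum(w * v for w, v in zip(weights, lats)) / total
    fourth_yaw = sum(w * v for w, v in zip(weights, yaws)) / total
    return third_pose, fourth_lat, fourth_yaw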
Step 2025b, using the fourth lateral state and the fourth heading angle state respectively corresponding to each third particle as the first lateral state and the first heading angle state respectively corresponding to each first particle at the current time.
Exemplarily, fig. 8 is a schematic diagram of a second particle mapping process according to an exemplary embodiment of the present disclosure. Where black circles represent second particles and gray circles represent third particles.
In this way, the number of particles is expanded by sampling each of the m first particles multiple times, and a gridding process is then performed on the expanded particles, so that the positioning accuracy is improved while the stability of the particle distribution is ensured.
Fig. 9 is a flowchart of step 202 provided by another exemplary embodiment of the present disclosure.
In an optional example, the generating, for each first particle pose of step 2021b, a second predicted particle pose corresponding to each of the n second particles based on the first particle pose includes:
step 2021b1, based on the first particle pose, the odometer information, and n different gaussian white noises, generates n second predicted particle poses corresponding to the n second particles, respectively.
Illustratively, for the l-th first particle, based on the first particle pose X_{t-1}^l, the second predicted particle poses X̃_t^{l,j} corresponding to the n second particles are generated as follows:
X̃_t^{l,j} = X_{t-1}^l ⊕ ΔOdom ⊕ W_{lj}, j = 1,2,…,n
where j represents the j-th second particle, ΔOdom represents the odometer relative increment of the movable device, and W_{lj} represents the Gaussian white noise corresponding to the j-th second particle.
In an optional example, the determining, in step 2022b, a third lateral state and a third heading angle state corresponding to each second particle based on the second predicted particle pose corresponding to each second particle includes:
step 2022b1, determine a first predicted pose of the movable device at the current time based on the first positioning pose of the movable device determined at the previous time.
This step is identical to the step 2022a1, and will not be described herein again.
Step 2022b2, determining a third lateral state and a third heading angle state respectively corresponding to each second particle based on the second predicted particle pose and the first predicted pose respectively corresponding to each second particle.
Illustratively, taking the l-th first particle as an example, for the j-th second particle, the third lateral state lat_t^{l,j} and the third heading angle state yaw_t^{l,j} are represented as follows:
lat_t^{l,j} = (X̃_t^{l,j} ⊖ X̃_t)[1]
yaw_t^{l,j} = (X̃_t^{l,j} ⊖ X̃_t)[2]
where X̃_t^{l,j} represents the second predicted particle pose of the j-th second particle generated by the l-th first particle; for the definitions of X̃_t and ⊖, reference is made to the foregoing description and no further description is made here.
In an alternative example, fig. 10 is a flowchart illustrating a pose determination method according to still another exemplary embodiment of the present disclosure. In this example, the method includes:
1. at the current moment t, first particle poses of m first particles obtained at the previous moment corresponding to the movable equipment are obtained, and m is an integer larger than 1.
Wherein the first particle pose of the l-th first particle is expressed as X_{t-1}^l.
2. And generating second predicted particle poses corresponding to the n second particles respectively based on the first particle pose, the odometer information and n different Gaussian white noises aiming at each first particle pose.
Wherein, for the l-th first particle, based on the first particle pose X_{t-1}^l, the second predicted particle poses X̃_t^{l,j} corresponding to the n second particles are generated as follows:
X̃_t^{l,j} = X_{t-1}^l ⊕ ΔOdom ⊕ W_{lj}, j = 1,2,…,n
where j represents the j-th second particle, ΔOdom represents the odometer relative increment of the movable device, and W_{lj} represents the Gaussian white noise corresponding to the j-th second particle.
3. Based on the first positioning pose of the movable device determined at the previous time, a first predicted pose of the movable device at the current time is determined.
Wherein the first positioning pose is X̂_{t-1}, and the first predicted pose of the movable device at the current time is X̃_t, represented as follows:
X̃_t = X̂_{t-1} ⊕ ΔOdom
4. and determining a third transverse state and a third course angle state respectively corresponding to each second particle based on the second predicted particle pose respectively corresponding to each second particle and the first predicted pose.
Wherein, taking the l-th first particle as an example, for the j-th second particle, the third lateral state lat_t^{l,j} and the third heading angle state yaw_t^{l,j} are represented as follows:
lat_t^{l,j} = (X̃_t^{l,j} ⊖ X̃_t)[1]
yaw_t^{l,j} = (X̃_t^{l,j} ⊖ X̃_t)[2]
where X̃_t^{l,j} represents the second predicted particle pose of the j-th second particle generated by the l-th first particle; for the remaining definitions, reference is made to the foregoing description and no further description is made here.
Correspondingly, the fourth lateral weight w_lat,t^{l,j} and the fourth heading angle weight w_yaw,t^{l,j} of the j-th second particle are respectively expressed as:
w_lat,t^{l,j} = w_{t-1}^l
w_yaw,t^{l,j} = w_{t-1}^l
where w_{t-1}^l is the weight of the l-th first particle determined at the previous time.
5. and mapping each second particle to the first grid coordinate region based on the third transverse state and the third heading angle state respectively corresponding to the m × n second particles to obtain the cell to which each second particle respectively belongs.
Wherein the first grid coordinate region comprises m cells, m = m_yaw * m_lat, and m_yaw and m_lat respectively represent the numbers of cells of the first grid coordinate region in the heading angle direction and the lateral direction.
6. And determining a third particle corresponding to each cell, a third predicted particle pose corresponding to each third particle, and a fourth transverse state and a fourth course angle state corresponding to each third particle based on the cell to which each second particle belongs.
Wherein M second particles are mapped into the k-th (k = 1,2,…,m) cell, and the second predicted particle pose corresponding to the i-th (i = 1,2,…,M) of these second particles is denoted X̃_t^{k,i}, obtained from the X̃_t^{l,j} described above. For example, if the j-th second particle generated by the l-th first particle is mapped into the k-th cell as its i-th second particle, then X̃_t^{k,i} = X̃_t^{l,j}; details are not repeated. Similarly, the third lateral state and the third heading angle state corresponding to this second particle are denoted lat_t^{k,i} and yaw_t^{k,i}. The third particle corresponding to the k-th cell is called the k-th third particle; the third predicted particle pose corresponding to this third particle is denoted X̃_t^k, the fourth lateral state is denoted lat_t^k, and the fourth heading angle state is denoted yaw_t^k. Specifically:
X̃_t^k = ( Σ_{i=1}^{M} w_t^{k,i} · X̃_t^{k,i} ) / ( Σ_{i=1}^{M} w_t^{k,i} )
lat_t^k = ( Σ_{i=1}^{M} w_t^{k,i} · lat_t^{k,i} ) / ( Σ_{i=1}^{M} w_t^{k,i} )
yaw_t^k = ( Σ_{i=1}^{M} w_t^{k,i} · yaw_t^{k,i} ) / ( Σ_{i=1}^{M} w_t^{k,i} )
where w_t^{k,i} represents the particle weight corresponding to the i-th second particle in the k-th cell, i.e. the weight determined at the previous time for the first particle that generated this second particle; for example, for a second particle generated by the l-th first particle, w_t^{k,i} = w_{t-1}^l. The correspondence can be determined according to the actual mapping situation and is not described in detail here.
Correspondingly, the fifth lateral weight and the fifth heading angle weight of the k-th third particle are determined from the weights w_t^{k,i} of the second particles mapped into the k-th cell.
7. And taking the fourth transverse state and the fourth course angle state respectively corresponding to each third particle as the first transverse state and the first course angle state respectively corresponding to each first particle at the current moment.
Wherein the kth first particle is the kth third particle. Correspondingly, the first lateral weight of the kth first particle is the fifth lateral weight of the kth third particle, and the first course angle weight of the kth first particle is the fifth course angle weight of the kth third particle.
8. And determining lane line sampling points under a vehicle coordinate system based on the sensed lane line information.
Wherein a total of N lane line sampling points in the vehicle coordinate system are determined, and the i-th (i = 1,2,…,N) lane line sampling point is expressed as a point p_t^i = (x_t^i, y_t^i) on the perceived lane line, where t denotes the current time and c_0, c_1, c_2, c_3 denote the perceived lane line parameters (for example, for a cubic lane line model, y_t^i = c_0 + c_1·x_t^i + c_2·(x_t^i)^2 + c_3·(x_t^i)^3).
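A minimal sketch of such sampling under the cubic-polynomial assumption; the sampling range and step are illustrative values, not taken from the embodiment.

def sample_lane_line(c0, c1, c2, c3, x_start=0.0, x_end=40.0, step=2.0):
    # Returns lane line sampling points (x, y) in the vehicle coordinate system
    points = []
    x = x_start
    while x <= x_end:
        points.append((x, c0 + c1 * x + c2 * x ** 2 + c3 * x ** 3))
        x += step
    return points

# Example: a nearly straight perceived lane line about 1.8 m to the left of the vehicle
print(sample_lane_line(1.8, 0.01, 0.0, 0.0)[:3])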
9. And respectively taking the first particles as movable equipment, converting the first map lane lines into a vehicle coordinate system, and obtaining second map lane lines under the vehicle coordinate system corresponding to the first particles.
10. And for each first particle, determining the minimum transverse distance between each lane line sampling point and the second map lane line based on each lane line sampling point and the second map lane line.
And 11.1, determining the scores of the first sampling points corresponding to the sampling points of the lane lines respectively based on the minimum transverse distances, the preset maximum transverse distance threshold and the first preset rule.
Wherein the first sampling point score of the i-th lane line sampling point p_t^i with respect to the k-th first particle is determined by the first preset rule from the minimum lateral distance between p_t^i and the second map lane line and the preset maximum lateral distance threshold, where g represents the preset maximum lateral distance threshold, t represents the current time, d_t^{k,i} represents the minimum lateral distance between the i-th lane line sampling point p_t^i and the second map lane line, k (k = 1,2,…,m) represents the k-th first particle (third particle), m represents the number of first particles and is a positive integer, and each lane line sampling point has a corresponding first matching score weight.
And 11.2, determining second sampling point scores corresponding to the sampling points of the lane lines respectively based on the minimum transverse distances, the preset maximum transverse distance threshold and a second preset rule.
Wherein the second sampling point score of the i-th lane line sampling point p_t^i with respect to the k-th first particle is determined by the second preset rule from the minimum lateral distance d_t^{k,i} between p_t^i and the second map lane line and the preset maximum lateral distance threshold g; the symbols are defined as above, and each lane line sampling point likewise has a corresponding second matching score weight.
12.1, determining the total score of the first sampling points of the lane line sampling points based on the scores of the first sampling points corresponding to the lane line sampling points respectively, and taking the total score as the first matching score corresponding to the first particle.
Wherein the first matching score corresponding to the k-th first particle is the total of the first sampling point scores over all N lane line sampling points.
And 12.2, determining the total score of the second sampling points of the lane line sampling points based on the scores of the second sampling points corresponding to the lane line sampling points respectively, and taking the total score as the second matching score corresponding to the first particle.
Wherein the second matching score corresponding to the k-th first particle is the total of the second sampling point scores over all N lane line sampling points.
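Since the exact first and second preset rules are expressed only by the formulas omitted above, the sketch below uses an assumed rule in which a sampling point's score decays linearly with its minimum lateral distance and is cut off at the threshold g; the particle's matching score is the total over all sampling points.

def point_score(min_lateral_distance, g):
    # Assumed scoring rule: full score at zero distance, zero at or beyond threshold g
    return max(0.0, 1.0 - min_lateral_distance / g)

def matching_score(min_lateral_distances, g):
    # Total score of one particle over all lane line sampling points
    return sum(point_score(d, g) for d in min_lateral_distances)

# Example: three sampling points with minimum lateral distances 0.1 m, 0.5 m, 2.0 m, g = 1.0 m
print(matching_score([0.1, 0.5, 2.0], 1.0))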
And 13.1, updating the first transverse weight of the corresponding first particle based on each first matching score to obtain a second transverse weight corresponding to each first particle.
Wherein the second lateral weight corresponding to the k-th first particle is obtained by updating the first lateral weight of the particle based on its first matching score, in combination with the heading angle output by the GNSS sensor, where GNSS_yaw represents the heading angle output by the GNSS sensor, yaw_t^k represents the first heading angle state corresponding to the k-th first particle, and 0.001 and 10 are preset values which can be adjusted according to actual requirements.
And 13.2, updating the first course angle weight of the corresponding first particle based on each second matching score to obtain a second course angle weight corresponding to each first particle.
Wherein the second heading angle weight corresponding to the k-th first particle is obtained by updating the first heading angle weight of the particle based on its second matching score.
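The update formulas themselves are omitted above; the sketch below therefore only illustrates the general shape of such an update, scaling each previous weight by the particle's matching score and normalizing, with the GNSS heading consistency handling indicated only as a comment (all of this is an assumption, not the embodiment's exact rule).

def update_weights(prev_weights, match_scores):
    # Scale each previous weight by the corresponding matching score, then normalize.
    # For the lateral weights, the embodiment additionally uses the deviation between the
    # particle heading angle state and GNSS_yaw together with the preset values 0.001 and 10.
    raw = [w * s for w, s in zip(prev_weights, match_scores)]
    total = sum(raw)
    return [r / total for r in raw] if total > 0 else prev_weights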
14. and clustering the first particles to obtain a first number of clusters.
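The embodiment does not prescribe a particular clustering algorithm for this step; as an illustrative sketch, the first particles can be grouped greedily by proximity of their (lateral, heading angle) states, with the tolerances being assumed values.

def cluster_particles(states, lat_tol=0.2, yaw_tol=0.05):
    # states: list of (lat, yaw) tuples, one per first particle; returns clusters as index lists
    clusters = []
    for idx, (lat, yaw) in enumerate(states):
        for cluster in clusters:
            ref_lat, ref_yaw = states[cluster[0]]
            if abs(lat - ref_lat) <= lat_tol and abs(yaw - ref_yaw) <= yaw_tol:
                cluster.append(idx)
                break
        else:
            clusters.append([idx])
    return clusters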
15. And for each cluster, determining a second transverse state, a second course angle state and a third transverse weight corresponding to the cluster based on a second transverse weight, a second course angle weight, a first transverse state and a first course angle state corresponding to first particles in the cluster.
Wherein the s-th (s = 1,2,…,S, where S represents the number of clusters) cluster includes F first particles, and the f-th (f = 1,2,…,F) first particle in the cluster has a first lateral state lat_t^{s,f}, a first heading angle state yaw_t^{s,f}, a second lateral weight w_lat,t^{s,f} and a second heading angle weight w_yaw,t^{s,f}. The second lateral state lat_t^s and the second heading angle state yaw_t^s corresponding to the cluster are then obtained from the first lateral states and first heading angle states of the first particles in the cluster, weighted by their second lateral weights and second heading angle weights respectively (for example, as weighted averages), and the third lateral weight w_lat,t^s of the cluster is determined from the second lateral weights of the first particles in the cluster (for example, as their sum).
The first lateral state, the first heading angle state, the second lateral weight and the second heading angle weight of each first particle in the cluster are consistent with those obtained above for the corresponding first particle (third particle); the different notation is only used to indicate the cluster. For example, if the k-th first particle corresponds to the f-th first particle in the s-th cluster, then the second lateral weight of the f-th first particle in the s-th cluster equals the second lateral weight of the k-th first particle, and likewise for the first lateral state, the first heading angle state and the second heading angle weight, which are not described in detail here.
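A minimal sketch of one consistent choice for this cluster aggregation (weighted averages for the states, summed lateral weight for the cluster weight); the exact formulas of the embodiment are not reproduced above, so this is an assumption.

def cluster_aggregate(lats, yaws, w_lat, w_yaw):
    # Second lateral state, second heading angle state and third lateral weight of one cluster
    second_lat = sum(w * v for w, v in zip(w_lat, lats)) / sum(w_lat)
    second_yaw = sum(w * v for w, v in zip(w_yaw, yaws)) / sum(w_yaw)
    third_w_lat = sum(w_lat)
    return second_lat, second_yaw, third_w_lat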
16. And respectively taking the second transverse state and the second heading angle state of the cluster with the maximum third transverse weight as a target transverse state and a target heading angle state.
Wherein the target lateral state is represented as lat_t* and the target heading angle state is represented as yaw_t*.
17. A target longitudinal state determined based on the first histogram filter is obtained.
Wherein the target longitudinal state is represented as lon_t*.
18. And determining the current positioning pose of the movable equipment based on the longitudinal state of the target, the transverse state of the target, the course angle state of the target and the first prediction pose of the movable equipment at the current moment.
Wherein the current positioning pose X̂_t of the movable device is represented as follows:
X̂_t = X̃_t ⊕ [lon_t*, lat_t*, yaw_t*]^T
where T denotes the transposition operation and X̃_t is the first predicted pose of the movable device at the current time; for the other symbols, see above.
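A short sketch of this final composition, reusing the SE2 addition defined earlier; symbol names are illustrative.

import math

def se2_plus(a, b):
    # SE2 pose addition as defined above
    return [a[0] + b[0] * math.cos(a[2]) - b[1] * math.sin(a[2]),
            a[1] + b[0] * math.sin(a[2]) + b[1] * math.cos(a[2]),
            a[2] + b[2]]

def current_positioning_pose(first_predicted_pose, lon_target, lat_target, yaw_target):
    # Compose the target longitudinal / lateral / heading angle states onto the first predicted pose
    return se2_plus(first_predicted_pose, [lon_target, lat_target, yaw_target])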
19. And taking the second lateral weight corresponding to each first particle determined in step 13.1 as the first lateral weight and the first heading angle weight of that first particle at the next time, and entering the iteration flow of the next time.
Specifically, the above steps 1 to 19 are repeated with the next time as the current time.
According to the pose determination method described above, in the process of performing state observation score calculation and state updating based on the particle filter, the lateral state and the heading angle state are separated and calculated respectively. This solves the problem that lateral accuracy and heading angle accuracy cannot both be taken into account by the filter during positioning, so that the lateral and heading angle positioning accuracy of the vehicle reaches a consistently high level whether the vehicle travels on a straight road or a curved road of a high-precision map, and is not easily affected by the curvature of the map lane lines. Even if the geometric shape of the perceived lane line deviates to some extent from the shape of the map lane line, a high-precision positioning result can still be obtained, which effectively improves the accuracy and stability of real-time positioning of the whole vehicle. In addition, in the particle updating process, a plurality of second particles are generated for each first particle to expand the number of particles, the expanded second particles are combined through the grid coordinate region into third particles whose number equals that of the first particles, and these third particles are used as the first particles for matching between the perceived lane line and the map lane line, thereby realizing particle resampling, ensuring the stability of the particle distribution, and further improving the positioning accuracy.
Any of the pose determination methods provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capabilities, including but not limited to: terminal equipment, a server and the like. Alternatively, the method for determining any pose provided by the embodiments of the present disclosure may be executed by a processor, for example, the processor may execute the method for determining any pose mentioned by the embodiments of the present disclosure by calling corresponding instructions stored in a memory. And will not be described in detail below.
Exemplary devices
Fig. 11 is a schematic structural diagram of a pose determination apparatus according to an exemplary embodiment of the present disclosure. The apparatus of this embodiment may be used to implement the corresponding method embodiment of the present disclosure, and the apparatus shown in fig. 11 includes: a first determination module 501, a first processing module 502, a second processing module 503, a third processing module 504, a fourth processing module 505, and a fifth processing module 506.
A first determining module 501, configured to determine first particle poses of m first particles corresponding to the movable device, where m is an integer greater than 1, and the first particle poses are poses corresponding to the first particles obtained at a previous time; a first processing module 502, configured to determine, based on the pose of each first particle determined by the first determining module 501, a first lateral state and a first heading angle state that correspond to each first particle at the current time, respectively; a second processing module 503, configured to determine, based on the perceived lane line information, a first matching score and a second matching score corresponding to each first particle, respectively; a third processing module 504, configured to update the first lateral weights of the corresponding first particles based on the first matching scores obtained by the second processing module 503, so as to obtain second lateral weights corresponding to the first particles; a fourth processing module 505, configured to update the first course angle weights of the corresponding first particles based on the second matching scores obtained by the second processing module 503, respectively, so as to obtain second course angle weights corresponding to the first particles, respectively; a fifth processing module 506, configured to determine a current positioning pose of the mobile device based on the second lateral weight, the second heading angle weight, the first lateral state, and the first heading angle state respectively corresponding to each first particle.
In an alternative example, fig. 12 is a schematic structural diagram of the second processing module 503 according to an exemplary embodiment of the disclosure. In this example, the second processing module 503 includes: a first determining unit 5031, a first converting unit 5032, a second determining unit 5033 and a third determining unit 5034.
A first determining unit 5031, configured to determine lane line sampling points in a vehicle coordinate system based on the perceived lane line information; a first converting unit 5032, configured to convert the first map lane lines into a vehicle coordinate system by using the first particles as mobile devices, respectively, to obtain second map lane lines in the vehicle coordinate system corresponding to the first particles, respectively; a second determining unit 5033, configured to determine, based on the lane line sampling point, the second map lane line corresponding to each first particle, and the first preset rule, a first matching score corresponding to each first particle; a third determining unit 5034, configured to determine a second matching score corresponding to each first particle based on the lane line sampling point, the second map lane line corresponding to each first particle, and the second preset rule.
In an alternative example, the second determining unit 5033 is specifically configured to: for each first particle, determining the minimum transverse distance between each lane line sampling point and the second map lane line based on each lane line sampling point and the second map lane line; determining first sampling point scores corresponding to the sampling points of the lane lines respectively based on the minimum transverse distances, the preset maximum transverse distance threshold and a first preset rule; and determining the total score of the first sampling points of the lane line sampling points as the first matching score corresponding to the first particle based on the scores of the first sampling points corresponding to the lane line sampling points.
In an alternative example, the third determining unit 5034 is specifically configured to: for each first particle, determining the minimum transverse distance between each lane line sampling point and the second map lane line based on each lane line sampling point and the second map lane line; determining second sampling point scores corresponding to the sampling points of the lane lines respectively based on the minimum transverse distances, the preset maximum transverse distance threshold and a second preset rule; and determining the total score of the second sampling points of the lane line sampling points as the second matching score corresponding to the first particle based on the scores of the second sampling points corresponding to the lane line sampling points.
In an alternative example, fig. 13 is a schematic structural diagram of a fifth processing module 506 according to an exemplary embodiment of the disclosure. In this example, the fifth processing module 506 includes: a clustering unit 5061, a first processing unit 5062, a second processing unit 5063, a first obtaining unit 5064, and a third processing unit 5065.
A clustering unit 5061, configured to cluster each first particle to obtain a first number of clusters; a first processing unit 5062, configured to determine, for each cluster, a second lateral state, a second heading angle state, and a third lateral weight corresponding to the cluster based on a second lateral weight, a second heading angle weight, a first lateral state, and a first heading angle state corresponding to a first particle in the cluster; a second processing unit 5063, configured to take the second lateral state and the second heading angle state of the cluster with the largest third lateral weight as a target lateral state and a target heading angle state, respectively; a first acquisition unit 5064, configured to acquire a target longitudinal state determined based on the first histogram filter; a third processing unit 5065, configured to determine a current positioning pose of the movable device based on the target longitudinal state, the target lateral state, the target heading angle state, and the first predicted pose of the movable device at the current time.
In an alternative example, fig. 14 is a schematic structural diagram of the first processing module 502 according to an exemplary embodiment of the disclosure. In this example, the first processing module 502 includes: a fourth determination unit 5021a and a fifth determination unit 5022a.
A fourth determining unit 5021a, configured to determine first predicted particle poses corresponding to the first particles, respectively, based on the first particle poses; the fifth determining unit 5022a determines a first transverse state and a first heading angle state corresponding to each first particle based on each first predicted particle pose.
In an optional example, the fifth determining unit 5022a is specifically configured to: determining a first predicted pose of the movable device at the current time based on the first positioning pose of the movable device determined at the previous time; and determining a first transverse state and a first course angle state corresponding to each first particle respectively based on each first predicted particle pose and the first predicted pose.
In an alternative example, fig. 15 is a schematic structural diagram of a first processing module 502 provided in another exemplary embodiment of the present disclosure. In this example, the first processing module 502 includes: a fourth processing unit 5021b, a fifth processing unit 5022b, a sixth processing unit 5023b, a seventh processing unit 5024b and an eighth processing unit 5025b.
A fourth processing unit 5021b, configured to generate, for each first particle pose, second predicted particle poses corresponding to the n second particles respectively based on the first particle pose; the fifth processing unit 5022b is configured to determine a third transverse state and a third heading angle state corresponding to each second particle based on the second predicted particle pose corresponding to each second particle; a sixth processing unit 5023b, configured to map each second particle into a first grid coordinate region based on the third transverse state and the third heading angle state respectively corresponding to the m × n second particles, to obtain a cell to which each second particle belongs, where the first grid coordinate region includes m cells, m = m_yaw * m_lat, and m_yaw and m_lat respectively represent the numbers of cells of the first grid coordinate region in the course angle direction and the transverse direction; a seventh processing unit 5024b, configured to determine, based on the cell to which each second particle belongs, a third particle corresponding to each cell, a third predicted particle pose corresponding to each third particle, and a fourth transverse state and a fourth course angle state corresponding to each third particle; the eighth processing unit 5025b is configured to use the fourth transverse state and the fourth course angle state corresponding to each third particle as the first transverse state and the first course angle state corresponding to each first particle at the current time.
In an optional example, the fourth processing unit 5021b is specifically configured to: and generating second predicted particle poses corresponding to the n second particles respectively based on the first particle pose, the odometer information and n different Gaussian white noises.
In an optional example, the fifth processing unit 5022b is specifically configured to: determining a first predicted pose of the movable device at the current time based on the first positioning pose of the movable device determined at the previous time; and determining a third transverse state and a third course angle state respectively corresponding to each second particle based on the second predicted particle pose respectively corresponding to each second particle and the first predicted pose.
Exemplary electronic device
An embodiment of the present disclosure further provides an electronic device, including: a memory for storing a computer program;
a processor configured to execute the computer program stored in the memory, and when the computer program is executed, the method for determining the pose according to any of the above embodiments of the present disclosure is implemented.
Fig. 16 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure. In this embodiment, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 11 to implement the methods of the various embodiments of the disclosure described above and/or other desired functionality. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input means 13 may be, for example, a microphone or a microphone array as described above for capturing an input signal of a sound source.
The input device 13 may also include, for example, a keyboard, a mouse, and the like.
The output device 14 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present disclosure are shown in fig. 16, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform steps in methods according to various embodiments of the present disclosure as described in the "exemplary methods" section of this specification above.
The computer program product may write program code for carrying out operations for embodiments of the present disclosure in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in methods according to various embodiments of the present disclosure as described in the "exemplary methods" section above of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same or similar parts in each embodiment are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, systems referred to in this disclosure are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, configurations, etc. must be made in the manner shown in the block diagrams. These devices, apparatuses, devices, systems may be connected, arranged, configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. As used herein, the words "or" and "refer to, and are used interchangeably with, the word" and/or, "unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. Such decomposition and/or recombination should be considered as equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (12)

1. A pose determination method, comprising:
determining first particle poses of m first particles corresponding to the movable device, wherein m is an integer larger than 1, and the first particle poses are obtained at the previous moment and correspond to the first particles;
determining a first transverse state and a first course angle state respectively corresponding to the current moment of each first particle based on the pose of each first particle;
determining a first matching score and a second matching score corresponding to each first particle respectively based on the perception lane line information;
updating the first transverse weight of the corresponding first particle based on each first matching score to obtain a second transverse weight corresponding to each first particle;
updating the first course angle weight of the corresponding first particle based on each second matching score to obtain a second course angle weight corresponding to each first particle;
and determining the current positioning pose of the movable equipment based on the second transverse weight, the second course angle weight, the first transverse state and the first course angle state respectively corresponding to the first particles.
2. The method of claim 1, wherein the determining a first matching score and a second matching score corresponding to each of the first particles based on the perceived lane line information comprises:
determining lane line sampling points under a vehicle coordinate system based on the perception lane line information;
respectively taking each first particle as the movable equipment, converting a first map lane line into the vehicle coordinate system, and obtaining a second map lane line under the vehicle coordinate system corresponding to each first particle;
determining the first matching scores corresponding to the first particles respectively based on the lane line sampling points, the second map lane lines corresponding to the first particles respectively and a first preset rule;
and determining the second matching scores respectively corresponding to the first particles based on the lane line sampling points, the second map lane lines respectively corresponding to the first particles and a second preset rule.
3. The method according to claim 2, wherein the determining the first matching score corresponding to each of the first particles based on the lane line sampling point, the second map lane line corresponding to each of the first particles, and a first preset rule comprises:
for each first particle, determining the minimum transverse distance between each lane line sampling point and the second map lane line based on each lane line sampling point and the second map lane line;
determining a first sampling point score corresponding to each lane line sampling point based on each minimum transverse distance, a preset maximum transverse distance threshold and the first preset rule;
determining a total score of the first sampling points of the lane line sampling points as the first matching score corresponding to the first particle based on the scores of the first sampling points corresponding to the lane line sampling points;
the determining, based on the lane line sampling point, the second map lane line corresponding to each of the first particles, and a second preset rule, the second matching score corresponding to each of the first particles includes:
for each first particle, determining the minimum transverse distance between each lane line sampling point and the second map lane line based on each lane line sampling point and the second map lane line;
determining second sampling point scores corresponding to the lane line sampling points respectively based on the minimum transverse distances, a preset maximum transverse distance threshold value and the second preset rule;
and determining the total score of the second sampling points of the lane line sampling points as the second matching score corresponding to the first particle based on the scores of the second sampling points corresponding to the lane line sampling points respectively.
4. The method of claim 1, wherein the determining the current positioning pose of the movable device based on the second lateral weight, the second course angle weight, the first lateral state, and the first course angle state respectively corresponding to each of the first particles comprises:
clustering the first particles to obtain a first number of clusters;
for each cluster, determining a second transverse state, a second course angle state and a third transverse weight corresponding to the cluster based on the second transverse weight, the second course angle weight, the first transverse state and the first course angle state corresponding to the first particles in the cluster;
taking the second transverse state and the second course angle state of the cluster with the maximum third transverse weight as a target transverse state and a target course angle state respectively;
acquiring a target longitudinal state determined based on a first histogram filter;
and determining the current positioning pose of the movable equipment based on the target longitudinal state, the target transverse state, the target course angle state and a first prediction pose of the movable equipment at the current moment.
5. The method of claim 1, wherein the determining, based on the pose of each first particle, a first lateral state and a first heading angle state respectively corresponding to each first particle at the current time comprises:
determining first predicted particle poses corresponding to the first particles respectively based on the first particle poses;
and determining the first transverse state and the first course angle state corresponding to each first particle respectively based on the pose of each first predicted particle.
6. The method of claim 5, wherein said determining the first lateral state and the first heading angle state for each of the first particles based on each of the first predicted particle poses comprises:
determining a first predicted pose of the movable device at a current time based on the first positioning pose of the movable device determined at a previous time;
and determining the first transverse state and the first course angle state respectively corresponding to each first particle based on each first predicted particle pose and the first predicted pose.
7. The method of claim 1, wherein the determining, based on the pose of each of the first particles, a first lateral state and a first heading angle state respectively corresponding to each of the first particles at the current time comprises:
for each first particle pose, generating second predicted particle poses corresponding to n second particles respectively based on the first particle pose;
determining a third transverse state and a third course angle state respectively corresponding to each second particle based on the second predicted particle pose respectively corresponding to each second particle;
mapping each second particle to a first grid coordinate region based on the third transverse state and the third course angle state respectively corresponding to the m × n second particles to obtain a cell to which each second particle respectively belongs, wherein the first grid coordinate region comprises m cells, m = m_yaw * m_lat, and m_yaw and m_lat respectively represent the numbers of cells of the first grid coordinate region in the course angle direction and the transverse direction;
determining third particles respectively corresponding to the cells, third predicted particle poses respectively corresponding to the third particles, and fourth transverse states and fourth course angle states respectively corresponding to the third particles based on the cells to which the second particles respectively belong;
and taking the fourth transverse state and the fourth course angle state respectively corresponding to each third particle as the first transverse state and the first course angle state respectively corresponding to each first particle at the current moment.
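The grid reduction of claim 7 can be sketched as follows: the m × n second particles are binned into an m_yaw × m_lat grid by their third transverse and third course angle states, and one representative per occupied cell is kept. Averaging within a cell is an assumption made here for concreteness; the claim only requires that a third particle be determined per cell.

```python
import numpy as np

def grid_reduce(lat, yaw, lat_edges, yaw_edges):
    # Bin particles by their third transverse (lat) and third course angle (yaw)
    # states, then keep one representative third particle per occupied cell.
    li = np.clip(np.digitize(lat, lat_edges) - 1, 0, len(lat_edges) - 2)
    yi = np.clip(np.digitize(yaw, yaw_edges) - 1, 0, len(yaw_edges) - 2)
    cells = {}
    for k, key in enumerate(zip(yi, li)):
        cells.setdefault(key, []).append(k)
    # Per-cell means stand in for the fourth transverse / course angle states.
    return {c: (lat[np.array(v)].mean(), yaw[np.array(v)].mean())
            for c, v in cells.items()}

rng = np.random.default_rng(0)
lat = rng.normal(0.0, 0.3, size=200)    # toy third transverse states of m*n particles
yaw = rng.normal(0.0, 0.05, size=200)   # toy third course angle states
reps = grid_reduce(lat, yaw, np.linspace(-1, 1, 11), np.linspace(-0.2, 0.2, 11))
print(len(reps))                        # number of occupied cells (at most 10 * 10)
```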
8. The method of claim 7, wherein, for each first particle pose, generating the second predicted particle poses respectively corresponding to the n second particles based on the first particle pose comprises:
and generating the second predicted particle poses respectively corresponding to the n second particles based on the first particle pose, odometry information, and n different Gaussian white noise samples.
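A minimal sketch of the sampling step in claim 8, assuming the odometry increment is already expressed in the same frame as the particle pose and that the n Gaussian white noise samples are drawn independently per pose component:

```python
import numpy as np

def propagate(first_pose, odom, n, sigma=(0.05, 0.05, 0.01), rng=None):
    # Apply the odometry increment [dx, dy, dyaw] to one first particle pose and
    # add n different Gaussian white noise samples to obtain n second predicted
    # particle poses (the frame handling here is a simplifying assumption).
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=(n, 3))   # n independent white noise draws
    return first_pose + odom + noise              # (n, 3) second predicted poses

first_pose = np.array([100.0, 20.0, 0.1])   # [x, y, yaw] from the previous time
odom = np.array([1.2, 0.05, 0.01])          # odometry increment (assumed frame)
print(propagate(first_pose, odom, n=5).shape)   # (5, 3)
```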
9. The method of claim 7, wherein determining the third transverse state and the third course angle state respectively corresponding to each second particle based on the second predicted particle pose respectively corresponding to each second particle comprises:
determining a first predicted pose of the movable device at a current time based on the first positioning pose of the movable device determined at a previous time;
and determining the third transverse state and the third course angle state respectively corresponding to each second particle based on the second predicted particle pose respectively corresponding to each second particle and the first predicted pose.
10. An apparatus for determining a pose, comprising:
a first determining module configured to determine first particle poses of m first particles corresponding to a movable device, wherein m is an integer greater than 1 and the first particle poses are obtained at a previous time;
a first processing module configured to determine, based on each first particle pose, a first transverse state and a first course angle state respectively corresponding to each first particle at the current time;
a second processing module configured to determine, based on perception lane line information, a first matching score and a second matching score respectively corresponding to each first particle;
a third processing module configured to update the first transverse weight of the corresponding first particle based on each first matching score to obtain a second transverse weight respectively corresponding to each first particle;
a fourth processing module configured to update the first course angle weight of the corresponding first particle based on each second matching score to obtain a second course angle weight respectively corresponding to each first particle;
and a fifth processing module configured to determine the current positioning pose of the movable device based on the second transverse weight, the second course angle weight, the first transverse state, and the first course angle state respectively corresponding to each first particle.
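For orientation only, the skeleton below mirrors the module breakdown of claim 10; the class name, method signatures, and docstrings are editorial assumptions, and the bodies are placeholders rather than the patented implementation.

```python
class PoseDeterminationApparatus:
    """Skeleton mirroring the module breakdown of claim 10 (assumed API)."""

    def determine_first_particle_poses(self):
        """First determining module: m first particle poses from the previous time."""
        raise NotImplementedError

    def predict_states(self, particle_poses):
        """First processing module: first transverse / course angle state per particle."""
        raise NotImplementedError

    def match_lane_lines(self, states, perceived_lane_lines):
        """Second processing module: first and second matching score per particle."""
        raise NotImplementedError

    def update_weights(self, lat_weights, yaw_weights, scores):
        """Third and fourth processing modules: second transverse / course angle weights."""
        raise NotImplementedError

    def fuse(self, weights, states):
        """Fifth processing module: current positioning pose of the movable device."""
        raise NotImplementedError
```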
11. A computer-readable storage medium storing a computer program for executing the pose determination method according to any one of claims 1 to 9.
12. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the pose determination method according to any one of claims 1 to 9.
CN202211028797.7A 2022-08-24 2022-08-24 Pose determination method and device, electronic equipment and storage medium Pending CN115388906A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211028797.7A CN115388906A (en) 2022-08-24 2022-08-24 Pose determination method and device, electronic equipment and storage medium
PCT/CN2023/113598 WO2024041447A1 (en) 2022-08-24 2023-08-17 Pose determination method and apparatus, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211028797.7A CN115388906A (en) 2022-08-24 2022-08-24 Pose determination method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115388906A (en) 2022-11-25

Family

ID=84123206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211028797.7A Pending CN115388906A (en) 2022-08-24 2022-08-24 Pose determination method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115388906A (en)
WO (1) WO2024041447A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100809352B1 (en) * 2006-11-16 2008-03-05 삼성전자주식회사 Method and apparatus of pose estimation in a mobile robot based on particle filter
KR102103651B1 (en) * 2018-11-28 2020-04-22 한국교통대학교산학협력단 Method for reduction of particle filtering degeneracy exploiting lane number from digital map and system using the method
CN114248778B (en) * 2020-09-22 2024-04-12 华为技术有限公司 Positioning method and positioning device of mobile equipment
CN115388906A (en) * 2022-08-24 2022-11-25 上海安亭地平线智能交通技术有限公司 Pose determination method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140005921A1 (en) * 2012-06-27 2014-01-02 Microsoft Corporation Proactive delivery of navigation options
CN110954113A (en) * 2019-05-30 2020-04-03 北京初速度科技有限公司 Vehicle pose correction method and device
CN111998860A (en) * 2020-08-21 2020-11-27 北京百度网讯科技有限公司 Automatic driving positioning data verification method and device, electronic equipment and storage medium
CN112977478A (en) * 2021-04-13 2021-06-18 北京主线科技有限公司 Vehicle control method and system
CN114034307A (en) * 2021-11-19 2022-02-11 智道网联科技(北京)有限公司 Lane line-based vehicle pose calibration method and device and electronic equipment
CN114162140A (en) * 2021-12-08 2022-03-11 武汉中海庭数据技术有限公司 Optimal lane matching method and system
CN114355415A (en) * 2022-01-06 2022-04-15 上海安亭地平线智能交通技术有限公司 Pose information determining method and device, electronic equipment and storage medium
CN114119673A (en) * 2022-01-25 2022-03-01 北京地平线机器人技术研发有限公司 Method and device for determining initial pose, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
章弘凯; 陈年生; 范光宇: "Particle filter-based localization algorithm for intelligent robots" (基于粒子滤波的智能机器人定位算法), Computer Applications and Software (计算机应用与软件), no. 02, 12 February 2020 (2020-02-12), pages 140-146 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024041447A1 (en) * 2022-08-24 2024-02-29 上海安亭地平线智能交通技术有限公司 Pose determination method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
WO2024041447A1 (en) 2024-02-29

Similar Documents

Publication Publication Date Title
CN111812658B (en) Position determination method, device, system and computer readable storage medium
WO2020154970A1 (en) Deep learning–based feature extraction for lidar localization of autonomous driving vehicles
WO2020154973A1 (en) Lidar localization using rnn and lstm for temporal smoothness in autonomous driving vehicles
EP3714290A1 (en) Lidar localization using 3d cnn network for solution inference in autonomous driving vehicles
CN113436238B (en) Point cloud registration accuracy evaluation method and device and electronic equipment
WO2022017131A1 (en) Point cloud data processing method and device, and intelligent driving control method and device
CN113610172B (en) Neural network model training method and device and sensing data fusion method and device
CN114119673B (en) Method and device for determining initial pose, electronic equipment and storage medium
WO2019032588A1 (en) Vehicle sensor calibration and localization
WO2024041447A1 (en) Pose determination method and apparatus, electronic device and storage medium
WO2023131048A1 (en) Position and attitude information determining method and apparatus, electronic device, and storage medium
CN113807460A (en) Method and device for determining intelligent body action, electronic equipment and medium
CN108089597B (en) Method and device for controlling unmanned aerial vehicle based on ground station
CN115147683A (en) Pose estimation network model training method, pose estimation method and device
CN114782510A (en) Depth estimation method and device for target object, storage medium and electronic equipment
JP4921847B2 (en) 3D position estimation device for an object
CN111469781A (en) Method and apparatus for outputting information
CN110542422B (en) Robot positioning method, device, robot and storage medium
CN112308923A (en) Lane line-based camera pose adjusting method and device, storage medium and equipment
CN114280583B (en) Laser radar positioning accuracy verification method and system without GPS signal
CN110766793A (en) Map construction method and device based on semantic point cloud
CN114937251A (en) Training method of target detection model, and vehicle-mounted target detection method and device
CN115546319B (en) Lane keeping method, lane keeping apparatus, computer device, and storage medium
JP7425169B2 (en) Image processing method, device, electronic device, storage medium and computer program
CN113129361B (en) Pose determining method and device for movable equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination