CN112861833A - Vehicle lane level positioning method and device, electronic equipment and computer readable medium - Google Patents


Info

Publication number: CN112861833A
Authority: CN (China)
Prior art keywords: information, static, obstacle, dynamic, pose data
Legal status: Granted, Active
Application number: CN202110450888.9A
Other languages: Chinese (zh)
Other versions: CN112861833B (en)
Inventors: 雷戈航, 骆沛, 倪凯
Current Assignee: Heduo Technology Guangzhou Co ltd
Original Assignee: HoloMatic Technology Beijing Co Ltd
Events: application filed by HoloMatic Technology Beijing Co Ltd; priority to CN202110450888.9A; publication of CN112861833A; application granted; publication of CN112861833B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Abstract

The embodiments of the disclosure disclose a vehicle lane-level positioning method and apparatus, an electronic device and a computer readable medium. One embodiment of the method comprises: acquiring a detection data set and a vehicle pose information set for a historical time period; combining each piece of vehicle pose information in the vehicle pose information set with the detection data in the detection data set that matches it, so as to generate pose data information groups and obtain a set of pose data information groups; in response to determining that the set of pose data information groups satisfies a predetermined condition, classifying each piece of pose data information in the set to obtain a dynamic pose data information set and a static pose data information set; generating static obstacle information; generating a dynamic obstacle information set; and generating vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set. This embodiment can improve the efficiency of vehicle lane-level positioning.

Description

Vehicle lane level positioning method and device, electronic equipment and computer readable medium
Technical Field
The embodiments of the disclosure relate to the field of computer technology, and in particular to a vehicle lane-level positioning method and apparatus, an electronic device and a computer readable medium.
Background
A vehicle lane-level positioning method is a basic technology in the field of automatic driving. At present, lane-level positioning of a vehicle is generally performed as follows: a plurality of lane-level positioning methods are run simultaneously in the vehicle system, and their results are integrated to position the vehicle at lane level.
However, when the vehicle lane-level positioning is performed in the above manner, there are often technical problems as follows:
First, each lane-level positioning method among the common methods needs to access high-precision map data or process the large amount of data output by the sensors, which occupies a large amount of computing and memory resources, causes problems such as a shortage of computing resources and insufficient computing capacity, and thus reduces the efficiency of vehicle lane-level positioning.
Second, vehicle sensors are easily affected by the environment, which lowers the accuracy of the data they output; consequently, the accuracy of the lane-level positioning results produced by some lane-level positioning methods also drops. Because the common methods do not verify those results, combining the results of several unverified lane-level positioning methods reduces the accuracy of the lane-level positioning.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a vehicle lane-level positioning method and apparatus, an electronic device and a computer readable medium, to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a vehicle lane-level positioning method, the method comprising: acquiring a detection data set and a vehicle pose information set for a historical time period, wherein the detection data in the detection data set comprise a timestamp and obstacle detection point coordinate values, and the vehicle pose information in the vehicle pose information set comprises a timestamp and a vehicle pose matrix; combining each piece of vehicle pose information in the vehicle pose information set with the detection data in the detection data set that matches it, so as to generate pose data information groups and obtain a set of pose data information groups; in response to determining that the set of pose data information groups satisfies a predetermined condition, classifying each piece of pose data information in the set to obtain a dynamic pose data information set and a static pose data information set; generating static obstacle information based on the static pose data information set; generating a dynamic obstacle information set based on the dynamic pose data information set; and generating vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set.
In some embodiments, performing a static check on the static obstacle expression based on the projected point coordinate value set and a pre-generated static obstacle expression of the previous moment to obtain static obstacle expression check information includes:
generating, based on the projected point coordinate value set and the pre-generated static obstacle expression of the previous moment, the average value and the variance value included in the static obstacle expression check information through the following formulas:

$$\mu = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|k_{t-1}x_i - y_i + b_{t-1}\right|}{\sqrt{k_{t-1}^{2}+1}}\,,\qquad \sigma^{2} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{\left|k_{t-1}x_i - y_i + b_{t-1}\right|}{\sqrt{k_{t-1}^{2}+1}}-\mu\right)^{2}$$

wherein $\mu$ represents the average value of the distances between each projected point coordinate value in the projected point coordinate value set and the static obstacle expression of the previous moment; $i$ represents the sequence number; $t-1$ represents the previous moment; $n$ represents the number of projected point coordinate values in the projected point coordinate value set; $k$ and $b$ represent the coefficient of the first-order term and the constant term of the static obstacle expression $y = kx + b$ from which the sampling points were taken, and $k_{t-1}$ and $b_{t-1}$ the corresponding coefficient and constant term of the static obstacle expression of the previous moment; $x_i$ and $y_i$ represent the abscissa and the ordinate of the $i$-th projected point coordinate value in the set; and $\sigma^{2}$ represents the variance value of the distances between each projected point coordinate value in the set and the static obstacle expression of the previous moment.
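For illustration only (the patent text itself contains no code), a minimal sketch of computing this check from the projected points and the previous moment's coefficients, assuming the straight-line form y = kx + b; the function and variable names are hypothetical:

```python
def static_check(proj_points, k_prev, b_prev):
    """Mean and variance of the point-to-line distances between the projected
    point coordinate values and the previous moment's static obstacle expression."""
    norm = (k_prev * k_prev + 1.0) ** 0.5
    dists = [abs(k_prev * x - y + b_prev) / norm for x, y in proj_points]
    mean = sum(dists) / len(dists)
    variance = sum((d - mean) ** 2 for d in dists) / len(dists)
    return mean, variance
```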
In a second aspect, some embodiments of the present disclosure provide a vehicle lane-level positioning apparatus, the apparatus comprising: an acquisition unit configured to acquire a detection data set and a vehicle pose information set for a historical time period, wherein the detection data in the detection data set comprise a timestamp and obstacle detection point coordinate values, and the vehicle pose information in the vehicle pose information set comprises a timestamp and a vehicle pose matrix; a combination unit configured to combine each piece of vehicle pose information in the vehicle pose information set with the detection data in the detection data set that matches it, so as to generate pose data information groups and obtain a set of pose data information groups; a classification unit configured to, in response to determining that the set of pose data information groups satisfies a predetermined condition, classify each piece of pose data information in the set to obtain a dynamic pose data information set and a static pose data information set; a first generation unit configured to generate static obstacle information based on the static pose data information set; a second generation unit configured to generate a dynamic obstacle information set based on the dynamic pose data information set; and a third generation unit configured to generate vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: the vehicle lane-level positioning method of some embodiments of the present disclosure can improve the efficiency of vehicle lane-level positioning. Specifically, the efficiency of vehicle lane-level positioning is reduced because, in the conventional methods, each lane-level positioning method needs to access high-precision map data or process the large amount of data output by the sensors, which occupies a large amount of computing and memory resources and causes problems such as a shortage of computing resources and insufficient computing capacity. On this basis, the vehicle lane-level positioning method of some embodiments of the present disclosure first reduces the number of sensors: a small number of sensors (e.g., a millimeter wave radar and an inertial measurement unit) can detect the vehicle's surroundings to provide the detection data set and the vehicle pose information set for the historical time period, which reduces the amount of data output by a large number of sensors. Then, the lane of the vehicle is determined by the lane-level positioning method of this disclosure alone, avoiding the integration of multiple lane-level positioning methods and thereby reducing the amount of access to high-precision map data. Consumption of computing and memory resources is therefore saved, and the efficiency of vehicle lane-level positioning is improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of one application scenario of a vehicle lane-level positioning method of some embodiments of the present disclosure;
FIG. 2 is a flow chart of some embodiments of a vehicle lane-level positioning method according to the present disclosure;
FIG. 3 is a flow chart of further embodiments of a vehicle lane-level positioning method according to the present disclosure;
FIG. 4 is a schematic structural diagram of some embodiments of a vehicle lane-level positioning apparatus according to the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of the vehicle lane-level positioning method of some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may acquire a detection data set 102 and a vehicle pose information set 103 for a historical time period, where the detection data in the detection data set 102 include a timestamp and obstacle detection point coordinate values, and the vehicle pose information in the vehicle pose information set 103 includes a timestamp and a vehicle pose matrix. Next, the computing device 101 may combine each piece of vehicle pose information in the vehicle pose information set 103 with the detection data in the detection data set 102 that matches it, to generate pose data information groups and obtain a set of pose data information groups 104. The computing device 101 may then, in response to determining that the set of pose data information groups 104 satisfies a predetermined condition, classify each piece of pose data information in the set 104 to obtain a dynamic pose data information set 105 and a static pose data information set 106. Thereafter, the computing device 101 may generate static obstacle information 107 based on the static pose data information set 106, and generate a dynamic obstacle information set 108 based on the dynamic pose data information set 105. Finally, the computing device 101 may generate vehicle lane-level positioning information 109 based on the static obstacle information 107 and the dynamic obstacle information set 108.
The computing device 101 may be hardware or software. When it is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or a single terminal device. When it is software, it may be installed in the hardware devices enumerated above, and may be implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of a vehicle lane-level positioning method according to the present disclosure is shown. The vehicle lane-level positioning method comprises the following steps:
step 201, a detection data set and a vehicle pose information set in a historical time period are obtained.
In some embodiments, the execution subject of the vehicle lane-level positioning method (e.g., the computing device 101 shown in fig. 1) may acquire the detection data set and the vehicle pose information set for the historical time period in a wired or wireless manner. The detection data in the detection data set may include, but are not limited to, a timestamp and obstacle detection point coordinate values; the vehicle pose information in the vehicle pose information set may include, but is not limited to, a timestamp and a vehicle pose matrix. The detection data set for the historical time period (e.g., 3 seconds) may be generated by a first on-board sensor (e.g., a millimeter wave radar). The vehicle pose information set may be generated by a second on-board sensor (e.g., an inertial measurement unit). The vehicle pose matrix included in the vehicle pose information can characterize the variation of the position and attitude of the vehicle. The detection data and the pose information can be put into correspondence through the timestamps they include: detection data having the same timestamp as a piece of pose information can be determined by the timestamp, and vice versa. The obstacle detection point coordinate values may be coordinate values of obstacle detection points in the vehicle coordinate system. The vehicle coordinate system may be established with the center of the vehicle as the origin, the axis through the origin along the advancing direction of the vehicle as the horizontal axis, and the axis through the origin in the horizontal direction perpendicular to the horizontal axis (e.g., horizontally to the left) as the vertical axis.
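As an illustrative sketch only, the two input streams described above can be modeled as timestamped records; the field names below are assumptions, not the patent's terminology:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectionData:
    """One record from the first on-board sensor (e.g., a millimeter wave radar)."""
    timestamp: float                 # shared time base with the pose stream
    point_xy: Tuple[float, float]    # obstacle detection point in the vehicle coordinate system
    relative_speed: float = 0.0      # optional: speed of the point relative to the vehicle, m/s

@dataclass
class VehiclePose:
    """One record from the second on-board sensor (e.g., an inertial measurement unit)."""
    timestamp: float
    pose_matrix: List[List[float]]   # e.g., 3x3 homogeneous matrix of position/attitude change
    ground_speed: float = 0.0        # optional: vehicle speed over ground, m/s
```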
Step 202, combining each piece of vehicle pose information in the vehicle pose information set with the detection data in the detection data set that matches it to generate pose data information groups, obtaining a set of pose data information groups.
In some embodiments, the execution subject may combine each piece of vehicle pose information in the vehicle pose information set with the detection data in the detection data set that matches it, so as to generate pose data information groups and obtain a set of pose data information groups. Each piece of vehicle pose information may be combined with the detection data in the detection data set having the same timestamp, generating one pose data information group. A pose data information group can thus characterize all of the detection data and vehicle pose information corresponding to the same timestamp. In this way, data generated by two different sensors can be fused in a structured manner along the time dimension (for example, data having this correspondence can be stored as one record in a database), reducing the storage tags (for example, table names) needed for the fused data. The occupation of storage resources can thus be reduced from the viewpoint of data structuring.
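A minimal sketch of this timestamp-based combination, assuming exact timestamp equality and reusing the hypothetical record types sketched earlier:

```python
def combine_by_timestamp(poses, detections):
    """Pair each piece of vehicle pose information with all detection data
    sharing its timestamp, yielding one pose data information group per pose."""
    by_ts = {}
    for d in detections:
        by_ts.setdefault(d.timestamp, []).append(d)
    # One fused record per pose: a structured fusion of the two sensors in the
    # time dimension, storable as a single database record.
    return [(p, by_ts.get(p.timestamp, [])) for p in poses]
```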
Step 203, in response to determining that the set of pose data information groups satisfies the predetermined condition, classifying each piece of pose data information in the set to obtain a dynamic pose data information set and a static pose data information set.
In some embodiments, in response to determining that the set of pose data information groups satisfies a predetermined condition, the execution subject may classify each piece of pose data information in the set to obtain a dynamic pose data information set and a static pose data information set. The predetermined condition may be that vehicle pose information and detection data exist in every pose data information group of the set, and that the position and attitude variation characterized by the vehicle pose matrix included in the vehicle pose information is smaller than a preset vehicle position and attitude variation. The dynamic pose data information set may be information characterizing dynamic obstacles around the vehicle (e.g., information of surrounding traveling vehicles). The static pose data information set may be information characterizing static obstacles around the vehicle (e.g., information of a lane fence). A variation smaller than the preset vehicle position and attitude variation indicates that the change of the vehicle's position and attitude is within the preset range, conforms to the actual situation, and is in a normal state. The information classification may be performed on each pose data information group by a classification algorithm (e.g., a decision tree, logistic regression, etc.).
In some optional implementations of some embodiments, the detection data in the detection data set may further include a relative speed value, and the vehicle pose information in the vehicle pose information set may further include a vehicle ground speed value. The information classification of each piece of pose data information in the set of pose data information groups by the execution subject may include the following steps (see the sketch after this list):
First, determining, based on the relative speed value and the vehicle ground speed value included in each piece of pose data information in the set of pose data information groups, the ground speed value corresponding to the obstacle detection point coordinate values included in that pose data information, so as to obtain a ground speed value set. The relative speed value may be the speed, relative to the vehicle, of the obstacle detection point characterized by the obstacle detection point coordinate values included in the detection data. The vehicle ground speed value may be the speed of the vehicle relative to the ground at a certain moment. The difference between the relative speed value included in each piece of pose data and the vehicle ground speed value may be determined as the ground speed value.
Second, determining the pose data information corresponding to ground speed values in the ground speed value set that are greater than a preset speed threshold as dynamic pose data information, obtaining a dynamic pose data information set. The preset speed threshold (e.g., 0.1 m/s) may represent the minimum resolution of the speed value, and is used to distinguish whether the obstacle detection point corresponding to the pose data information is static or dynamic.
Third, determining the pose data information corresponding to ground speed values in the ground speed value set that are less than or equal to the preset speed threshold as static pose data information, obtaining a static pose data information set.
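The split can be sketched in a few lines; how the relative speed and the vehicle ground speed combine depends on the radar's sign convention (the text above states a difference), so the combination here is an assumption:

```python
SPEED_THRESHOLD = 0.1  # m/s, the minimum speed resolution from the text above

def classify_pose_data(groups):
    """Split pose data information into dynamic and static sets by the
    ground speed of each obstacle detection point."""
    dynamic, static = [], []
    for pose, detections in groups:
        for det in detections:
            # Ground speed of the detection point (sign convention assumed).
            ground_speed = det.relative_speed - pose.ground_speed
            if abs(ground_speed) > SPEED_THRESHOLD:
                dynamic.append((pose, det))
            else:
                static.append((pose, det))
    return dynamic, static
```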
Step 204, generating static obstacle information based on the static pose data information set.
In some embodiments, the execution subject may generate static obstacle information based on the static pose data information set. The static pose data information set may be input into a preset deep learning model (e.g., a recurrent neural network, a deep neural network, etc.) to generate the static obstacle information. The static obstacle information may be used to characterize a distance value between the vehicle and an obstacle.
Step 205, generating a dynamic obstacle information set based on the dynamic pose data information set.
In some embodiments, the execution subject may generate a dynamic obstacle information set based on the dynamic pose data information set. The dynamic pose data information set may be input into a preset deep learning model (e.g., a deep generative network, a deep belief network, etc.) to generate the dynamic obstacle information set. The number of pieces of dynamic obstacle information in the set may characterize the number of dynamic obstacles (e.g., vehicles, pedestrians, etc.) present in the surroundings of the vehicle. The dynamic obstacle information set may be used to characterize the positions of the dynamic obstacles in the vehicle coordinate system.
In some optional implementations of some embodiments, the execution subject generating a dynamic obstacle information set based on the dynamic pose data information set may include the following steps:
First, generating a dynamic obstacle centroid coordinate value group and an average speed value group based on the dynamic pose data information set. The dynamic pose data information in the dynamic pose data information set may be clustered by a clustering algorithm (e.g., a mean shift clustering algorithm, a hierarchical clustering algorithm, etc.) to obtain dynamic obstacle information groups, each of which may characterize one dynamic obstacle. A predetermined number of pieces of dynamic obstacle information with the farthest relative distances may be selected from each dynamic obstacle information group; the average of the obstacle coordinate point coordinate values corresponding to these pieces may be determined as the centroid coordinate value, and the average of the corresponding relative speeds as the average speed value.
Second, performing, according to the average speed value group, coordinate value verification on each dynamic obstacle centroid coordinate value in the dynamic obstacle centroid coordinate value group to obtain a verified dynamic obstacle centroid coordinate value group as the dynamic obstacle information set. The average speed value group and each dynamic obstacle centroid coordinate value in the group may be input into a preset vehicle dynamics model (e.g., a Kalman filter) for coordinate value verification, so as to generate the verified dynamic obstacle centroid coordinate value group.
In some optional implementations of some embodiments, the execution subject generating a dynamic obstacle centroid coordinate value group and an average speed value group based on the dynamic pose data information set may include the following steps (see the sketch after this list):
First, adding the obstacle detection point coordinate values included in each piece of dynamic pose data information in the dynamic pose data information set to a dynamic coordinate value set, whose initial state may be an empty set.
Second, determining the coordinate distance value between every two dynamic coordinate values in the dynamic coordinate value set, obtaining a dynamic coordinate distance value set. Every two dynamic coordinate values may be any two dynamic coordinate values in the set.
Third, determining the difference between the relative speed values corresponding to every two dynamic coordinate values in the dynamic coordinate value set as a speed difference value, obtaining a speed difference value set. Since a dynamic coordinate value is an obstacle detection point coordinate value, and the dynamic pose data information that includes the obstacle detection point coordinate values also includes a relative speed value, the relative speed value corresponding to each dynamic coordinate value can be determined. The speed difference values in the set can characterize the speed differences between the obstacle detection points corresponding to the two obstacle detection point coordinate values.
Fourth, grouping the dynamic coordinate values based on the dynamic coordinate distance value set and the speed difference value set, obtaining a set of dynamic coordinate value groups. Dynamic coordinate values whose mutual coordinate distance values are smaller than a preset dynamic coordinate distance threshold and whose speed difference values are smaller than a preset speed difference threshold may be placed in the same group.
Fifth, determining the centroid coordinate value and the average speed value of each dynamic coordinate value group in the set, obtaining the dynamic obstacle centroid coordinate value group and the average speed value group. The average of the dynamic coordinate values in a group may be determined as the centroid coordinate value, and the average of the relative speed values corresponding to the dynamic coordinate values in the group as the average speed value.
Sixth, generating, according to the dynamic obstacle centroid coordinate value group and the average speed value group, a position coordinate value group as the dynamic obstacle information set. The position coordinate values of the dynamic obstacles may be generated by a preset adaptive filter.
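A minimal sketch of this grouping and centroid step, using a simple union-find over point pairs that pass both thresholds (the threshold values are illustrative assumptions):

```python
import math

def cluster_dynamic_points(points, speeds, dist_thresh=2.0, speed_thresh=0.5):
    """Group detection points that are mutually close in position and speed,
    then return one (centroid, average speed) per group.
    points: list of (x, y); speeds: matching relative speed values."""
    n = len(points)
    parent = list(range(n))            # union-find over point indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            if d < dist_thresh and abs(speeds[i] - speeds[j]) < speed_thresh:
                parent[find(i)] = find(j)  # same obstacle: merge the groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)

    result = []
    for idx in groups.values():
        cx = sum(points[i][0] for i in idx) / len(idx)
        cy = sum(points[i][1] for i in idx) / len(idx)
        v = sum(speeds[i] for i in idx) / len(idx)
        result.append(((cx, cy), v))   # centroid coordinate value, average speed value
    return result
```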
Step 206, generating vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set.
In some embodiments, the execution subject may generate the vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set. Together, the static obstacle information and the dynamic obstacle information set can characterize the change of the distances to obstacles around the vehicle over the historical time period. The position of the vehicle can be estimated from the on-board positioning device, and the lane number information of the same position can then be selected from pre-stored lane number information to determine the number of lanes at the vehicle's current position. The lane in which the vehicle is located can be further determined from the distance value characterized by the static obstacle information (i.e., the distance value between the vehicle and the obstacle; for example, when the obstacle is a lane fence, the distances from different lanes to the lane fence differ). Finally, the lane in which the vehicle is located can be logically checked against this distance value and the coordinate values of each dynamic obstacle included in the dynamic obstacle information set (for example, the position coordinate values of the dynamic obstacles), and the final lane of the vehicle can be determined to generate the vehicle lane-level positioning information. The vehicle lane-level positioning is thus completed.
The above embodiments of the present disclosure have the following advantages: the vehicle lane-level positioning method of some embodiments of the present disclosure can improve the efficiency of vehicle lane-level positioning. Specifically, the efficiency of vehicle lane-level positioning is reduced because, in the conventional methods, each lane-level positioning method needs to access high-precision map data or process the large amount of data output by the sensors, which occupies a large amount of computing and memory resources and causes problems such as a shortage of computing resources and insufficient computing capacity. On this basis, the vehicle lane-level positioning method of some embodiments of the present disclosure first reduces the number of sensors: a small number of sensors (e.g., a millimeter wave radar and an inertial measurement unit) can detect the vehicle's surroundings to provide the detection data set and the vehicle pose information set for the historical time period, which reduces the amount of data output by a large number of sensors. Then, the lane of the vehicle is determined by the lane-level positioning method of this disclosure alone, avoiding the integration of multiple lane-level positioning methods and thereby reducing the amount of access to high-precision map data. Consumption of computing and memory resources is therefore saved, and the efficiency of vehicle lane-level positioning is improved.
With further reference to fig. 3, a flow 300 of further embodiments of a vehicle lane-level positioning method is shown. The flow 300 of the vehicle lane-level positioning method includes the following steps:
step 301, a detection data set and a vehicle pose information set in a historical time period are obtained.
Step 302, combining each piece of vehicle pose information in the vehicle pose information set with the detection data in the detection data set that matches it to generate pose data information groups, obtaining a set of pose data information groups.
In some embodiments, the specific implementation manner and technical effects of the steps 301 and 302 can refer to the steps 201 and 202 in the embodiments corresponding to fig. 2, which are not described herein again.
Step 303, in response to determining that the set of pose data information groups satisfies the predetermined condition, classifying each piece of pose data information in the set to obtain a dynamic pose data information set and a static pose data information set.
In some embodiments, in response to determining that the set of pose data information groups satisfies the predetermined condition, the execution subject may classify each piece of pose data information in the set, obtaining a dynamic pose data information set and a static pose data information set. The information classification may classify each piece of pose data information within each pose data information group of the set; classifying every group yields a dynamic pose data information set and a static pose data information set. That is, each timestamp in the historical time period may correspond to one dynamic pose data information set and one static pose data information set.
Step 304, generating a static obstacle expression based on the static pose data information set.
In some embodiments, the execution subject may generate a static obstacle expression based on the static pose data information set. The static pose data information in each static pose data information set can be classified, and the static pose data information characterizing a target static obstacle (e.g., a curbstone) selected. Then, expression fitting (e.g., straight-line fitting, quadratic curve fitting, etc.) may be performed on the obstacle detection point coordinate values included in the static pose data information characterizing the target static obstacle. A static obstacle expression can thus be generated to characterize the distribution, in the vehicle coordinate system, of the obstacle detection point coordinate values corresponding to fixed obstacles on the road. The static obstacle information may include a static obstacle expression and static obstacle expression check information; the check information can be used to verify whether the static obstacle characterized by the expression is suitable as a reference for vehicle lane-level positioning, so that the position of the vehicle can be determined from fixed obstacles on the road. In addition, each timestamp in the historical time period may correspond to one static obstacle expression.
In some optional implementations of some embodiments, the generating a static obstacle expression by the execution subject based on the static pose data information set may include:
the method comprises the following steps of firstly, determining the minimum fitting obstacle coordinate number corresponding to a preset basic expression. The preset basic expression may be a straight line expression or a polynomial curve expression. The above-described minimum fitting obstacle coordinate number may represent the number of the minimum coordinate values for solving the basic expression.
As an example, when the basic expression is a straight line (e.g., y = kx + b), the minimum fitting obstacle coordinate number may be two; that is, two obstacle detection point coordinate values are required to solve the basic expression.
Second, selecting the minimum fitting obstacle coordinate number of pieces of static pose data information from the static pose data information set, and executing the following fitting steps based on a preset loop count threshold, a preset expression information set, a preset detection point minimum number threshold, and the obstacle detection point coordinate values included in the selected static pose data information (a sketch of the whole loop follows these substeps):
the first substep: and fitting the basic expression based on the coordinate values of the obstacle detection points included in the selected static pose data information to obtain a basic fitting expression. The coordinate values of the obstacle detection points included in the selected static pose data information can be input into the basic expression, and parameters of the basic expression are generated. From this, a basic fitting expression can be obtained.
The second substep: and determining the static pose data information in the static pose data information set, which is not matched with the selected static pose data information, as target static pose data information to obtain a target static pose data information set. Wherein the unmatched static pose data information may be different static pose data information from each selected static pose data information. That is, the static pose data information in the static pose data information set that is different from each selected static pose data information may be determined as the target static pose data information, so as to obtain the target static pose data information set.
The third substep: and determining a point-line distance value between the coordinate value of the obstacle detection point included by each piece of target static pose data information in the target static pose data information set and the basic fitting expression to obtain a point-line distance value set. The point-line distance value between the coordinate value of the obstacle detection point and the basic fitting expression included in each set of target static pose data information can be determined through a point-to-straight line distance value formula, and a point-line distance value group is obtained.
A fourth substep: and determining the maximum tolerance distance value between the coordinate value of the obstacle detection point and the basic fitting expression included by each piece of target static pose data information in the target static pose data information set. The maximum tolerance distance value can be used for screening the coordinate values of the obstacle detection points close to the basic fitting expression. The maximum tolerance distance value may be selected from a relationship table in which a preset expression corresponds to the maximum tolerance distance value. Different forms of expressions and corresponding maximum tolerance distance values may be included in the relational table.
As an example, when the basic fitting expression is a straight line, the corresponding maximum tolerance distance value in the relationship table may be 0.01 meters.
Fifth substep: adding the obstacle detection point coordinate values corresponding to the point-line distance values in the point-line distance value group that are smaller than the maximum tolerance distance value to an initial detection point coordinate value set, obtaining a detection point coordinate value set. The initial state of the initial detection point coordinate value set may be an empty set; in each loop it stores the obstacle detection point coordinate values whose point-line distance values are smaller than the maximum tolerance distance value, so as to generate the detection point coordinate value set.
Sixth substep: incrementing the loop count value by 1. The initial value of the loop count value may be 0, and the loop count threshold (e.g., 100) limits the number of loops of the fitting step.
Seventh substep: in response to determining that the number of detection point coordinate values in the detection point coordinate value set is greater than the preset detection point minimum number threshold, adding the basic fitting expression together with the detection point coordinate value set, as expression information, to the expression information set. The minimum number threshold characterizes the minimum number of obstacle detection point coordinate values whose distances to the basic fitting expression are within the maximum tolerance distance value.
Eighth substep: in response to determining that the loop count value is equal to the loop count threshold, determining the basic fitting expression corresponding to the largest number of detection point coordinate values, among the detection point coordinate value sets included in the expression information of the expression information set, as the static obstacle expression. The loop count value being equal to the loop count threshold indicates that the number of loops has reached the predetermined number, whereupon the loop may end and the static obstacle expression be output.
In some optional implementations of some embodiments, generating the static obstacle expression based on the static pose data information set may further include the following step:
in response to determining that the loop count value is less than the loop count threshold, initializing the detection point coordinate value set to the initial detection point coordinate value set, and selecting the minimum fitting obstacle coordinate number of pieces of static pose data information that have not been selected from the static pose data information set, so as to execute the fitting step again. The initialization may empty the detection point coordinate value set to obtain an empty initial detection point coordinate value set that participates in the next loop. After static pose data information is selected from the static pose data information set, the selected pieces can be marked, so that the unselected static pose data information in the set can be determined and repeated selection avoided.
In addition, when the number of pieces of unselected static pose data information is smaller than the minimum fitting obstacle coordinate number, the loop may end, and the basic fitting expression corresponding to the largest number of detection point coordinate values, among the detection point coordinate value sets included in the expression information of the expression information set, may be determined as the static obstacle expression. In practice, the static obstacle expression can characterize, in the vehicle coordinate system, the static obstacles in the surroundings of the vehicle at the current moment, so that a static obstacle expression can be assigned to each moment during the travel of the vehicle.
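The fitting loop above follows a RANSAC-style pattern: repeatedly fit the basic expression to a minimal sample, count the detection points within the maximum tolerance distance, and keep the expression with the most such points. A minimal sketch for the straight-line case y = kx + b (the threshold values and function names are illustrative assumptions, not the patent's):

```python
import random

def fit_static_obstacle_line(points, max_iters=100, tol=0.01, min_inliers=20):
    """RANSAC-style line fit over obstacle detection points.
    points: list of (x, y) in the vehicle coordinate system."""
    best = None  # (inlier_count, k, b)
    for _ in range(max_iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)  # minimum fitting number for a line
        if x1 == x2:
            continue
        k = (y2 - y1) / (x2 - x1)
        b = y1 - k * x1
        # point-line distance |k*x - y + b| / sqrt(k^2 + 1) against the tolerance
        inliers = [(x, y) for x, y in points
                   if abs(k * x - y + b) / (k * k + 1) ** 0.5 < tol]
        if len(inliers) > min_inliers and (best is None or len(inliers) > best[0]):
            best = (len(inliers), k, b)
    return None if best is None else (best[1], best[2])  # expression y = kx + b
```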
Step 305, sampling the static obstacle expression at intervals within a preset horizontal axis interval in the vehicle coordinate system to obtain a sampling point coordinate value set.
In some embodiments, the execution subject may sample the static obstacle expression at intervals within a preset horizontal axis interval in the vehicle coordinate system, obtaining a sampling point coordinate value set. The sampling point coordinate values in the set correspond to timestamps included in the vehicle pose information set. The preset horizontal axis interval may range from zero to a preset abscissa threshold, and the interval sampling may be performed at intervals of a preset length.
In addition, a characteristic of the millimeter wave radar should be considered: the accuracy of the acquired obstacle detection point coordinate values is inversely related to the distance between the carrier and the obstacle. That is, the accuracy of obstacle detection point coordinates acquired close to the carrier (for example, the above-mentioned vehicle) is higher, and the accuracy of those acquired farther from the carrier becomes lower. Therefore, the preset length interval may be set to increase progressively, so that more samples are taken close to the carrier and sampling becomes gradually sparser farther away, improving the accuracy of the sampling. The length interval may start from a preset basic interval, and each sampling may use the product of the previous interval and a growth factor (for example, a number greater than 1 and less than or equal to 2) as the next sampling interval (see the sketch below).
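A minimal sketch of interval sampling with a growing step, under the reading above that each interval is the previous one multiplied by a growth factor between 1 and 2 (the concrete values are illustrative):

```python
def sample_expression(k, b, x_max, base_step=0.5, growth=1.5):
    """Sample y = k*x + b densely near the vehicle and sparsely far away."""
    samples, x, step = [], 0.0, base_step
    while x <= x_max:
        samples.append((x, k * x + b))
        x += step
        step *= growth  # each interval is the previous one times the growth factor
    return samples
```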
Step 306, projecting, based on the vehicle pose matrix corresponding to each sampling point coordinate value in the sampling point coordinate value set, each sampling point coordinate value into the vehicle coordinate system to generate projected point coordinate values, obtaining a projected point coordinate value set.
In some embodiments, the execution subject may project, based on the vehicle pose matrix corresponding to each sampling point coordinate value in the sampling point coordinate value set, each sampling point coordinate value into the vehicle coordinate system to generate a projected point coordinate value, obtaining a projected point coordinate value set. A sampling point coordinate value and a vehicle pose matrix correspond to each other when the timestamp corresponding to the sampling point coordinate value is the same as the timestamp corresponding to the vehicle pose matrix. Each sampling point coordinate value in the set can be projected by the following steps (see the sketch after this list):
First, the product of the vehicle pose matrix corresponding to the timestamp of the current moment and the vehicle pose matrix corresponding to the timestamp of the previous moment can be determined as the pose projection matrix. The current moment corresponds to one timestamp; the historical time period may include a plurality of consecutive timestamps ordered in time, and for a given timestamp, the adjacent preceding timestamp may be taken as the previous moment, from which the vehicle pose matrix corresponding to the timestamp of the previous moment can be determined.
Second, the product of each sampling point coordinate value in the sampling point coordinate value set and the pose projection matrix can be determined as a projected point coordinate value, obtaining the projected point coordinate value set.
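A minimal sketch of the projection, assuming planar 3x3 homogeneous pose matrices and taking the pose projection matrix as the product of the two pose matrices as described above (whether one factor should be inverted depends on the pose convention, which the text does not fix):

```python
import numpy as np

def project_points(samples, pose_now, pose_prev):
    """Project sampling point coordinate values with the pose projection matrix.
    samples: list of (x, y); pose_now, pose_prev: 3x3 homogeneous pose matrices."""
    projection = pose_now @ pose_prev          # pose projection matrix per the text above
    pts = np.array([[x, y, 1.0] for x, y in samples]).T  # homogeneous coordinates, 3xN
    proj = projection @ pts
    return list(zip(proj[0] / proj[2], proj[1] / proj[2]))
```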
Step 307, performing a static check on the static obstacle expression based on the projected point coordinate value set and the pre-generated static obstacle expression of the previous moment to obtain static obstacle expression check information.
In some embodiments, the execution subject may perform a static check on the static obstacle expression based on the projected point coordinate value set and a pre-generated static obstacle expression of the previous moment, obtaining static obstacle expression check information. Based on the projected point coordinate value set and the pre-generated static obstacle expression of the previous moment, the average value and the variance value included in the static obstacle expression check information can be generated through the following formulas:
$$\mu = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|k_{t-1}x_i - y_i + b_{t-1}\right|}{\sqrt{k_{t-1}^{2}+1}}\,,\qquad \sigma^{2} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{\left|k_{t-1}x_i - y_i + b_{t-1}\right|}{\sqrt{k_{t-1}^{2}+1}}-\mu\right)^{2}$$

wherein $\mu$ represents the average value of the distances between each projected point coordinate value in the projected point coordinate value set and the static obstacle expression of the previous moment; $i$ represents the sequence number; $t-1$ represents the previous moment; $n$ represents the number of projected point coordinate values in the projected point coordinate value set; $k$ and $b$ represent the coefficient of the first-order term and the constant term of the static obstacle expression $y = kx + b$ from which the sampling points were taken, and $k_{t-1}$ and $b_{t-1}$ the corresponding coefficient and constant term of the static obstacle expression of the previous moment; $x_i$ and $y_i$ represent the abscissa and the ordinate of the $i$-th projected point coordinate value in the set; and $\sigma^{2}$ represents the variance value of the distances between each projected point coordinate value in the set and the static obstacle expression of the previous moment.
The above formulas and their related content are an inventive point of the embodiments of the present disclosure, and address the second technical problem mentioned in the background: vehicle sensors are easily affected by the environment, which lowers the accuracy of the data they output; consequently, the accuracy of the lane-level positioning results of some lane-level positioning methods also drops, and because the common methods do not verify those results, combining the results of several unverified lane-level positioning methods reduces the accuracy of lane-level positioning. The factors that reduce the accuracy of lane-level positioning are thus: vehicle sensors are easily affected by the environment, the accuracy of the sensor output data is reduced, and the lane-level positioning results are not verified. If these factors are resolved, the accuracy of vehicle lane-level positioning can be improved. To achieve this, first, the environmental factors present during actual vehicle travel are considered, namely the existence of static obstacles and dynamic obstacles, which affect the positioning result differently; therefore, to improve the accuracy of vehicle lane-level positioning, each piece of pose data information in the set of pose data information groups is classified. Second, some static obstacles are difficult to use as references (for example, a tree trunk), while others are easy to reference (for example, curbs, fences, etc.); the method therefore screens the static obstacles again and selects the static pose data information satisfying the conditions as the reference content for lane-level positioning. Third, the coordinate values included in the selected static pose data information are scattered, and scattered coordinate values cannot well reflect the form of the static obstacle in the vehicle coordinate system (such as a straight line or a curve) for unified mathematical operation; a static obstacle expression is therefore introduced to facilitate such operation. Finally, the accuracy of the introduced static obstacle expression must be ensured; the above formulas are therefore introduced to statically check the projected coordinate values of the sampling points, ensuring that the static obstacle expression has high accuracy. Thus, with a small number of vehicle sensors (for example, only a millimeter wave radar and an inertial measurement unit), the influence of the environment on the sensor detection results is reduced (the active detection results of the millimeter wave radar are less affected by weather and light), reality is fitted from multiple aspects, and the accuracy of the data generated at each step is improved. The accuracy of the generated vehicle lane-level positioning information is improved accordingly.
Step 308, in response to determining that the static obstacle expression check information satisfies the preset static check condition, determining the static obstacle expression and the static obstacle expression check information as the static obstacle information.
In some embodiments, the execution body may determine the static obstacle expression and the static obstacle expression check information as the static obstacle information in response to determining that the static obstacle expression check information satisfies a preset static check condition. The preset static check condition may be: the average value included in the static obstacle expression check information is smaller than a preset average threshold, and the variance value included in the static obstacle expression check information is smaller than a preset variance threshold.
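A minimal sketch of this static check follows, assuming the check information is the mean and variance of the distances from the projection points to the previous-moment static obstacle expression, that the expression is a polynomial y = f(x) in the vehicle coordinate system, and that the threshold values are hypothetical placeholders:

    import numpy as np

    def static_check(projection_points, prev_expression_coeffs,
                     average_threshold=0.5, variance_threshold=0.2):
        # projection_points: (N, 2) array of projection point coordinate values (x, y).
        # prev_expression_coeffs: polynomial coefficients (highest degree first) of
        # the previous-moment static obstacle expression, assumed to be y = f(x).
        x, y = projection_points[:, 0], projection_points[:, 1]
        # Distance of each projection point to the expression, approximated here
        # by the vertical residual |y - f(x)|.
        distances = np.abs(y - np.polyval(prev_expression_coeffs, x))
        # The check information: average value and variance value of the distances.
        average_value, variance_value = distances.mean(), distances.var()
        # Preset static check condition: both statistics below their thresholds.
        passed = average_value < average_threshold and variance_value < variance_threshold
        return passed, (average_value, variance_value)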
Optionally, each timestamp in the historical time period may correspond to one static obstacle expression. Therefore, the execution body may further perform the above static check on each static obstacle expression to obtain a static obstacle expression check information group. Then, the static obstacle expression score value may be generated by the following formula, the exponential form of which is inferred from the variable definitions below:
$$P = P_{0} \cdot \alpha^{n}$$

where $P$ represents the above static obstacle expression score value, $P_{0}$ represents the initial credit value, $n$ represents the number of static obstacle expression check information items in the static obstacle expression check information group that satisfy the preset static check condition, and $\alpha$ represents a score value increase factor (e.g., 1.2).
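A short sketch of this scoring under the same reconstruction assumption, namely that each check satisfying the condition multiplies the credit by the increase factor; all parameter values are hypothetical:

    def expression_score(check_info_group, initial_credit=1.0, increase_factor=1.2,
                         average_threshold=0.5, variance_threshold=0.2):
        # check_info_group: iterable of (average_value, variance_value) pairs,
        # one per static obstacle expression in the historical time period.
        n = sum(1 for avg, var in check_info_group
                if avg < average_threshold and var < variance_threshold)
        # Reconstructed score: the initial credit value grows by the increase
        # factor once per check satisfying the preset static check condition.
        return initial_credit * increase_factor ** n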
Step 309, generating a dynamic obstacle information set based on the dynamic pose data information set.
In some embodiments, for the specific implementation and technical effects of step 309, reference may be made to step 205 in the embodiments corresponding to fig. 2, which are not described herein again.
Step 310, generating vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set.
In some embodiments, the execution body may generate the vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set. For example, the static obstacle expression included in the static obstacle information, a preset closest distance of the vehicle, a preset road width value, and a preset emergency lane width value may first be determined. Then, the lane in which the vehicle is located may be determined according to the static obstacle expression, the closest distance of the vehicle, the road width value, and the emergency lane width value.
For example, taking a current road with two lanes and one emergency lane, when the closest distance value is smaller than the sum of half of the road width value and the emergency lane width value, the lane in which the vehicle is located is the first lane counted from the left. When the closest distance is greater than half of the road width value, it may be determined that the vehicle is in the second lane counted from the left. In this way, the purpose of vehicle lane-level positioning is achieved.
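A minimal sketch of this two-lane example, assuming the closest distance is measured from the vehicle to the reference static obstacle and evaluating the two conditions in the order given above; all parameters are illustrative:

    def locate_lane_two_lane_road(closest_distance, road_width, emergency_width):
        # Two lanes plus one emergency lane; lanes are counted from the left.
        if closest_distance < road_width / 2 + emergency_width:
            return 1  # first lane counted from the left
        if closest_distance > road_width / 2:
            return 2  # second lane counted from the left
        return None  # no condition matched; fall back to other positioning cues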
Optionally, the execution body may further generate the vehicle lane-level positioning information based on the static obstacle information group, the static obstacle expression score value, and the dynamic obstacle information set. First, for the position coordinate value of the obstacle included in the dynamic obstacle information at the current time in the dynamic obstacle information set, a lateral distance value between the obstacle and the vehicle may be determined. Then, a vehicle lane-level positioning rule may be generated based on the lateral distance value, the road width value, the static obstacle expression score value, the emergency lane width value, the static obstacle expression included in the static obstacle information, and the closest distance of the vehicle. Finally, the lane information that conforms to the vehicle lane-level positioning rule may be used as the vehicle lane-level positioning information.
As an example, suppose the road on which the vehicle is located has three or more lanes and one emergency lane. When the closest distance value is smaller than the sum of half of the road width value and the emergency lane width value, and the static obstacle expression score value is greater than a preset score threshold, the vehicle lane-level positioning rule set may include the rule: the lane in which the vehicle is located is neither the first lane nor the second lane counted from the right. When the lateral distance value is smaller than the width of a lane in the road, the vehicle lane-level positioning rule set may include the rule: the vehicle is not in the first lane counted from the left.
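A sketch of how such exclusion rules might be accumulated and intersected; the two rule forms follow the examples above, and every parameter name and threshold is an illustrative assumption:

    def lane_level_locate(num_lanes, closest_distance, road_width, emergency_width,
                          score_value, score_threshold, lateral_distances, lane_width):
        # Lanes are counted from the left: 1 .. num_lanes (num_lanes >= 3 here).
        candidates = set(range(1, num_lanes + 1))
        # Static rule: a well-scored static obstacle expression close enough to the
        # vehicle excludes the first and second lanes counted from the right.
        if (closest_distance < road_width / 2 + emergency_width
                and score_value > score_threshold):
            candidates -= {num_lanes, num_lanes - 1}
        # Dynamic rule: a dynamic obstacle within one lane width laterally
        # excludes the first lane counted from the left.
        if any(d < lane_width for d in lateral_distances):
            candidates.discard(1)
        return candidates  # lane information conforming to the positioning rules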
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the flow 300 of the vehicle lane-level positioning method in some embodiments corresponding to fig. 3 embodies the steps of generating static obstacle information and generating vehicle lane-level positioning information. Fitting reality from multiple aspects improves the accuracy of the data generated at each step, so the accuracy of the generated static obstacle information is improved. Finally, the vehicle lane-level positioning information is generated by combining the static obstacle information and the dynamic obstacle information, making the generated vehicle lane-level positioning information more accurate. Therefore, the method improves the accuracy of the generated vehicle lane-level positioning information while also improving the efficiency of vehicle lane-level positioning.
With further reference to fig. 4, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a vehicle lane-level positioning apparatus, which correspond to those method embodiments illustrated in fig. 2, and which may be particularly applicable in various electronic devices.
As shown in fig. 4, the vehicle lane-level positioning apparatus 400 of some embodiments includes: an acquisition unit 401, a selection unit 402, a classification unit 403, a first generation unit 404, a second generation unit 405, and a third generation unit 406. The acquisition unit 401 is configured to acquire a detection data set and a vehicle pose information set in a historical time period, wherein the detection data in the detection data set includes: timestamp and obstacle detection point coordinate values, and the vehicle pose information in the vehicle pose information set includes: a timestamp and a vehicle pose matrix. The selection unit 402 is configured to combine each vehicle pose information in the vehicle pose information set with the detection data in the detection data set that matches the vehicle pose information to generate a pose data information group, resulting in a pose data information group set. The classification unit 403 is configured to, in response to determining that the pose data information group set satisfies a predetermined condition, perform information classification on each pose data information in the pose data information group set to obtain a dynamic pose data information set and a static pose data information set. The first generation unit 404 is configured to generate static obstacle information based on the static pose data information set. The second generation unit 405 is configured to generate a dynamic obstacle information set based on the dynamic pose data information set. The third generation unit 406 is configured to generate vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set.
It will be understood that the elements described in the apparatus 400 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 400 and the units included therein, and will not be described herein again.
Referring now to FIG. 5, a block diagram of an electronic device (e.g., computing device 101 of FIG. 1) 500 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a detection data set and a vehicle pose information set in a historical time period, wherein the detection data in the detection data set includes: timestamp and obstacle detection point coordinate values, and the vehicle pose information in the vehicle pose information set includes: a timestamp and a vehicle pose matrix; combine each vehicle pose information in the vehicle pose information set with the detection data in the detection data set that matches the vehicle pose information to generate a pose data information group, obtaining a pose data information group set; in response to determining that the pose data information group set meets a predetermined condition, perform information classification on each pose data information in the pose data information group set to obtain a dynamic pose data information set and a static pose data information set; generate static obstacle information based on the static pose data information set; generate a dynamic obstacle information set based on the dynamic pose data information set; and generate vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages or a combination thereof, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a selection unit, a classification unit, a first generation unit, a second generation unit, and a third generation unit. Where the names of these units do not constitute a limitation on the units themselves in some cases, for example, the acquisition unit may also be described as a "unit that acquires the detection data set and the vehicle pose information set within the history time period".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely a description of some preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, a technical solution formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A vehicle lane-level positioning method, comprising:
acquiring a detection data set and a vehicle pose information set in a historical time period, wherein the detection data in the detection data set comprises: timestamp and obstacle detection point coordinate values, the vehicle pose information in the vehicle pose information set comprising: a timestamp and a vehicle pose matrix;
combining each vehicle pose information in the vehicle pose information set with detection data in the detection data set that matches the vehicle pose information to generate a pose data information group, resulting in a pose data information group set;
in response to determining that the pose data information group set meets a predetermined condition, performing information classification on each pose data information in the pose data information group set to obtain a dynamic pose data information set and a static pose data information set;
generating static obstacle information based on the static pose data information set;
generating a dynamic obstacle information set based on the dynamic pose data information set;
and generating vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set.
2. The method of claim 1, wherein the detection data in the detection data set further includes a relative speed value, and the vehicle pose information in the vehicle pose information set further includes a vehicle ground speed value; and
the performing information classification on each pose data information in the pose data information group set to obtain a dynamic pose data information set and a static pose data information set comprises:
determining a ground speed value corresponding to the coordinate value of the obstacle detection point included in the pose data information based on the relative speed value and the vehicle ground speed value included in each pose data information in the pose data information group set to obtain a ground speed value set;
determining pose data information corresponding to the ground speed values which are greater than a preset speed threshold value in the set of ground speed values as dynamic pose data information to obtain a dynamic pose data information set;
and determining pose data information corresponding to the ground speed values smaller than or equal to the preset speed threshold value in the set of ground speed values as static pose data information to obtain a static pose data information set.
3. The method of claim 1, wherein the static obstacle information comprises a static obstacle expression and a static obstacle expression check value; and
generating static obstacle information based on the set of static pose data information, comprising:
generating a static obstacle expression based on the static pose data information set;
sampling the static obstacle expression at intervals within a preset horizontal-axis interval in a vehicle coordinate system to obtain a sampling point coordinate value set, wherein the sampling point coordinate values in the sampling point coordinate value set correspond to timestamps included in the vehicle pose information in the vehicle pose information set;
based on the vehicle pose matrix corresponding to each sampling point coordinate value in the sampling point coordinate value set, projecting each sampling point coordinate value in the sampling point coordinate value set to the vehicle coordinate system to generate a projection point coordinate value, obtaining a projection point coordinate value set;
performing static verification on the static obstacle expression based on the projection point coordinate value set and a pre-generated static obstacle expression at the previous moment to obtain static obstacle expression verification information;
in response to determining that the static obstacle expression check information satisfies a preset static check condition, determining the static obstacle expression and the static obstacle expression check value as static obstacle information.
4. The method of claim 2, wherein the generating a dynamic obstacle information set based on the dynamic pose data information set comprises:
generating a dynamic obstacle centroid coordinate value group and an average speed value group based on the dynamic pose data information set;
and according to the average speed value group, carrying out coordinate value verification on each dynamic obstacle centroid coordinate value in the dynamic obstacle centroid coordinate value group to obtain a verified dynamic obstacle centroid coordinate value group serving as a dynamic obstacle information set.
5. The method of claim 1, wherein the generating vehicle lane-level positioning information based on the static obstacle information and the set of dynamic obstacle information comprises:
and selecting target lane information from a lane information group detected in advance as vehicle lane-level positioning information based on the static obstacle information and the dynamic obstacle information set.
6. The method of claim 4, wherein,
the generating a dynamic obstacle centroid coordinate value group and an average speed value group based on the dynamic pose data information set comprises:
adding the obstacle detection point coordinate values included in each piece of dynamic pose data information in the dynamic pose data information set into a dynamic coordinate value set, wherein the initial state of the dynamic coordinate value set is an empty set;
determining a distance value between every two dynamic coordinate values in the dynamic coordinate value set as a dynamic coordinate distance value to obtain a dynamic coordinate distance value set;
determining a speed difference value between the relative speed values corresponding to every two dynamic coordinate values in the dynamic coordinate value set to obtain a speed difference value set;
classifying each dynamic coordinate distance value in the dynamic coordinate distance value set based on the dynamic coordinate distance value set and the speed difference value set to obtain a dynamic coordinate distance value group set;
and determining a centroid coordinate value and an average speed value for the dynamic coordinate distance values in each dynamic coordinate distance value group in the dynamic coordinate distance value group set to obtain a dynamic obstacle centroid coordinate value group and an average speed value group.
7. The method of claim 3, wherein the generating a static obstacle expression based on the set of static pose data information comprises:
determining a minimum fitting obstacle coordinate number corresponding to a preset basic expression;
selecting the minimum fitting obstacle coordinate number of pieces of static pose data information from the static pose data information set, and executing the following fitting steps based on a preset loop count threshold, a preset expression information set, a preset detection point minimum number threshold, and the obstacle detection point coordinate values included in the selected static pose data information:
fitting the basic expression based on the obstacle detection point coordinate values included in the selected static pose data information to obtain a basic fitting expression;
determining the static pose data information in the static pose data information set, which is not matched with the selected static pose data information, as target static pose data information to obtain a target static pose data information set;
determining a point-line distance value between the obstacle detection point coordinate value included in each piece of target static pose data information in the target static pose data information set and the basic fitting expression to obtain a point-line distance value set;
determining a maximum tolerance distance value between the obstacle detection point coordinate values included in the target static pose data information in the target static pose data information set and the basic fitting expression;
adding the obstacle detection point coordinate values corresponding to the point-line distance values smaller than the maximum tolerance distance value in the point-line distance value set into an initial detection point coordinate value set to obtain a detection point coordinate value set, wherein the initial state of the initial detection point coordinate value set is an empty set;
increasing a loop count value by 1, wherein an initial value of the loop count value is 0;
in response to determining that the number of detection point coordinate values in the detection point coordinate value set is greater than the detection point minimum number threshold, adding the number of detection point coordinate values and the basic fitting expression, as expression information, into the expression information set;
in response to determining that the loop count value is equal to the loop count threshold, determining, as the static obstacle expression, the basic fitting expression corresponding to the largest number of detection point coordinate values among the expression information in the expression information set.
8. The method of claim 7, wherein the generating a static obstacle expression based on the set of static pose data information further comprises:
initializing the detection point coordinate value set to the initial detection point coordinate value set in response to determining that the loop count value is less than the loop count threshold, and selecting the unselected minimum fitting obstacle coordinate number of pieces of static pose data information from the static pose data information set to perform the fitting steps again.
9. A vehicle lane-level positioning apparatus, comprising:
an acquisition unit configured to acquire a set of detection data and a set of vehicle pose information over a historical period of time, wherein the detection data in the set of detection data includes: timestamp and obstacle detection point coordinate values, the vehicle pose information in the vehicle pose information set comprising: a timestamp and a vehicle pose matrix;
a selection unit configured to combine each vehicle pose information in the vehicle pose information set with detection data in the detection data set that matches the vehicle pose information to generate a pose data information group, resulting in a pose data information group set;
a classification unit configured to perform information classification on each pose data information in the set of pose data information groups to obtain a dynamic pose data information set and a static pose data information set in response to determining that the set of pose data information groups satisfies a predetermined condition;
a first generation unit configured to generate static obstacle information based on the set of static pose data information;
a second generation unit configured to generate a dynamic obstacle information set based on the dynamic pose data information set;
a third generating unit configured to generate vehicle lane-level positioning information based on the static obstacle information and the set of dynamic obstacle information.
10. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
CN202110450888.9A 2021-04-26 2021-04-26 Vehicle lane level positioning method and device, electronic equipment and computer readable medium Active CN112861833B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110450888.9A CN112861833B (en) 2021-04-26 2021-04-26 Vehicle lane level positioning method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110450888.9A CN112861833B (en) 2021-04-26 2021-04-26 Vehicle lane level positioning method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN112861833A true CN112861833A (en) 2021-05-28
CN112861833B CN112861833B (en) 2021-08-31

Family

ID=75992850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110450888.9A Active CN112861833B (en) 2021-04-26 2021-04-26 Vehicle lane level positioning method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN112861833B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200072631A1 (en) * 2018-08-31 2020-03-05 Here Global B.V. Use of geographic database comprising lane level information for traffic parameter prediction
CN111272180A (en) * 2018-12-04 2020-06-12 赫尔环球有限公司 Method and apparatus for estimating a positioning location on a map
CN112562373A (en) * 2020-08-28 2021-03-26 郭荣江 Method for automobile automatic driving lane level positioning and roadside traffic identification and command signal identification

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205088A (en) * 2021-07-06 2021-08-03 禾多科技(北京)有限公司 Obstacle image presentation method, electronic device, and computer-readable medium
CN114596706A (en) * 2022-03-15 2022-06-07 阿波罗智联(北京)科技有限公司 Detection method and device of roadside sensing system, electronic equipment and roadside equipment
CN114596706B (en) * 2022-03-15 2024-05-03 阿波罗智联(北京)科技有限公司 Detection method and device of road side perception system, electronic equipment and road side equipment
CN114724115A (en) * 2022-05-18 2022-07-08 禾多科技(北京)有限公司 Obstacle positioning information generation method, device, equipment and computer readable medium
CN115143985A (en) * 2022-09-05 2022-10-04 小米汽车科技有限公司 Vehicle positioning method and device, vehicle and readable storage medium
CN115143985B (en) * 2022-09-05 2022-12-09 小米汽车科技有限公司 Vehicle positioning method and device, vehicle and readable storage medium

Also Published As

Publication number Publication date
CN112861833B (en) 2021-08-31

Similar Documents

Publication Publication Date Title
CN112861833B (en) Vehicle lane level positioning method and device, electronic equipment and computer readable medium
US11060882B2 (en) Travel data collection and publication
EP3137355B1 (en) Device for designating objects to a navigation module of a vehicle equipped with said device
US20200082561A1 (en) Mapping objects detected in images to geographic positions
CN112590813B (en) Method, device, electronic device and medium for generating information of automatic driving vehicle
CN113034566B (en) High-precision map construction method and device, electronic equipment and storage medium
CN114120650B (en) Method and device for generating test results
CN112328731B (en) Vehicle lane level positioning method and device, electronic equipment and computer readable medium
US11466992B2 (en) Method, apparatus, device and medium for detecting environmental change
CN113126624B (en) Automatic driving simulation test method, device, electronic equipment and medium
CN115616937B (en) Automatic driving simulation test method, device, equipment and computer readable medium
CN114993328B (en) Vehicle positioning evaluation method, device, equipment and computer readable medium
CN114970705A (en) Driving state analysis method, device, equipment and medium based on multi-sensing data
CN111709665B (en) Vehicle safety assessment method and device
CN110321854B (en) Method and apparatus for detecting target object
CN116583891A (en) Critical scene identification for vehicle verification and validation
CN113758492A (en) Map detection method and device
US20200386569A1 (en) Trajectory sampling using spatial familiarity
CN113902047B (en) Image element matching method, device, equipment and storage medium
CN115712749A (en) Image processing method and device, computer equipment and storage medium
JP2021124633A (en) Map generation system and map generation program
CN114663524B (en) Multi-camera online calibration method and device, electronic equipment and computer readable medium
CN112533208A (en) Model training method, false terminal identification method and device, and electronic device
CN112815959B (en) Vehicle lane level positioning system, method and device and electronic equipment
CN111310643B (en) Vehicle counting method and device based on point cloud data and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Vehicle lane level positioning methods, devices, electronic devices and computer-readable media

Effective date of registration: 20230228

Granted publication date: 20210831

Pledgee: Bank of Shanghai Co.,Ltd. Beijing Branch

Pledgor: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd.

Registration number: Y2023980033668

PE01 Entry into force of the registration of the contract for pledge of patent right
CP03 Change of name, title or address

Address after: 201, 202, 301, No. 56-4 Fenghuang South Road, Huadu District, Guangzhou City, Guangdong Province, 510806

Patentee after: Heduo Technology (Guangzhou) Co.,Ltd.

Address before: 100099 101-15, 3rd floor, building 9, yard 55, zique Road, Haidian District, Beijing

Patentee before: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd.

CP03 Change of name, title or address