CN111814114A - Method, device, electronic device, vehicle and storage medium for lane positioning verification

Method, device, electronic device, vehicle and storage medium for lane positioning verification

Info

Publication number
CN111814114A
CN111814114A
Authority
CN
China
Prior art keywords
lane
vehicle
value
positioning
level
Prior art date
Legal status
Granted
Application number
CN202010621661.1A
Other languages
Chinese (zh)
Other versions
CN111814114B (en)
Inventor
杨鹏斌
杨洋
陈文龙
张建旭
Current Assignee
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010621661.1A priority Critical patent/CN111814114B/en
Publication of CN111814114A publication Critical patent/CN111814114A/en
Application granted granted Critical
Publication of CN111814114B publication Critical patent/CN111814114B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/18 - Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 - Geographical information databases
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Multimedia (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Remote Sensing (AREA)
  • Operations Research (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Algebra (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided are a method and a device for verifying lane-level positioning results, an electronic device, a vehicle and a storage medium, relating to automatic driving technology. The method comprises the following steps: acquiring a lane estimation value EL(t) of a vehicle obtained by a lane-level positioning algorithm, wherein the lane estimation value EL(t) indicates which lane the vehicle is in at the current time t; comparing a lane positioning value CL(t) of the vehicle with the lane estimation value EL(t), wherein the lane positioning value CL(t) is derived from a lane positioning value CL(t-1) and the motion state of the vehicle itself and indicates which lane the vehicle is in at the current time t, and wherein the lane positioning value CL(t-1) indicates which lane the vehicle was in at the previous time t-1; and in response to the comparison indicating that the lane positioning value CL(t) does not coincide with the lane estimation value EL(t), selectively correcting or not correcting the lane positioning value CL(t) to the lane estimation value EL(t) based on the lane positioning history state of the vehicle stored in a likelihood tree.

Description

Method, device, electronic device, vehicle and storage medium for lane positioning verification
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a method and device for verifying lane-level positioning results, an electronic device, a vehicle, and a storage medium.
Background
In recent years, automatic driving technology has attracted wide attention in the industry, and lane-level positioning based on the fusion of high-precision maps and lane-line perception data is increasingly widely applied. If a vehicle positions itself in the wrong lane, it may still drive safely as long as it keeps to that lane, but when the vehicle makes a road-level decision (e.g., whether to change lanes) with the wrong lane positioning as a reference, catastrophic consequences may occur, threatening driving safety.
In practical applications, however, the perception data often contains false detections, the map may not match reality, or both may be wrong at the same time, so that the lane-level positioning of the vehicle deviates seriously from the actual situation.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
According to an aspect of the present disclosure, there is provided a method of verifying a lane-level positioning result, comprising the steps of: acquiring a lane estimation value EL(t) of a vehicle obtained by a lane-level positioning algorithm, wherein the lane estimation value EL(t) indicates which lane the vehicle is in at the current time t; comparing a lane positioning value CL(t) of the vehicle with the lane estimation value EL(t), wherein the lane positioning value CL(t) is derived from a lane positioning value CL(t-1) and the motion state of the vehicle itself and indicates which lane the vehicle is in at the current time t, and wherein the lane positioning value CL(t-1) indicates which lane the vehicle was in at the previous time t-1; and in response to the comparison indicating that the lane positioning value CL(t) does not coincide with the lane estimation value EL(t), selectively correcting or not correcting the lane positioning value CL(t) to the lane estimation value EL(t) based on the lane positioning history state of the vehicle stored in a likelihood tree.
According to another aspect of the present disclosure, there is provided an apparatus for verifying a lane-level positioning result, comprising: an obtaining unit configured to obtain a lane estimation value EL(t) of a vehicle obtained by a lane-level positioning algorithm, wherein the lane estimation value EL(t) indicates which lane the vehicle is in at the current time t; a comparison unit configured to compare a lane positioning value CL(t) of the vehicle with the lane estimation value EL(t), wherein the lane positioning value CL(t) is derived from a lane positioning value CL(t-1) and the motion state of the vehicle itself and indicates which lane the vehicle is in at the current time t, and wherein the lane positioning value CL(t-1) indicates which lane the vehicle was in at the previous time t-1; and a correction unit configured to, in response to the comparison indicating that the lane positioning value CL(t) does not coincide with the lane estimation value EL(t), selectively correct or not correct the lane positioning value CL(t) to the lane estimation value EL(t) based on the lane positioning history state of the vehicle stored in the likelihood tree.
According to another aspect of the present disclosure, there is provided an electronic device for automatic driving, including: a processor; and a memory storing a program comprising instructions which, when executed by the processor, cause the processor to perform the above-described method of verifying lane-level positioning results.
According to another aspect of the present disclosure, there is provided a vehicle including: the electronic device for automatic driving.
According to another aspect of the present disclosure, there is also provided a computer readable storage medium storing a program, the program comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the above-described method of verifying lane-level positioning results.
According to some embodiments of the present disclosure, previously maintained historical trajectory/state information of the vehicle may be used as a reference for checking the lane-level positioning result of the vehicle, thereby preventing the lane-level positioning of the vehicle from deviating seriously from the actual situation due to errors in the perception data or a mismatch between the map and reality.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 is a flow chart illustrating a method of verifying lane-level positioning results according to an exemplary embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of one particular implementation of the method of FIG. 1;
FIG. 3 shows a schematic diagram of generating and updating an N-ary likelihood tree according to an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a case where the threshold of the depth of an N-ary likelihood tree is set to 5 according to an exemplary embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of an N-ary likelihood tree in which a certain layer of child nodes has no second child node according to an exemplary embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of an N-ary likelihood tree in which every layer of child nodes has a second child node according to an exemplary embodiment of the present disclosure;
FIG. 7 shows a schematic block diagram of an electronic device for autonomous driving according to an exemplary embodiment of the present disclosure;
FIG. 8 is a schematic block diagram illustrating a vehicle incorporating the electronic device of FIG. 7, according to an exemplary embodiment of the present disclosure;
fig. 9 is a schematic block diagram illustrating an apparatus for verifying lane-level localization results according to an exemplary embodiment of the present disclosure; and
fig. 10 is a block diagram illustrating an exemplary computing device that may be used to implement the units and functions described herein.
Detailed Description
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", and the like to describe various elements is not intended to limit the positional relationship, the temporal relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
When a vehicle is positioned in the wrong lane and a road-level decision (e.g., whether to change lanes) is made based on that wrong lane position, catastrophic consequences can occur, threatening driving safety. In practical applications, the lane-level positioning of a vehicle sometimes deviates seriously from the actual situation because of errors in the perception data or a map that does not match reality.
There are two existing approaches to verifying lane-level positioning results. The first is to check the current frame of input against prior data: either the map data or the sensing data (for example, image data in front of the vehicle acquired by an on-board camera) is treated as the prior, and the other is checked against it. Typically, the sensing data is checked with the map data as the prior, because the map data is stable compared with the perception data and can itself be validated by multiple consecutive frames of perception data. If the map data and the perception data are repeatedly incompatible, the data of the current frame is simply discarded. This approach is simple, but the data used as the prior must be accurate. For example, if the perception data is checked against the map data, the map data must be correct. In practice, however, it sometimes happens that both the map data and the perception data are erroneous, which leads to unpredictable results of the algorithm.
The second approach is to continuously accumulate the sensing data and the map data over a period of time, determine the probability of the vehicle being in each lane, and take the lane with the maximum probability as the current lane-level positioning result. This approach has some robustness and is not easily affected by occasional data errors. However, it requires data to be accumulated in advance over a period of time, during which no effective positioning can be provided, resulting in a poor user experience. Moreover, improving the fault tolerance of the algorithm requires extending the data accumulation period, i.e., the initialization period, which does not meet practical requirements.
The method and device of the present disclosure aim to use the historical trajectory/state information of the vehicle to avoid errors in positioning the lane in which the vehicle is located that are caused by false detections in the perception data or by short-term errors in parts of the map. In some embodiments of the present disclosure, an N-ary likelihood tree with a fixed depth is maintained to store the historical trajectory/state information of the vehicle, and this N-ary likelihood tree is used as a reference for checking the lane-level positioning results of the vehicle.
It will be appreciated that embodiments of the present disclosure focus on how to verify lane-level positioning results from a lane-level positioning algorithm, rather than discussing the positioning process of the lane-level positioning algorithm itself. In the following description, specific lane-level localization algorithms will not be discussed in depth in order not to obscure the disclosed subject matter. In other words, embodiments of the present disclosure may be applicable to verify lane-level positioning results of various lane-level positioning algorithms.
Fig. 1 shows a flowchart of a method 100 of verifying lane-level positioning results according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the method 100 may include: acquiring a lane estimation value of the vehicle obtained by a lane-level positioning algorithm (step S110); comparing the lane positioning value of the vehicle with the lane estimation value (step S120); and selectively correcting or not correcting the lane positioning value to the lane estimation value based on the lane positioning history state of the vehicle stored in the likelihood tree (step S130).
In the scheme provided by the present disclosure, an N-ary likelihood tree is maintained to record the historical driving state of the vehicle while it travels. The lane-level positioning algorithm may be run for a period of time after the vehicle is started in order to complete its initialization; once the algorithm can accurately locate the lane in which the vehicle is located, the initialization step is complete.
The steps S110 to S130 of the method 100 are described in detail below.
In step S110, a lane estimation value EL(t) of the vehicle obtained by the lane-level positioning algorithm is acquired, where the lane estimation value EL(t) indicates which lane the vehicle is located in at the current time t.
In the lane-level positioning algorithm, the lane estimation value EL(t) of the vehicle at the current time t may be calculated by fusing high-precision map data and perception data. Specifically, the vehicle may first locate its own geographic position at the current time t, then obtain the high-precision map data and determine the number of lanes of the road on which it is currently driving. Meanwhile, the vehicle acquires perception data at the current time t while driving; for example, it may acquire one frame of image at the current time t through an on-board sensor (e.g., an on-board camera). Using a probabilistic algorithm, the vehicle then calculates from the perception data the probability that it is in each lane, and takes the lane with the maximum probability as the lane estimation value EL(t) at the current time t.
In step S110, the high-precision map data may be stored in advance in a memory inside the vehicle, or may be acquired in real time through the Internet of Vehicles. In addition, the vehicle may acquire and analyze the perception data at a preset time interval, which may be, but is not limited to, 100 ms, 500 ms, 1 s, and so on. After the perception data is acquired, the lane lines on both sides of the vehicle, the terrain data on the left and right of the vehicle, and the like may be obtained by analyzing the perception data; the probability that the vehicle belongs to each lane is then calculated with a probabilistic algorithm, using the number of lanes of the road where the vehicle is currently located, which is obtained from its position and the nearby high-precision map. The terrain data may include, but is not limited to, guardrails, green belts and terraces, as well as their orientation and distance relative to the vehicle.
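As an illustration of the argmax selection just described, the following Python sketch picks the lane estimation value EL(t) from per-lane probabilities produced by a lane-level positioning algorithm; the function name, the 1-based lane numbering and the example probabilities are hypothetical and not part of the disclosure:

```python
from typing import List

def estimate_lane(lane_probabilities: List[float]) -> int:
    """Return the 1-based index of the most probable lane, i.e. EL(t)."""
    if not lane_probabilities:
        raise ValueError("no lanes available in the map at time t")
    best = max(range(len(lane_probabilities)), key=lambda i: lane_probabilities[i])
    return best + 1  # lanes are numbered from 1 in this sketch

# Example: a 4-lane road where the map/perception fusion yields these probabilities.
el_t = estimate_lane([0.05, 0.15, 0.70, 0.10])  # -> lane 3
```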
In step S120, the lane positioning value CL(t) of the vehicle is compared with the lane estimation value EL(t). The lane positioning value CL(t) is derived from the lane positioning value CL(t-1) and the motion state of the vehicle itself, and indicates which lane the vehicle is in at the current time t. The lane positioning value CL(t-1) indicates which lane the vehicle was in at the previous time t-1.
In this step, when the current time is t, the vehicle may determine the lane CL(t) in which it should currently be located from the lane positioning value CL(t-1) at the previous time t-1, combined with the motion state of the vehicle from time t-1 to time t. Here, the motion state of the vehicle may include, but is not limited to, the speed of the vehicle, its heading direction, and the turning amplitude and turning speed of the steering wheel. It will be appreciated that, depending on the particular technical implementation, various target tracking algorithms may be used to infer the vehicle position at time t from the vehicle position at time t-1 and thereby obtain the lane positioning value. As with the lane-level positioning algorithm, the specific manner of obtaining the lane positioning value is not discussed in depth below, in order not to obscure the subject matter of the present disclosure.
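A heavily simplified sketch of this propagation step is given below. Real systems would use a full target-tracking/odometry pipeline; the lane-width constant, the sign convention for the lateral offset, and the function name are assumptions for illustration only:

```python
def propagate_lane(cl_prev: int, lateral_offset_m: float, lane_width_m: float = 3.5) -> int:
    """Shift the previous lane positioning value by the number of lane widths crossed."""
    # Positive offsets are assumed to mean movement toward higher lane numbers.
    lanes_crossed = round(lateral_offset_m / lane_width_m)
    return cl_prev + lanes_crossed

cl_t = propagate_lane(cl_prev=2, lateral_offset_m=0.4)          # no lane change -> lane 2
cl_t_changed = propagate_lane(cl_prev=2, lateral_offset_m=3.6)  # inferred lane change -> lane 3
```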
When the current time is t, the lane estimation value EL(t) of the vehicle at the current time t is obtained by the lane-level positioning algorithm as described in step S110. The inferred lane positioning value CL(t) is then compared with the lane estimation value EL(t) obtained by the lane-level positioning algorithm. If the two are consistent, the current lane positioning is considered accurate; otherwise, a determination must be made as to whether the lane in which the vehicle is currently located is EL(t) or CL(t).
In the normal case, the lane positioning value CL(t) is consistent with the lane estimation value EL(t). In some cases, however, the lane-level positioning algorithm may suffer from disturbances, for example due to errors in the perception data or the map data, so that the lane estimation value EL(t) is inconsistent with the lane positioning value CL(t). At this point, the lane positioning value CL(t) needs to be checked to determine whether the lane estimation value EL(t) should be accepted for lane positioning.
In step S130, in response to the comparison indicating that the lane positioning value CL(t) does not coincide with the lane estimation value EL(t), the lane positioning value CL(t) is selectively corrected or not corrected to the lane estimation value EL(t) based on the lane positioning history state of the vehicle stored in the likelihood tree.
Fig. 2 shows a flow diagram of one particular implementation 200 of the method 100 of fig. 1. The method 100 of fig. 1 is further described below in conjunction with the specific implementation 200 of fig. 2.
In step S201, the probabilities, obtained by the lane-level positioning algorithm, that the vehicle is in each lane of the road at the current time t are acquired. The lane estimation value EL(t) indicates the lane with the highest probability at the current time t.
In step S202, it is determined whether or not the likelihood tree is initialized. This determination may be performed by determining whether a likelihood tree currently exists.
In step S203, if the likelihood tree has not yet been initialized, a root node is constructed to initialize it, where the root node corresponds to the lane in which the vehicle is located with the maximum probability.
In step S204, the lane positioning value CL(t) of the vehicle is compared with the lane estimation value EL(t). This corresponds to step S120 in fig. 1.
In step S205, it is determined whether the lane positioning value CL(t) coincides with the lane estimation value EL(t). In response to the comparison indicating that they coincide, the steps in the dashed-line box 260, in which the likelihood tree is updated with the respective probabilities that the vehicle is in each lane at the current time t obtained by the lane-level positioning algorithm, are performed; otherwise, the steps in the dashed-line box 280 are performed to determine whether to change the lane positioning value CL(t) to the lane estimation value EL(t).
According to some embodiments, updating the likelihood tree with the respective probabilities, obtained by the lane-level positioning algorithm, that the vehicle is in each lane at the current time t comprises: in response to determining, based on the map, that the number of lanes has changed, or determining, based on the motion state of the vehicle, that the vehicle has changed lanes, adding a layer of child nodes to the likelihood tree, the added layer of child nodes respectively representing the lanes present in the map at the current time t, and storing the respective probabilities that the vehicle is in each lane at the current time t into the corresponding child nodes of the added layer.
In the example of fig. 2, in steps S206 and S207 it is determined, respectively, whether the number of lanes has changed and whether the vehicle has changed lanes. In some embodiments, whether the number of lanes has changed may be determined from the high-precision map data acquired at the current time t, and whether the vehicle has changed lanes may be determined from the motion state of the vehicle during the period from the previous time t-1 to the current time t. When it is confirmed that the number of lanes has changed or that the vehicle has changed lanes, a layer of child nodes is added to the likelihood tree; the added child nodes respectively represent the lanes present in the map at the current time t, and the respective probabilities that the vehicle is in each lane at the current time t are stored into the corresponding child nodes of the added layer.
According to some embodiments, the likelihood tree comprises at least one layer of child nodes, and the child nodes in each layer represent, for the corresponding previous time t-t_k in the set of previous times {t-t_k, k = 1, 2, …, depth}, the plurality of lanes present at that time in the map used by the lane-level positioning algorithm, where depth denotes the depth of the likelihood tree. The lane positioning history state of the vehicle includes: the lane positioning value CL(t-t_k) of the vehicle at each previous time t-t_k, and the respective probabilities, obtained by the lane-level positioning algorithm at each previous time t-t_k, that the vehicle is in each of the plurality of lanes.
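By way of illustration only, the following Python sketch shows one possible data structure for such an N-ary likelihood tree. The class and field names are assumptions; the disclosure does not prescribe any particular implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LaneNode:
    lane_index: int                       # which lane of the road this node represents
    log_prob: float = 0.0                 # accumulated log-probability for this lane
    is_located: bool = False              # True if CL(t - t_k) pointed at this lane
    children: List["LaneNode"] = field(default_factory=list)

@dataclass
class LikelihoodTree:
    root: Optional[LaneNode] = None
    max_depth: int = 5                    # preset depth threshold

    def deepest_layer(self) -> List[LaneNode]:
        """Return the nodes of the layer with the current maximum depth."""
        if self.root is None:
            return []
        layer = [self.root]
        while True:
            nxt = [c for n in layer for c in n.children]
            if not nxt:
                return layer
            layer = nxt
```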
Fig. 3 shows a schematic diagram for generating and updating an N-ary likelihood tree according to an exemplary embodiment of the present disclosure.
As shown in fig. 3, the depth of the N-ary likelihood tree is 3, and the numbers 0, 1, 2, 3 in the figure denote the layer indices of the N-ary likelihood tree. The child nodes of the (depth+1-k)-th layer (k = 1, 2, …, depth) of the N-ary likelihood tree correspond respectively to the lanes of the road at the previous time t-t_k, and store the probabilities, obtained by the lane-level positioning algorithm at the previous time t-t_k, that the vehicle was in the corresponding lanes. For example, the child nodes of layers 3, 2 and 1 correspond to the lanes of the road at times t-t_1, t-t_2 and t-t_3, respectively. The previous times t-t_k indicate the key points of the lane positioning history state, including the time points at which the lane number indicated by the positioning value changes or at which the number of lanes changes. For example, in fig. 3, layer 3 corresponds to the previous time t-t_1, at which the number of lanes in the map changed from 4 to 5; layer 2 corresponds to the previous time t-t_2, at which the lane indicated by the lane positioning value CL(t-t_2) changed from lane 2 to lane 3. In some examples, the key points of the lane positioning history state may further include the time point at which the layer-1 child nodes were created from the root node. For example, in fig. 3, layer 1 corresponds to the previous time t-t_3, at which the layer-1 child nodes were created from the root node.
In addition, the gray child node in the (depth+1-k)-th layer represents the lane indicated by the lane positioning value CL(t-t_k) at the previous time t-t_k. For example, at the time t-t_1 corresponding to layer 3, the number of lanes in the map is 5, and the lane positioning value CL(t-t_1) indicates that the vehicle is in lane 3.
Referring back to fig. 2, in step S208, when the depth of the N-ary likelihood tree is to be increased by 1 at the current time t, the child node that corresponds, in the layer of the current maximum depth, to the lane indicated by the lane positioning value CL(t-t_1) is taken as the parent node, and one layer is added to the N-ary likelihood tree. The added layer has a number of child nodes equal to the changed number of lanes, and these child nodes correspond to the lanes after the change, respectively. The respective probabilities, obtained by the lane-level positioning algorithm, that the vehicle is in each lane at the current time t may be stored in the corresponding child nodes of the added layer.
If it is determined in steps S206 and S207 that the number of lanes has not changed and the vehicle has not changed lanes, the depth of the N-ary likelihood tree is maintained, and the probabilities that the vehicle is in each lane at the current time t, obtained by the lane-level positioning algorithm from the perception data, are respectively accumulated to the child nodes corresponding to the respective lanes in the layer of the current maximum depth of the N-ary likelihood tree, as shown in step S209.
It should be understood that, in steps S208 and S209, if the likelihood tree has no child nodes after initialization, the layer-1 child nodes may be created, representing the respective lanes in the map at the current time t.
Here, note that the probability that the vehicle is in each lane lies between 0 and 1 before it is accumulated to the child node. If the probabilities were simply multiplied while the number of lanes does not change and the vehicle does not change lanes, the value stored on some child nodes would become smaller and smaller over time. To avoid this, some embodiments of the present disclosure first take the logarithm of the probability values and then accumulate them by addition.
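A sketch of this update rule, reusing the illustrative LaneNode/LikelihoodTree classes above (and therefore sharing their assumptions, including 1-based lane indices), might look as follows:

```python
import math
from typing import List

def update_tree(tree: LikelihoodTree, lane_probs: List[float], cl_t: int,
                layout_changed: bool, lane_changed: bool) -> None:
    layer = tree.deepest_layer()
    no_children_yet = tree.root is not None and not tree.root.children
    if layout_changed or lane_changed or no_children_yet:
        # Steps S208 / S203: expand below the node of the currently located lane.
        parent = next((n for n in layer if n.is_located), tree.root)
        parent.children = [
            LaneNode(lane_index=i + 1,
                     log_prob=math.log(max(p, 1e-12)),  # guard against log(0)
                     is_located=(i + 1 == cl_t))
            for i, p in enumerate(lane_probs)
        ]
    else:
        # Step S209: same road layout, same lane -> accumulate log-probabilities.
        for node in layer:
            node.log_prob += math.log(max(lane_probs[node.lane_index - 1], 1e-12))
```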
To ensure efficient use of the data and avoid unbounded growth of the N-ary likelihood tree, in some embodiments of the present disclosure the depth of the N-ary likelihood tree is limited to a certain threshold.
When the depth of the likelihood tree exceeds a preset value, the root node of the likelihood tree is removed, together with all layer-1 child nodes other than the child node corresponding to the lane indicated by the lane positioning value CL(t-t_depth).
That is, when it is determined that the depth of the N-ary likelihood tree exceeds the threshold, the oldest state of the N-ary likelihood tree (i.e., the root node of the N-ary likelihood tree and those nodes that are connected to the root node and have no child nodes) is deleted, as shown in steps S210 to S211.
Fig. 4 shows a case in which the threshold of the depth of the N-ary likelihood tree is set to 5, according to an exemplary embodiment of the present disclosure. When the depth of the expanded N-ary likelihood tree exceeds 5, the current root node and the layer-1 child nodes other than the gray child node are deleted.
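Using the same illustrative classes, the pruning of steps S210 to S211 could be sketched as follows; this is an assumption-laden illustration, not the disclosed implementation:

```python
def prune_if_too_deep(tree: LikelihoodTree) -> None:
    if tree.root is None:
        return
    # Measure the current depth (root = depth 0).
    depth, layer = 0, [tree.root]
    while any(n.children for n in layer):
        layer = [c for n in layer for c in n.children]
        depth += 1
    if depth > tree.max_depth:
        kept = next((c for c in tree.root.children if c.is_located), None)
        if kept is not None:
            tree.root = kept  # the old root and its other layer-1 children are dropped
```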
When it is judged that the lane positioning value CL(t) does not coincide with the lane estimation value EL(t), the steps in the dashed-line box 280 are executed.
The steps performed in the dashed box 280 are summarized as follows:
determining an offset of the lane estimation value EL(t) relative to the lane positioning value CL(t);
determining whether each layer of child nodes of the likelihood tree contains a second child node that is offset by the offset relative to the first child node in that layer, wherein the first child node in each layer of child nodes represents the lane indicated by the lane positioning value CL(t-t_k) at the corresponding previous time t-t_k;
in response to determining that the second child node is present in each layer of child nodes of the likelihood tree, determining a determination value that is a function of both the accumulated probability of the lane represented by the first child node in each layer of the likelihood tree and the accumulated probability of the lane represented by the second child node in each layer of the likelihood tree; and
selectively performing, depending on the determination value, one of the following operations:
correcting the lane positioning value CL(t) to the lane estimation value EL(t) and emptying the likelihood tree;
not correcting the lane positioning value CL(t), and updating the likelihood tree with the respective probabilities that the vehicle is in each lane at the current time t obtained by the lane-level positioning algorithm; and
not correcting the lane positioning value CL(t) and not updating the likelihood tree.
Specifically, in step S212, the offset of the lane estimation value EL(t) relative to the lane positioning value CL(t) is calculated. Here, the offset is obtained by subtracting the lane positioning value CL(t) from the lane estimation value EL(t), that is: lane_bias = EL(t) - CL(t).
In step S213, the cumulative likelihood probability log(P_expected) may be calculated over the child nodes that are, in each layer of the N-ary likelihood tree, offset by lane_bias relative to the child node corresponding to the lane indicated by the lane positioning value CL(t-t_k) (i.e., the "second child node" described in the summary above):

log(P_expected) = Σ_{i=1}^{depth} log(P_i^(2)),

where P_i^(2) denotes the probability of the lane represented by the "second child node" in the i-th layer. In some embodiments, step S213 may also be performed after step S214, for example together with step S215.
In response to determining in step S214 that the second child node is not present in some layer of child nodes of the likelihood tree (i.e., that the chain is broken), the lane positioning value CL(t) is not corrected and the likelihood tree is not updated. When the chain of states of the N-ary likelihood tree is found to be broken, the result EL(t) obtained by the lane-level positioning algorithm at this time is considered inaccurate and is therefore rejected in step S217. That is, it is determined that the vehicle is still in the lane corresponding to the lane positioning value CL(t) at the current time t, and the N-ary likelihood tree is not updated with the respective probabilities that the vehicle is in each lane at the current time t obtained by the lane-level positioning algorithm.
Fig. 5 shows a structural schematic diagram of an N-ary likelihood tree in which a certain layer of child nodes has no second child node, according to an exemplary embodiment of the present disclosure. Without loss of generality, assume that the current time t corresponds to the dashed box in the figure. In this example, the 2nd child node in the dashed box corresponds to the lane positioning value CL(t), and the 4th child node in the dashed box corresponds to the lane estimation value EL(t). Thus, the offset of the lane estimation value EL(t) relative to the lane positioning value CL(t) is 2.
Backtracking upward from the 5th layer of the N-ary likelihood tree, it is judged whether each layer contains a child node offset by 2 relative to the child node corresponding to that layer's lane positioning value CL(t-t_k). As shown in fig. 5, when backtracking upward from the 5th layer and reaching the 3rd layer, the 3rd child node in the 3rd layer corresponds to the lane positioning value CL(t-t_3); the child node offset by 2 from this 3rd child node would be the 5th child node. However, no such 5th child node exists in the 3rd layer, which breaks the chain of the state backtracking. When such a chain break occurs, it indicates that the perception data obtained at the current time t contains an error, and the perception data of this frame is therefore discarded.
Fig. 6 shows a schematic diagram of an N-ary likelihood tree in which the second child node exists in every layer of child nodes, according to an exemplary embodiment of the present disclosure. As shown in fig. 6, it can be determined from the states of the child nodes indicated in the dashed box that the lane estimation value EL(t) is offset by 1 relative to the lane positioning value CL(t), and that, when backtracking upward from the lowest (5th) layer, every layer contains a child node offset by 1 relative to the child node corresponding to that layer's lane positioning value CL(t-t_k).
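The backtracking check of step S214 can be sketched, under the same illustrative data structure as above, as follows:

```python
from typing import List

def chain_intact(tree: LikelihoodTree, lane_bias: int) -> bool:
    """Return True if every layer of child nodes contains the 'second child node'."""
    layers: List[List[LaneNode]] = []
    layer = [tree.root] if tree.root else []
    while layer:
        nxt = [c for n in layer for c in n.children]
        if not nxt:
            break
        layers.append(nxt)
        layer = nxt
    for layer in reversed(layers):                  # backtrack from the deepest layer
        first = next((n for n in layer if n.is_located), None)
        if first is None:
            return False
        wanted = first.lane_index + lane_bias       # position of the "second" child node
        if not any(n.lane_index == wanted for n in layer):
            return False                            # broken chain (cf. fig. 5)
    return True
```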
When it is determined in step S214 that the chain of the N-ary likelihood tree is not broken, then in step S215, in the same manner as in step S213, the cumulative likelihood probability log(P_current) of the child node corresponding to the lane positioning value CL(t-t_k) in each layer of the N-ary likelihood tree (i.e., the "first child node" described in the summary above) is calculated:

log(P_current) = Σ_{i=1}^{depth} log(P_i^(1)),

where P_i^(1) denotes the probability of the lane represented by the "first child node" in the i-th layer.
In step S216, based on the cumulative likelihood probabilities calculated above, a determination value ratio is computed as a function of log(P_expected) and log(P_current), such that a larger ratio indicates that the accumulated history more strongly supports the lane indicated by the lane estimation value EL(t). The ratio may then be used as the basis for deciding whether the vehicle is in the lane indicated by the lane estimation value EL(t) or the lane indicated by the lane positioning value CL(t).
Specifically, according to the calculated value of ratio, one of the following operations is performed:
correcting the lane positioning value CL(t) to the lane estimation value EL(t) and emptying the likelihood tree;
not correcting the lane positioning value CL(t), and updating the likelihood tree with the respective probabilities that the vehicle is in each lane at the current time t obtained by the lane-level positioning algorithm; and
not correcting the lane positioning value CL(t) and not updating the likelihood tree.
As shown in step S219 in the dashed-line box 280, when the ratio is greater than the first threshold (0.8), the result obtained by the lane-level positioning algorithm is considered accurate, and the lane positioning value CL(t) is therefore corrected to the lane estimation value EL(t). Meanwhile, since the previously obtained lane positioning value CL(t) was off, it is inferred that the accumulated historical data may be inaccurate and could affect subsequent decisions; the N-ary likelihood tree is therefore cleared in step S220. When the next frame is processed, the current lane estimation value EL(t) may be used as the root node to re-initialize and regenerate the N-ary likelihood tree.
When the ratio is greater than the second threshold (0.5) but does not exceed the first threshold (0.8), it is considered that the ratio cannot reliably determine whether the vehicle is in the lane corresponding to the lane estimation value EL(t) or the lane corresponding to the lane positioning value CL(t), and the results of both are therefore retained, as shown in step S218. That is, the lane positioning value CL(t) is not corrected to the lane estimation value EL(t), but the N-ary likelihood tree is still updated with the probabilities, obtained by the lane-level positioning algorithm, that the vehicle is in each lane at the current time t.
It should be noted here that, although the results of both the lane positioning value CL(t) and the lane estimation value EL(t) are retained in this layer, when the depth of the N-ary likelihood tree is increased, the N-ary likelihood tree is still expanded with the child node corresponding to the lane positioning value CL(t) as the parent node. Moreover, when it is determined at a later time t+m_1 that the number of lanes has changed or that the vehicle has changed lanes, each child node in the newly added layer of the N-ary likelihood tree is updated with the respective probabilities, obtained by the lane-level positioning algorithm at time t+m_1, that the vehicle is in each lane.
When the ratio does not exceed the second threshold (0.5), the lane estimation value EL(t) obtained by the lane-level positioning algorithm is considered inaccurate, and the result is therefore rejected, as shown in step S217. That is, the lane positioning value CL(t) is not corrected to the lane estimation value EL(t), and the N-ary likelihood tree is not updated with the respective probabilities that the vehicle is in each lane at the current time t obtained by the lane-level positioning algorithm.
For safety reasons, some embodiments of the present disclosure take a rather conservative strategy for correcting the lane. This is because even if a lane positioning error occurs, the vehicle can still travel safely in its current lane for a short time, whereas an unstable, frequently changing lane positioning result would increase the driving risk. Thus, in some embodiments of the present disclosure, the first threshold is set to 0.8 and the second threshold is set to 0.5. Of course, in a practical scenario applying the method proposed in the present disclosure, the first and second thresholds may be set to other values according to application requirements.
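The three-way decision of steps S217 to S220 with the conservative thresholds described above can be sketched as follows; the return conventions and function name are illustrative assumptions:

```python
def verify_lane(cl_t: int, el_t: int, ratio: float,
                first_threshold: float = 0.8, second_threshold: float = 0.5):
    """Return the lane to report and the action to take on the likelihood tree."""
    if ratio > first_threshold:
        # Steps S219/S220: trust EL(t), correct CL(t) and clear the tree.
        return el_t, "correct_and_clear_tree"
    if ratio > second_threshold:
        # Step S218: undecidable -> keep CL(t) but still update the tree.
        return cl_t, "keep_and_update_tree"
    # Step S217: reject EL(t) -> keep CL(t) and do not update the tree.
    return cl_t, "keep_and_discard_frame"
```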
As shown in fig. 2, after processing this frame of image, the flow of method 200 ends. At this point, the next frame of image is received and the steps of method 200 are re-executed.
Additionally, the present disclosure also provides, according to some embodiments, an electronic device for autonomous driving. As shown in fig. 7, the electronic device 700 may include: a processor 701; and a memory 702 storing a program comprising instructions which, when executed by the processor, cause the processor to perform one of the above-described methods of verifying lane-level positioning results.
Here, the electronic device for automatic driving may include various wired or wireless intelligent navigation terminals, devices, and the like.
According to some embodiments, a vehicle is provided. As shown in fig. 8, a vehicle 800 may include: the electronic device 700 for autonomous driving described above.
According to some embodiments, the present disclosure also provides a computer readable storage medium storing a program comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to perform one of the above-described methods of verifying lane-level localization results.
Additionally, the present disclosure also provides an apparatus 900 for verifying lane-level positioning results, according to some embodiments. As shown in fig. 9, the apparatus 900 may include: an obtaining unit 910 configured to obtain a lane estimation value EL(t) of a vehicle obtained by a lane-level positioning algorithm, wherein the lane estimation value EL(t) indicates which lane the vehicle is in at the current time t; a comparison unit 920 configured to compare a lane positioning value CL(t) of the vehicle with the lane estimation value EL(t), wherein the lane positioning value CL(t) is derived from a lane positioning value CL(t-1) and the motion state of the vehicle itself and indicates which lane the vehicle is in at the current time t, and wherein the lane positioning value CL(t-1) indicates which lane the vehicle was in at the previous time t-1; and a correction unit 930 configured to, in response to the comparison indicating that the lane positioning value CL(t) does not coincide with the lane estimation value EL(t), selectively correct or not correct the lane positioning value CL(t) to the lane estimation value EL(t) based on the lane positioning history state of the vehicle stored in the likelihood tree.
Here, the operations of the above units 910 to 930 of the apparatus 900 are similar to the operations of the steps S110 to S130 described above, and are not described again here.
Fig. 10 illustrates an exemplary computing device 2000 that may be used to implement the units and functions described herein. The computing device 2000 may be an example of the electronic device 700 described above with respect to fig. 7 and the apparatus 900 described with respect to fig. 9. The computing device 2000 may be any machine configured to perform various processing, operations, and/or calculations, and may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a robot, a smart phone, an in-vehicle computer, or any combination thereof. The above-described method of verifying lane-level positioning results may be implemented in whole or at least in part by the computing device 2000 or a similar device or system.
Computing device 2000 may include elements to connect with bus 2002 (possibly via one or more interfaces) or to communicate with bus 2002. For example, computing device 2000 may include a bus 2002, one or more processors 2004, one or more input devices 2006, and one or more output devices 2008. The one or more processors 2004 may be any type of processor and may include, but are not limited to, one or more general purpose processors and/or one or more special purpose processors (e.g., special processing chips). Input device 2006 may be any type of device capable of inputting information to computing device 2000 and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote control. Output device 2008 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The computing device 2000 may also include or be connected with a non-transitory storage device 2010, which may be any storage device that is non-transitory and that may enable data storage, and may include, but is not limited to, a magnetic disk drive, an optical storage device, solid state memory, a floppy disk, a flexible disk, a hard disk, a magnetic tape, or any other magnetic medium, an optical disk or any other optical medium, a ROM (read only memory), a RAM (random access memory), a cache memory, and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions, and/or code. The non-transitory storage device 2010 may be removable from the interface. The non-transitory storage device 2010 may have data/programs (including instructions)/code for implementing the above-described methods and steps. Computing device 2000 may also include a communication device 2012. The communication device 2012 may be any type of device or system that enables communication with external devices and/or with a network and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication devices, and/or chipsets such as Bluetooth™ devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing device 2000 may also include a working memory 2014, which may be any type of working memory that can store programs (including instructions) and/or data useful for the operation of the processor 2004, and may include, but is not limited to, random access memory and/or read only memory devices.
Software elements (programs) may be located in the working memory 2014 including, but not limited to, an operating system 2016, one or more application programs 2018, drivers, and/or other data and code. Instructions for performing the above-described methods and steps may be included in the one or more applications 2018, and the above-described method of verifying lane-level positioning results may be implemented by the processor 2004 reading and executing the instructions of the one or more applications 2018. More specifically, in the method of verifying lane-level positioning results, steps S110 to S130 can be realized, for example, by the processor 2004 executing the application 2018 having the instructions of steps S110 to S130. Further, other steps in the above-described method of verifying lane-level positioning results (for example, steps S201 to S220) may be implemented, for example, by the processor 2004 executing an application 2018 having instructions to perform the respective steps. Executable code or source code of instructions of the software elements (programs) may be stored in a non-transitory computer-readable storage medium (such as the storage device 2010 described above) and, upon execution, may be stored in the working memory 2014 (possibly compiled and/or installed). Executable code or source code for the instructions of the software elements (programs) may also be downloaded from a remote location.
It will also be appreciated that various modifications may be made in accordance with specific requirements. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, some or all of the disclosed methods and apparatus may be implemented by programming hardware (e.g., programmable logic circuitry including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or hardware programming language such as VERILOG, VHDL, or C++, using logic and algorithms according to the present disclosure.
It should also be understood that the foregoing method may be implemented in a server-client mode. For example, a client may receive data input by a user and send the data to a server. The client may also receive data input by the user, perform part of the processing in the foregoing method, and transmit the data obtained by the processing to the server. The server may receive data from the client and perform the aforementioned method or another part of the aforementioned method and return the results of the execution to the client. The client may receive the results of the execution of the method from the server and may present them to the user, for example, through an output device. It should also be understood that the components of computing device 2000 may be distributed across a network. For example, some processes may be performed using one processor while other processes may be performed by another processor that is remote from the one processor. Other components of the computing system 2000 may also be similarly distributed. As such, the computing device 2000 may be interpreted as a distributed computing system that performs processing at multiple locations.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the methods, systems, and apparatus described above are merely exemplary embodiments or examples and that the scope of the present disclosure is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. It is important that as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (15)

1. A method for verifying lane-level positioning results comprises the following steps:
acquiring a lane estimation value EL(t) of a vehicle obtained by a lane-level positioning algorithm, wherein the lane estimation value EL(t) indicates which lane the vehicle is in at the current time t;
comparing a lane positioning value CL(t) of the vehicle with the lane estimation value EL(t), wherein the lane positioning value CL(t) is derived from a lane positioning value CL(t-1) and the motion state of the vehicle itself and indicates which lane the vehicle is in at the current time t, wherein the lane positioning value CL(t-1) indicates which lane the vehicle was in at the previous time t-1; and
in response to the comparison indicating that the lane positioning value CL(t) does not coincide with the lane estimation value EL(t), selectively correcting or not correcting the lane positioning value CL(t) to the lane estimation value EL(t) based on the lane positioning history state of the vehicle stored in a likelihood tree.
2. The method of claim 1,
wherein the likelihood tree comprises at least one layer of child nodes, and the child nodes in each layer represent, for the corresponding previous time t-t_k in the set of previous times {t-t_k, k = 1, 2, …, depth}, the plurality of lanes present at that time in the map used by the lane-level positioning algorithm, depth representing the depth of the likelihood tree, and
wherein the lane positioning history state of the vehicle includes: the lane positioning value CL(t-t_k) of the vehicle at each previous time t-t_k, and the respective probabilities, obtained by the lane-level positioning algorithm at each previous time t-t_k, that the vehicle is in each of the plurality of lanes.
3. The method of claim 2, wherein said selectively correcting or not correcting the lane positioning value CL(t) to the lane estimation value EL(t) comprises:
determining an offset of the lane estimation value EL(t) relative to the lane positioning value CL(t);
determining whether each layer of child nodes of the likelihood tree contains a second child node that is offset by the offset relative to the first child node in that layer, wherein the first child node in each layer of child nodes represents the lane indicated by the lane positioning value CL(t-t_k) at the corresponding previous time t-t_k;
in response to determining that the second child node is present in each layer of child nodes of the likelihood tree, determining a determination value that is a function of both the accumulated probability of the lane represented by the first child node in each layer of the likelihood tree and the accumulated probability of the lane represented by the second child node in each layer of the likelihood tree; and
selectively performing, depending on the determination value, one of the following operations:
correcting the lane positioning value CL(t) to the lane estimation value EL(t) and emptying the likelihood tree;
not correcting the lane positioning value CL(t), and updating the likelihood tree with the respective probabilities that the vehicle is in each lane at the current time t obtained by the lane-level positioning algorithm; and
not correcting the lane positioning value CL(t) and not updating the likelihood tree.
4. The method of claim 3, wherein said selectively correcting or not correcting the lane positioning value CL(t) to the lane estimation value EL(t) further comprises:
in response to determining that the second child node is not present in some layer of child nodes of the likelihood tree, not correcting the lane positioning value CL(t) and not updating the likelihood tree.
5. The method of claim 2, further comprising:
in response to the comparison indicating that the lane positioning value CL(t) is consistent with the lane estimation value EL(t), updating the likelihood tree with the respective probabilities that the vehicle is in each lane at the current time t obtained by the lane-level positioning algorithm.
6. The method of any of claims 3 to 5, wherein updating the likelihood tree with the respective probabilities of the vehicle being in the respective lanes at the current time t derived by the lane-level localization algorithm comprises:
in response to determining that the number of lanes has changed based on the map or that the vehicle has changed lanes based on the state of motion of the vehicle, adding a layer of sub-nodes to the likelihood tree, the added layer of sub-nodes representing respective lanes present in the map at a current time t, respectively, and storing respective probabilities that the vehicle is in the respective lanes at the current time t into respective ones of the added layer of sub-nodes, respectively.
7. The method of any of claims 3 to 5, wherein updating the likelihood tree with the respective probabilities, obtained by the lane-level positioning algorithm, that the vehicle is in the respective lanes at the current time t comprises:
in response to determining, based on the map, that the number of lanes is unchanged and determining, based on the lane positioning value CL(t), that the vehicle has not changed lanes: respectively accumulating the probabilities that the vehicle is in the respective lanes at the current time t into the corresponding child nodes of the currently deepest layer of the likelihood tree.
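For illustration only, the two update branches of claims 6 and 7 (and the consistent case of claim 5, which applies the same update) can be sketched as follows; the function name, the boolean flags and the literal additive reading of "accumulating" are assumptions of this sketch.

from typing import Dict, List

Layer = Dict[int, float]   # lane id -> probability (same flat representation as above)

def update_tree(tree: List[Layer], cl_history: List[int],
                probs_t: Dict[int, float], cl_t: int,
                lane_count_changed: bool, vehicle_changed_lane: bool) -> None:
    """Apply the update of claim 6 or claim 7 to the flat tree representation."""
    if lane_count_changed or vehicle_changed_lane or not tree:
        # claim 6: append a new deepest layer, one node per lane present in the map at
        # time t, each node storing the probability that the vehicle is in that lane now
        tree.append(dict(probs_t))
        cl_history.append(cl_t)   # bookkeeping of this sketch: keep CL history aligned with the layers
    else:
        # claim 7: lane count unchanged and no lane change -> accumulate the new
        # probabilities into the corresponding nodes of the currently deepest layer
        # ("accumulate" is read literally here as an additive update)
        deepest = tree[-1]
        for lane_id, p in probs_t.items():
            deepest[lane_id] = deepest.get(lane_id, 0.0) + p

# Example: a new layer is added when the vehicle changes lanes.
tree, cl_history = [{1: 0.1, 2: 0.8, 3: 0.1}], [2]
update_tree(tree, cl_history, probs_t={1: 0.6, 2: 0.3, 3: 0.1}, cl_t=1,
            lane_count_changed=False, vehicle_changed_lane=True)
print(len(tree))   # 2: a layer for the current time t was appended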
8. The method of claim 6, wherein said adding a layer of child nodes to the likelihood tree comprises:
taking, in the layer of child nodes at the current maximum depth of the likelihood tree, the child node corresponding to the lane indicated by the lane positioning value CL(t-t_1) as a parent node, and adding the layer of child nodes to the likelihood tree under that parent node.
9. The method of claim 8, further comprising:
when the depth of the likelihood tree exceeds a preset value, removing the root node of the likelihood tree and, from the layer-1 child nodes, all child nodes except the child node corresponding to the lane indicated by the lane positioning value CL(t-t_depth).
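For illustration only, the pruning of claim 9 can be sketched on the same flat representation; the parent linkage of claim 8 is implicit in the layer ordering here, so removing the root and the unused layer-1 siblings reduces to dropping the oldest layer while keeping only the node for the lane indicated by CL(t-t_depth). The function name, the max_depth parameter and the return value are assumptions of this sketch.

from typing import Dict, List, Optional, Tuple

Layer = Dict[int, float]   # lane id -> probability (same flat representation as above)

def prune_tree(tree: List[Layer], cl_history: List[int],
               max_depth: int) -> Optional[Tuple[int, float]]:
    """If the depth exceeds the preset value, drop the oldest layer: only the child node
    for the lane indicated by CL(t - t_depth) survives (as the new implicit root); all of
    its former siblings are removed together with the old root.  Returns the surviving
    (lane id, probability) pair, or None if no pruning was needed."""
    if len(tree) <= max_depth:
        return None
    oldest_layer = tree.pop(0)       # layer-1 child nodes, i.e. previous time t - t_depth
    kept_lane = cl_history.pop(0)    # lane indicated by CL(t - t_depth)
    return kept_lane, oldest_layer.get(kept_lane, 0.0)

# Example with max_depth = 2 and three stored layers: the oldest layer is dropped.
tree = [{1: 0.1, 2: 0.9}, {1: 0.2, 2: 0.8}, {1: 0.3, 2: 0.7}]
cl_history = [2, 2, 2]
print(prune_tree(tree, cl_history, max_depth=2))   # (2, 0.9); len(tree) is now 2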
10. The method of claim 2, wherein the lane estimation value EL(t) of the vehicle indicates the most probable lane of the plurality of lanes present in the map used by the lane-level positioning algorithm at the current time t.
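Read this way, EL(t) is the arg-max of the per-lane probabilities at the current time; a two-line illustration using the probs_t mapping assumed in the earlier sketches (the numbers are made up):

probs_t = {1: 0.15, 2: 0.70, 3: 0.15}      # lane id -> probability at the current time t
el_t = max(probs_t, key=probs_t.get)       # EL(t): the most probable lane -> 2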
11. The method according to claim 3, wherein the determination value is determined according to the following formula:
[formula not reproducible in the translated text: the determination value is expressed as a function of log(P_expected) and log(P_current)]
wherein log(P_expected) represents the cumulative probability of the lane represented by the second child node in each layer of the likelihood tree,
log(P_expected) = Σ_{i=1}^{depth} log(P^(2)_i),
P^(2)_i representing the probability of the lane represented by the second child node in the i-th layer;
wherein log(P_current) represents the cumulative probability of the lane represented by the first child node in each layer of the likelihood tree,
log(P_current) = Σ_{i=1}^{depth} log(P^(1)_i),
P^(1)_i representing the probability of the lane represented by the first child node in the i-th layer.
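For concreteness, a tiny numerical sketch of the two accumulated quantities of claim 11; the made-up numbers and the natural logarithm are assumptions of this sketch, and the function that combines the two quantities into the determination value is not reproduced here because the translated text gives it only as an image.

import math

# per-layer probabilities of the lane represented by the first child node (the CL lanes)
p1 = [0.60, 0.55, 0.50]
# per-layer probabilities of the lane represented by the second child node (the offset lanes)
p2 = [0.30, 0.35, 0.45]

log_p_current = sum(math.log(p) for p in p1)    # log(P_current)  = sum_i log(P(1)_i)  ~ -1.802
log_p_expected = sum(math.log(p) for p in p2)   # log(P_expected) = sum_i log(P(2)_i)  ~ -3.052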
12. An apparatus for verifying lane-level positioning results, comprising:
an obtaining unit configured to obtain a lane estimation value EL(t) of a vehicle that is derived by a lane-level positioning algorithm, wherein the lane estimation value EL(t) indicates which lane the vehicle is located in at a current time t;
a comparison unit configured to compare a lane positioning value CL(t) of the vehicle with the lane estimation value EL(t), wherein the lane positioning value CL(t) is derived from a lane positioning value CL(t-1) and the motion state of the vehicle itself and indicates which lane the vehicle is in at the current time t, and wherein the lane positioning value CL(t-1) indicates which lane the vehicle is in at a previous time t-1; and
a correction unit configured to, in response to the comparison indicating that the lane positioning value CL(t) is not consistent with the lane estimation value EL(t), selectively correct or not correct the lane positioning value CL(t) to the lane estimation value EL(t) based on a lane positioning history state of the vehicle stored in a likelihood tree.
13. An electronic device for verifying lane-level positioning results, comprising:
a processor; and
a memory storing a program comprising instructions which, when executed by the processor, cause the processor to carry out the method according to any one of claims 1 to 11.
14. A vehicle, comprising:
the electronic device for verifying lane-level positioning results of claim 13.
15. A computer readable storage medium storing a program, the program comprising instructions that when executed by a processor of an electronic device cause the electronic device to perform the method of any of claims 1-11.
CN202010621661.1A 2020-06-30 2020-06-30 Lane positioning verification method, lane positioning verification device, electronic device, vehicle and storage medium Active CN111814114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010621661.1A CN111814114B (en) 2020-06-30 2020-06-30 Lane positioning verification method, lane positioning verification device, electronic device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010621661.1A CN111814114B (en) 2020-06-30 2020-06-30 Lane positioning verification method, lane positioning verification device, electronic device, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN111814114A true CN111814114A (en) 2020-10-23
CN111814114B CN111814114B (en) 2023-12-08

Family

ID=72856773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010621661.1A Active CN111814114B (en) 2020-06-30 2020-06-30 Lane positioning verification method, lane positioning verification device, electronic device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN111814114B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120150437A1 (en) * 2010-12-13 2012-06-14 Gm Global Technology Operations Llc. Systems and Methods for Precise Sub-Lane Vehicle Positioning
US20190077398A1 (en) * 2017-09-14 2019-03-14 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for vehicle lane change prediction using structural recurrent neural networks
CN111046709A (en) * 2018-10-15 2020-04-21 广州汽车集团股份有限公司 Vehicle lane level positioning method and system, vehicle and storage medium
US10553117B1 (en) * 2018-11-06 2020-02-04 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for determining lane occupancy of surrounding vehicles
CN110110029A (en) * 2019-05-17 2019-08-09 百度在线网络技术(北京)有限公司 Method and apparatus for matching lane
CN110556012A (en) * 2019-09-16 2019-12-10 北京百度网讯科技有限公司 Lane positioning method and vehicle positioning system
CN110979346A (en) * 2019-11-29 2020-04-10 北京百度网讯科技有限公司 Method, device and equipment for determining lane where vehicle is located

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SEUNG GWEON JEONG ET AL.: "Real-time lane detection for autonomous vehicle", 2001 IEEE INTERNATIONAL SYMPOSIUM ON INDUSTRIAL ELECTRONICS PROCEEDINGS, vol. 3, pages 1466 - 1471, XP010548840 *
宋士奇; 朴燕; 王健: "Vehicle behavior recognition based on lane information fusion", 液晶与显示, no. 01, pages 83-93 *
张林; 杨兆升; 冯金巧; 汪建: "Research on a single-GPS lane-level positioning method based on three-dimensional electronic maps in vehicle navigation systems", 交通与计算机, no. 06, pages 22-26 *
谢辉; 高斌; 熊硕; 王悦: "Trajectory prediction of dynamic vehicles on structured roads", 汽车安全与节能学报, no. 04, pages 27-36 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113701781A (en) * 2021-09-30 2021-11-26 重庆长安汽车股份有限公司 Matching lane searching method based on high-precision map and visual lane line
CN113701781B (en) * 2021-09-30 2023-07-18 重庆长安汽车股份有限公司 Matching lane searching method based on high-precision map and visual lane lines

Also Published As

Publication number Publication date
CN111814114B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
CN110979346B (en) Method, device and equipment for determining lane where vehicle is located
CN107976688A (en) Obstacle detection method and related device
KR102054090B1 (en) Methods, devices, storage media and devices for automatically discovering gas station POIs
US11408739B2 (en) Location correction utilizing vehicle communication networks
CN110900602B (en) Positioning recovery method and device, robot and storage medium
CN110006439B (en) Map track data matching method, map track data matching device, server and storage medium
CN110969145B (en) Remote sensing image matching optimization method and device, electronic equipment and storage medium
JP7371148B2 (en) Method, device, storage medium and program for determining vehicle positioning information
CN107745711B (en) Method and device for determining route in automatic driving mode
CN109635868B (en) Method and device for determining obstacle type, electronic device and storage medium
Zhang et al. Vehicle positioning system with multi-hypothesis map matching and robust feedback
CN111814114B (en) Lane positioning verification method, lane positioning verification device, electronic device, vehicle and storage medium
CN112985440B (en) Method, device, storage medium and program product for detecting travel track deviation
CN116186565A (en) Automatic driving control method and track searching and index constructing method and device
CN114117261B (en) Track detection method and device, electronic equipment and storage medium
WO2022252482A1 (en) Robot, and environment map construction method and apparatus therefor
CN115249407B (en) Indicator light state identification method and device, electronic equipment, storage medium and product
US20230061682A1 (en) Systems and methods for bayesian likelihood estimation of fused objects
CN113183988B (en) Method, device and equipment for supervising automatic driving of vehicle and storage medium
CN115406452A (en) Real-time positioning and mapping method, device and terminal equipment
CN114664104A (en) Road network matching method and device
CN113701768A (en) Path determination method and device and electronic equipment
CN112767732B (en) Parking position determining method and device and electronic equipment
CN113823082B (en) Navigation data processing method, device, equipment and storage medium
CN114379592B (en) Target association method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211019

Address after: 105 / F, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Applicant after: Apollo Intelligent Technology (Beijing) Co.,Ltd.

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant