CN116481548B - Positioning method and device for automatic driving vehicle and electronic equipment

Info

Publication number: CN116481548B
Application number: CN202310744537.8A
Other versions: CN116481548A
Language: Chinese (zh)
Inventors: 李岩 (Li Yan), 费再慧 (Fei Zaihui), 张海强 (Zhang Haiqiang)
Assignee: Mushroom Car Union Information Technology Co., Ltd.
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The application discloses a positioning method and device for an automatic driving vehicle, and an electronic device. The method comprises: when multi-sensor positioning of the automatic driving vehicle fails, acquiring the positioning information of the vehicle at the failure moment and the corresponding high-precision map data; determining the lane line type of the lane in which the vehicle is located according to the positioning information at the failure moment and the high-precision map data; determining, according to that lane line type, whether the vehicle meets a preset visual positioning requirement; and if so, performing visual positioning according to the road images acquired by the vehicle-side blind-supplement cameras, the positioning information at the failure moment and the high-precision map data. In this way, when multi-sensor positioning has failed and only visual positioning remains, the lane line attributes and the high-precision map data are used to preserve the longitudinal positioning accuracy of the vehicle as far as possible, which increases the availability of the lane keeping function in urban environments and reduces the manual takeover rate. Using the vehicle-side blind-supplement cameras as the main sensors for longitudinal positioning also avoids the perception discontinuities of a forward-looking camera.

Description

Positioning method and device for automatic driving vehicle and electronic equipment
Technical Field
The present application relates to the field of automatic driving technologies, and in particular to a positioning method and device for an automatic driving vehicle, and an electronic device.
Background
The gradually falling cost of lidar and the increasing maturity of laser SLAM (Simultaneous Localization and Mapping) have made multi-sensor fusion (MSF) positioning based on laser SLAM + IMU (Inertial Measurement Unit) + GNSS (Global Navigation Satellite System)/RTK (Real-Time Kinematic)/vehicle-body data the first choice for automatic driving positioning. The multiple information sources are fused by filtering or optimization to output the best positioning estimate at the current moment.
On this basis, vision-based map-matching positioning and visual SLAM positioning are also widely used in scenarios such as underground garages and intelligent cockpits, serving as additional observations that help ensure the positioning stability and functional integrity of the automatic driving vehicle.
Unlike laser SLAM, the current mainstream visual SLAM methods such as ORB-SLAM and VINS-Mono are easily disturbed by the external environment, for example strong illumination, rain and fog, or fast vehicle motion, which causes feature tracking to fail, so they cannot reach the stability required for automatic driving vehicle positioning.
Similarly, map matching based on road-surface elements such as lane lines can, for most of the time, provide only transverse positioning corrections. When no longitudinal positioning element is recognized for a long time, the accumulated longitudinal error may prevent the vehicle from entering the correct road when turning or making a U-turn, which increases the likelihood of a manual takeover.
Disclosure of Invention
The embodiment of the application provides a positioning method and device for an automatic driving vehicle and electronic equipment, which are used for ensuring the positioning precision and stability of the automatic driving vehicle in a special scene and reducing the manual takeover rate.
The embodiment of the application adopts the following technical scheme:
In a first aspect, an embodiment of the present application provides a positioning method for an automatic driving vehicle, where the method includes:
under the condition that the multi-sensor positioning state of the automatic driving vehicle is in a failure state, acquiring positioning information of the automatic driving vehicle at the failure moment and corresponding high-precision map data;
determining lane line type information of a lane where an automatic driving vehicle is located according to the positioning information of the failure moment and the high-precision map data;
determining whether the automatic driving vehicle meets a preset visual positioning requirement according to lane line type information of a lane where the automatic driving vehicle is located;
and under the condition that the automatic driving vehicle meets the preset visual positioning requirement, performing visual positioning according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, the positioning information at the failure moment and the high-precision map data to obtain a visual positioning result of the automatic driving vehicle.
Optionally, acquiring the positioning information of the automatic driving vehicle at the failure moment and the corresponding high-precision map data when the multi-sensor positioning state of the automatic driving vehicle is in the failure state includes:
acquiring confidence degrees of a plurality of sensor observation information of an automatic driving vehicle;
and under the condition that the confidence degree of the observation information of the plurality of sensors is lower than the corresponding confidence degree threshold value, determining that the multi-sensor positioning state of the automatic driving vehicle is in a failure state.
Optionally, the determining whether the autonomous vehicle meets the preset visual positioning requirement according to the lane line type information of the lane where the autonomous vehicle is located includes:
determining whether the lane in which the automatic driving vehicle is located contains a dotted line lane line according to the lane line type information of the lane in which the automatic driving vehicle is located;
if yes, determining that the automatic driving vehicle meets a preset visual positioning requirement;
otherwise, determining that the automatic driving vehicle does not meet the preset visual positioning requirement.
Optionally, under the condition that the automatic driving vehicle meets a preset visual positioning requirement, performing visual positioning according to the road image collected by the vehicle-side blind-complement camera of the automatic driving vehicle, the positioning information of the failure moment and the high-precision map data, and obtaining a visual positioning result of the automatic driving vehicle includes:
according to the road image acquired by the vehicle-side blind-complement camera of the automatic driving vehicle, carrying out lane line identification by using a lane line identification model to obtain a lane line identification result;
determining a transverse positioning result of the automatic driving vehicle according to the lane line identification result;
and determining a longitudinal positioning result of the automatic driving vehicle according to the lane line identification result, the positioning information of the failure moment, the high-precision map data and the speed information of the automatic driving vehicle.
Optionally, the lane line recognition result includes an end point position of a dotted line lane line, and the determining the longitudinal positioning result of the autonomous vehicle according to the lane line recognition result, the positioning information of the failure time, the high-precision map data, and the speed information of the autonomous vehicle includes:
determining the road course angle of the road where the automatic driving vehicle is located and the attribute information of the dotted line lane line according to the high-precision map data, wherein the attribute information of the dotted line lane line comprises the length of and the interval between its segments;
based on the positioning information of the failure moment, predicting according to the road course angle and the speed information of the automatic driving vehicle to obtain a predicted position of the automatic driving vehicle;
determining local displacement of the automatic driving vehicle according to the end point position of the dotted line lane line and the attribute information of the dotted line lane line based on the positioning information of the failure moment;
and correcting the predicted position of the automatic driving vehicle according to the local displacement of the automatic driving vehicle to obtain a longitudinal positioning result of the automatic driving vehicle.
Optionally, under the condition that the automatic driving vehicle meets a preset visual positioning requirement, performing visual positioning according to the road image collected by the vehicle-side blind-complement camera of the automatic driving vehicle, the positioning information of the failure moment and the high-precision map data, and obtaining a visual positioning result of the automatic driving vehicle includes:
under the condition that a dotted line lane line included in a lane where an automatic driving vehicle is located is a single-side dotted line lane line, determining whether the automatic driving vehicle meets a preset lane change requirement according to the positioning information of the failure moment and the high-precision map data;
under the condition that the automatic driving vehicle meets the preset lane changing requirement, controlling the automatic driving vehicle to change lanes;
and performing visual positioning according to the lane changing result of the automatic driving vehicle, the road image acquired by the vehicle-side blind-supplementing camera, the positioning information of the failure moment and the high-precision map data to obtain a visual positioning result of the automatic driving vehicle.
Optionally, the determining whether the automatic driving vehicle meets the preset lane change requirement according to the positioning information of the failure moment and the high-precision map data includes:
determining whether the automatic driving vehicle approaches an intersection or not according to the positioning information of the failure moment and the intersection information in the high-precision map data;
determining whether an adjacent lane of the lane where the automatic driving vehicle is located contains a double-sided dotted line lane line according to lane line information in the high-precision map data;
if the automatic driving vehicle approaches an intersection or the adjacent lane of the lane where the automatic driving vehicle is located does not contain double-sided dotted line lane lines, determining that the automatic driving vehicle does not meet the preset lane change requirement;
and if the automatic driving vehicle is not close to the intersection and the adjacent lane of the lane where the automatic driving vehicle is located contains double-sided dotted line lane lines, determining that the automatic driving vehicle meets the preset lane change requirement.
Optionally, the method further comprises:
determining whether the positioning states of the plurality of sensors can be restored within a preset time;
if the positioning state of at least one sensor can be restored within the preset time, controlling the running of the automatic driving vehicle according to the original driving route;
if the positioning states of the plurality of sensors cannot be recovered within the preset time, the driving route is re-planned according to the driving requirement of the automatic driving vehicle and the navigation map.
In a second aspect, an embodiment of the present application further provides a positioning device for an automatic driving vehicle, where the device includes:
the acquisition unit is used for acquiring the positioning information of the automatic driving vehicle at the failure moment and corresponding high-precision map data under the condition that the multi-sensor positioning state of the automatic driving vehicle is in the failure state;
the first determining unit is used for determining lane line type information of a lane where the automatic driving vehicle is located according to the positioning information of the failure moment and the high-precision map data;
the second determining unit is used for determining whether the automatic driving vehicle meets the preset visual positioning requirement according to the lane line type information of the lane where the automatic driving vehicle is located;
and the visual positioning unit is used for performing visual positioning according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, the positioning information of the failure moment and the high-precision map data under the condition that the automatic driving vehicle meets the preset visual positioning requirement, so as to obtain a visual positioning result of the automatic driving vehicle.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform any of the methods described hereinbefore.
In a fourth aspect, embodiments of the present application also provide a computer-readable storage medium storing one or more programs, which when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform any of the methods described above.
The at least one technical solution adopted by the embodiments of the application can achieve the following beneficial effects. In the positioning method of the automatic driving vehicle, when the multi-sensor positioning state of the automatic driving vehicle is in a failure state, the positioning information of the automatic driving vehicle at the failure moment and the corresponding high-precision map data are acquired; the lane line type information of the lane in which the automatic driving vehicle is located is then determined according to the positioning information at the failure moment and the high-precision map data; whether the automatic driving vehicle meets a preset visual positioning requirement is determined according to that lane line type information; and finally, when the automatic driving vehicle meets the preset visual positioning requirement, visual positioning is performed according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, the positioning information at the failure moment and the high-precision map data, to obtain a visual positioning result of the automatic driving vehicle. In this way, when the multi-sensor positioning state is in a failure state and only visual positioning remains, the lane line attributes and the high-precision map data are used to preserve the longitudinal positioning accuracy of the vehicle as far as possible, which increases the availability of the lane keeping function in urban environments and reduces the manual takeover rate. Using the vehicle-side blind-supplement camera as the main sensor for longitudinal positioning eliminates the perception discontinuities of a forward-looking monocular camera caused by IPM transformation, occlusion, strong illumination and the like, and reduces the perception computation time.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a method for positioning an autonomous vehicle according to an embodiment of the present application;
FIG. 2 is a schematic view of a positioning device for an automatic driving vehicle according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
An embodiment of the application provides a positioning method for an automatic driving vehicle. Fig. 1 shows a flow chart of the positioning method of the automatic driving vehicle in the embodiment of the application; the method comprises at least the following steps S110 to S140:
step S110, under the condition that the multi-sensor positioning state of the automatic driving vehicle is in a failure state, positioning information of the automatic driving vehicle at the failure moment and corresponding high-precision map data are acquired.
The positioning method of the embodiment of the application can be regarded as an auxiliary positioning scheme adopted when multi-sensor fusion positioning fails, intended to meet the positioning accuracy requirement of the automatic driving vehicle as far as possible and to reduce the manual takeover rate. In other words, the positioning logic of the embodiment is not started while multi-sensor fusion positioning works normally; the visual positioning logic of the embodiment is triggered only when the multi-sensor positioning state is in a failure state.
Here, the multi-sensor positioning state being in the failure state means that the sensors other than visual positioning, including the lidar and GNSS/RTK, have failed. When such a failure state is detected, the positioning information of the automatic driving vehicle at the failure moment needs to be obtained, and the corresponding local high-precision map data are then obtained according to that positioning information.
It should be noted that, at the moment when multi-sensor positioning has just failed, the positioning information of the automatic driving vehicle can still be considered sufficiently accurate, so it can be used as the basis for acquiring the local high-precision map data and for the subsequent dead reckoning.
And step S120, determining lane line type information of the lane where the automatic driving vehicle is located according to the positioning information of the failure moment and the high-precision map data.
The positioning information at the failure moment reflects the absolute position of the automatic driving vehicle, and the high-precision map data further provide the lane information of the road section where the vehicle currently is. The lane line type of the lane in which the automatic driving vehicle is located can therefore be determined by matching the positioning information against the high-precision map data; lane line types are mainly divided into dotted line lane lines and solid line lane lines.
Step S130, determining whether the automatic driving vehicle meets the preset visual positioning requirement according to the lane line type information of the lane where the automatic driving vehicle is located.
Because the existing visual positioning scheme based on lane line matching can only provide correction information for the transverse position, relying on transverse correction alone cannot meet the positioning accuracy requirement of the automatic driving vehicle when the multi-sensor positioning state is in a failure state. In particular, in urban environments with many intersections, the accumulated longitudinal error means that lane keeping cannot be sustained for long. The embodiment of the application therefore needs to further acquire information that enables longitudinal position correction.
A solid line lane line is continuous and therefore cannot provide a reference for longitudinal position correction. A dotted line lane line, by contrast, consists of many short solid segments separated by gaps; the high-precision map can contain the endpoint positions of each segment, the interval between segments, and so on, and in principle each segment can be identified separately during lane line recognition. This provides a reference for the distance the vehicle has moved relative to the dotted line lane line, and thus a basis for longitudinal position correction.
Based on the above, the embodiment of the application judges whether the automatic driving vehicle meets the preset visual positioning requirement according to the lane line type of the lane in which it is currently located, i.e. whether the lane has a dotted line lane line or only solid line lane lines, because the visual positioning here is realized on the basis of dotted line lane lines.
And step S140, under the condition that the automatic driving vehicle meets the preset visual positioning requirement, performing visual positioning according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, the positioning information of the failure moment and the high-precision map data to obtain a visual positioning result of the automatic driving vehicle.
If the automatic driving vehicle currently meets the preset visual positioning requirement, visual positioning can be performed according to the road images acquired by the blind-supplement cameras on the two sides of the vehicle, combined with the positioning information at the failure moment and the high-precision map data. Because the blind-supplement cameras on the two sides of the vehicle face the lane lines on the two sides of the lane directly, compared with a forward-looking monocular camera this eliminates the perception discontinuities of the forward-looking monocular camera caused by IPM (Inverse Perspective Mapping), occlusion, strong illumination and the like, and reduces the perception computation time.
In the positioning method of the automatic driving vehicle provided by the embodiment of the application, when the multi-sensor positioning state is in a failure state and only visual positioning remains, the lane line attributes and the high-precision map data are used to preserve the longitudinal positioning accuracy of the vehicle as far as possible, which increases the availability of the lane keeping function in urban environments and reduces the manual takeover rate. Using the vehicle-side blind-supplement camera as the main sensor for longitudinal positioning eliminates the perception discontinuities of a forward-looking monocular camera caused by IPM transformation, occlusion, strong illumination and the like, and reduces the perception computation time.
In some embodiments of the present application, the obtaining the positioning information of the failure moment of the autonomous vehicle and the corresponding high-precision map data when the multi-sensor positioning state of the autonomous vehicle is in the failure state includes: acquiring confidence degrees of a plurality of sensor observation information of an automatic driving vehicle; and under the condition that the confidence degree of the observation information of the plurality of sensors is lower than the corresponding confidence degree threshold value, determining that the multi-sensor positioning state of the automatic driving vehicle is in a failure state.
As described above, the positioning method of the automatic driving vehicle of the present application may be regarded as an auxiliary positioning scheme adopted to meet the positioning accuracy requirement of the automatic driving vehicle as much as possible and reduce the manual take-over rate in the case of failure of multi-sensor positioning, so that the embodiment of the present application may determine the multi-sensor positioning state.
The confidence levels of the observation information of the plurality of sensors are obtained first; the observation information may include GNSS/RTK positioning information, lidar positioning information and the like. The confidence level of each piece of observation information is compared with its corresponding threshold. If all of them are below their corresponding thresholds, none of the observations other than visual positioning is currently usable, the multi-sensor fusion positioning is in a failure state, and the positioning strategy needs to be switched to the visual positioning logic of the embodiment of the application.
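As an illustration of this check, a minimal sketch in Python is given below; the sensor names, threshold values and dictionary layout are assumptions made for the example and are not specified in the application.
```python
# Illustrative sketch of the failure-state check; thresholds and keys are assumed.
CONFIDENCE_THRESHOLDS = {
    "gnss_rtk": 0.8,     # assumed minimum confidence for GNSS/RTK observations
    "lidar_slam": 0.7,   # assumed minimum confidence for laser-SLAM observations
}

def multi_sensor_positioning_failed(confidences: dict) -> bool:
    """True when every non-visual observation is below its confidence threshold."""
    return all(
        confidences.get(name, 0.0) < threshold
        for name, threshold in CONFIDENCE_THRESHOLDS.items()
    )

# Example: both observations degraded, so the visual positioning logic is triggered.
assert multi_sensor_positioning_failed({"gnss_rtk": 0.30, "lidar_slam": 0.55})
```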
In some embodiments of the present application, the determining whether the autonomous vehicle meets a preset visual positioning requirement according to lane line type information of a lane in which the autonomous vehicle is located includes: determining whether the lane in which the automatic driving vehicle is located contains a dotted line lane according to the lane line type information of the lane in which the automatic driving vehicle is located; if yes, determining that the automatic driving vehicle meets a preset visual positioning requirement; otherwise, determining that the automatic driving vehicle does not meet the preset visual positioning requirement.
The lane line configuration of a lane can be divided into three cases: double-sided dotted line lane lines, a single-side dotted line lane line, and double-sided solid line lane lines. When judging whether the automatic driving vehicle currently meets the preset visual positioning requirement, the key question is whether the lane in which the vehicle is located contains a dotted line lane line.
When the lane has double-sided solid line lane lines, the attribute characteristics of a solid line lane line cannot provide correction information for the longitudinal positioning of the vehicle, and the vehicle also cannot change lanes across a solid line, so this case is considered not to meet the preset visual positioning requirement. For the cases of double-sided dotted line lane lines and a single-side dotted line lane line, a dotted line lane line exists on at least one side of the vehicle, and correction information for the longitudinal positioning of the automatic driving vehicle can be provided based on its attribute characteristics, so the preset visual positioning requirement is considered to be met.
In some embodiments of the present application, when the automatic driving vehicle meets a preset visual positioning requirement, performing visual positioning according to a road image collected by a vehicle-side blind-complement camera of the automatic driving vehicle, the positioning information of the failure moment, and the high-precision map data, and obtaining a visual positioning result of the automatic driving vehicle includes: according to the road image acquired by the vehicle-side blind-complement camera of the automatic driving vehicle, carrying out lane line identification by using a lane line identification model to obtain a lane line identification result; determining a transverse positioning result of the automatic driving vehicle according to the lane line identification result; and determining a longitudinal positioning result of the automatic driving vehicle according to the lane line identification result, the positioning information of the failure moment, the high-precision map data and the speed information of the automatic driving vehicle.
When the automatic driving vehicle currently meets the preset visual positioning requirement, the road images acquired by the blind-supplement cameras on the two sides of the vehicle are further obtained, and the vehicle-side blind-supplement cameras are used as the main sensors for positioning the automatic driving vehicle. An existing lane line recognition model is then used to recognize the lane lines in these road images.
Because the vehicle-side blind-supplement camera points directly at the lane line beside the vehicle, the transverse distance from the automatic driving vehicle to the lane line can be determined from the identified lane line. Using the transformation between the image and the world coordinate system, this transverse distance is then converted into the world coordinate system, so that the transverse position of the automatic driving vehicle can be corrected.
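The conversion can be sketched as follows, assuming a pre-calibrated 3x3 homography H that maps side-camera pixels onto the road plane in the vehicle frame; the application does not specify how the image-to-world transformation is parameterised, so H and the helper names are placeholders.
```python
import numpy as np

def pixel_to_vehicle_frame(H: np.ndarray, u: float, v: float) -> np.ndarray:
    """Project an image pixel onto the road plane (x forward, y left, in metres)."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

def lateral_offset_to_lane_line(H: np.ndarray, lane_line_pixels) -> float:
    """Mean lateral (y) distance from the camera to the detected lane line points."""
    ys = [pixel_to_vehicle_frame(H, u, v)[1] for u, v in lane_line_pixels]
    return float(np.mean(ys))
```
Combining this offset with the vehicle's heading then gives the transverse correction in the world coordinate system.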
In addition, based on the identified dotted line lane line, local dead reckoning is carried out by combining the initial positioning information at the failure moment, the high-precision map data and the speed information of the automatic driving vehicle, so that the longitudinal position of the automatic driving vehicle can be corrected.
In some embodiments of the present application, the lane line recognition result includes the end point position of a dotted line lane line, and determining the longitudinal positioning result of the automatic driving vehicle according to the lane line recognition result, the positioning information at the failure moment, the high-precision map data and the speed information of the automatic driving vehicle includes: determining the road course angle of the road where the automatic driving vehicle is located and the attribute information of the dotted line lane line according to the high-precision map data, wherein the attribute information of the dotted line lane line comprises the length of and the interval between its segments; based on the positioning information at the failure moment, predicting according to the road course angle and the speed information of the automatic driving vehicle to obtain a predicted position of the automatic driving vehicle; determining the local displacement of the automatic driving vehicle according to the end point position of the dotted line lane line and the attribute information of the dotted line lane line, based on the positioning information at the failure moment; and correcting the predicted position of the automatic driving vehicle according to the local displacement of the automatic driving vehicle to obtain the longitudinal positioning result of the automatic driving vehicle.
When the longitudinal position of the automatic driving vehicle is corrected, the road course angle of the road on which the vehicle is travelling and the attribute information of the dotted line lane line can be determined from the road information in the high-precision map data. The road course angle is the direction of the road in the world coordinate system; within the current road the vehicle course angle should be consistent with it, whereas after multi-sensor positioning has failed the vehicle course angle may carry a large error. Therefore, taking the positioning information at the failure moment as the starting point and the road course angle as the reference, the current speed information of the automatic driving vehicle is fed into the Kalman filter for position prediction, which yields the predicted position of the automatic driving vehicle.
The attribute information of the dotted line lane line specifically includes the length of each segment and the interval between two adjacent segments; this information is acquired when the high-precision map is built. By continuously identifying the dotted line lane line, the index of the segment currently being identified can be determined, and the local displacement of the automatic driving vehicle can then be calculated from the segment length and the inter-segment interval. For example, assuming the segments are 6 meters long with 9-meter intervals, when the vehicle identifies the start point of the 3rd segment its local displacement can be determined to be 6×2+9×2=30 m.
Because the positioning information at the failure moment is known, the predicted position at the current moment can be corrected longitudinally using this local displacement as the filter observation, giving a more accurate and more stable longitudinal positioning result.
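A simplified one-dimensional sketch of this predict-and-correct scheme is given below; a real implementation would live inside the vehicle's Kalman filter, and the noise values and the assumption that segment counting starts at the first segment after the failure moment are illustrative only.
```python
import math

class LongitudinalEstimator:
    """Dead reckoning along the road heading, corrected by dotted-segment counting."""

    def __init__(self, x0, y0, road_heading_rad, dash_length=6.0, dash_gap=9.0):
        self.x0, self.y0 = x0, y0          # position at the failure moment
        self.heading = road_heading_rad    # road course angle from the HD map
        self.dash_length, self.dash_gap = dash_length, dash_gap
        self.s = 0.0                       # distance travelled along the road
        self.var = 0.0                     # uncertainty of s

    def predict(self, speed_mps, dt):
        self.s += speed_mps * dt
        self.var += 0.05 * dt              # assumed process noise

    def correct(self, segment_index):
        """Called when the start of the segment_index-th dotted segment (0-based) is seen."""
        s_meas = segment_index * (self.dash_length + self.dash_gap)
        gain = self.var / (self.var + 0.25)   # 0.25: assumed measurement noise
        self.s += gain * (s_meas - self.s)
        self.var *= 1.0 - gain

    def position(self):
        return (self.x0 + self.s * math.cos(self.heading),
                self.y0 + self.s * math.sin(self.heading))

# Matching the worked example above: the start of the 3rd segment (index 2) with
# 6 m dashes and 9 m gaps corresponds to a local displacement of 2 * (6 + 9) = 30 m.
```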
It should be noted that, because of the positioning output frequency, in the short period just after multi-sensor positioning fails the distance travelled by the vehicle may not be enough to span a complete segment of the dotted line lane line or the interval between two segments. Since the dead-reckoned position is still accurate enough over such a short period, the calculation of the local displacement can be skipped at first and the dead-reckoned position used directly. Once the vehicle has travelled far enough that it can be determined which segment of the dotted line lane line is being identified, the local displacement is calculated and used to correct the dead-reckoned position.
In addition, the absolute position of the automatic driving vehicle usually refers to the position of the center of its rear axle in the world coordinate system. Since the installation position of the vehicle-side blind-supplement camera is known and its extrinsic transformation to the rear-axle center can be calibrated in advance, the obtained visual positioning result can be distance-compensated according to the camera extrinsics, i.e. the visually determined position is compensated to the position of the rear-axle center of the vehicle.
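This compensation amounts to one rigid-body transform, sketched below; the 4x4 extrinsic matrix and its example values are placeholders for the actual calibrated camera-to-rear-axle transform.
```python
import numpy as np

def to_rear_axle(p_cam_xyz: np.ndarray, T_axle_from_cam: np.ndarray) -> np.ndarray:
    """Move a camera-frame position to the rear-axle centre using the extrinsics."""
    return (T_axle_from_cam @ np.append(p_cam_xyz, 1.0))[:3]

# Example with a placeholder extrinsic: camera mounted 1.2 m ahead of and 0.9 m to
# the left of the rear-axle centre (values are illustrative only).
T = np.eye(4)
T[:3, 3] = [1.2, 0.9, 0.0]
print(to_rear_axle(np.array([0.0, -1.5, 0.0]), T))   # -> [ 1.2 -0.6  0. ]
```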
In some embodiments of the present application, when the automatic driving vehicle meets a preset visual positioning requirement, performing visual positioning according to a road image collected by a vehicle-side blind-complement camera of the automatic driving vehicle, the positioning information of the failure moment, and the high-precision map data, and obtaining a visual positioning result of the automatic driving vehicle includes: under the condition that a dotted line lane line included in a lane where an automatic driving vehicle is located is a single-side dotted line lane line, determining whether the automatic driving vehicle meets a preset lane change requirement according to the positioning information of the failure moment and the high-precision map data; under the condition that the automatic driving vehicle meets the preset lane changing requirement, controlling the automatic driving vehicle to change lanes; and performing visual positioning according to the lane changing result of the automatic driving vehicle, the road image acquired by the vehicle-side blind-supplementing camera, the positioning information of the failure moment and the high-precision map data to obtain a visual positioning result of the automatic driving vehicle.
As described above, the dotted line lane lines may be present on one side only (single-side dotted line lane line) or on both sides (double-sided dotted line lane lines). For a single-side dotted line lane line, the blind-supplement camera on that side of the automatic driving vehicle can recognize the dotted line lane line and thereby correct the transverse and longitudinal positions of the vehicle. However, in some road scenes, relying only on the camera on one side to identify a single-side dotted line lane line may introduce errors. If the lane in which the vehicle is located contains double-sided dotted line lane lines, the blind-supplement cameras on the two sides of the vehicle can each recognize a dotted line lane line, so that they complement and constrain each other, improving the accuracy of the transverse and longitudinal position correction.
Based on the above, the embodiment of the application judges which type of dotted line lane line the lane where the automatic driving vehicle is currently located contains. If it is a single-side dotted line lane line, it is further judged whether the automatic driving vehicle currently meets the preset lane change requirement; the purpose of the lane change is to move the vehicle into a lane containing double-sided dotted line lane lines, so that the blind-supplement cameras on the two sides of the vehicle can both recognize a dotted line lane line.
The preset lane change requirement can be determined in combination with the actual road information provided in the high-precision map data. If the automatic driving vehicle currently meets the preset lane change requirement, it can be controlled to change lanes, i.e. to move into a lane containing double-sided dotted line lane lines. After the lane change is completed, visual positioning is performed according to the road image acquired by the vehicle-side blind-supplement camera, the positioning information at the failure moment and the high-precision map data, following the dotted-line-based positioning logic of the foregoing embodiments, to obtain the visual positioning result of the automatic driving vehicle.
In some embodiments of the present application, the determining whether the automatic driving vehicle meets the preset lane change requirement according to the positioning information of the failure time and the high-precision map data includes: determining whether the automatic driving vehicle approaches an intersection or not according to the positioning information of the failure moment and the intersection information in the high-precision map data; determining whether an adjacent lane of the lane where the automatic driving vehicle is located contains a double-sided dotted line lane line according to lane line information in the high-precision map data; if the automatic driving vehicle approaches an intersection or the adjacent lane of the lane where the automatic driving vehicle is located does not contain double-sided dotted line lane lines, determining that the automatic driving vehicle does not meet the preset lane change requirement; and if the automatic driving vehicle is not close to the intersection and the adjacent lane of the lane where the automatic driving vehicle is located contains double-sided dotted line lane lines, determining that the automatic driving vehicle meets the preset lane change requirement.
In determining whether the autonomous vehicle currently meets the preset lane change requirements, one may consider the following aspects:
1) Proximity to an intersection: in general, a vehicle cannot perform a lane change when it is approaching an intersection. The embodiment of the application therefore determines the distance from the current vehicle position to the intersection according to the position at the failure moment and the intersection position information provided in the high-precision map data, and compares this distance with a preset distance threshold. If the distance is smaller than the preset distance threshold, the vehicle is relatively close to the intersection and a lane change should not be performed.
2) Double-sided dotted line lane lines in the adjacent lane: since the purpose of the lane change is to move the vehicle from a lane with a single-side dotted line lane line into a lane with double-sided dotted line lane lines, the high-precision map data can be used, starting from the lane in which the vehicle currently is, to determine whether the adjacent lane contains double-sided dotted line lane lines. If the adjacent lane does not contain double-sided dotted line lane lines, the lane change requirement set in the embodiment of the application is not satisfied.
If the automatic driving vehicle is not approaching an intersection and the adjacent lane contains double-sided dotted line lane lines, the automatic driving vehicle is considered to currently meet the preset lane change requirement; it can then change lanes across the single-side dotted line lane line into the lane containing double-sided dotted line lane lines, so that the blind-supplement cameras on the two sides of the vehicle can both recognize a dotted line lane line.
The above two judgment conditions may be regarded as mutually independent judgment conditions, and may be executed simultaneously or sequentially, and in particular, how to judge the two judgment conditions may be flexibly set by a person skilled in the art according to actual needs, and are not particularly limited herein.
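A minimal sketch of these two independent checks is shown below; the 50 m distance threshold and the map query helpers (distance_to_next_intersection, adjacent_lane and the line-type fields) are assumed names used only for illustration.
```python
INTERSECTION_DISTANCE_THRESHOLD_M = 50.0   # assumed preset distance threshold

def meets_lane_change_requirement(ego_position, hd_map) -> bool:
    """Not near an intersection, and the adjacent lane is dashed on both sides."""
    near_intersection = (
        hd_map.distance_to_next_intersection(ego_position)
        < INTERSECTION_DISTANCE_THRESHOLD_M
    )
    adjacent = hd_map.adjacent_lane(ego_position)
    adjacent_double_dashed = (
        adjacent is not None
        and adjacent.left_line_type == "dashed"
        and adjacent.right_line_type == "dashed"
    )
    return (not near_intersection) and adjacent_double_dashed
```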
In some embodiments of the application, the controlling the autonomous vehicle to change lanes comprises: carrying out lane line recognition on the road image acquired by the vehicle-side blind-supplementing camera to obtain a lane line recognition result; determining whether the automatic driving vehicle finishes lane change according to the lane line identification result and the course angle change information of the automatic driving vehicle; and outputting a lane changing result of the automatic driving vehicle under the condition that the automatic driving vehicle finishes lane changing.
While the automatic driving vehicle is being controlled to change lanes, the embodiment of the application can also judge whether the lane change has been completed according to the perception results of the vehicle-side blind-supplement cameras: the road images acquired by the cameras are obtained first, and the existing lane line recognition model is then used to perform lane line recognition and obtain the lane line recognition result.
On the one hand, once the automatic driving vehicle has finished the lane change, the blind-supplement cameras on its two sides should each be able to recognize a dotted line lane line, so completion can be judged from the lane line recognition result. On the other hand, the course angle of the vehicle changes in a characteristic way from before the lane change starts, through the lane change, to after it is completed, so completion can also be judged from the pattern of course angle change measured by the IMU.
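The completion check can be sketched as follows; the 5-degree heading tolerance is an assumed value, and the line-type inputs stand for whatever the lane line recognition model reports for the two side cameras.
```python
import math

def lane_change_completed(left_line_type: str, right_line_type: str,
                          heading_now_rad: float, heading_before_rad: float,
                          tolerance_deg: float = 5.0) -> bool:
    """Both side cameras see a dashed line and the heading has settled back."""
    both_dashed = left_line_type == "dashed" and right_line_type == "dashed"
    diff = heading_now_rad - heading_before_rad
    diff = math.atan2(math.sin(diff), math.cos(diff))   # wrap to [-pi, pi]
    return both_dashed and abs(math.degrees(diff)) < tolerance_deg
```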
In some embodiments of the application, after controlling the automatic driving vehicle to change lanes, the method further comprises: determining whether the automatic driving vehicle needs to change lanes again according to the original driving route and the high-precision map data, and controlling the vehicle to change lanes again when needed, so that it continues to drive along the original driving route.
The purpose of the lane change in the foregoing embodiment is to move the vehicle into a lane containing double-sided dotted line lane lines, which provides more references for correcting the transverse and longitudinal positions. However, that lane change decision does not consider whether the vehicle still follows the predetermined driving route afterwards. Therefore, while the vehicle is driving, the embodiment of the application judges in real time, according to the high-precision map data and the original driving route, whether another lane change is needed, so as to ensure that the vehicle stays on the correct driving route.
Specifically, whether the vehicle needs to change lanes again can be judged from the distance of the automatic driving vehicle to the intersection, or to the last position where a lane change is still possible, together with the original driving route, and the vehicle is controlled to change lanes when necessary. How exactly the vehicle is controlled to change into the lane that matches the set driving route can be determined flexibly by a person skilled in the art according to the prior art and is not specifically limited here.
In some embodiments of the application, the method further comprises: determining whether the positioning states of the plurality of sensors can be restored within a preset time; if the positioning state of at least one sensor can be restored within the preset time, controlling the running of the automatic driving vehicle according to the original driving route; and if the positioning states of the plurality of sensors cannot be recovered within the preset time, re-planning the driving route according to the driving requirement of the automatic driving vehicle and the navigation map.
The embodiment of the application can meet the positioning accuracy requirement of the automatic driving vehicle over a short period, but if the observations of the other sensors are missing for a long time the vehicle will still accumulate a large positioning error. To ensure driving safety, the embodiment therefore estimates, in real time while the vehicle is driving, how long the laser SLAM and GNSS/RTK faults will take to recover. If the positioning state of at least one sensor can be recovered within a short time, the vehicle can continue along its original route; otherwise, a new route is selected according to the driving requirement and the navigation map, for example preferring road sections with more roadside devices, fewer turns and good road conditions, so as to minimize both the risk that the automatic driving vehicle has to be taken over and the driving risk.
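A possible sketch of this degradation-handling decision is given below; the 30-second recovery budget, the route attributes and the scoring weights are assumptions chosen only for illustration.
```python
PRESET_RECOVERY_BUDGET_S = 30.0   # assumed preset time

def choose_route(recovery_estimates_s: dict, original_route, candidate_routes):
    """Keep the original route if any failed sensor is expected to recover in time,
    otherwise prefer candidates with more roadside units and fewer turns."""
    if any(t <= PRESET_RECOVERY_BUDGET_S for t in recovery_estimates_s.values()):
        return original_route
    return max(
        candidate_routes,
        key=lambda r: 2.0 * r.roadside_unit_density - 1.0 * r.turn_count,
    )
```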
In summary, the positioning method of the automatic driving vehicle of the present application has at least the following technical effects:
1) The high-precision map data are used as prior information for lane selection, and when only visual positioning is left, the attributes of the road-surface elements are used to preserve the longitudinal positioning accuracy of the vehicle to the greatest extent, which improves the availability of the lane keeping function in urban environments and reduces the manual takeover rate;
2) The vehicle-side blind-supplement camera is used as the main sensor for longitudinal positioning, which eliminates the perception discontinuities of a forward-looking monocular camera caused by IPM transformation, occlusion, strong illumination and the like, and reduces the perception computation time.
The embodiment of the present application further provides a positioning device 200 for an autopilot vehicle, as shown in fig. 2, and a schematic structural diagram of the positioning device for an autopilot vehicle in the embodiment of the present application is provided, where the device 200 includes: an acquisition unit 210, a first determination unit 220, a second determination unit 230, and a visual positioning unit 240, wherein:
an obtaining unit 210, configured to obtain, when the multi-sensor positioning state of the autonomous vehicle is in a failure state, positioning information of the autonomous vehicle at a failure time and corresponding high-precision map data;
A first determining unit 220, configured to determine lane line type information of a lane where the autonomous vehicle is located according to the positioning information of the failure moment and the high-precision map data;
a second determining unit 230, configured to determine whether the autonomous vehicle meets a preset visual positioning requirement according to lane line type information of a lane in which the autonomous vehicle is located;
and the visual positioning unit 240 is configured to perform visual positioning according to the road image collected by the vehicle-side blind-mate camera of the automatic driving vehicle, the positioning information at the failure moment, and the high-precision map data, to obtain a visual positioning result of the automatic driving vehicle when the automatic driving vehicle meets a preset visual positioning requirement.
In some embodiments of the present application, the obtaining unit 210 is specifically configured to: acquiring confidence degrees of a plurality of sensor observation information of an automatic driving vehicle; and under the condition that the confidence degree of the observation information of the plurality of sensors is lower than the corresponding confidence degree threshold value, determining that the multi-sensor positioning state of the automatic driving vehicle is in a failure state.
In some embodiments of the present application, the second determining unit 230 is specifically configured to: determining whether the lane in which the automatic driving vehicle is located contains a dotted line lane according to the lane line type information of the lane in which the automatic driving vehicle is located; if yes, determining that the automatic driving vehicle meets a preset visual positioning requirement; otherwise, determining that the automatic driving vehicle does not meet the preset visual positioning requirement.
In some embodiments of the present application, the visual positioning unit 240 is specifically configured to: according to the road image acquired by the vehicle-side blind-complement camera of the automatic driving vehicle, carrying out lane line identification by using a lane line identification model to obtain a lane line identification result; determining a transverse positioning result of the automatic driving vehicle according to the lane line identification result; and determining a longitudinal positioning result of the automatic driving vehicle according to the lane line identification result, the positioning information of the failure moment, the high-precision map data and the speed information of the automatic driving vehicle.
In some embodiments of the present application, the lane line recognition result includes an end point position of a dotted lane line, and the visual positioning unit 240 is specifically configured to: determining the road course angle of the road where the automatic driving vehicle is and the attribute information of a dotted line lane according to the high-precision map data, wherein the attribute information of the dotted line lane comprises the length and the interval of the dotted line lane; based on the positioning information of the failure moment, predicting according to the road course angle and the speed information of the automatic driving vehicle to obtain a predicted position of the automatic driving vehicle; determining local displacement of the automatic driving vehicle according to the end point position of the dotted line lane line and the attribute information of the dotted line lane line based on the positioning information of the failure moment; and correcting the predicted position of the automatic driving vehicle according to the local displacement of the automatic driving vehicle to obtain a longitudinal positioning result of the automatic driving vehicle.
In some embodiments of the present application, the visual positioning unit 240 is specifically configured to: determine, when the dotted lane line included in the lane where the automatic driving vehicle is located is only a single-side dotted lane line, whether the automatic driving vehicle meets a preset lane change requirement according to the positioning information of the failure moment and the high-precision map data; control the automatic driving vehicle to change lanes when the automatic driving vehicle meets the preset lane change requirement; and perform visual positioning according to the lane change result of the automatic driving vehicle, the road image acquired by the vehicle-side blind-supplement camera, the positioning information of the failure moment and the high-precision map data, to obtain a visual positioning result of the automatic driving vehicle.
In some embodiments of the present application, the visual positioning unit 240 is specifically configured to: determine whether the automatic driving vehicle is approaching an intersection according to the positioning information of the failure moment and the intersection information in the high-precision map data; determine, according to the lane line information in the high-precision map data, whether the adjacent lane of the lane where the automatic driving vehicle is located has dotted lane lines on both sides; determine that the automatic driving vehicle does not meet the preset lane change requirement if the automatic driving vehicle is approaching an intersection or the adjacent lane does not have dotted lane lines on both sides; and determine that the automatic driving vehicle meets the preset lane change requirement if the automatic driving vehicle is not approaching an intersection and the adjacent lane has dotted lane lines on both sides.
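A hypothetical sketch of this lane change eligibility check follows; the 100 m distance threshold and the string type labels are assumptions introduced for the example only.

```python
def meets_lane_change_requirement(dist_to_intersection_m: float,
                                  adjacent_left_type: str,
                                  adjacent_right_type: str,
                                  min_dist_m: float = 100.0) -> bool:
    """Allow a lane change for visual positioning only when the vehicle is
    not close to an intersection and the adjacent lane has dashed lane
    lines on both sides."""
    near_intersection = dist_to_intersection_m < min_dist_m
    double_dashed = (adjacent_left_type == "dashed"
                     and adjacent_right_type == "dashed")
    return (not near_intersection) and double_dashed

print(meets_lane_change_requirement(250.0, "dashed", "dashed"))  # True
print(meets_lane_change_requirement(60.0, "dashed", "dashed"))   # False
```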
In some embodiments of the application, the apparatus further comprises: a third determining unit, configured to determine whether the positioning states of the plurality of sensors can be restored within a preset time; a control unit, configured to control the automatic driving vehicle to travel along the original driving route if the positioning state of at least one sensor can be restored within the preset time; and a planning unit, configured to re-plan the driving route according to the driving requirement of the automatic driving vehicle and a navigation map if the positioning states of the plurality of sensors cannot be restored within the preset time.
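The fallback logic of the third determining unit, the control unit and the planning unit can be sketched, under assumed values for the preset time and the polling interval, as a simple timed retry; the numbers and names below are illustrative only.

```python
import time

def fallback_after_failure(sensor_recovered, timeout_s: float = 5.0,
                           poll_s: float = 0.5) -> str:
    """Poll `sensor_recovered()` (a caller-supplied callable that returns
    True once any non-visual sensor is usable again) for up to `timeout_s`
    seconds, then pick the fallback strategy described above."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if sensor_recovered():
            return "continue_original_route"
        time.sleep(poll_s)
    return "replan_with_navigation_map"

# Example with a sensor that never recovers within a 1 s window.
print(fallback_after_failure(lambda: False, timeout_s=1.0))
```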
It can be understood that the above-mentioned positioning device for an automatic driving vehicle can implement each step of the positioning method for an automatic driving vehicle provided in the foregoing embodiment, and the relevant explanation about the positioning method for an automatic driving vehicle is applicable to the positioning device for an automatic driving vehicle, which is not repeated herein.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to Fig. 3, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include internal memory, such as random-access memory (RAM), and may further include non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be interconnected by an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus, among others. Buses may be classified into address buses, data buses, control buses, and so on. For ease of illustration, only one bi-directional arrow is shown in Fig. 3, but this does not mean that there is only one bus or only one type of bus.
The memory is used for storing programs. Specifically, a program may include program code, and the program code includes computer operation instructions. The memory may include internal memory and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming the positioning device of the automatic driving vehicle at the logical level. The processor executes the programs stored in the memory and is specifically configured to perform the following operations:
under the condition that the multi-sensor positioning state of the automatic driving vehicle is in a failure state, positioning information of the automatic driving vehicle at the failure moment and corresponding high-precision map data are obtained;
determining lane line type information of a lane where the automatic driving vehicle is located according to the positioning information of the failure moment and the high-precision map data;
determining whether the automatic driving vehicle meets a preset visual positioning requirement according to lane line type information of a lane where the automatic driving vehicle is located;
and under the condition that the automatic driving vehicle meets the preset visual positioning requirement, performing visual positioning according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, the positioning information at the failure moment and the high-precision map data to obtain a visual positioning result of the automatic driving vehicle.
The method performed by the positioning device of the automatic driving vehicle disclosed in the embodiment of Fig. 1 of the present application may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logical block diagrams disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may also perform the method performed by the positioning device of the automatic driving vehicle in Fig. 1 and implement the functions of the positioning device of the automatic driving vehicle in the embodiment shown in Fig. 1, which are not described here again.
The embodiment of the present application also provides a computer-readable storage medium storing one or more programs, the one or more programs including instructions which, when executed by an electronic device comprising a plurality of application programs, enable the electronic device to perform the method performed by the positioning device of the automatic driving vehicle in the embodiment shown in Fig. 1, and in particular to perform:
under the condition that the multi-sensor positioning state of the automatic driving vehicle is in a failure state, positioning information of the automatic driving vehicle at the failure moment and corresponding high-precision map data are obtained;
determining lane line type information of a lane where an automatic driving vehicle is located according to the positioning information of the failure moment and the high-precision map data;
determining whether the automatic driving vehicle meets a preset visual positioning requirement according to lane line type information of a lane where the automatic driving vehicle is located;
and under the condition that the automatic driving vehicle meets the preset visual positioning requirement, performing visual positioning according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, the positioning information at the failure moment and the high-precision map data to obtain a visual positioning result of the automatic driving vehicle.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (9)

1. A positioning method for an automatic driving vehicle, wherein the method comprises:
under the condition that the multi-sensor positioning state of the automatic driving vehicle is in a failure state, positioning information of the automatic driving vehicle at the failure moment and corresponding high-precision map data are obtained;
determining lane line type information of a lane where an automatic driving vehicle is located according to the positioning information of the failure moment and the high-precision map data;
determining whether the automatic driving vehicle meets a preset visual positioning requirement according to the lane line type information of the lane where the automatic driving vehicle is located;
under the condition that the automatic driving vehicle meets the preset visual positioning requirement, performing visual positioning according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, the positioning information of the failure moment and the high-precision map data to obtain a visual positioning result of the automatic driving vehicle;
the multi-sensor positioning state refers to a non-visual sensor positioning state;
the determining whether the automatic driving vehicle meets the preset visual positioning requirement according to the lane line type information of the lane where the automatic driving vehicle is located comprises the following steps:
determining whether the lane in which the automatic driving vehicle is located contains a dotted lane line according to the lane line type information of the lane in which the automatic driving vehicle is located;
if yes, determining that the automatic driving vehicle meets a preset visual positioning requirement;
otherwise, determining that the automatic driving vehicle does not meet the preset visual positioning requirement.
2. The method of claim 1, wherein the acquiring positioning information of the automatic driving vehicle at the failure moment and corresponding high-precision map data under the condition that the multi-sensor positioning state of the automatic driving vehicle is in a failure state comprises:
acquiring confidence degrees of observation information of a plurality of sensors of the automatic driving vehicle;
and under the condition that the confidence degrees of the observation information of the plurality of sensors are lower than the corresponding confidence degree thresholds, determining that the multi-sensor positioning state of the automatic driving vehicle is in a failure state.
3. The method of claim 1, wherein, under the condition that the automatic driving vehicle meets the preset visual positioning requirement, the performing visual positioning according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, the positioning information of the failure moment and the high-precision map data to obtain a visual positioning result of the automatic driving vehicle comprises:
according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, carrying out lane line identification by using a lane line identification model to obtain a lane line identification result;
determining a transverse positioning result of the automatic driving vehicle according to the lane line identification result;
and determining a longitudinal positioning result of the automatic driving vehicle according to the lane line identification result, the positioning information of the failure moment, the high-precision map data and the speed information of the automatic driving vehicle.
4. The method of claim 3, wherein the lane line recognition result includes an end point position of a dotted lane line, and the determining a longitudinal positioning result of the automatic driving vehicle according to the lane line recognition result, the positioning information of the failure moment, the high-precision map data and the speed information of the automatic driving vehicle comprises:
determining the road course angle of the road where the automatic driving vehicle is located and the attribute information of the dotted lane line according to the high-precision map data, wherein the attribute information of the dotted lane line comprises the length and the interval of the dotted lane line;
based on the positioning information of the failure moment, predicting according to the road course angle and the speed information of the automatic driving vehicle to obtain a predicted position of the automatic driving vehicle;
determining local displacement of the automatic driving vehicle according to the end point position of the dotted line lane line and the attribute information of the dotted line lane line based on the positioning information of the failure moment;
and correcting the predicted position of the automatic driving vehicle according to the local displacement of the automatic driving vehicle to obtain a longitudinal positioning result of the automatic driving vehicle.
5. The method of claim 1, wherein, under the condition that the automatic driving vehicle meets the preset visual positioning requirement, the performing visual positioning according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, the positioning information of the failure moment and the high-precision map data to obtain a visual positioning result of the automatic driving vehicle comprises:
under the condition that a dotted line lane line included in a lane where an automatic driving vehicle is located is a single-side dotted line lane line, determining whether the automatic driving vehicle meets a preset lane change requirement according to the positioning information of the failure moment and the high-precision map data;
under the condition that the automatic driving vehicle meets the preset lane change requirement, controlling the automatic driving vehicle to change lanes;
and performing visual positioning according to the lane changing result of the automatic driving vehicle, the road image acquired by the vehicle-side blind-supplementing camera, the positioning information of the failure moment and the high-precision map data to obtain a visual positioning result of the automatic driving vehicle.
6. The method of claim 5, wherein the determining whether the automatic driving vehicle meets a preset lane change requirement according to the positioning information of the failure moment and the high-precision map data comprises:
determining whether the automatic driving vehicle approaches an intersection or not according to the positioning information of the failure moment and the intersection information in the high-precision map data;
determining whether an adjacent lane of the lane where the automatic driving vehicle is located contains a double-sided dotted line lane line according to lane line information in the high-precision map data;
if the automatic driving vehicle approaches an intersection or the adjacent lane of the lane where the automatic driving vehicle is located does not contain double-sided dotted line lane lines, determining that the automatic driving vehicle does not meet the preset lane change requirement;
and if the automatic driving vehicle is not close to the intersection and the adjacent lane of the lane where the automatic driving vehicle is located contains double-sided dotted line lane lines, determining that the automatic driving vehicle meets the preset lane change requirement.
7. The method of any one of claims 1-6, wherein the method further comprises:
determining whether the positioning states of the plurality of sensors can be restored within a preset time;
if the positioning state of at least one sensor can be restored within the preset time, controlling the running of the automatic driving vehicle according to the original running route;
if the positioning states of the plurality of sensors cannot be recovered within the preset time, the driving route is re-planned according to the driving requirement of the automatic driving vehicle and the navigation map.
8. A positioning device for an automatic driving vehicle, wherein the device comprises:
the acquisition unit is used for acquiring the positioning information of the automatic driving vehicle at the failure moment and corresponding high-precision map data under the condition that the multi-sensor positioning state of the automatic driving vehicle is in the failure state;
the first determining unit is used for determining lane line type information of a lane where the automatic driving vehicle is located according to the positioning information of the failure moment and the high-precision map data;
the second determining unit is used for determining whether the automatic driving vehicle meets the preset visual positioning requirement according to the lane line type information of the lane where the automatic driving vehicle is located;
the visual positioning unit is used for performing visual positioning according to the road image acquired by the vehicle-side blind-supplement camera of the automatic driving vehicle, the positioning information of the failure moment and the high-precision map data under the condition that the automatic driving vehicle meets the preset visual positioning requirement, so as to obtain a visual positioning result of the automatic driving vehicle;
the multi-sensor positioning state refers to a non-visual sensor positioning state;
the second determining unit is specifically configured to:
determining whether the lane in which the automatic driving vehicle is located contains a dotted lane line according to the lane line type information of the lane in which the automatic driving vehicle is located;
if yes, determining that the automatic driving vehicle meets a preset visual positioning requirement;
otherwise, determining that the automatic driving vehicle does not meet the preset visual positioning requirement.
9. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any of claims 1 to 7.
CN202310744537.8A 2023-06-25 2023-06-25 Positioning method and device for automatic driving vehicle and electronic equipment Active CN116481548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310744537.8A CN116481548B (en) 2023-06-25 2023-06-25 Positioning method and device for automatic driving vehicle and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310744537.8A CN116481548B (en) 2023-06-25 2023-06-25 Positioning method and device for automatic driving vehicle and electronic equipment

Publications (2)

Publication Number Publication Date
CN116481548A (en) 2023-07-25
CN116481548B (en) 2023-10-03

Family

ID=87227240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310744537.8A Active CN116481548B (en) 2023-06-25 2023-06-25 Positioning method and device for automatic driving vehicle and electronic equipment

Country Status (1)

Country Link
CN (1) CN116481548B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113071493B (en) * 2021-04-16 2023-10-31 阿波罗智联(北京)科技有限公司 Method, apparatus, storage medium and program product for lane change control of vehicle

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101764839B1 (en) * 2016-03-11 2017-08-03 재단법인대구경북과학기술원 System and method for lane level positioning
CN107703528A (en) * 2017-09-25 2018-02-16 武汉光庭科技有限公司 Low precision GPS vision positioning method and system is combined in automatic Pilot
CN110806215A (en) * 2019-11-21 2020-02-18 北京百度网讯科技有限公司 Vehicle positioning method, device, equipment and storage medium
CN113359171A (en) * 2021-05-17 2021-09-07 交控科技股份有限公司 Positioning method and device based on multi-sensor fusion and electronic equipment
CN115257787A (en) * 2022-07-29 2022-11-01 智道网联科技(北京)有限公司 Driving method and device for autonomous vehicle, electronic device, and storage medium
CN115507862A (en) * 2022-09-28 2022-12-23 智道网联科技(北京)有限公司 Lane line positioning method and device, electronic device and storage medium

Also Published As

Publication number Publication date
CN116481548A (en) 2023-07-25

Similar Documents

Publication Publication Date Title
EP3702230B1 (en) Method and apparatus for planning travelling path, and vehicle
US11260861B2 (en) Method, device and computer-readable storage medium with instructions for determining the lateral position of a vehicle relative to the lanes on a roadway
US11551552B2 (en) Distributing processing resources across local and cloud-based systems with respect to autonomous navigation
CN111311902B (en) Data processing method, device, equipment and machine readable medium
JP2021123330A (en) Method, device, electronic apparatus and computer readable medium for planning lane changing path
JP6419666B2 (en) Automatic driving device
CN112567439B (en) Method and device for determining traffic flow information, electronic equipment and storage medium
US20210174113A1 (en) Method for limiting object detection area in a mobile system equipped with a rotation sensor or a position sensor with an image sensor, and apparatus for performing the same
US20190266419A1 (en) Method, Device And Computer-Readable Storage Medium With Instructions For Determining The Lateral Position Of A Vehicle Relative To The Lanes Of A Road
CN113009539A (en) Automatic lane changing processing method for vehicle, vehicle and equipment
CN113771839B (en) Automatic parking decision planning method and system
CN112433531A (en) Trajectory tracking method and device for automatic driving vehicle and computer equipment
CN114547222A (en) Semantic map construction method and device and electronic equipment
CN114754778B (en) Vehicle positioning method and device, electronic equipment and storage medium
CN115962774A (en) Point cloud map updating method and device, electronic equipment and storage medium
KR20170070725A (en) Control Method for Predictive Shifting through Recognizing Road Configuration
CN114719840A (en) Vehicle intelligent driving guarantee method and system based on road characteristic fusion
CN116027375B (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN116481548B (en) Positioning method and device for automatic driving vehicle and electronic equipment
CN114910083B (en) Positioning method and positioning device, electronic equipment and storage medium
CN116295490A (en) Vehicle positioning method and device, electronic equipment and storage medium
CN115950441A (en) Fusion positioning method and device for automatic driving vehicle and electronic equipment
CN115900735A (en) Vehicle positioning method and device, vehicle and storage medium
JP7351321B2 (en) Sensor abnormality estimation device
CN115240453A (en) Driving control method, device and system for automatic driving vehicle and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant