WO2022147924A1 - Vehicle positioning method and apparatus, storage medium, and electronic device - Google Patents

Vehicle positioning method and apparatus, storage medium, and electronic device

Info

Publication number
WO2022147924A1
WO2022147924A1 (PCT/CN2021/086687)
Authority
WO
WIPO (PCT)
Prior art keywords
lane
target
target vehicle
driving
road
Prior art date
Application number
PCT/CN2021/086687
Other languages
English (en)
French (fr)
Inventor
肖海
尚进
Original Assignee
广州汽车集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州汽车集团股份有限公司
Priority to CN202180003892.9A (CN115038934A)
Publication of WO2022147924A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/06 Road conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10 Path keeping
    • B60W30/12 Lane keeping
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 Image sensing, e.g. optical camera
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2552/00 Input parameters relating to infrastructure
    • B60W2552/30 Road curve radius
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00 Input parameters relating to data
    • B60W2556/40 High definition maps
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3658 Lane guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology

Definitions

  • the present application relates to the field of vehicle control, and in particular, to a vehicle positioning method and device, a storage medium, and an electronic device.
  • with the aid of on-board lidar and a high-resolution map, an assisted positioning system can achieve vehicle positioning.
  • however, this method cannot be universally applied to more vehicles because of the high cost of using on-board lidar and high-resolution point cloud maps. Therefore, for most cost-constrained vehicles, accurate vehicle positioning still cannot be achieved within a controlled cost.
  • embodiments of the present application provide a vehicle positioning method and device, a storage medium, and an electronic device to at least solve the problem of low vehicle positioning accuracy under the premise of limited cost in the prior art.
  • a vehicle positioning method, comprising: pre-positioning a target vehicle in a driving state based on a global positioning system to obtain a prior position of the target vehicle in a map;
  • acquiring driving image data collected by a camera sensor of the target vehicle in the target area where the prior position is located; visually recognizing the driving image data using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle; determining the target lane where the target vehicle is located according to the lateral position information, and locating the driving position of the target vehicle in the target lane according to the longitudinal position information.
  • a vehicle positioning device, including: a pre-positioning unit, configured to pre-position a target vehicle in a driving state based on a global positioning system to obtain a prior position of the target vehicle in a map; an acquisition unit, configured to acquire driving image data collected by a camera sensor of the target vehicle in the target area where the prior position is located; an identification unit, configured to visually recognize the driving image data using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle; and a positioning unit, configured to determine the target lane where the target vehicle is located according to the lateral position information and locate the driving position of the target vehicle in the target lane according to the longitudinal position information.
  • a computer-readable storage medium storing a computer program, wherein the computer program is configured to perform the above vehicle positioning method when run.
  • an electronic device, including a memory and a processor, wherein a computer program is stored in the memory and the processor is configured to execute the above vehicle positioning method by means of the computer program.
  • in the embodiments of the present application, a camera sensor is used instead of vehicle-mounted lidar to obtain driving image data of the target vehicle in a driving state, and the lateral position information and longitudinal position information of the current position of the target vehicle are obtained in combination with the vector map of the area where the target vehicle is currently located, so that the lateral and longitudinal position information can be used to determine the target lane where the target vehicle is currently located and its driving position within that lane. That is to say, while saving usage cost, the target lane and driving position of the target vehicle in the driving state can be accurately identified by combining the driving image data collected by the camera sensor with the vector map, thereby improving vehicle positioning accuracy.
  • this overcomes the prior-art problem of relatively low vehicle positioning accuracy under the premise of limited cost.
  • FIG. 1 is a flowchart of an optional vehicle positioning method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application environment of an optional vehicle positioning method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an optional vehicle positioning method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of yet another optional vehicle positioning method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of yet another optional vehicle positioning method according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an optional vehicle positioning device according to an embodiment of the present application.
  • a vehicle positioning method is provided.
  • the vehicle positioning method includes:
  • S102 Pre-position a target vehicle in a driving state based on a global positioning system to obtain a prior position of the target vehicle in a map;
  • S104 Acquire driving image data collected by a camera sensor of the target vehicle in the target area where the prior position is located;
  • S106 Visually recognize the driving image data using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle;
  • S108 Determine the target lane where the target vehicle is located according to the lateral position information, and locate the driving position of the target vehicle in the target lane according to the longitudinal position information.
  • the vehicle positioning method may be applied to, but not limited to, a vehicle configured with an advanced driver assistance system (ADAS), so as to assist the vehicle in realizing precise positioning in real time during the automatic driving process.
  • during driving, the ADAS system uses various sensors installed on the vehicle to sense the surrounding environment at any time, collect data, identify, detect, and track static and dynamic objects, and combine this with the navigator's map data to perform systematic computation and analysis, so that the driver is aware of possible dangers in advance, thereby effectively improving the comfort and safety of driving.
  • the sensors may be, but are not limited to, low-cost camera sensors, and the map data may be, but is not limited to, a vector map in which size information and orientation information are recorded.
  • the above is just an example, and the present embodiment is not limited to this.
  • in this embodiment, a camera sensor is used instead of vehicle-mounted lidar to obtain driving image data of the target vehicle in the driving state, and the lateral position information and longitudinal position information of the current position of the target vehicle are obtained in combination with the vector map of the area where the target vehicle is currently located; the lateral and longitudinal position information are then used to determine the target lane where the target vehicle is currently located and its driving position within that lane. That is to say, while saving usage cost, the target lane and driving position of the target vehicle in the driving state can be accurately identified by combining the driving image data collected by the camera sensor with the vector map, improving vehicle positioning accuracy and overcoming the related-art problem of relatively low accuracy under cost constraints.
  • the global positioning system may include a global navigation satellite system (GNSS), and the pre-positioning result may be obtained in combination with the following components: an inertial measurement unit (IMU) and an odometer.
  • GNSS is a space-based radio navigation system that can provide users with all-weather three-dimensional coordinates and velocity and time information at any location on the earth's surface or in nearby space.
  • the IMU is usually composed of a gyroscope, an accelerometer, and an algorithm processing unit, and obtains the motion trajectory of the carrier (here, the vehicle) by measuring acceleration and rotation angle.
  • in the process of visually recognizing the driving image data using the vector map of the area where the target vehicle is currently located to obtain the lateral position information and longitudinal position information of the current position of the target vehicle, the method may include, but is not limited to, the use of visual recognition techniques and deep learning enabled by deep neural networks.
  • the visual recognition technology in this embodiment is a machine vision technology, which mainly uses a computer to simulate human visual functions, extracts information from a target image, processes and understands it, and finally uses it for actual detection, measurement and control.
  • the deep neural network in this embodiment is a model that simulates the way the human brain processes information; combined with the vector map, it identifies, from the driving image data of the target vehicle, the lane information of each lane on the road being driven.
  • the lane information herein may include, but is not limited to: the lane line marking each lane, the lane mark of the target lane where the target vehicle is currently located, and the offset distance of the target vehicle relative to the lane center of the target lane; in addition, it includes the error between the heading angle (azimuth) of the target vehicle and the tangent direction of the target lane.
  • the content of the above lane information is only an example, and this embodiment may further include other information, which is not limited here.
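As an illustration of the heading-angle error mentioned above, the sketch below computes the error between the vehicle's heading and the lane tangent derived from a fitted quadratic lane line x = a*y^2 + b*y + c (vehicle frame: y forward, x lateral). The function name and coefficient layout are hypothetical assumptions for illustration, not the patent's implementation:

```python
import math

def heading_error(vehicle_heading_rad, lane_coeffs, y=0.0):
    """Error between the vehicle's heading and the lane tangent at forward
    distance y, for a lane line fitted as x = a*y^2 + b*y + c.
    The tangent direction relative to the forward axis is atan(dx/dy)."""
    a, b, _ = lane_coeffs
    tangent = math.atan(2.0 * a * y + b)
    # Wrap the difference into (-pi, pi] so errors near +/-180 deg stay small.
    return (vehicle_heading_rad - tangent + math.pi) % (2.0 * math.pi) - math.pi

# Vehicle yawed 2 degrees relative to a straight lane line (a = b = 0).
err = heading_error(math.radians(2.0), [0.0, 0.0, 1.75])
print(round(math.degrees(err), 1))  # 2.0
```

For a curved lane the tangent term 2*a*y + b is nonzero, so the same function also reports the misalignment relative to the local lane direction rather than a fixed axis.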
  • in this embodiment, a camera sensor is used instead of vehicle-mounted lidar to obtain the driving image data of the target vehicle in the driving state, and the lateral position information and longitudinal position information of the current position of the target vehicle are obtained in combination with the vector map of the area where the target vehicle is currently located.
  • the lateral position information and longitudinal position information, combined with the driving image data collected by the camera sensor and the vector map, make it possible to accurately identify and locate the target lane and driving position of the target vehicle in the driving state, achieving the effect of improving vehicle positioning accuracy. The prior-art problem of relatively low vehicle positioning accuracy under the premise of limited cost is therefore overcome.
  • the steps of visually recognizing the driving image data using the vector map of the area where the target vehicle is currently located to obtain the lateral position information and longitudinal position information of the current position of the target vehicle include:
  • S1 Visually recognize the driving image data using a visual recognition technique to obtain a recognition result;
  • S2 Match the vector map against the recognition result to determine, in the driving image data, road information of the area where the target vehicle is currently located, wherein the road information includes a road network and fixed road objects located beside the road;
  • S3 Acquire the lateral position information and longitudinal position information of the current position of the target vehicle according to the road information.
  • the road network may include, but is not limited to, a road network formed by various lanes in different directions in an area.
  • the above-mentioned fixed road objects may include, but are not limited to, traffic lights, traffic signs, utility poles, and the like. The above is only an example for illustration, and this embodiment does not limit it.
  • the recognition result is obtained by visually recognizing, using a visual recognition technique, the driving image data collected by the camera sensor; the recognition result is then compared with the information in the ADAS vector map to obtain road information, such as the road network and fixed road objects (such as traffic lights and signs), of the area where the target vehicle 202 is located.
  • the above-mentioned road information will be used to determine the lateral position information and the longitudinal position information of the position where the target vehicle 202 is currently located.
  • the lateral position information and longitudinal position information of the target vehicle are thus precisely located, ensuring improved positioning accuracy while saving usage cost.
  • the step of obtaining the lateral position information of the current position of the target vehicle according to the road information includes:
  • S1 Use a deep neural network to detect and identify the road information to obtain lane information of all lanes on the road on which the target vehicle is currently driving, wherein the lane information includes: the lane lines marking each lane, the lane mark of the target lane where the target vehicle is currently located, and the offset distance of the target vehicle relative to the lane center of the current target lane;
  • the method further includes: differentially marking each lane on the road on which the target vehicle is currently driving in different ways, for example using an independent encoding channel or a bit mask.
  • the deep neural network here is trained using multiple sample data and is used, after identification and comparison, to detect the lane lines of the multiple lanes in the road information.
  • the network structure of the deep neural network herein may include, but is not limited to, convolutional layers, pooling layers, and fully connected layers, where each weight parameter in the network structure can be obtained through multiple iterations of training, which is not repeated here.
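The differential marking of lanes via bit masks could look like the following sketch, where a per-pixel lane-instance map (the kind of output a lane-detection network might produce) is split into one boolean mask per lane. The function name and the toy instance map are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def encode_lane_bitmasks(instance_map: np.ndarray) -> dict:
    """Pack each lane instance from a per-pixel lane-instance map into its
    own boolean bit mask, so downstream modules can address lanes
    independently.

    instance_map: HxW array where 0 = background and 1..N label lane lines.
    """
    lane_ids = [int(i) for i in np.unique(instance_map) if i != 0]
    return {i: instance_map == i for i in lane_ids}

# Toy 4x6 "image" with two lane lines (ids 1 and 2).
seg = np.array([
    [0, 1, 0, 0, 2, 0],
    [0, 1, 0, 0, 2, 0],
    [1, 0, 0, 0, 0, 2],
    [1, 0, 0, 0, 0, 2],
])
masks = encode_lane_bitmasks(seg)
print(sorted(masks))   # [1, 2]
print(int(masks[1].sum()))  # 4
```

An "independent encoding channel" variant would stack these masks along a channel axis instead of keeping them in a dictionary; the idea is the same.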
  • the lane lines of multiple lanes within the field of view of the camera are acquired by detecting and recognizing road information.
  • the lane lines here can be marked in different ways, but not limited to this.
  • each lane line may be displayed by dots.
  • different lane lines can also be identified by using different line types.
  • different lane lines can also be marked with different colors (not shown in the figure).
  • the identified lane information may further include street lights 304 , traffic lights 306 and the like.
  • FIG. 3 shows an example, which is not limited to this embodiment.
  • the deep neural network can be used to detect and recognize the road information to obtain the lane information of each lane.
  • the curve of each lane line is fitted by an algorithm module in the vehicle's on-board processor, and the lane parameters of each fitted curve are then generated according to the fitting function (e.g., a quadratic polynomial curve fitting function, a cubic polynomial curve fitting function, or cubic spline interpolation), where the lane parameters may include, but are not limited to, lane marks and the offset distance from the lane center.
  • the lane mark of the target lane where the target vehicle is currently located and the offset distance of the target vehicle from the lane center of the target lane are determined from all lanes by the visual recognition technique. For example, suppose the target lane of the target vehicle is identified as an exit lane, an inner lane, or an outer lane; further, within that lane, the target vehicle is located at a position offset by 10 cm from the lane center of the target lane.
  • the deep neural network is used to detect and identify the road, obtain the lane information of each lane, and then identify and obtain the lateral position information of the target lane where the target vehicle is located.
  • using the lane mark of the target lane and the offset distance of the target vehicle relative to the lane center of the target lane, the vehicle can be laterally positioned with centimeter-level precision.
  • obtaining the longitudinal position information of the current position of the target vehicle according to the road information includes:
  • S1 Obtain, from the road information, the lane parameters used to fit the target lane, including the turning radius of the current position of the target vehicle;
  • S2 Acquire, from the vector map, the reference lane radii recorded at a plurality of consecutive positions in the longitudinal direction of the target lane;
  • S3 Compare the turning radius with the reference lane radii to obtain a comparison result;
  • S4 Determine the driving position of the target vehicle in the target lane according to the comparison result, so as to obtain the longitudinal position information of the target vehicle.
  • the fitting parameters used for fitting the lane may be obtained, where the lane parameters may include, but are not limited to: the fitting coefficients, the offset distance from the lane center, the degree of error between the heading angle of the target vehicle and the direction of the lane tangent, and the turning radius of the current position of the target vehicle.
  • the target vehicle is located in the outer right lane, and the lane radius here is the radius R (its positioning accuracy is on the order of centimeters); this is reflected in the narrow boundary of the lateral Gaussian distribution in FIG. 4 (the ellipse where the target vehicle is located).
  • the reference lane radii r1, r2, ..., rn, recorded at a plurality of consecutive positions in the longitudinal direction of the target lane relative to the current position, are acquired from the vector map.
  • the reference lane radius ri that is closest to the lane radius R is found among r1 to rn.
  • the longitudinal position corresponding to this reference lane radius ri is determined as the driving position of the target vehicle, so that the longitudinal position of the target vehicle in the current target lane is precisely located.
  • the driving position corresponding to the target vehicle in the target lane is determined by comparing the lane radii, thereby realizing precise positioning of the target vehicle in the longitudinal direction.
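The radius comparison above amounts to a nearest-value lookup: given the fitted turning radius R and the reference radii r1...rn read from the vector map at consecutive longitudinal stations, pick the station whose radius is closest to R. A sketch, where the station spacing and radius values are hypothetical:

```python
def locate_by_radius(measured_radius, reference):
    """Given the fitted turning radius R at the vehicle and a list of
    (longitudinal_position, reference_radius) pairs sampled along the lane
    from the vector map, return the longitudinal position whose reference
    radius is closest to R."""
    return min(reference, key=lambda sr: abs(sr[1] - measured_radius))[0]

# Hypothetical reference radii at consecutive 10 m stations along the lane.
stations = [(0.0, 900.0), (10.0, 620.0), (20.0, 410.0), (30.0, 300.0)]
print(locate_by_radius(400.0, stations))  # 20.0, i.e. ~20 m along the lane
```

This lookup only disambiguates positions where the radius varies along the lane; on a perfectly straight stretch the radii are indistinguishable, which is why the embodiment also offers the maximum-likelihood and road-sign alternatives discussed below.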
  • the driving position of the target vehicle in the target lane is determined according to the comparison result to obtain longitudinal position information of the target vehicle, including:
  • the solution for determining the driving position according to the longitudinal driving area may include at least one of the following solutions:
  • the maximum likelihood method is applied to the following data to determine the driving position based on the longitudinal driving area: the actual curvature of the target lane, the reference curvature estimated for the target lane by the positioning system, and a sample curvature estimated by the target vehicle based on the sample points associated with the current position;
  • the driving position is estimated from the target road sign closest to the current position of the target vehicle;
  • the driving position is estimated from the longitudinal driving area using a carrier-phase positioning technique.
  • the longitudinal driving area in which the target vehicle is located can be determined first using the above method, and then the driving position of the target vehicle in the longitudinal direction can be estimated from the longitudinal driving area using any of the following methods:
  • the longitudinal localization problem can be solved using the maximum likelihood method.
  • curve 502 marks the actual road with true curvature on the map;
  • Gaussian distribution curve 504 represents the longitudinal position estimated from the fusion of GNSS, IMU, and odometer; as can be seen from the figure, there is a certain deviation between the two, for example, about 10.0 meters.
  • the diamond-shaped points on curve 506 represent the curvature estimated from a set of sample points around the current position of the target vehicle. Using the above curves 502 to 506, the maximum likelihood method can estimate the precise longitudinal position to within 1-2 meters.
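A toy sketch of this maximum-likelihood step, assuming curvature profiles sampled at 1 m stations: the curvature window measured around the vehicle (curve 506) is slid along the map's true curvature profile (curve 502), regularized by a Gaussian prior centered on the fused GNSS/IMU/odometer estimate (curve 504). The function, the synthetic road, and the parameter values are illustrative assumptions, not the patent's algorithm:

```python
import numpy as np

def ml_longitudinal_position(map_curv, sample_curv, prior_idx,
                             prior_sigma=10.0, meas_sigma=1e-6):
    """Maximum-likelihood longitudinal fix: for each candidate station i,
    score the curvature-window match against the map plus a Gaussian prior
    on i from the fused GNSS/IMU/odometer estimate, and return the best i."""
    n, m = len(map_curv), len(sample_curv)
    best_idx, best_ll = prior_idx, -np.inf
    for i in range(n - m + 1):
        resid = map_curv[i:i + m] - sample_curv
        ll = (-0.5 * np.sum((resid / meas_sigma) ** 2)        # curvature match
              - 0.5 * ((i - prior_idx) / prior_sigma) ** 2)   # GNSS/IMU prior
        if ll > best_ll:
            best_idx, best_ll = i, ll
    return best_idx

# Synthetic road: curvature varies smoothly over 100 stations 1 m apart.
map_curv = 1e-3 * np.sin(np.linspace(0.0, np.pi, 100))
true_idx = 60                                # true longitudinal station
sample = map_curv[true_idx:true_idx + 10]    # curvature seen near the vehicle
prior = 50                                   # fused GNSS/IMU estimate, ~10 m off
print(ml_longitudinal_position(map_curv, sample, prior))  # 60
```

Even with the prior centered 10 m away, the curvature match dominates wherever the road's curvature varies, pulling the estimate back to the true station, which mirrors the 10 m deviation being corrected in FIG. 5.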
  • the embodiments of the present application further perform precise longitudinal positioning on the longitudinal driving area by combining different methods, so as to obtain the driving position of the target vehicle in the target lane, which ensures the accuracy of vehicle positioning.
  • a vehicle positioning device for implementing the above-mentioned vehicle positioning method is also provided.
  • the device includes:
  • a pre-positioning unit 602, configured to pre-position the target vehicle in a driving state based on a global positioning system to obtain the prior position of the target vehicle in a map;
  • an acquisition unit 604 configured to acquire the driving image data collected by the camera sensor in the target vehicle in the target area where the prior position is located;
  • the identification unit 606 is configured to visually recognize the driving image data using the vector map of the target area to obtain the lateral position information and longitudinal position information of the current position of the target vehicle;
  • the positioning unit 608 is configured to determine the target lane where the target vehicle is located according to the lateral position information, and locate the running position of the target vehicle in the target lane according to the longitudinal position information.
  • the identification unit includes:
  • a recognition module, configured to visually recognize the driving image data using a visual recognition technique to obtain a recognition result;
  • a matching module, configured to match the vector map against the recognition result to determine, in the driving image data, road information of the area where the target vehicle is currently located, wherein the road information includes a road network and fixed road objects located beside the road; and
  • an obtaining module, configured to obtain the lateral position information and longitudinal position information of the current position of the target vehicle according to the road information.
  • a computer program product or computer program comprising computer instructions, and the computer instructions are stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the above vehicle positioning method, wherein the computer program is configured to perform, when run, the steps in any of the above method embodiments.
  • the computer-readable storage medium described above may be arranged to store a computer program for performing the following steps:
  • S1 Pre-position a target vehicle in a driving state based on a global positioning system to obtain a prior position of the target vehicle in a map;
  • S2 Acquire driving image data collected by a camera sensor of the target vehicle in the target area where the prior position is located;
  • S3 Visually recognize the driving image data using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle;
  • S4 Determine the target lane where the target vehicle is located according to the lateral position information, and locate the driving position of the target vehicle in the target lane according to the longitudinal position information.
  • the steps of the method in the foregoing embodiment may be completed by instructing hardware related to the terminal device through a program.
  • the program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read only memory (ROM), a random access memory (RAM), a magnetic or optical disk, and the like.
  • if the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and
  • includes several instructions to cause one or more computer devices (which may be personal computers, servers, network devices, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the disclosed client can be implemented in other ways.
  • the apparatus embodiments described above are merely exemplary.
  • the division of the units is only a logical function division, and there may be other division methods in actual implementation.
  • multiple units or components may be combined or integrated into another system, or some features may be omitted, or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be realized through some interfaces, and the indirect coupling or communication connection between units or modules may be realized in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiments of the present application.
  • each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

A vehicle positioning method and apparatus, a computer-readable storage medium, and an electronic device. The method includes: pre-positioning a target vehicle in a driving state on the basis of a global positioning system to obtain a prior position of the target vehicle in a map (S102); acquiring driving image data collected by a camera sensor of the target vehicle in a target area where the prior position is located (S104); performing visual recognition on the driving image data by using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle (S106); and determining, according to the lateral position information, a target lane in which the target vehicle is located, and locating, according to the longitudinal position information, a driving position of the target vehicle within the target lane (S108). The method solves the technical problem in the prior art of low vehicle positioning accuracy under cost constraints.

Description

Vehicle positioning method and apparatus, storage medium, and electronic device
This patent application claims priority to U.S. Patent Application No. 17/142,212, filed on January 5, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of vehicle control, and in particular to a vehicle positioning method and apparatus, a storage medium, and an electronic device.
Background
Nowadays, intelligent driving technology is applied to more and more vehicles to assist drivers in completing the driving process more safely and reliably. To achieve this, the current position of the vehicle usually needs to be located precisely.
At present, locating the current position of a vehicle relies heavily on expensive vehicle-mounted lidar (for light detection and ranging) and expensive pre-drawn high-resolution point cloud maps. That is, vehicle positioning is achieved by an auxiliary positioning system with the help of vehicle-mounted lidar and high-resolution maps. However, because the cost of using vehicle-mounted lidar and high-resolution point cloud maps is high, this method cannot be widely applied to more vehicles. Therefore, for most cost-constrained vehicles, precise vehicle positioning still cannot be achieved within a controlled cost.
In view of the above problem, no effective solution has yet been proposed.
Technical Problem
In the prior art, locating the current position of a vehicle relies heavily on expensive vehicle-mounted lidar (for light detection and ranging) and expensive pre-drawn high-resolution point cloud maps. That is, vehicle positioning is achieved by an auxiliary positioning system with the help of vehicle-mounted lidar and high-resolution maps. However, because the cost of using vehicle-mounted lidar and high-resolution point cloud maps is high, this method cannot be widely applied to more vehicles. Therefore, for most cost-constrained vehicles, precise vehicle positioning still cannot be achieved within a controlled cost.
Technical Solution
In view of the above technical problem, embodiments of this application provide a vehicle positioning method and apparatus, a storage medium, and an electronic device, so as to at least solve the problem in the prior art of low vehicle positioning accuracy under cost constraints.
According to one aspect of the embodiments of this application, a vehicle positioning method is provided, including: pre-positioning a target vehicle in a running state on the basis of a global positioning system to obtain a prior position of the target vehicle in a map;
acquiring driving image data collected by a camera sensor of the target vehicle in a target area where the prior position is located; performing visual recognition on the driving image data by using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle; and determining, according to the lateral position information, a target lane in which the target vehicle is located, and locating, according to the longitudinal position information, a driving position of the target vehicle within the target lane.
According to another aspect of the embodiments of this application, a vehicle positioning apparatus is further provided, including: a pre-positioning unit configured to pre-position a target vehicle in a running state on the basis of a global positioning system to obtain a prior position of the target vehicle in a map; an acquisition unit configured to acquire driving image data collected by a camera sensor of the target vehicle in a target area where the prior position is located; a recognition unit configured to perform visual recognition on the driving image data by using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle; and a positioning unit configured to determine, according to the lateral position information, a target lane in which the target vehicle is located, and to locate, according to the longitudinal position information, a driving position of the target vehicle within the target lane.
According to yet another aspect of the embodiments of this application, a computer-readable storage medium is further provided. The computer-readable storage medium stores a computer program, and the computer program is configured to execute the vehicle positioning method when run.
According to yet another aspect of the embodiments of this application, an electronic device is further provided, including a memory and a processor. The memory stores a computer program, and the processor is configured to execute the vehicle positioning method by means of the computer program.
Beneficial Effects
In the embodiments of this application, a camera sensor is used instead of vehicle-mounted lidar to acquire driving image data of the target vehicle in a running state, and the lateral position information and longitudinal position information of the current position of the target vehicle are obtained in combination with the vector map of the area where the target vehicle is currently located, so that the target lane in which the target vehicle is currently located and the driving position within the target lane are determined by using the lateral position information and the longitudinal position information. That is to say, on the premise of saving costs, the target lane and the driving position of the target vehicle in the driving state are accurately identified by combining the driving image data collected by the camera sensor with the vector map, thereby improving vehicle positioning accuracy and overcoming the problem in the prior art of relatively low vehicle positioning accuracy under cost constraints.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of this application and constitute a part of this application. They are used together with the embodiments of this application to explain this application, rather than to limit it. In the drawings:
FIG. 1 is a flowchart of an optional vehicle positioning method according to an embodiment of this application;
FIG. 2 is a schematic diagram of an application environment of an optional vehicle positioning method according to an embodiment of this application;
FIG. 3 is a schematic diagram of an optional vehicle positioning method according to an embodiment of this application;
FIG. 4 is a schematic diagram of another optional vehicle positioning method according to an embodiment of this application;
FIG. 5 is a schematic diagram of yet another optional vehicle positioning method according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of an optional vehicle positioning apparatus according to an embodiment of this application.
Detailed Description of the Embodiments
To enable those skilled in the art to better understand the technical solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the scope of protection of this application.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of this application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of this application described here can be implemented in orders other than those illustrated or described here. In addition, the terms "including" and "having" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device including a series of steps or units is not necessarily limited to those steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
According to one aspect of the embodiments of this application, a vehicle positioning method is provided. Optionally, as an optional implementation, as shown in FIG. 1, the vehicle positioning method includes:
S102: pre-position a target vehicle in a running state on the basis of a global positioning system to obtain a prior position of the target vehicle in a map;
S104: acquire driving image data collected by a camera sensor of the target vehicle in a target area where the prior position is located;
S106: perform visual recognition on the driving image data by using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle;
S108: determine, according to the lateral position information, a target lane in which the target vehicle is located, and locate, according to the longitudinal position information, a driving position of the target vehicle within the target lane.
Optionally, in this embodiment, the vehicle positioning method can be applied to, but is not limited to, a vehicle equipped with an advanced driver assistance system (ADAS), so as to assist the vehicle in achieving precise positioning in real time during automated driving. The ADAS uses various sensors mounted on the vehicle to sense the surrounding environment at any time while the vehicle is driving, collect data, identify, detect, and track static and dynamic objects, and collect map data from the navigator for system computation and analysis, so that the driver becomes aware of possible dangers in advance, thereby effectively improving driving comfort and safety. The sensor may be, but is not limited to, a low-cost camera sensor, and the map data may be, but is not limited to, a vector map in which both size information and direction information are recorded. The above is merely an example, and this implementation is not limited thereto.
In this implementation, a camera sensor is used instead of vehicle-mounted lidar to acquire driving image data of the target vehicle in the driving state, the lateral position information and longitudinal position information of the current position of the target vehicle are obtained in combination with the vector map of the area where the target vehicle is currently located, and the target lane in which the target vehicle is currently located and the driving position within the target lane are determined by using the lateral position information and the longitudinal position information. That is to say, on the premise of saving costs, the target lane and the driving position of the target vehicle in the driving state are accurately identified by combining the driving image data collected by the camera sensor with the vector map, improving vehicle positioning accuracy and thereby overcoming the problem in the related art of relatively low vehicle positioning accuracy under cost constraints.
Optionally, in this embodiment, the global positioning system may include a global navigation satellite system (GNSS), and the vector map may be obtained with the help of the following components: an inertial measurement unit (IMU) and an odometer. GNSS is a space-based radio navigation system that can provide users with all-weather three-dimensional coordinates, velocity, and time information at any position on or near the Earth's surface. An IMU usually consists of gyroscopes, accelerometers, and an algorithm processing unit, and obtains the motion trajectory of the body it is mounted on by measuring acceleration and rotation angle.
In addition, in this embodiment, the process of performing visual recognition on the driving image data by using the vector map of the area where the target vehicle is currently located, to obtain the lateral position information and longitudinal position information of the current position of the target vehicle, may include, but is not limited to, using visual recognition technology and deep learning implemented by a deep neural network.
The visual recognition technology in this embodiment is a machine vision technology that mainly uses a computer to simulate the human visual function, extracts information from a target image, processes and understands it, and finally uses it for actual detection, measurement, and control.
Here, the deep neural network in this embodiment simulates the way the human brain thinks. In combination with the vector map, lane information of each lane on the driving road, located by the deep neural network, is identified from the driving image data of the target vehicle. The lane information herein may include, but is not limited to: the lane lines marked for each lane, the lane identifier of the target lane in which the target vehicle is currently located, and the offset distance of the target vehicle relative to the lane center of the target lane; in addition, it also includes the heading angle of the target vehicle and the error between the heading angle and the tangent of the target lane. The above content of the lane information is merely an example; this embodiment may further include other information, which is not limited here.
According to the embodiments of this application, a camera sensor is used instead of vehicle-mounted lidar to acquire driving image data of the target vehicle in a running state, and the lateral position information and longitudinal position information of the current position of the target vehicle are obtained in combination with the vector map of the area where the target vehicle is currently located. In this way, by using the lateral position information and the longitudinal position information in combination with the driving image data collected by the camera sensor and the vector map, the target lane and the driving position of the target vehicle in the driving state are accurately identified and located, improving vehicle positioning accuracy. The problem in the prior art of relatively low vehicle positioning accuracy under cost constraints is thereby overcome.
As an optional solution, the step of performing visual recognition on the driving image data by using the vector map of the area where the target vehicle is currently located, to obtain the lateral position information and longitudinal position information of the current position of the target vehicle, includes:
S1: perform visual recognition on the driving image data by using visual recognition technology to obtain a recognition result;
S2: match the vector map against the recognition result to determine road information of the area in which the target vehicle is currently located in the driving image data, where the road information includes a road network and fixed road objects located beside the road;
S3: obtain the lateral position information and longitudinal position information of the current position of the target vehicle according to the road information.
Optionally, in this embodiment, the road network may include, but is not limited to, a network of roads formed by various lanes in different directions in an area. The fixed road objects may include, but are not limited to, traffic lights, traffic signs, utility poles, and the like. The above is merely an example, and this embodiment is not limited thereto.
Specifically, referring to the scenario shown in FIG. 2, assume that a recognition result is obtained after visual recognition is performed, by using visual recognition technology, on the driving image data collected by the camera sensor, and that road information such as the road network and fixed road objects (e.g., traffic lights and signs) of the area where the target vehicle 202 is located is obtained after the recognition result is compared with the information in the ADAS vector map. The lateral position information and longitudinal position information of the current position of the target vehicle 202 are then determined by using the above road information.
In a precise vector map such as OpenDrive, various key geometric information related to a road segment (for example, the turning radius of the current lane, the identifier of the current lane, and the like) can be obtained intuitively by retrieval or scaling from a given reference point on the map. For example, as shown in FIG. 2, an elliptical 2D Gaussian distribution combines the conventional global positioning system with inertial measurement unit and odometer measurements, which is demonstrated in the figure by the size of the Gaussian kernel. Based on the above combined information, it can be identified through analysis and estimation that the width of the lane is greater than 3.5 m. In addition, as shown in FIG. 2, it can also be determined that the target lane in which the target vehicle 202 is currently located is the outermost lane (i.e., the lateral position information) and that the driving position is on a curve (i.e., the longitudinal position information).
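The narrowing step described above, using the GNSS/IMU/odometer prior to restrict which map lanes need to be considered, can be sketched roughly as follows. This is an illustrative sketch only, not code from the patent: the lane representation (a dict of centerline points), the axis-aligned Gaussian covariance, the function names, and the 3-sigma gate are all assumptions made for the example.

```python
import math

def mahalanobis_2d(point, mean, sigma_lat, sigma_lon):
    """Distance of a lane point from the Gaussian prior, in units of sigma
    (axis-aligned covariance is assumed for simplicity)."""
    dx = (point[0] - mean[0]) / sigma_lon
    dy = (point[1] - mean[1]) / sigma_lat
    return math.hypot(dx, dy)

def candidate_lanes(lanes, prior_mean, sigma_lat, sigma_lon, k=3.0):
    """Keep only lanes whose closest centerline point lies inside the
    k-sigma ellipse of the GNSS/IMU/odometer prior position."""
    result = []
    for lane_id, centerline in lanes.items():
        d = min(mahalanobis_2d(p, prior_mean, sigma_lat, sigma_lon)
                for p in centerline)
        if d <= k:
            result.append(lane_id)
    return result
```

Only the surviving candidates would then be disambiguated by the camera-based lateral positioning described in the following sections.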
In the embodiments of this application, by combining the visual recognition result with the vector map, the lateral position information and longitudinal position information of the position of the target vehicle are precisely located, ensuring improved positioning accuracy while saving costs.
As an optional solution, the step of obtaining the lateral position information of the current position of the target vehicle according to the road information includes:
S1: detect and recognize the road information by using a deep neural network to obtain lane information of all lanes on the road on which the target vehicle is currently driving, where the lane information includes: the lane lines marked for each lane, the lane identifier of the target lane in which the target vehicle is currently located, and the offset distance of the target vehicle relative to the lane center of the current target lane;
S2: generate the lateral position information according to the lane identifier of the target lane and the offset distance.
Optionally, in this embodiment, after the lane information of all lanes on the road on which the target vehicle is currently driving is obtained, the method further includes: differentially marking each lane on the road on which the target vehicle is currently driving in different marking manners by using independent encoding channels or bit masks.
It should be noted that the deep neural network here is obtained by training with multiple samples of data and is used to detect, after recognition and comparison, the lane lines of multiple lanes in the road information. The network structure of the deep neural network herein may include, but is not limited to, convolutional layers, pooling layers, and fully connected layers, and each weight parameter in the described network structure can be obtained through multiple iterations of training. Details are not repeated here.
In addition, in this embodiment, the lane lines of multiple lanes within the field of view of the camera (for example, straight lanes or curved lanes) are obtained by detecting and recognizing the road information. The lane lines here may be, but are not limited to being, marked in different manners.
For example, assume that the lane information of all lanes on the road on which the target vehicle 302 is located can be as shown in FIG. 3: as shown in FIG. 3(a), each lane line can be displayed by points, and as shown in FIG. 3(b), different lane lines can also be identified by using different line types. In addition, different lane lines can also be marked with different colors (not shown in the figure). To this end, in this embodiment, additional encoding channels or separate bit masks (consistent with the captured image size) can be used, but are not limited to being used, to avoid the time-consuming pattern extraction process otherwise associated with each individual lane line; for example, it is no longer necessary to slide a window over the image again to identify the individual pixels belonging to each lane line.
In addition, as shown in FIG. 3, the recognized lane information may also include street lamps 304, traffic lights 306, and the like. FIG. 3 shows one example, and this embodiment is not limited thereto.
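The per-lane bit-mask encoding mentioned above can be illustrated with a small sketch. This is a hypothetical example, not the patent's implementation: it assumes each detected lane line is already available as a list of (row, column) pixel coordinates, and the function name is invented for illustration.

```python
import numpy as np

def lanes_to_bitmasks(lane_pixels, height, width):
    """Encode each detected lane line in its own binary channel of shape
    (H, W, num_lanes), matching the captured image size. Downstream code
    can then read a lane's pixels directly from its channel instead of
    re-scanning the whole image with a sliding window."""
    masks = np.zeros((height, width, len(lane_pixels)), dtype=np.uint8)
    for channel, pixels in enumerate(lane_pixels):
        for row, col in pixels:
            masks[row, col, channel] = 1
    return masks
```

Each channel is independent, so lanes that cross or touch in the image stay separable.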
Please refer to the specific description given in the following embodiment:
After visual recognition is performed on the driving image data to obtain the recognition result, and the recognition result is matched against the vector map to obtain the road information, the road information can be detected and recognized by the deep neural network to obtain the lane information of each lane. An algorithm module in the vehicle's on-board processor fits a curve to each lane line and then generates, according to the fitting function (a quadratic polynomial curve fitting function, a cubic polynomial curve fitting function, cubic spline interpolation, or the like), lane parameters fitting each curve, where the lane parameters here may include, but are not limited to, the lane identifier and the offset distance from the lane center.
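The fitting step just described can be sketched as follows. This is a minimal illustration under assumptions not stated in the patent: lane-line points are given in a vehicle frame (x ahead, y to the left, in meters), a quadratic polynomial is used, and the turning radius is derived from the standard curvature formula R = (1 + y'^2)^(3/2) / |y''| evaluated at the vehicle position x = 0. The function names are invented for the example.

```python
import numpy as np

def fit_lane(xs, ys, degree=2):
    """Fit a polynomial y(x) to lane-line points in the vehicle frame."""
    return np.polyfit(xs, ys, degree)

def lane_parameters(coeffs):
    """Lane parameters at the vehicle position x = 0: lateral offset from
    the fitted line, heading error versus the lane tangent, and turning
    radius R = (1 + y'^2)^(3/2) / |y''|."""
    poly = np.poly1d(coeffs)
    offset = poly(0.0)
    slope = poly.deriv(1)(0.0)
    curvature = abs(poly.deriv(2)(0.0)) / (1.0 + slope**2) ** 1.5
    radius = float("inf") if curvature == 0 else 1.0 / curvature
    heading_error = np.arctan(slope)
    return offset, heading_error, radius
```

On a gentle arc, a quadratic is a good local approximation; for longer or sharper segments the cubic or spline variants mentioned above would be fitted the same way.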
In addition, the lane identifier of the target lane in which the target vehicle is currently located and the offset distance of the target vehicle from the lane center of the target lane are determined from all the lanes by visual recognition technology. For example, assume that the target lane of the target vehicle is recognized as an exit lane, an inner lane, or an outer lane; furthermore, its position within the lane is recognized, for example, the target vehicle is located at an offset of 10 cm from the lane center of the target lane.
Through the embodiments of this application, the road is detected and recognized by using a deep neural network to obtain the lane information of each lane, and the lateral position information of the target lane in which the target vehicle is located is then obtained by recognition. For example, the lane identifier of the target lane and the offset distance of the target vehicle relative to the lane center of the target lane can be precisely located in the lateral direction, and the accuracy can reach the centimeter level.
As an optional solution, obtaining the longitudinal position information of the current position of the target vehicle according to the road information includes:
S1: when the lateral position information is obtained, obtain, according to the road information, lane parameters of a fitted lane corresponding to the target lane in which the target vehicle is located, where the lane parameters include a target lane radius;
S2: read a reference lane radius of the target lane from the vector map;
S3: compare the target lane radius with the reference lane radius;
S4: determine, according to the comparison result, the driving position of the target vehicle within the target lane to obtain the longitudinal position information of the target vehicle.
A specific description is given with reference to the following embodiment: when the target lane in which the target vehicle is located has been determined, fitting parameters for fitting the lane can be obtained, where the lane parameters for fitting the lane here may include, but are not limited to: the fitting coefficients, the offset distance from the lane center, the error between the heading angle of the target vehicle and the tangent direction of the lane, and the turning radius at the current position of the target vehicle.
Assume that the lateral positioning result of the target vehicle is as shown in FIG. 4: the target vehicle is located in the right outer lane, and the lane radius here is the radius R (whose positioning accuracy is on the order of centimeters). This is reflected in the narrow boundary of the lateral Gaussian distribution in FIG. 4 (the ellipse in which the target vehicle is located).
In addition, reference lane radii r1, r2, ..., rn, recorded at multiple consecutive positions in the longitudinal direction of the target lane based on the current position, are obtained from the vector map. By sequentially comparing the radius R with the reference lane radii r1, r2, ..., rn and using a closest-match mechanism in the longitudinal direction, the longitudinal position of the target vehicle in the current target lane is precisely located; that is, the longitudinal position corresponding to the reference lane radius ri closest to the lane radius R is determined as the driving position of the target vehicle.
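The closest-match mechanism above reduces to a one-dimensional nearest-neighbor search over the radius profile stored in the vector map. The sketch below is illustrative, with an assumed representation of the map profile as (longitudinal position s, reference radius r) pairs; the function name is invented for the example.

```python
def locate_longitudinal(fitted_radius, reference_profile):
    """Closest-match mechanism: reference_profile is a list of
    (longitudinal_position_s, reference_radius_r) pairs read from the
    vector map along the target lane; return the longitudinal position
    whose reference radius is closest to the radius fitted from the
    camera image."""
    best_s, _ = min(reference_profile,
                    key=lambda sr: abs(sr[1] - fitted_radius))
    return best_s
```

Note that this match is only unambiguous where the radius varies along the lane; segments of constant radius are handled by the fallback methods described in the next section.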
In the embodiments of this application, the driving position of the target vehicle in the target lane is determined by comparing lane radii, thereby achieving precise positioning of the target vehicle in the longitudinal direction.
As an optional solution, determining, according to the comparison result, the driving position of the target vehicle within the target lane to obtain the longitudinal position information of the target vehicle includes:
S1: determine, according to the comparison result, a longitudinal driving region of the target vehicle within the target lane;
S2: determine the driving position according to the longitudinal driving region.
Optionally, in this embodiment, the solution for determining the driving position according to the longitudinal driving region may include at least one of the following:
1. when the lane curvature of the target lane is not constant, use a maximum likelihood method on the following data to determine the driving position according to the longitudinal driving region: the actual curvature of the target lane, a reference curvature estimated for the target lane by the positioning system, and a sampled curvature estimated by the target vehicle based on sampling points associated with the current position;
2. when the longitudinal driving region includes map landmarks, estimate the driving position according to the target landmark closest to the current position of the target vehicle;
3. when the lane curvature of the target lane is constant and the longitudinal driving region includes no map landmarks, estimate the driving position by using positioning marker data stored in the positioning system;
4. estimate the driving position from the longitudinal driving region by using a carrier-phase search technique.
It should be noted that, in the above embodiment, since the processing costs of these methods are successively higher than that of the lane radius method, the longitudinal driving region in which the target vehicle is located can first be determined by using the lane radius method, and the driving position of the target vehicle in the longitudinal direction can then be estimated from the longitudinal driving region by using any one of the following methods:
1) In a road segment with non-constant (irregular) curvature, the maximum likelihood method can be used to solve the longitudinal positioning problem. As shown in FIG. 5, curve 502 is the actual road marked with true curvature on the map; Gaussian distribution curve 504 is the longitudinal position estimated from the fusion of the global navigation satellite system, the inertial measurement unit, and the odometer, and it can be seen from the figure that there is a certain deviation between the two, for example, about 10.0 meters. The diamond-shaped points in curve 506 are curvatures estimated from a set of sampling points around the current position of the target vehicle. Using the above curves 502 to 506, the maximum likelihood method can be used to achieve precise longitudinal positioning within 1 to 2 meters.
2) In road segments marked with map landmarks, the landmark (e.g., a utility pole) closest to the position of the target vehicle is identified by using visual recognition technology, and these recognition results are then used to update the longitudinal estimate. Depending on visual performance, camera viewing angle, processing speed, and map accuracy, precise longitudinal positioning within 2 to 3 meters is achieved.
3) Co-positioning is performed by combining the above methods 1) and 2).
4) On road segments with constant curvature (for example, straight lines or circular arcs), longitudinal positioning can be performed directly by using method 2) above.
5) On road segments with constant curvature and no obvious landmarks, other visual cues can be obtained to improve the positioning of the global navigation satellite system, the inertial measurement unit, the odometer, and the like. For example, a city/horizon silhouette can be used, as long as it is represented in some way on the map.
6) The global navigation satellite system, the inertial measurement unit, and the odometer alone are used only when none of the above tools is available. In addition, real-time kinematic (RTK) positioning technology runs in parallel with all of the above mechanisms; here, other radio-based positioning enhancements (such as V2X mechanisms) can also be used to achieve positioning of the target vehicle itself.
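Method 1) above, maximum-likelihood alignment of a sampled curvature window against the map's curvature profile under a Gaussian prior from the GNSS/IMU/odometer fusion, can be sketched as follows. This is an assumed discrete formulation, not the patent's implementation: the map curvature is a uniformly sampled profile, both the curvature measurement noise and the longitudinal prior are modeled as Gaussian, and the noise parameters and function name are invented for the example.

```python
import numpy as np

def ml_longitudinal_shift(map_curvature, sampled_curvature, prior_index,
                          prior_sigma, meas_sigma, step=1.0):
    """Slide the sampled curvature window along the map curvature profile
    and maximize the Gaussian log-likelihood: a curvature-match term plus
    a prior term from the fused GNSS/IMU/odometer longitudinal estimate.
    Returns the best start index into map_curvature."""
    n = len(sampled_curvature)
    best_idx, best_ll = None, -np.inf
    for idx in range(len(map_curvature) - n + 1):
        window = np.asarray(map_curvature[idx:idx + n])
        residual = window - np.asarray(sampled_curvature)
        ll = -np.sum(residual**2) / (2.0 * meas_sigma**2)   # curvature term
        ll -= ((idx - prior_index) * step) ** 2 / (2.0 * prior_sigma**2)
        if ll > best_ll:
            best_idx, best_ll = idx, ll
    return best_idx
```

With a distinctive curvature signature the match term dominates and pulls the estimate away from a biased prior, which is the behavior FIG. 5 illustrates.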
In the embodiments of this application, precise longitudinal positioning within the longitudinal driving region is further performed by combining different methods, so that the driving position of the target vehicle within the target lane is obtained, ensuring the accuracy of vehicle positioning.
It should be noted that, for simplicity of description, each of the foregoing method embodiments is expressed as a series of combined actions. However, those skilled in the art should know that this application is not limited by the described order of actions, because according to this application some steps can be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
According to another aspect of the embodiments of this application, a vehicle positioning apparatus for implementing the above vehicle positioning method is further provided. As shown in FIG. 6, the apparatus includes:
1) a pre-positioning unit 602, configured to pre-position a target vehicle in a running state on the basis of a global positioning system to obtain a prior position of the target vehicle in a map;
2) an acquisition unit 604, configured to acquire driving image data collected by a camera sensor of the target vehicle in a target area where the prior position is located;
3) a recognition unit 606, configured to perform visual recognition on the driving image data by using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle;
4) a positioning unit 608, configured to determine, according to the lateral position information, a target lane in which the target vehicle is located, and to locate, according to the longitudinal position information, a driving position of the target vehicle within the target lane.
Optionally, for a specific embodiment of the vehicle positioning apparatus provided in the embodiments of this application, reference may be made to the above method embodiments, and details are not repeated here.
As an optional solution, the recognition unit includes:
1) a recognition module, configured to perform visual recognition on the driving image data by using visual recognition technology to obtain a recognition result;
2) a matching module, configured to match the vector map against the recognition result to determine road information of the area in which the target vehicle is currently located in the driving image data, where the road information includes a road network and fixed road objects located beside the road;
3) an acquisition module, configured to obtain the lateral position information and longitudinal position information of the current position of the target vehicle according to the road information.
Optionally, for a specific embodiment of the vehicle positioning apparatus provided in the embodiments of this application, reference may be made to the above method embodiments, and details are not repeated here.
According to one aspect of this application, a computer program product or computer program is provided. The computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the above vehicle positioning method, where the computer program is configured to perform the steps in any of the above method embodiments when run.
Optionally, in this embodiment, the above computer-readable storage medium may be arranged to store a computer program for performing the following steps:
S1: pre-position a target vehicle in a running state on the basis of a global positioning system to obtain a prior position of the target vehicle in a map;
S2: acquire driving image data collected by a camera sensor of the target vehicle in a target area where the prior position is located;
S3: perform visual recognition on the driving image data by using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle;
S4: determine, according to the lateral position information, a target lane in which the target vehicle is located, and locate, according to the longitudinal position information, a driving position of the target vehicle within the target lane.
Optionally, in this embodiment, those of ordinary skill in the art can understand that all or some of the steps of the methods in the above embodiments can be completed by a program instructing hardware related to a terminal device. The program can be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The serial numbers of the embodiments of this application are for description only and do not represent the superiority or inferiority of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they can be stored in the above computer-readable storage medium. Based on such an understanding, the technical solution of this application in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application.
In the above embodiments of this application, the description of each embodiment has its own emphasis. For parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client can be implemented in other ways. The apparatus embodiments described above are merely exemplary. For example, the division of the units is only a logical functional division, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be realized through some interfaces, and the indirect coupling or communication connection between units or modules may be realized in electrical or other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solutions of the embodiments of this application.
In addition, each functional unit in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above are only preferred implementations of this application. It should be noted that those of ordinary skill in the art can make further improvements and refinements without departing from the principles of this application, and these improvements and refinements shall also be regarded as falling within the scope of protection of this application.
Industrial Applicability
In the embodiments of this application, a camera sensor is used instead of vehicle-mounted lidar to acquire driving image data of the target vehicle in a running state, and the lateral position information and longitudinal position information of the current position of the target vehicle are obtained in combination with the vector map of the area where the target vehicle is currently located, so that the target lane in which the target vehicle is currently located and the driving position within the target lane are determined by using the lateral position information and the longitudinal position information. That is to say, on the premise of saving costs, the target lane and the driving position of the target vehicle in the driving state are accurately identified by combining the driving image data collected by the camera sensor with the vector map, thereby improving vehicle positioning accuracy and overcoming the problem in the prior art of relatively low vehicle positioning accuracy under cost constraints.

Claims (10)

  1. A vehicle positioning method, comprising:
    pre-positioning a target vehicle in a running state on the basis of a global positioning system to obtain a prior position of the target vehicle in a map;
    acquiring driving image data collected by a camera sensor of the target vehicle in a target area where the prior position is located;
    performing visual recognition on the driving image data by using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle; and
    determining, according to the lateral position information, a target lane in which the target vehicle is located, and locating, according to the longitudinal position information, a driving position of the target vehicle within the target lane.
  2. The method according to claim 1, wherein the step of performing visual recognition on the driving image data by using the vector map of the target area to obtain the lateral position information and longitudinal position information of the current position of the target vehicle comprises:
    performing visual recognition on the driving image data by using visual recognition technology to obtain a recognition result;
    matching the vector map against the recognition result to determine road information of the area in which the target vehicle is currently located in the driving image data, wherein the road information comprises a road network and fixed road objects located beside the road; and
    obtaining the lateral position information and the longitudinal position information of the current position of the target vehicle according to the road information.
  3. The method according to claim 2, wherein the step of obtaining the lateral position information of the current position of the target vehicle according to the road information comprises:
    detecting and recognizing the road information by using a deep neural network to obtain lane information of all lanes on the road on which the target vehicle is currently driving, wherein the lane information comprises: lane lines marked for each lane, a lane identifier of the target lane in which the target vehicle is currently located, and an offset distance of the target vehicle relative to the lane center of the current target lane; and
    generating the lateral position information according to the lane identifier of the target lane and the offset distance.
  4. The method according to claim 2, wherein the step of obtaining the longitudinal position information of the current position of the target vehicle according to the road information comprises:
    when the lateral position information is obtained, obtaining, according to the road information, lane parameters of a fitted lane corresponding to the target lane in which the target vehicle is located, wherein the lane parameters comprise a target lane radius;
    reading a reference lane radius of the target lane from the vector map;
    comparing the target lane radius with the reference lane radius; and
    determining, according to a comparison result, the driving position of the target vehicle within the target lane to obtain the longitudinal position information of the target vehicle.
  5. The method according to claim 4, wherein determining, according to the comparison result, the driving position of the target vehicle within the target lane to obtain the longitudinal position information of the target vehicle comprises:
    determining, according to the comparison result, a longitudinal driving region of the target vehicle within the target lane; and
    determining the driving position according to the longitudinal driving region.
  6. The method according to claim 5, wherein determining the driving position according to the longitudinal driving region comprises at least one of the following:
    when the lane curvature of the target lane is not constant, using a maximum likelihood method on the following data to determine the driving position according to the longitudinal driving region: an actual curvature of the target lane, a reference curvature estimated for the target lane by a positioning system, and a sampled curvature estimated by the target vehicle based on sampling points associated with the current position;
    when the longitudinal driving region includes map landmarks, estimating the driving position according to a target landmark closest to the current position of the target vehicle;
    when the lane curvature of the target lane is constant and the longitudinal driving region includes no map landmarks, estimating the driving position by using positioning marker data stored in the positioning system; and
    estimating the driving position from the longitudinal driving region by using a carrier-phase search technique.
  7. The method according to claim 3, wherein after the lane information of all lanes on the road on which the target vehicle is currently driving is obtained, the method further comprises:
    differentially marking each lane on the road on which the target vehicle is currently driving in different marking manners by using independent encoding channels or bit masks.
  8. A vehicle positioning apparatus, comprising:
    a pre-positioning unit configured to pre-position a target vehicle in a running state on the basis of a global positioning system to obtain a prior position of the target vehicle in a map;
    an acquisition unit configured to acquire driving image data collected by a camera sensor of the target vehicle in a target area where the prior position is located;
    a recognition unit configured to perform visual recognition on the driving image data by using a vector map of the target area to obtain lateral position information and longitudinal position information of the current position of the target vehicle; and
    a positioning unit configured to determine, according to the lateral position information, a target lane in which the target vehicle is located, and to locate, according to the longitudinal position information, a driving position of the target vehicle within the target lane.
  9. The apparatus according to claim 8, wherein the recognition unit comprises:
    a recognition module configured to perform visual recognition on the driving image data by using visual recognition technology to obtain a recognition result;
    a matching module configured to match the vector map against the recognition result to determine road information of the area in which the target vehicle is currently located in the driving image data, wherein the road information comprises a road network and fixed road objects located beside the road; and
    an acquisition module configured to obtain the lateral position information and the longitudinal position information of the current position of the target vehicle according to the road information.
  10. A computer-readable storage medium, wherein the computer-readable storage medium comprises a stored program, and the program, when run, performs the method according to claim 1.
PCT/CN2021/086687 2021-01-05 2021-04-12 车辆定位方法和装置、存储介质及电子设备 WO2022147924A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202180003892.9A CN115038934A (zh) 2021-01-05 2021-04-12 车辆定位方法和装置、存储介质及电子设备

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/142,212 US11951992B2 (en) 2021-01-05 2021-01-05 Vehicle positioning method and apparatus, storage medium, and electronic device
US17/142,212 2021-01-05

Publications (1)

Publication Number Publication Date
WO2022147924A1 true WO2022147924A1 (zh) 2022-07-14

Family

ID=82219860

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/086687 WO2022147924A1 (zh) 2021-01-05 2021-04-12 车辆定位方法和装置、存储介质及电子设备

Country Status (3)

Country Link
US (1) US11951992B2 (zh)
CN (1) CN115038934A (zh)
WO (1) WO2022147924A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503482A (zh) * 2023-06-26 2023-07-28 小米汽车科技有限公司 车辆位置的获取方法、装置及电子设备

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359436B (zh) * 2022-08-18 2023-04-28 中国人民公安大学 基于遥感图像的排查方法、装置、设备及存储介质
CN115311867B (zh) * 2022-10-11 2023-01-10 腾讯科技(深圳)有限公司 隧道场景的定位方法、装置、计算机设备、存储介质
CN115583243B (zh) * 2022-10-27 2023-10-31 阿波罗智联(北京)科技有限公司 确定车道线信息的方法、车辆控制方法、装置和设备
CN115782926B (zh) * 2022-12-29 2023-12-22 苏州市欧冶半导体有限公司 一种基于道路信息的车辆运动预测方法及装置
CN116153082B (zh) * 2023-04-18 2023-06-30 安徽省中兴工程监理有限公司 一种基于机器视觉的高速公路路况采集分析处理系统

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107643086A (zh) * 2016-07-22 2018-01-30 北京四维图新科技股份有限公司 一种车辆定位方法、装置及系统
CN108303103A (zh) * 2017-02-07 2018-07-20 腾讯科技(深圳)有限公司 目标车道的确定方法和装置
JP2019028028A (ja) * 2017-08-03 2019-02-21 株式会社Subaru 車両の走行車線特定装置
CN110784680A (zh) * 2019-08-09 2020-02-11 中国第一汽车股份有限公司 一种车辆定位方法、装置、车辆和存储介质
KR20200039853A (ko) * 2018-09-28 2020-04-17 전자부품연구원 벡터맵과 카메라를 이용한 자율주행 차량이 위치한 차선 추정 방법
CN111046709A (zh) * 2018-10-15 2020-04-21 广州汽车集团股份有限公司 车辆车道级定位方法、系统、车辆及存储介质
CN111060094A (zh) * 2018-10-16 2020-04-24 三星电子株式会社 车辆定位方法和装置
CN111380538A (zh) * 2018-12-28 2020-07-07 沈阳美行科技有限公司 一种车辆定位方法、导航方法及相关装置

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10406981B2 (en) 2014-03-20 2019-09-10 Magna Electronics Inc. Vehicle vision system with curvature estimation
WO2016130719A2 (en) * 2015-02-10 2016-08-18 Amnon Shashua Sparse map for autonomous vehicle navigation
US10248124B2 (en) * 2016-07-21 2019-04-02 Mobileye Vision Technologies, Inc. Localizing vehicle navigation using lane measurements
JP6859927B2 (ja) * 2017-11-06 2021-04-14 トヨタ自動車株式会社 自車位置推定装置
US20200339156A1 (en) * 2017-12-27 2020-10-29 Honda Motor Co., Ltd. Vehicle control device, vehicle control method, and storage medium
JP6755071B2 (ja) * 2018-06-08 2020-09-16 株式会社Subaru 車両の走行制御装置
KR102611927B1 (ko) * 2018-07-11 2023-12-08 르노 에스.아.에스. 주행 환경 정보의 생성 방법, 운전 제어 방법, 주행 환경 정보 생성 장치
CN109635737B (zh) 2018-12-12 2021-03-26 中国地质大学(武汉) 基于道路标记线视觉识别辅助车辆导航定位方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107643086A (zh) * 2016-07-22 2018-01-30 北京四维图新科技股份有限公司 一种车辆定位方法、装置及系统
CN108303103A (zh) * 2017-02-07 2018-07-20 腾讯科技(深圳)有限公司 目标车道的确定方法和装置
JP2019028028A (ja) * 2017-08-03 2019-02-21 株式会社Subaru 車両の走行車線特定装置
KR20200039853A (ko) * 2018-09-28 2020-04-17 전자부품연구원 벡터맵과 카메라를 이용한 자율주행 차량이 위치한 차선 추정 방법
CN111046709A (zh) * 2018-10-15 2020-04-21 广州汽车集团股份有限公司 车辆车道级定位方法、系统、车辆及存储介质
CN111060094A (zh) * 2018-10-16 2020-04-24 三星电子株式会社 车辆定位方法和装置
CN111380538A (zh) * 2018-12-28 2020-07-07 沈阳美行科技有限公司 一种车辆定位方法、导航方法及相关装置
CN110784680A (zh) * 2019-08-09 2020-02-11 中国第一汽车股份有限公司 一种车辆定位方法、装置、车辆和存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503482A (zh) * 2023-06-26 2023-07-28 小米汽车科技有限公司 车辆位置的获取方法、装置及电子设备
CN116503482B (zh) * 2023-06-26 2023-10-20 小米汽车科技有限公司 车辆位置的获取方法、装置及电子设备

Also Published As

Publication number Publication date
US20220212672A1 (en) 2022-07-07
US11951992B2 (en) 2024-04-09
CN115038934A (zh) 2022-09-09

Similar Documents

Publication Publication Date Title
WO2022147924A1 (zh) Vehicle positioning method and apparatus, storage medium and electronic device
EP3640599B1 (en) Vehicle localization method and apparatus
US10788830B2 (en) Systems and methods for determining a vehicle position
US20210311490A1 (en) Crowdsourcing a sparse map for autonomous vehicle navigation
KR101454153B1 (ko) Navigation system for an unmanned autonomous vehicle using virtual lanes and sensor fusion
EP3524936A1 (en) Method and apparatus providing information for driving vehicle
KR102595897B1 (ko) Lane determination method and apparatus
EP3032221B1 (en) Method and system for improving accuracy of digital map data utilized by a vehicle
EP2372304B1 (en) Vehicle position recognition system
Schreiber et al. Laneloc: Lane marking based localization using highly accurate maps
US20180024562A1 (en) Localizing vehicle navigation using lane measurements
CN102208035B (zh) 图像处理系统及位置测量系统
CA3029124A1 (en) Crowdsourcing and distributing a sparse map, and lane measurements for autonomous vehicle navigation
CN108885106A (zh) 使用地图的车辆部件控制
JP2019045379A (ja) Own vehicle position estimation device
CN110470309A (zh) Own vehicle position estimation device
US10907972B2 (en) 3D localization device
CN112805766B (zh) Apparatus and method for updating a detailed map
CN110332945A (zh) Vehicle-mounted navigation method and apparatus based on visual recognition of traffic road markings
JP2020003463A (ja) Own vehicle position estimation device
JP6834914B2 (ja) Object recognition device
JP2023164553A (ja) Position estimation device, estimation device, control method, program, and storage medium
JP7025293B2 (ja) Own vehicle position estimation device
WO2017199369A1 (ja) Feature recognition device, feature recognition method, and program
JP6790951B2 (ja) Map information learning method and map information learning device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21916974

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21916974

Country of ref document: EP

Kind code of ref document: A1
