CN111260722A - Vehicle positioning method, apparatus and storage medium - Google Patents


Info

Publication number
CN111260722A
CN111260722A (application CN202010050505.4A)
Authority
CN
China
Prior art keywords
reference object
precision
influence
influence factors
accuracy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010050505.4A
Other languages
Chinese (zh)
Other versions
CN111260722B (en)
Inventor
裴新欣
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010050505.4A
Publication of CN111260722A
Application granted
Publication of CN111260722B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10012: Stereo images
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30236: Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a vehicle positioning method, device, and storage medium, relating to the field of automatic driving. The specific implementation scheme is as follows: acquire the precision of the position of a reference object on a road, and position the vehicle according to both the position of the reference object and the precision of that position. The method of the embodiments of the application improves the accuracy of vehicle positioning.

Description

Vehicle positioning method, apparatus and storage medium
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a method, an apparatus, and a storage medium for vehicle positioning in automatic driving.
Background
Compared with traditional maps, a high-precision map provides high-precision three-dimensional information: accurate road geometry such as grade, curvature, and elevation, as well as lane lines, traffic signs, and the like. This provides a solid foundation for assisted or autonomous driving systems. At present, most high-precision maps are collected by professional acquisition equipment. Because such equipment is expensive, the freshness of high-precision maps is constrained. An automatic driving system must perceive its environment in real time, and one purpose of the high-precision map is to compensate for the shortcomings of the perception system by providing prior information; if the prior is not accurate enough, the perception results are affected, which in turn affects subsequent decision-making and planning and the state of the whole automatic driving system. Collecting data through crowdsourcing devices and transmitting it back to the cloud for processing can greatly shorten the map update cycle.
Current crowdsourcing update algorithms generally work by shooting road elements with a camera, then either performing simple processing on the device or returning the video directly to the cloud for post-processing. The processing is generally performed with simultaneous localization and mapping (SLAM) or structure from motion (SfM, three-dimensional reconstruction) techniques. If the accuracy of the elements in the high-precision map is low, the self-positioning of the automatic driving vehicle is affected.
Disclosure of Invention
The application provides a vehicle positioning method, device, and storage medium that improve the accuracy of vehicle positioning.
A first aspect of the present application provides a vehicle positioning method, including:
acquiring the precision of the position of a reference object on a road;
and positioning the vehicle according to the precision of the position of the reference object and the position of the reference object.
In this scheme, the precision of the position of a reference object on the road is obtained, and the vehicle is positioned according to both the position of the reference object and the precision of that position. Because the precision of the position is taken into account, the positioning accuracy is high.
In a possible implementation manner, before the obtaining the accuracy of the position of the reference object, the method includes:
acquiring an influence factor for at least one influencing factor, where an influencing factor is a quantity that affects the precision of the position of the reference object, and its influence factor quantifies that effect;
and determining the precision of the position of the reference object according to the influence factors of the influence factors and the precision of the influence factors.
In the above specific embodiment, the influence factors are obtained by analyzing what affects the precision of the position of the reference object, and the precision of that position is then determined from the influence factors and the precisions of the influencing factors, so the result is accurate.
In a possible implementation manner, the obtaining an influence factor of at least one influence factor includes:
and acquiring the influence factors of the influence factors according to the shooting parameters of the image acquisition equipment and the position of the reference object.
In one possible implementation, the shooting parameters include at least one of: the focal length of the image acquisition device, the distortion model, the distortion coefficient, the frame rate, the motion speed, and the number of image frames in which the reference object is visible.
In a possible implementation manner, the obtaining the influence factor of each influence factor includes:
acquiring the influence factors of the influencing factors according to the shooting parameters of the image acquisition device, the position of the reference object, and a preset correspondence, where the correspondence maps the shooting parameters of the image acquisition device and the position of the reference object to the influence factors.
In the above embodiment, the influence factors of the influence factors are obtained through the preset corresponding relationship, the operation is simple, and the efficiency is high.
In a possible implementation manner, if the number of the influencing factors is at least two, the determining the accuracy of the position of the reference object includes:
weighting the influence factors of the influence factors and the precision of the influence factors to determine the precision of the position of the reference object; and the weight corresponding to the influence factor of each influence factor is determined according to the back projection error of the reference object in the image.
In the above specific embodiment, the influence factors are obtained by analyzing what affects the precision of the position of the reference object, and the precision of that position is then determined by weighting the influence factors together with the precisions of the influencing factors; the method is simple to implement and gives accurate results.
In a possible implementation manner, before the obtaining the influence factor of the at least one influence factor, the method further includes:
determining the influence factors according to a camera imaging model; the influencing factors include at least one of: the focal length of the image acquisition equipment, the pixel point position of the reference object in the image and the distortion coefficient of the image acquisition equipment.
In one possible implementation, the positioning a vehicle according to the accuracy of the position of the reference object and the position of the reference object includes:
positioning the vehicle according to the precision of the positions of at least two reference objects, the positions of the at least two reference objects and the weights corresponding to the reference objects; the weight corresponding to the reference object is determined according to the precision of the position of the reference object.
In the above specific embodiment, the vehicle is positioned by the accuracy of the positions of the at least two reference objects, and the weights corresponding to the reference objects, so that the method is simple to implement and the positioning result is accurate.
In one possible implementation, the lower the precision of the position of a reference object (i.e., the larger its position error), the smaller the weight corresponding to that reference object.
In a possible implementation manner, after determining the accuracy of the position of the reference object, the method further includes:
updating a map according to the precision of the position of the reference object and the position of the reference object;
accordingly, the obtaining the precision of the position of the reference object includes:
and acquiring the precision of the position of the reference object from the updated map.
In one possible implementation manner, before the positioning the vehicle, the method further includes:
and acquiring the position of the reference object by utilizing a three-dimensional reconstruction algorithm according to the image data of the reference object.
In the above specific embodiment, after the accuracy of the position of the reference object is obtained, the map can be updated, so that the real-time performance is good, the map data is accurate, and the reference value is improved.
A second aspect of the present application provides a vehicle positioning device, including:
the acquisition module is used for acquiring the precision of the position of a reference object on a road;
and the processing module is used for positioning the vehicle according to the precision of the position of the reference object and the position of the reference object.
A third aspect of the present application provides an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the first aspects of the present application.
A fourth aspect of the present application provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any of the first aspects of the present application.
A fifth aspect of the present application provides a data processing method, including:
acquiring an influence factor for at least one influencing factor, where an influencing factor is a quantity that affects the precision of the position of the reference object, and its influence factor quantifies that effect;
and determining the precision of the position of the reference object according to the influence factors of the influence factors and the precision of the influence factors.
One embodiment of the above application has the following advantages or beneficial effects: the precision of the position of a reference object on a road is acquired, and the vehicle is positioned according to both the position of the reference object and the precision of that position. Because the precision of the position is taken into account, this overcomes the problem in the related art that low element precision in a high-precision map degrades the self-positioning of the automatic driving vehicle, so the positioning accuracy is high.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a diagram of an application scenario of an embodiment of the method of the present application;
FIG. 2 is a schematic flow chart diagram illustrating one embodiment of a vehicle locating method of the present application;
FIG. 3 is a schematic diagram of an embodiment of the method of the present application;
FIG. 4 is a schematic diagram of another embodiment of the method of the present application;
FIG. 5 is a schematic illustration of the impact factor of the focal length of an embodiment of the method of the present application;
FIG. 6 is a schematic diagram of impact factors for pixel point locations according to an embodiment of the method of the present application;
FIG. 7 is a graph illustrating the impact factors of distortion coefficient k1 according to an embodiment of the method of the present application;
FIG. 8 is a graph illustrating the impact factors of distortion coefficient k2 according to an embodiment of the method of the present application;
FIG. 9 is a schematic diagram of yet another embodiment of the method of the present application;
FIG. 10 is a block diagram of an apparatus for implementing vehicle localization in accordance with an embodiment of the present application;
FIG. 11 is a block diagram of an electronic device in which embodiments of the present application may be implemented.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these are to be considered exemplary only. Those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Before describing the method provided by the present application, an application scenario of the embodiments is first described with reference to fig. 1, an application scenario architecture diagram provided in an embodiment of the present application. Optionally, as shown in fig. 1, the scenario includes a crowdsourcing device 11, an electronic device 12, and a server 13. The crowdsourcing device 11 may include, for example, an image acquisition device (e.g., an in-vehicle camera). There may be one or more crowdsourcing devices (fig. 1 shows one as an example), such as a user's terminal device or a vehicle navigation device; the electronic device 12 may be, for example, a computer or an in-vehicle device.
The crowdsourcing device 11 and the electronic device 12, as well as the electronic device 12 and the server 13, may be connected to each other through a network.
When the map is updated, data is acquired through crowdsourcing devices and transmitted back to the cloud for processing, which can greatly shorten the map update cycle.
Current crowdsourcing update algorithms generally work by shooting road elements with a camera, then either performing simple processing on the crowdsourcing device or transmitting the video directly back to the cloud for post-processing. The processing is generally performed with simultaneous localization and mapping (SLAM) or structure from motion (SfM) techniques.
If the elements in the map are merely reconstructed and the accuracy of the reconstructed elements cannot be evaluated or predicted, updating them directly into the high-precision map can degrade the quality of the map data and bring trouble and risk to subsequent use of the map. Low element accuracy in the high-precision map also affects the self-positioning of the autonomous vehicle.
Therefore, according to the method provided by the embodiment of the application, after the road element is reconstructed, the accuracy of the position of the road element can be obtained, and when the vehicle is positioned, the accuracy of the position of the reference object can be referred to in addition to the position of the reference object (i.e. the reconstructed road element), so that the positioning accuracy is improved.
The method provided by the invention can be implemented by an electronic device, such as a processor, executing corresponding software code, optionally while exchanging data with a server.
The technical solution of the present application is described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in every embodiment. For ease of understanding, the specific examples in the following embodiments are described in the context of a particular application field, but they are not limited to that scenario.
Fig. 2 is a schematic flowchart of a vehicle positioning method according to an embodiment of the present application. As shown in fig. 2, the method provided by this embodiment includes the following steps:
s101, acquiring the precision of the position of a reference object on a road;
examples of the reference object include a lane line on a road, a traffic sign, and the like.
In an embodiment, the precision of the position of a reference object on the road may be obtained from a high-precision map. That is, in addition to the position of a road element, the high-precision map can provide the precision of that position, i.e., its error; for example, the position (a, b, c) of a three-dimensional point of a road element may have an error of 10 cm.
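As a minimal sketch of how a map element might carry both its position and the precision of that position, the structure and field names below are illustrative assumptions, not from the patent:

```python
from dataclasses import dataclass


@dataclass
class MapElement:
    """A road element in a high-precision map: position plus the error of that position.

    Hypothetical structure for illustration; the patent only states that the
    map stores the element position and its precision.
    """
    element_id: str
    position: tuple   # (x, y, z) in meters
    accuracy_m: float  # position error in meters, e.g. 0.10 for 10 cm


# A lane marker whose reconstructed position is known to within 10 cm.
lane_marker = MapElement("lm_001", (1.0, 2.0, 0.0), 0.10)
```

A consumer of the map can then weight or filter observations by `accuracy_m` instead of treating every element as equally trustworthy.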
In one embodiment, the position of the reference object may be obtained by using a three-dimensional reconstruction algorithm based on the image data of the reference object.
As shown in fig. 3, the three-dimensional points of the reference object are reconstructed from N images captured by the image acquisition device on the vehicle using a three-dimensional reconstruction algorithm, such as SfM, giving the position (x, y, z) of the reference object.
Optionally, the position of the reference object may be updated by reconstructing a three-dimensional point of the reference object from images acquired by crowdsourcing devices on a plurality of vehicles, or according to position data of the reference object reconstructed by the crowdsourcing devices on the plurality of vehicles.
And S102, positioning the vehicle according to the position accuracy of the reference object and the position of the reference object.
When the automatic driving vehicle carries out self-positioning, different weights can be used for different observation targets (namely different reference objects) of the vehicle according to the acquired precision information of the reference objects on the road, so that high-precision positioning is realized, and the positioning robustness is ensured.
For example: as shown in fig. 4, the vehicle perceives 5 reference objects around the vehicle at the same time, where the reference objects 1, 2, 3 have an accuracy of 10cm and the reference objects 4 and 5 have an accuracy of 50 cm.
Then the weight of the observed values for the reference objects 4, 5 is reduced when the vehicle is self-locating using the reference objects.
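The weighting described above can be sketched as a weighted average of per-reference position estimates, with weights that shrink as a reference's position error grows. The 1/error weighting and the simple observation model below are illustrative assumptions; the patent only requires that less accurate references receive smaller weights.

```python
import numpy as np


def locate_vehicle(ref_positions, measured_offsets, accuracies_m):
    """Estimate the vehicle position from reference objects.

    Each reference yields one estimate: vehicle ~= ref_position - measured_offset.
    References with a larger position error get a smaller weight (here 1/error,
    a plausible but assumed choice).
    """
    estimates = np.asarray(ref_positions) - np.asarray(measured_offsets)
    weights = 1.0 / np.asarray(accuracies_m)
    weights /= weights.sum()
    return weights @ estimates  # weighted average of per-reference estimates


# Three references at 10 cm accuracy, two at 50 cm, as in the fig. 4 example.
refs = [(10.0, 0.0), (12.0, 3.0), (8.0, -2.0), (20.0, 5.0), (18.0, -4.0)]
offsets = [(10.0, 0.0), (12.0, 3.0), (8.0, -2.0), (19.0, 5.0), (19.0, -4.0)]
acc = [0.1, 0.1, 0.1, 0.5, 0.5]
pos = locate_vehicle(refs, offsets, acc)
```

Here the two inaccurate references disagree with each other symmetrically, and their reduced weight keeps the fused estimate pinned at the position implied by the three accurate references.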
The method of this embodiment acquires the precision of the position of a reference object on the road and positions the vehicle according to both the position of the reference object and that precision. Because the precision of the position is taken into account, it overcomes the related-art problem that low element precision in a high-precision map degrades the self-positioning of the automatic driving vehicle, so the positioning accuracy is high.
In one embodiment, the accuracy of the position of the reference object may be determined by:
acquiring an influence factor for at least one influencing factor, where an influencing factor is a quantity that affects the precision of the position of the reference object, and its influence factor quantifies that effect;
and determining the precision of the position of the reference object according to the influence factors of the influence factors and the precision of the influence factors.
In one embodiment, determining an influencing factor on the accuracy of the position of the reference object according to a camera imaging model; the influencing factors include at least one of: the focal length of the image acquisition equipment, the pixel point position of the reference object in the image and the distortion coefficient of the image acquisition equipment.
In general, crowdsourced updates are completed over distances in the range of 200-400 meters, and on most highways or ring roads the road is substantially straight over such distances. Therefore, the embodiments of the present application model the road as a straight line.
In an alternative embodiment, as shown in fig. 3, the last image acquisition device (e.g., camera) that can see the reference object (i.e., the three-dimensional point to be reconstructed) is taken as the world coordinate system, i.e., the rightmost image acquisition device in fig. 3. According to the pinhole imaging principle:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$$

where

[equation image]
(u0, v0) denotes the pixel coordinates of the center of the image plane; cx and cy denote the offsets of the optical axis in the image plane; fx and fy denote the focal lengths in the horizontal and vertical directions; and Zc is the z coordinate in the camera coordinate system. i denotes the i-th image, i.e., the image captured at the i-th position of the vehicle (image acquisition device), and l denotes the spacing of the vehicle between adjacent capture positions. In fig. 3, i increases from right to left.
The following can be obtained:
[equation image]
further calculation yields:
[equation image]
when three-dimensional reconstruction is performed using N images:
[equation image]
It can be seen that the factors influencing the final three-dimensional reconstruction result (i.e., the position of the reference object) are the focal length and the pixel position accuracy (and, further, the distortion model and distortion coefficients may be considered).
In one embodiment, the impact factor of the focal length may be derived as follows:
Taking the partial derivatives of the above formula (4) with respect to fx and fy gives:
[equation image]
Writing this in matrix form: df = G dX,
where df = [dfx … dfx dfy … dfy]^T contains N entries dfx and N entries dfy, and dX = [dx dy dz dl]^T.
Figure BDA0002370988160000092
where G is a 2N × 4 matrix.
Thus, dX = (G^T G)^{-1} G^T df.
The following can be obtained:
[equation image]
The influence factor of the focal length, fDOP, is thus defined as:

[equation image]

That is, fDOP quantifies the influence of the focal-length error on the position error of the finally reconstructed three-dimensional point.
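A hedged sketch of how such an influence factor could be computed from the sensitivity matrix G (df = G dX). The patent's defining equation is not reproduced in this text, so a GPS-style dilution-of-precision definition, the square root of the trace of (G^T G)^{-1}, is assumed here purely for illustration.

```python
import numpy as np


def dop_from_jacobian(G):
    """DOP-style influence factor from a 2N x 4 sensitivity matrix G (df = G dX).

    Assumption: following the GPS DOP convention, the factor is taken as
    sqrt(trace((G^T G)^{-1})); the patent's own definition is not shown in
    the scraped text.
    """
    cov = np.linalg.inv(G.T @ G)  # 4x4 covariance-shaped matrix over (x, y, z, l)
    return float(np.sqrt(np.trace(cov)))


# A stand-in sensitivity matrix: N = 12 visible frames gives 2N = 24 rows.
rng = np.random.default_rng(0)
G = rng.normal(size=(24, 4))
fdop = dop_from_jacobian(G)
```

Multiplying such a factor by the error of the underlying parameter (here the focal length) yields the contribution to the reconstructed-point position error, as the worked examples for figs. 5-8 do.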
In one embodiment, the impact factor of the pixel point position can be derived as follows:
If the distortion coefficients are accurate, the error of the pixel position in the corresponding image can be regarded as Gaussian white noise.
Taking the partial derivatives of formula (4) with respect to u and v gives:
[equation image]
writing in matrix form: du HdX
Wherein du is [ du ═ du [ ]0… duN-1dv0… dvN-1]T,dX=[dx dy dz dl]T
[equation image]
Thus, dX = (H^T H)^{-1} H^T du.
[equation image]
The influence factor of the pixel point position, uDOP, is thus defined as:

[equation image]

That is, uDOP quantifies the influence of the pixel-position error in the image on the position error of the finally reconstructed three-dimensional point.
In one embodiment, if no perfectly accurate distortion reference is available, calibration results from other identical hardware can be used instead to approximate it; consider, for example, the generic OpenCV distortion model.
In general, the OpenCV model is as follows:

u' = u (1 + k1 r^2 + k2 r^4 + k3 r^6) + 2 p1 u v + p2 (r^2 + 2 u^2)
v' = v (1 + k1 r^2 + k2 r^4 + k3 r^6) + p1 (r^2 + 2 v^2) + 2 p2 u v, with r^2 = u^2 + v^2
In general, k3, p1, and p2 are close to zero, and the embodiments of the present application therefore assume k3 = p1 = p2 = 0. Here u' and v' are the coordinates after the model is applied.
The above model is simplified as follows:
Figure BDA0002370988160000106
The calculation gives:

[equation image]

From the above equation (9), one can calculate:

[equation image]
Define:

[equation image]

dk1 = F dX
[equation image]
This gives dX = (F^T F)^{-1} F^T dk1.
[equation image]
The influence factor of the distortion coefficient k1, k1DOP, is thus defined as:

[equation image]

The influence factor k2DOP of the distortion coefficient k2 can be obtained in the same way and is not described in detail here.
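The simplified radial model above (k3 = p1 = p2 = 0) can be applied directly. The sketch below assumes normalized image coordinates, which is how the standard OpenCV model is usually stated; the coefficient values are arbitrary examples.

```python
def distort(x, y, k1, k2):
    """Apply the simplified radial distortion model (k3 = p1 = p2 = 0).

    x, y are assumed to be normalized image coordinates; r^2 = x^2 + y^2,
    matching the standard OpenCV radial model with only k1 and k2 retained.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale


# Example: a point at (0.1, 0.2) with mild barrel distortion.
xd, yd = distort(0.1, 0.2, k1=-0.3, k2=0.1)
```

Perturbing k1 or k2 here and observing the shift in (xd, yd) is exactly the sensitivity that k1DOP and k2DOP summarize for the reconstructed three-dimensional point.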
Further, the precision of the finally reconstructed three-dimensional point can be obtained from the influence factors of the influencing factors and the precisions of the influencing factors, for example:

σ_3D = fDOP × σ_f + k1DOP × σ_k1 + k2DOP × σ_k2 + uDOP × σ_u    (10)
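Equation (10) is a plain weighted sum of per-factor precisions and can be sketched directly. The numeric values below are taken from the worked example of figs. 5-8 (fDOP = 0.002, uDOP = 0.005, k1DOP = 0.9, k2DOP = 0.7 at x = 8 m, y = 4 m, with factor errors 10, 10, 0.1, and 0.2).

```python
def position_accuracy(dops, sigmas):
    """Equation (10): sigma_3D = sum over factors of DOP_i * sigma_i.

    This is the unweighted form; the patent also allows multiplying each
    term by a weight derived from the back-projection error.
    """
    return sum(d * s for d, s in zip(dops, sigmas))


# (fDOP, uDOP, k1DOP, k2DOP) and the corresponding factor errors from the example:
sigma_3d = position_accuracy([0.002, 0.005, 0.9, 0.7], [10.0, 10.0, 0.1, 0.2])
# 0.02 + 0.05 + 0.09 + 0.14 = 0.30 m
```

The per-term products match the four per-figure examples in the text, so the combined position error for that point is 0.30 m under this model.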
in other embodiments, if the number of influencing factors is at least two, the accuracy of determining the position of the reference object may be determined by:
weighting the influence factors of the influence factors and the precision of the influence factors to determine the precision of the position of the reference object; the weight corresponding to the influence factor of each influence factor is determined according to the back projection error of the reference object in the image.
Specifically, on the basis of formula (10), each influence factor may be multiplied by a different weight, where the weight is obtained from the back-projection error of the three-dimensional point in the corresponding image.
In the above specific embodiment, the influence factors of the influence factors are obtained by analyzing the influence factors of the position accuracy of the reference object, and then the position accuracy of the reference object is determined according to the influence factors of the influence factors and the accuracy of the influence factors, so that the method is simple to implement and accurate in result.
In summary, the influence factor of each influencing factor can be calculated from the shooting parameters of the image acquisition device (such as the focal length, distortion model, distortion coefficients, frame rate, and motion speed, and the number of image frames in which the reference object is visible) and the position of the reference object.
With a motion speed v of 90 km/h, a frame rate of 30 Hz, and a focal length fx of 2000, the reference object is visible over a range of 12 image frames. Fig. 5 shows the fDOP values for the positions of different three-dimensional points. For example, for the three-dimensional point at x = 8 m, y = 4 m, the influence factor of the focal length is fDOP = 0.002; assuming the focal-length error is, for example, 10, the position error of the three-dimensional point is 0.02 m.
Fig. 6 shows the uDOP values for the positions of different three-dimensional points. For example, for the three-dimensional point at x = 8 m, y = 4 m, the influence factor of the pixel point position is uDOP = 0.005; assuming the pixel-position error is, for example, 10, the position error of the three-dimensional point is 0.05 m.
Fig. 7 shows the k1DOP values for the positions of different three-dimensional points. For example, for the three-dimensional point at x = 8 m, y = 4 m, the influence factor of the distortion coefficient k1 is k1DOP = 0.9; with an error of 0.1 in k1, for example, the position error of the three-dimensional point is 0.09 m.
Fig. 8 shows the k2DOP values for the positions of different three-dimensional points. For example, for the three-dimensional point at x = 8 m, y = 4 m, the influence factor of the distortion coefficient k2 is k2DOP = 0.7; with an error of 0.2 in k2, for example, the position error of the three-dimensional point is 0.14 m.
In an embodiment, when the influence factors are needed to determine the precision of the position of a reconstructed three-dimensional point, they may be computed from the position of the reference object (i.e., the position obtained after reconstruction) and the shooting parameters of the image acquisition device using the formulas above, or looked up in a correspondence table. The correspondence table records the mapping from the shooting parameters of the image acquisition device and the position of the reference object to the influence factors.
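The correspondence-table variant can be sketched as a dictionary keyed on quantized shooting parameters and element position. The quantization steps and table contents are illustrative assumptions; the stored DOP values reuse the figs. 5-8 example (the patent only states that a precomputed table may replace the closed-form computation).

```python
def make_key(focal_length, speed_kmh, x_m, y_m):
    """Quantize shooting parameters and element position into a table key.

    The rounding granularity (100 for focal length, 10 km/h for speed,
    1 m for position) is an illustrative assumption.
    """
    return (round(focal_length, -2), round(speed_kmh, -1), round(x_m), round(y_m))


# Hypothetical table: key -> (fDOP, uDOP, k1DOP, k2DOP), values from figs. 5-8.
dop_table = {
    (2000, 90, 8, 4): (0.002, 0.005, 0.9, 0.7),
}

dops = dop_table.get(make_key(2000.0, 90.0, 8.0, 4.0))
```

A table lookup like this trades memory for the cost of evaluating the derivative-based formulas on the device, which matches the text's point that the lookup is simple and efficient.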
On the basis of the above embodiment, further, S102 may be implemented as follows:
positioning the vehicle according to the accuracy of the positions of the at least two reference objects, the positions of the at least two reference objects, and the weights corresponding to the reference objects, where the weight corresponding to a reference object is determined according to the accuracy of the position of that reference object.
The lower the accuracy of the position of a reference object (i.e., the larger the error of its position), the smaller the weight corresponding to that reference object.
Specifically, different weights can be assigned to different observation targets of the vehicle (i.e., different reference objects) according to the acquired accuracy information of the reference objects on the road, thereby achieving high-precision positioning while ensuring the robustness of the positioning.
For example, as shown in fig. 4, the vehicle simultaneously perceives 5 reference objects around it, where the positions of reference objects 1, 2 and 3 have an accuracy of 10 cm and those of reference objects 4 and 5 have an accuracy of 50 cm.
When the vehicle uses these reference objects for self-positioning, the observations of reference objects 1, 2 and 3 are therefore given greater weight than the observations of reference objects 4 and 5.
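One way to realize this weighting — an illustrative choice, not a scheme prescribed by the embodiment — is to weight each reference object's observation by the inverse of its position error and fuse the per-reference vehicle-position estimates. The estimate coordinates below are made up for the sketch.

```python
# Position errors of the 5 reference objects from the example (metres).
accuracies_m = {1: 0.10, 2: 0.10, 3: 0.10, 4: 0.50, 5: 0.50}

# Inverse-error weights, normalized to sum to 1: the 10 cm references
# (1, 2, 3) out-weigh the 50 cm references (4, 5).
weights = {ref: 1.0 / err for ref, err in accuracies_m.items()}
total = sum(weights.values())
weights = {ref: w / total for ref, w in weights.items()}

# Hypothetical vehicle-position estimate derived from each reference object.
estimates = {1: (100.02, 50.01), 2: (99.98, 49.99), 3: (100.01, 50.02),
             4: (100.30, 49.70), 5: (99.70, 50.25)}
fused_x = sum(weights[r] * p[0] for r, p in estimates.items())
fused_y = sum(weights[r] * p[1] for r, p in estimates.items())

assert weights[1] > weights[4]  # accurate references dominate the fusion
```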
In the above embodiment, the vehicle is positioned using the accuracy of the positions of the at least two reference objects, the positions of the at least two reference objects, and the weights corresponding to the reference objects, so the method is simple to implement and the positioning result is accurate.
In one embodiment, after determining the accuracy of the position of the reference object, the method further includes:
updating the map according to the precision of the position of the reference object and the position of the reference object;
accordingly, obtaining the accuracy of the position of the reference object includes:
and obtaining the precision of the position of the reference object from the updated map.
Specifically, after the accuracy of the position of the reference object is obtained, whether by calculation from a formula or by looking up a correspondence table, the map may be updated accordingly: the position of the reference object in the map data is updated and the accuracy of that position is added. In another embodiment, whether to update the position of the reference object in the map may be decided according to that accuracy; for example, when the accuracy is low, that is, when the error is large, the update may be skipped.
Alternatively, the final position may be obtained by further processing the positions of the reference object reported by multiple crowdsourcing devices together with the accuracies of those positions, for example by weighted processing or with a deep learning model.
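The gated update described above can be sketched as follows; the 0.30 m threshold and the map-entry layout are assumptions for illustration only.

```python
# Hypothetical gate: only accept an update whose position error is small enough.
ERROR_THRESHOLD_M = 0.30

def maybe_update(map_entry, new_position, position_error_m):
    """Update the map entry only when the reconstructed accuracy is good enough;
    otherwise keep the existing entry (error too large, per the embodiment)."""
    if position_error_m <= ERROR_THRESHOLD_M:
        map_entry["position"] = new_position
        map_entry["accuracy_m"] = position_error_m
    return map_entry

entry = {"position": (8.0, 4.0), "accuracy_m": 0.20}
entry = maybe_update(entry, (8.1, 4.1), 0.50)    # error too large: not applied
assert entry["position"] == (8.0, 4.0)
entry = maybe_update(entry, (8.05, 4.02), 0.10)  # precise enough: applied
assert entry["position"] == (8.05, 4.02)
```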
In the above embodiment, the map can be updated as soon as the accuracy of the position of the reference object is obtained, so the map has good real-time performance, the map data are accurate, and their reference value is improved.
As shown in fig. 9, in this embodiment a three-dimensional point of a reference object on the road is reconstructed from images acquired by the image acquisition device of the vehicle, for example by SFM three-dimensional reconstruction, to obtain the position of the reconstructed three-dimensional point. fDOP, uDOP, k1DOP and k2DOP are then calculated in the manner of the foregoing embodiments, and the accuracy of the position of the three-dimensional point is predicted from fDOP, uDOP, k1DOP and k2DOP to obtain a predicted value of that accuracy, for example by weighted processing, where the weight values may be obtained from the back-projection error.
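The precision-prediction step can be sketched as a weighted combination of the DOP terms. All numeric values below, including the weights nominally derived from the back-projection error and the parameter errors, are illustrative assumptions, not figures from the embodiment.

```python
# Influence factors (DOP values) for one reconstructed 3D point — assumed.
dops = {"fDOP": 0.02, "uDOP": 0.005, "k1DOP": 0.9, "k2DOP": 0.7}
# Errors of the corresponding influencing factors (focal length, pixel
# position, distortion coefficients k1 and k2) — assumed.
param_errors = {"fDOP": 1.0, "uDOP": 10.0, "k1DOP": 0.1, "k2DOP": 0.2}
# Weights per influence factor, e.g. derived from back-projection error — assumed.
bp_weights = {"fDOP": 0.25, "uDOP": 0.25, "k1DOP": 0.25, "k2DOP": 0.25}

# Predicted position error: weighted sum of each factor's error contribution.
predicted_error = sum(bp_weights[k] * dops[k] * param_errors[k] for k in dops)
assert abs(predicted_error - 0.075) < 1e-9
```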
Fig. 10 is a block diagram of an apparatus for implementing vehicle localization according to an embodiment of the present application. As shown in fig. 10, the vehicle positioning apparatus 100 according to the present embodiment includes:
an acquisition module 1001 for acquiring the accuracy of the position of a reference on a road;
the processing module 1002 is configured to position the vehicle according to the accuracy of the position of the reference object and the position of the reference object.
In one possible implementation, the processing module 1002 is configured to:
acquiring an influence factor of at least one influencing factor, where the influence factor characterizes the influence of the influencing factor on the accuracy of the position of the reference object;
and determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor.
In one possible implementation, the processing module 1002 is configured to:
acquiring the influence factor of each influencing factor according to the shooting parameters of the image acquisition device and the position of the reference object.
In one possible implementation, the shooting parameters include at least one of: the focal length of the image acquisition device, the distortion model, the distortion coefficient, the frame rate, the motion speed, and the number of image frames in which the reference object is visible.
In one possible implementation, the processing module 1002 is configured to:
acquiring the influence factor of each influencing factor according to the shooting parameters of the image acquisition device, the position of the reference object and a preset correspondence, the correspondence being between, on the one hand, the shooting parameters of the image acquisition device and the position of the reference object and, on the other hand, the influence factors.
In a possible implementation, if there are at least two influencing factors, the processing module 1002 is configured to:
perform weighted processing on the influence factors of the influencing factors and the accuracies of the influencing factors to determine the accuracy of the position of the reference object, where the weight corresponding to the influence factor of each influencing factor is determined according to the back-projection error of the reference object in the image.
In one possible implementation, the processing module 1002 is configured to:
determine the influencing factors according to a camera imaging model, the influencing factors including at least one of: the focal length of the image acquisition device, the pixel point position of the reference object in the image, and the distortion coefficient of the image acquisition device.
In one possible implementation, the processing module 1002 is configured to:
position the vehicle according to the accuracy of the positions of at least two reference objects, the positions of the at least two reference objects and the weights corresponding to the reference objects, where the weight corresponding to a reference object is determined according to the accuracy of the position of that reference object.
In one possible implementation, the lower the accuracy of the position of a reference object, the smaller the weight corresponding to that reference object.
In one possible implementation, the processing module 1002 is configured to:
updating a map according to the precision of the position of the reference object and the position of the reference object;
accordingly, the obtaining module 1001 is configured to:
and acquiring the precision of the position of the reference object from the updated map.
In one possible implementation, the processing module 1002 is configured to:
and acquiring the position of the reference object by utilizing a three-dimensional reconstruction algorithm according to the image data of the reference object.
The vehicle positioning device provided by the embodiment of the application can execute the technical scheme in any method embodiment, the implementation principle and the technical effect are similar, and the detailed description is omitted here.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 11 is a block diagram of an electronic device for the vehicle positioning method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 11, the electronic apparatus includes: one or more processors 1101, a memory 1102, and interfaces for connecting the various components, including a high speed interface and a low speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 11, a processor 1101 is taken as an example.
The memory 1102 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the method of vehicle localization provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of vehicle localization provided herein.
The memory 1102, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the obtaining module 1001 and the processing module 1002 shown in fig. 10) corresponding to the method of vehicle positioning in the embodiment of the present application. The processor 1101 executes various functional applications of the server and data processing, namely, a method of vehicle positioning in the above-described method embodiments, by executing non-transitory software programs, instructions and modules stored in the memory 1102.
The memory 1102 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to use of the vehicle-positioning electronic device, and the like. Further, the memory 1102 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1102 may optionally include memory located remotely from the processor 1101, which may be connected to the vehicle-positioning electronic device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the vehicle positioning method may further include an input device 1103 and an output device 1104. The processor 1101, the memory 1102, the input device 1103 and the output device 1104 may be connected by a bus or other means; connection by a bus is taken as an example in fig. 11.
The input device 1103 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the vehicle-positioning electronic apparatus, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output device 1104 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solutions of the embodiments of the present application, the accuracy of the position of a reference object on the road is obtained, and the vehicle is positioned according to the accuracy of the position of the reference object and the position of the reference object; since the accuracy of the position of the reference object is taken into account, the positioning accuracy is high.
An embodiment of the present application further provides a data processing method, including:
acquiring an influence factor of at least one influencing factor, where the influence factor characterizes the influence of the influencing factor on the accuracy of the position of a reference object;
and determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor.
The method of this embodiment is similar to the implementation principle and the technical effect of the method in the foregoing embodiments, and is not described herein again.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (15)

1. A vehicle positioning method, characterized by comprising:
acquiring the precision of the position of a reference object on a road;
and positioning the vehicle according to the precision of the position of the reference object and the position of the reference object.
2. The method of claim 1, wherein before the obtaining the accuracy of the position of the reference object, the method further comprises:
acquiring an influence factor of at least one influencing factor, wherein the influence factor characterizes the influence of the influencing factor on the accuracy of the position of the reference object;
and determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor.
3. The method of claim 2, wherein the acquiring the influence factor of at least one influencing factor comprises:
acquiring the influence factor of each influencing factor according to shooting parameters of an image acquisition device and the position of the reference object.
4. The method of claim 3, wherein the shooting parameters comprise at least one of: the focal length of the image acquisition device, the distortion model, the distortion coefficient, the frame rate, the motion speed, and the number of image frames in which the reference object is visible.
5. The method according to claim 3 or 4, wherein the acquiring the influence factor of each influencing factor comprises:
acquiring the influence factor of each influencing factor according to the shooting parameters of the image acquisition device, the position of the reference object and a preset correspondence, the correspondence being between, on the one hand, the shooting parameters of the image acquisition device and the position of the reference object and, on the other hand, the influence factors.
6. The method according to any one of claims 2-4, wherein if there are at least two influencing factors, the determining the accuracy of the position of the reference object comprises:
performing weighted processing on the influence factors of the influencing factors and the accuracies of the influencing factors to determine the accuracy of the position of the reference object, wherein the weight corresponding to the influence factor of each influencing factor is determined according to the back-projection error of the reference object in the image.
7. The method according to any one of claims 2-4, wherein before the acquiring the influence factor of at least one influencing factor, the method further comprises:
determining the influencing factors according to a camera imaging model, the influencing factors including at least one of: the focal length of the image acquisition device, the pixel point position of the reference object in the image, and the distortion coefficient of the image acquisition device.
8. The method according to any one of claims 1-4, wherein said locating the vehicle based on the accuracy of the position of the reference object and the position of the reference object comprises:
positioning the vehicle according to the accuracy of the positions of at least two reference objects, the positions of the at least two reference objects and the weights corresponding to the reference objects, wherein the weight corresponding to a reference object is determined according to the accuracy of the position of that reference object.
9. The method of claim 8, wherein the lower the accuracy of the position of the reference object, the smaller the weight corresponding to the reference object.
10. The method of any one of claims 2-4, wherein after the determining the accuracy of the position of the reference object, the method further comprises:
updating a map according to the accuracy of the position of the reference object and the position of the reference object;
accordingly, the obtaining the accuracy of the position of the reference object comprises:
obtaining the accuracy of the position of the reference object from the updated map.
11. The method of any of claims 1-4, wherein before the positioning the vehicle, the method further comprises:
acquiring the position of the reference object using a three-dimensional reconstruction algorithm according to image data of the reference object.
12. A vehicle positioning device, comprising:
the acquisition module is used for acquiring the precision of the position of a reference object on a road;
and the processing module is used for positioning the vehicle according to the precision of the position of the reference object and the position of the reference object.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-11.
15. A data processing method, comprising:
acquiring an influence factor of at least one influencing factor, wherein the influence factor characterizes the influence of the influencing factor on the accuracy of the position of a reference object;
and determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor.
CN202010050505.4A 2020-01-17 2020-01-17 Vehicle positioning method, device and storage medium Active CN111260722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010050505.4A CN111260722B (en) 2020-01-17 2020-01-17 Vehicle positioning method, device and storage medium

Publications (2)

Publication Number Publication Date
CN111260722A true CN111260722A (en) 2020-06-09
CN111260722B CN111260722B (en) 2023-12-26

Family

ID=70950630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010050505.4A Active CN111260722B (en) 2020-01-17 2020-01-17 Vehicle positioning method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111260722B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894366A (en) * 2009-05-21 2010-11-24 北京中星微电子有限公司 Method and device for acquiring calibration parameters and video monitoring system
CN103185578A (en) * 2011-12-30 2013-07-03 上海博泰悦臻电子设备制造有限公司 Mobile positioning device and navigation device
CN106842269A (en) * 2017-01-25 2017-06-13 北京经纬恒润科技有限公司 Localization method and system
CN206479647U (en) * 2017-01-25 2017-09-08 北京经纬恒润科技有限公司 Alignment system and automobile
CN107339996A (en) * 2017-06-30 2017-11-10 百度在线网络技术(北京)有限公司 Vehicle method for self-locating, device, equipment and storage medium
CN107643086A (en) * 2016-07-22 2018-01-30 北京四维图新科技股份有限公司 A kind of vehicle positioning method, apparatus and system
CN109815300A (en) * 2018-12-13 2019-05-28 北京邮电大学 A kind of vehicle positioning method
CN110287276A (en) * 2019-05-27 2019-09-27 百度在线网络技术(北京)有限公司 High-precision map updating method, device and storage medium
CN110595459A (en) * 2019-09-18 2019-12-20 百度在线网络技术(北京)有限公司 Vehicle positioning method, device, equipment and medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIALI BAO ET AL.: "Vehicle self-localization using 3D building map and stereo camera" *
李祎承: "Research on road scene modeling and high-precision localization for intelligent vehicles" *
王博宇: "Research on target localization algorithms for a binocular fisheye vision system on a moving base" *
许宇能 et al.: "Three-dimensional reconstruction of the road ahead of a vehicle based on a monocular camera" *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708857A (en) * 2020-06-10 2020-09-25 北京百度网讯科技有限公司 Processing method, device and equipment of high-precision map data and storage medium
CN111708857B (en) * 2020-06-10 2023-10-03 北京百度网讯科技有限公司 Processing method, device, equipment and storage medium for high-precision map data
CN112157642A (en) * 2020-09-16 2021-01-01 上海电机学院 A unmanned robot that patrols and examines for electricity distribution room
CN112157642B (en) * 2020-09-16 2022-08-23 上海电机学院 A unmanned robot that patrols and examines for electricity distribution room

Also Published As

Publication number Publication date
CN111260722B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
CN110595494B (en) Map error determination method and device
CN111462029B (en) Visual point cloud and high-precision map fusion method and device and electronic equipment
CN111220154A (en) Vehicle positioning method, device, equipment and medium
CN108279670B (en) Method, apparatus and computer readable medium for adjusting point cloud data acquisition trajectory
CN112132829A (en) Vehicle information detection method and device, electronic equipment and storage medium
CN111739005B (en) Image detection method, device, electronic equipment and storage medium
CN110675635B (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
CN111553844B (en) Method and device for updating point cloud
CN113989450A (en) Image processing method, image processing apparatus, electronic device, and medium
CN111612852A (en) Method and apparatus for verifying camera parameters
JP7337121B2 (en) Roundabout navigation method, apparatus, device and storage medium
CN111079079B (en) Data correction method, device, electronic equipment and computer readable storage medium
KR20210089602A (en) Method and device for controlling vehicle, and vehicle
US20210239491A1 (en) Method and apparatus for generating information
CN112147632A (en) Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
CN111311743B (en) Three-dimensional reconstruction precision testing method and device and electronic equipment
CN110794844A (en) Automatic driving method, device, electronic equipment and readable storage medium
CN111784834A (en) Point cloud map generation method and device and electronic equipment
CN111767853A (en) Lane line detection method and device
CN112581533B (en) Positioning method, positioning device, electronic equipment and storage medium
CN112101209A (en) Method and apparatus for determining a world coordinate point cloud for roadside computing devices
CN112330815A (en) Three-dimensional point cloud data processing method, device and equipment based on obstacle fusion
CN111721281A (en) Position identification method and device and electronic equipment
CN111597987B (en) Method, apparatus, device and storage medium for generating information
CN111260722B (en) Vehicle positioning method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant