CN111260722B - Vehicle positioning method, device and storage medium - Google Patents

Vehicle positioning method, device and storage medium

Info

Publication number
CN111260722B
CN111260722B (application CN202010050505.4A)
Authority
CN
China
Prior art keywords
reference object
accuracy
influence
influence factor
factors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010050505.4A
Other languages
Chinese (zh)
Other versions
CN111260722A (en)
Inventor
裴新欣 (Pei Xinxin)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010050505.4A
Publication of CN111260722A
Application granted
Publication of CN111260722B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30236 - Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a vehicle positioning method, device and storage medium, relating to the field of automatic driving. The specific implementation scheme is as follows: obtain the accuracy of the position of a reference object (a road element) on the road; and position the vehicle according to both the position of the reference object and the accuracy of that position. The method improves the accuracy of vehicle positioning.

Description

Vehicle positioning method, device and storage medium
Technical Field
The present application relates to the field of autopilot technology, and in particular, to a method and apparatus for locating a vehicle during autopilot, and a storage medium.
Background
Compared with a traditional map, a high-precision map provides high-precision three-dimensional information and accurate road geometry, such as grade, curvature and elevation, as well as detailed road elements such as lane lines and traffic signs, and can therefore provide a solid foundation for an assisted or automated driving system. At present, most high-precision maps are collected and generated with professional acquisition equipment. Because such equipment is expensive, the freshness of high-precision maps is limited. An automated driving system must perceive the environment in real time, and one purpose of the high-precision map is to compensate for the shortcomings of the perception system by providing prior information; if the prior is not accurate enough, the output of the perception system is affected, which in turn affects subsequent decision-making and planning and thus the state of the whole automated driving system. By contrast, collecting data with crowdsourcing devices and transmitting it back to the cloud for processing can greatly shorten the map update cycle.
Current crowdsourcing update algorithms are generally implemented by capturing road elements with a camera and either performing simple processing on the device or transmitting the video directly back to the cloud for post-processing. The processing is typically done with techniques such as simultaneous localization and mapping (SLAM) or three-dimensional reconstruction (structure from motion, SFM). If the accuracy of the elements in the high-precision map is low, the self-positioning of the automated driving vehicle is affected.
Disclosure of Invention
The application provides a vehicle positioning method, device and storage medium, which improve the accuracy of vehicle positioning.
A first aspect of the present application provides a vehicle positioning method, including:
obtaining the accuracy of the position of the reference object on the road;
and positioning the vehicle according to the accuracy of the position of the reference object and the position of the reference object.
In the above scheme, the accuracy of the position of the reference object on the road is obtained, and the vehicle is positioned according to both the position of the reference object and the accuracy of that position; because the accuracy of the position is taken into account, the positioning accuracy is high.
In one possible implementation manner, before the obtaining the accuracy of the position of the reference object, the method includes:
acquiring an influence factor for at least one influencing factor, where an influencing factor is a factor that affects the accuracy of the position of the reference object;
and determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor.
In the above embodiment, the influence factors are obtained by analyzing the factors that affect the accuracy of the position of the reference object, and the accuracy of the position is then determined from the influence factors and the accuracies of the influencing factors, so the result is accurate.
In one possible implementation manner, the acquiring the influence factor of the at least one influence factor includes:
and acquiring the influence factors of the influence factors according to the shooting parameters of the image acquisition equipment and the position of the reference object.
In one possible implementation, the shooting parameters include at least one of the following: the focal length, distortion model, distortion coefficient and frame rate of the image acquisition device, the motion speed, and the number of image frames in which the reference object is visible.
In one possible implementation manner, the obtaining the influence factor of each influence factor includes:
acquiring influence factors of the influence factors according to shooting parameters of the image acquisition equipment, the positions of the reference objects and preset corresponding relations; the corresponding relation is the corresponding relation between the shooting parameters of the image acquisition equipment and the position of the reference object and the influence factors.
In the specific embodiment, the influence factors of the influence factors are obtained through the preset corresponding relation, so that the operation is simple and the efficiency is high.
In one possible implementation manner, if the number of the influencing factors is at least two, the determining the accuracy of the position of the reference object includes:
weighting the influence factors of the influencing factors and the accuracies of the influencing factors to determine the accuracy of the position of the reference object; the weight corresponding to the influence factor of each influencing factor is determined according to the back-projection error of the reference object in the image.
In the above embodiment, the influence factors of the influence factors are obtained by analyzing the influence factors of the accuracy of the position of the reference object, and then the accuracy of the position of the reference object is determined by adopting a weighting processing mode according to the influence factors of the influence factors and the accuracy of the influence factors, so that the implementation is simple and the result is accurate.
In one possible implementation manner, before the obtaining the influence factor of the at least one influence factor, the method further includes:
determining the influencing factors according to a camera imaging model; the influencing factors comprise at least one of the following: the focal length of the image acquisition device, the pixel point position of the reference object in the image and the distortion coefficient of the image acquisition device.
In one possible implementation manner, the positioning the vehicle according to the accuracy of the position of the reference object and the position of the reference object includes:
positioning the vehicle according to the accuracy of the positions of at least two references, the positions of the at least two references and the weights corresponding to the references; the weight corresponding to the reference object is determined according to the accuracy of the position of the reference object.
In the specific embodiment, the vehicle is positioned by the precision of the positions of the at least two reference objects, the positions of the at least two reference objects and the weights corresponding to the reference objects, so that the implementation is simple, and the positioning result is accurate.
In one possible implementation, the lower the accuracy of the position of the reference object, the lower the weight corresponding to the reference object.
In one possible implementation manner, after determining the accuracy of the position of the reference object, the method further includes:
updating a map according to the accuracy of the position of the reference object and the position of the reference object;
correspondingly, the acquiring the accuracy of the position of the reference object comprises the following steps:
and acquiring the accuracy of the position of the reference object from the updated map.
In one possible implementation manner, before the positioning of the vehicle, the method further includes:
and acquiring the position of the reference object by utilizing a three-dimensional reconstruction algorithm according to the image data of the reference object.
In the specific embodiment, after the accuracy of the position of the reference object is obtained, the map can be updated, so that the real-time performance is good, the map data is accurate, and the reference value is improved.
A second aspect of the present application provides a vehicle positioning device, comprising:
the acquisition module is used for acquiring the accuracy of the position of the reference object on the road;
and the processing module is used for positioning the vehicle according to the accuracy of the position of the reference object and the position of the reference object.
A third aspect of the present application provides an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the first aspects of the present application.
A fourth aspect of the present application provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of the first aspects of the present application.
A fifth aspect of the present application provides a data processing method, including:
acquiring an influence factor for at least one influencing factor, where an influencing factor is a factor that affects the accuracy of the position of the reference object;
and determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor.
One embodiment of the above application has the following advantages or benefits: the accuracy of the position of the reference object on the road is obtained, and the vehicle is positioned according to both the position of the reference object and the accuracy of that position. Because the accuracy of the position is taken into account, this overcomes the problem in the related art that low element accuracy in the high-precision map degrades the self-positioning of the automated driving vehicle, and the positioning accuracy is higher.
Other effects of the above alternative will be described below in connection with specific embodiments.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is an application scenario diagram of an embodiment of a method of the present application;
FIG. 2 is a flow chart of an embodiment of a vehicle positioning method of the present application;
FIG. 3 is a schematic diagram of an embodiment of the method of the present application;
FIG. 4 is a schematic diagram of another embodiment of the method of the present application;
FIG. 5 is a schematic view of a focal length influence factor according to an embodiment of the method of the present application;
FIG. 6 is a schematic diagram of an influence factor of pixel location according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the influence factor of the distortion coefficient k1 according to an embodiment of the method of the present application;
FIG. 8 is a graph showing the influence factors of the distortion coefficient k2 according to one embodiment of the method of the present application;
FIG. 9 is a schematic diagram of a further embodiment of the method of the present application;
FIG. 10 is a block diagram of an apparatus for implementing vehicle positioning in accordance with an embodiment of the present application;
fig. 11 is a block diagram of an electronic device in which embodiments of the present application may be implemented.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Before describing the method provided in the present application, an application scenario of an embodiment of the present application is first described with reference to fig. 1. Fig. 1 is an application scenario architecture diagram provided in an embodiment of the present application. Optionally, as shown in fig. 1, the application scenario includes a crowdsourcing device 11, an electronic device 12 and a server 13. The crowdsourcing device 11 may include, for example, an image acquisition device (e.g., an onboard camera). There may be one or more crowdsourcing devices; fig. 1 takes one as an example. The crowdsourcing device may be a user's terminal device, a vehicle navigation device, or the like, and the electronic device may be, for example, a computer or an in-vehicle device.
The crowdsourcing device 11 and the electronic device 12, and the electronic device 12 and the server 13 may be connected through a network.
When the map is updated, the crowd-sourced equipment is used for collecting data and transmitting the data back to the cloud for processing, so that the map updating period can be greatly shortened.
The current crowdsourcing updating algorithm is generally completed by shooting road elements through a camera, and performing simple processing on crowdsourcing equipment or directly transmitting video back to a cloud for post-processing. The processing is generally accomplished by techniques such as real-time localization and mapping (simultaneous localization and mapping, SLAM) or three-dimensional reconstruction (structure from motion, SFM).
If only the reconstruction of the elements in the map is completed and the accuracy of the reconstructed elements cannot be estimated or predicted, directly updating those elements into the high-precision map can degrade the quality of the map data and bring trouble and risk to subsequent use of the map. Moreover, if the accuracy of the elements in the high-precision map is low, the self-positioning of the automated driving vehicle is affected.
Therefore, in the method of the present application, after a road element is reconstructed, the accuracy of its position can also be obtained, and when the vehicle is positioned, the accuracy of the position of the reference object (i.e. the reconstructed road element) is considered in addition to the position itself, which improves the positioning accuracy.
The method provided by the present application can be realized by an electronic device, such as a processor, executing the corresponding software code, or by the electronic device executing the corresponding software code while exchanging data with a server.
The technical scheme of the present application is described in detail below with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. For ease of understanding, the specific examples in the following embodiments are described in the context of automatic driving, but the application is not limited to this scenario.
Fig. 2 is a flow chart of a vehicle positioning method according to an embodiment of the present application. As shown in fig. 2, the method provided in this embodiment includes the following steps:
s101, obtaining the accuracy of the position of a reference object on a road;
wherein the reference object is for example a lane line on a road, a traffic sign or the like.
In an embodiment, the accuracy of the position of the reference object on the road may be obtained from the high-precision map; that is, the high-precision map may provide the accuracy of the position of each road element, i.e. the error of the position. For example, the position (a, b, c) of a three-dimensional point of a road element may have an error of 10 cm.
In one embodiment, the position of the reference object may be acquired using a three-dimensional reconstruction algorithm based on the image data of the reference object.
As shown in fig. 3, the three-dimensional points of the reference object are reconstructed from N images captured by an image acquisition device on the vehicle, using a three-dimensional reconstruction algorithm such as SFM, to obtain the position (x, y, z) of the reference object.
Optionally, the three-dimensional point of the reference object may be reconstructed through images acquired by the crowdsourcing devices on the plurality of vehicles, or the position of the reference object may be updated according to the position data of the reference object reconstructed by the crowdsourcing devices on the plurality of vehicles.
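The reconstruction step above can be sketched as a linear triangulation under the straight-line motion model of the later embodiments (camera moving along its optical axis with spacing l between exposures). The intrinsic parameters, frame count and the noise-free synthetic observations below are illustrative assumptions, not values from this application:

```python
import numpy as np

def triangulate(obs, fx, fy, cx, cy, l):
    """Recover (x, y, z) of a point seen in N images taken at spacing l.

    obs: list of (u_i, v_i) pixel observations, with i = 0 at the last
    (world-frame) camera.  Uses the projection
        u_i = fx*x/(z + i*l) + cx,   v_i = fy*y/(z + i*l) + cy,
    rearranged into a linear system in (x, y, z).
    """
    A, b = [], []
    for i, (u, v) in enumerate(obs):
        du, dv = u - cx, v - cy
        # fx*x - du*z = du*i*l  and  fy*y - dv*z = dv*i*l
        A.append([fx, 0.0, -du]); b.append(du * i * l)
        A.append([0.0, fy, -dv]); b.append(dv * i * l)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # (x, y, z)

# Synthetic check: project a known point, then recover it.
fx = fy = 2000.0; cx, cy = 640.0, 360.0
l = 25.0 / 30.0  # ~90 km/h at 30 Hz
x_true, y_true, z_true = 8.0, 4.0, 20.0
obs = [(fx * x_true / (z_true + i * l) + cx,
        fy * y_true / (z_true + i * l) + cy) for i in range(12)]
x, y, z = triangulate(obs, fx, fy, cx, cy, l)
```

With exact observations the least-squares solve recovers the ground-truth point; with noisy pixels the sensitivity of this same system is what the influence factors below quantify.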
S102, positioning the vehicle according to the accuracy of the position of the reference object and the position of the reference object.
When the automated driving vehicle performs self-positioning, different weights can be used for different observation targets (i.e. different reference objects) according to the acquired accuracy information of the reference objects on the road, so that high-precision positioning is achieved while positioning robustness is ensured.
For example, as shown in fig. 4, the vehicle perceives 5 references in its surroundings at the same time, where references 1, 2 and 3 have an accuracy of 10 cm and references 4 and 5 have an accuracy of 50 cm.
When the vehicle performs self-positioning using these references, the weights of the observations of references 4 and 5 are reduced.
The method of this embodiment obtains the accuracy of the position of the reference object on the road and positions the vehicle according to both the position of the reference object and the accuracy of that position. Because the accuracy of the position is taken into account, the problem in the related art that low element accuracy in the high-precision map degrades the self-positioning of the automated driving vehicle is overcome, and the positioning accuracy is higher.
In one embodiment, the accuracy of the position of the reference object may be determined by:
acquiring an influence factor for at least one influencing factor, where an influencing factor is a factor that affects the accuracy of the position of the reference object;
and determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor.
In one embodiment, the influencing factors on the accuracy of the position of the reference object are determined according to the camera imaging model; the influencing factors include at least one of the following: the focal length of the image acquisition device, the pixel position of the reference object in the image, and the distortion coefficient of the image acquisition device.
In general, a crowdsourcing update is completed over a distance in the range of 200-400 meters, and on most highways or ring roads the road is essentially straight over such a distance. The embodiments of the present application therefore model the motion as a straight line.
In an alternative embodiment, as shown in fig. 3, the last image acquisition device (e.g. camera) that can see the reference object (i.e. the three-dimensional point to be reconstructed), namely the rightmost device in fig. 3, is taken as the world coordinate system. According to the pinhole imaging principle, the projection of the three-dimensional point (x, y, z) into the i-th image can be written as

u_i = f_x · x / (z + i·l) + c_x,   v_i = f_y · y / (z + i·l) + c_y   (1)

where (u_0, v_0) represents the pixel coordinates of the center of the image plane, c_x and c_y represent the offsets of the optical axis in the image plane, f_x and f_y represent the focal lengths in the horizontal and vertical directions, and Z_c = z + i·l is the z-coordinate value in the camera coordinate system. i denotes the i-th image, i.e. the image acquired by the vehicle (image acquisition device) at the i-th position, and l denotes the spacing between adjacent acquisition positions. In fig. 3, i increases from right to left.
Rearranging the projection equation and solving gives the three-dimensional coordinates in terms of the pixel observations (equations (2) and (3)); when three-dimensional reconstruction is performed using N images, stacking these relations over all images yields equation (4).
it can be seen that the factors that affect the result of the final three-dimensional reconstruction (i.e. the position of the reference) are: focal length, pixel location accuracy (distortion model and distortion coefficients may further be considered).
In one embodiment, the focal-length influence factor may be derived as follows.
Taking the partial derivatives of equation (4) above with respect to f_x and f_y and writing the result in matrix form gives:

df = G dX

where df = [df_x … df_x df_y … df_y]^T contains N entries df_x and N entries df_y, dX = [dx dy dz dl]^T, and G is a 2N×4 matrix.
Thus dX = (G^T G)^{-1} G^T df,
and the focal-length influence factor fDOP is defined from the magnitude of (G^T G)^{-1} G^T; that is, fDOP is the factor by which an error in the focal length propagates into the error of the position of the finally reconstructed three-dimensional point.
In an embodiment, the influence factor of the pixel position may be obtained as follows.
If an accurate distortion coefficient is available, the error of the pixel position in the corresponding image can be regarded as Gaussian white noise.
Taking the partial derivatives of equation (4) with respect to u and v and writing the result in matrix form gives:

du = H dX

where du = [du_0 … du_{N-1} dv_0 … dv_{N-1}]^T and dX = [dx dy dz dl]^T.
Thus dX = (H^T H)^{-1} H^T du,
and the pixel-position influence factor uDOP is defined from the magnitude of (H^T H)^{-1} H^T; that is, uDOP is the factor by which a pixel error in the position of the pixel point in the image propagates into the error of the finally reconstructed three-dimensional point.
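The relation dX = (H^T H)^{-1} H^T du can be evaluated numerically. The sketch below builds the Jacobian H of the N projections with respect to (x, y, z, l) under the same pinhole model; the scalarization of (H^T H)^{-1} H^T into a single uDOP value (spectral norm of its position rows) is an assumption, since this application does not spell out the norm, and the parameter values are illustrative:

```python
import numpy as np

def pixel_dop(x, y, z, l, fx, fy, n):
    """uDOP-style amplification of pixel error into 3-D position error.

    Builds the Jacobian H of the N projections (u_i, v_i) with respect
    to dX = (x, y, z, l), then forms (H^T H)^{-1} H^T as in the text.
    The returned scalar (spectral norm of the position rows) is one
    plausible scalarization, not the patent's exact definition.
    """
    rows = []
    for i in range(n):
        d = z + i * l  # depth of the point in camera i
        rows.append([fx / d, 0.0, -fx * x / d**2, -fx * x * i / d**2])
        rows.append([0.0, fy / d, -fy * y / d**2, -fy * y * i / d**2])
    H = np.array(rows)
    Hp = np.linalg.inv(H.T @ H) @ H.T      # dX = Hp @ du
    return np.linalg.norm(Hp[:3], ord=2)   # position components only

udop = pixel_dop(x=8.0, y=4.0, z=20.0, l=25.0 / 30.0,
                 fx=2000.0, fy=2000.0, n=12)
```

A larger udop means the same pixel noise produces a larger error in the reconstructed point, which is exactly how the DOP factors are used in equation (10) below.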
In one embodiment, if a fully accurate distortion coefficient is not available (for example, when the calibration of other devices of the same hardware model is used instead of, or as an approximation of, the device in question), the generic OpenCV distortion model can be used.
In general, the OpenCV model is:

x' = x(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2·p1·x·y + p2·(r² + 2x²)
y' = y(1 + k1·r² + k2·r⁴ + k3·r⁶) + p1·(r² + 2y²) + 2·p2·x·y

with r² = x² + y² in normalized camera coordinates, followed by the pixel mapping u' = f_x·x' + c_x, v' = f_y·y' + c_y, where u', v' are the pixel coordinates after applying the model. In general, k3, p1 and p2 are close to zero, and in the present embodiment they are assumed to be zero, which simplifies the model to:

x' = x(1 + k1·r² + k2·r⁴),   y' = y(1 + k1·r² + k2·r⁴)
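The simplified radial model can be written out directly. The sketch below follows the standard OpenCV radial-distortion convention in normalized camera coordinates with k3 = p1 = p2 = 0; the coefficient values and the test point are illustrative assumptions:

```python
def distort(xn, yn, k1, k2):
    """Apply the simplified radial model x' = x*(1 + k1*r^2 + k2*r^4)
    to normalized camera coordinates (xn, yn); k3, p1, p2 taken as zero."""
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * scale, yn * scale

def to_pixels(xn, yn, fx, fy, cx, cy):
    """Map distorted normalized coordinates to pixel coordinates (u', v')."""
    return fx * xn + cx, fy * yn + cy

# A point at normalized (0.4, 0.2) with mild barrel distortion.
xd, yd = distort(0.4, 0.2, k1=-0.1, k2=0.01)
u, v = to_pixels(xd, yd, fx=2000.0, fy=2000.0, cx=640.0, cy=360.0)
```

An error in k1 or k2 shifts (u, v), and the derivation that follows measures how that shift propagates into the reconstructed point.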
the calculation can be as follows:
can be calculated according to the above formula (9)
Definition of the definitiondk 1 =FdX
To obtain dX= (F) T F) -1 F T dk 1
The influence factor of the distortion coefficient k1 is thus defined as
The influence factor k2DOP of the distortion coefficient k2 can be obtained in a similar manner to k1, and will not be described here.
Further, the accuracy of the finally reconstructed three-dimensional point can be obtained from the influence factors and the accuracies of the influencing factors, for example:

σ_3D = fDOP·σ_f + k1DOP·σ_k1 + k2DOP·σ_k2 + uDOP·σ_u   (10)
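Equation (10) is a plain weighted sum of the influence factors and the accuracies of the influencing factors. A minimal sketch, using the example factor values quoted for figs. 5-8 below (x = 8 m, y = 4 m) together with the error magnitudes given there:

```python
def position_error(dops, sigmas):
    """Equation (10): sigma_3D = sum of DOP_j * sigma_j over the factors."""
    return sum(d * s for d, s in zip(dops, sigmas))

# fDOP, k1DOP, k2DOP, uDOP and the matching error magnitudes from the
# worked examples of figs. 5-8 (three-dimensional point at x=8 m, y=4 m).
sigma_3d = position_error(dops=[0.002, 0.9, 0.7, 0.005],
                          sigmas=[10.0, 0.1, 0.2, 10.0])
# 0.02 + 0.09 + 0.14 + 0.05 = 0.30 m
```

The per-factor weighting variant described next simply multiplies each term by an additional weight before summing.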
in other embodiments, if the number of influencing factors is at least two, the accuracy of determining the position of the reference object may be determined by:
weighting the influence factors of the influencing factors and the accuracies of the influencing factors to determine the accuracy of the position of the reference object; the weight corresponding to the influence factor of each influencing factor is determined according to the back-projection error of the reference object in the image.
Specifically, based on formula (10), each influence factor can be multiplied by a different weight, and the weights can be obtained from the back-projection errors of the three-dimensional point in the corresponding images.
In this embodiment, the influence factors are obtained by analyzing the factors that affect the accuracy of the position of the reference object, and the accuracy of the position is then determined from the influence factors and the accuracies of the influencing factors, which is simple to implement and gives an accurate result.
In summary, the influence factor of each influencing factor can be calculated from the shooting parameters of the image acquisition device and the position of the reference object, where the shooting parameters include the focal length, distortion model, distortion coefficients, frame rate, motion speed, and the number of image frames in which the reference object is visible.
Fig. 5 shows the fDOP values for different three-dimensional point positions under the following conditions: the reference object is visible in 12 frames, the motion speed is v = 90 km/h, the frame rate is 30 Hz, and the focal length is f_x = f_y = 2000. For example, when the three-dimensional point is at x = 8 m and y = 4 m, fDOP, i.e. the influence factor of the focal length, is 0.002; if the error of the focal length is, say, 10, the resulting error in the position of the three-dimensional point is 0.02 m.
As shown in fig. 6, the uDOP values for different three-dimensional point positions are shown. For example, when the three-dimensional point is at x = 8 m and y = 4 m, uDOP, i.e. the influence factor of the pixel position, is 0.005; if the error of the pixel position is, say, 10 pixels, the resulting error in the position of the three-dimensional point is 0.05 m.
As shown in fig. 7, the k1DOP values for different three-dimensional point positions are shown. For example, when the three-dimensional point is at x = 8 m and y = 4 m, k1DOP, i.e. the influence factor of the distortion coefficient k1, is 0.9; if the error of k1 is, say, 0.1, the resulting error in the position of the three-dimensional point is 0.09 m.
As shown in fig. 8, the k2DOP values for different three-dimensional point positions are shown. For example, when the three-dimensional point is at x = 8 m and y = 4 m, k2DOP, i.e. the influence factor of the distortion coefficient k2, is 0.7; if the error of k2 is, say, 0.2, the resulting error in the position of the three-dimensional point is 0.14 m.
In an embodiment, when the accuracy of the position of a reconstructed three-dimensional point needs to be determined by means of the influence factors, the influence factors can either be calculated with the above formulas from the position of the reference object (i.e. the position obtained after reconstruction) and the shooting parameters of the image acquisition device, or be looked up in a precomputed correspondence table. The correspondence table records the correspondence between the shooting parameters of the image acquisition device, the position of the reference object, and the influence factors.
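The table-lookup variant can be sketched as a dictionary keyed on quantized shooting parameters and reference position. The key layout, the 1 m quantization and all entries are illustrative assumptions; a real table would be precomputed from the formulas above:

```python
# A precomputed table mapping (focal length, frame count, x bucket, y bucket)
# to an fDOP value, standing in for the correspondence table in the text.
# All entries are illustrative, not values from this application.
FDOP_TABLE = {
    (2000, 12, 8, 4): 0.002,
    (2000, 12, 16, 4): 0.004,
}

def lookup_fdop(focal, n_frames, x, y, table=FDOP_TABLE):
    """Quantize the query to 1 m buckets and look up the influence factor."""
    key = (int(focal), int(n_frames), round(x), round(y))
    return table.get(key)  # None when that cell was never precomputed

fdop = lookup_fdop(2000.0, 12, 8.2, 3.9)  # falls in the (8, 4) bucket
```

Looking up a precomputed cell avoids re-solving the least-squares systems on the device, which matches the "simple operation and high efficiency" claim of the table embodiment.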
On the basis of the above embodiment, further, S102 may be implemented as follows:
positioning the vehicle according to the accuracy of the positions of the at least two references, the positions of the at least two references and the weights corresponding to the references; the weight corresponding to the reference object is determined according to the accuracy of the position of the reference object.
The lower the accuracy of the position of the reference object, the lower the weight corresponding to the reference object.
Specifically, different weights can be used for different observation targets (namely different reference objects) of the vehicle according to the acquired precision information of the reference objects on the road, so that high-precision positioning is realized, and meanwhile, the positioning robustness is ensured.
For example, as shown in fig. 4, the vehicle perceives 5 references in its surroundings at the same time, where references 1, 2 and 3 have an accuracy of 10 cm and references 4 and 5 have an accuracy of 50 cm.
Then, when the vehicle performs self-positioning with these references, the weights of the observations of references 1, 2 and 3 are greater than the weights of the observations of references 4 and 5.
In this embodiment, the vehicle is positioned using the accuracies of the positions of the at least two references, the positions of the at least two references, and the weights corresponding to the references, which is simple to implement and yields an accurate positioning result.
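The accuracy-weighted positioning described above can be sketched as follows. The inverse-error weighting scheme and the assumption that each reference yields an independent vehicle-position estimate are illustrative choices, since the embodiment does not fix a formula:

```python
def weighted_position(estimates, accuracies):
    """Fuse per-reference vehicle position estimates into one position.

    estimates  -- list of (x, y) vehicle positions, one per reference
    accuracies -- list of position errors in metres (smaller = better);
                  a reference with lower accuracy (larger error) gets a
                  lower weight, as described in the text.
    """
    weights = [1.0 / a for a in accuracies]
    total = sum(weights)
    x = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    y = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return x, y

# Fig. 4 example: references 1-3 accurate to 0.1 m, references 4-5 to 0.5 m
est = [(10.0, 5.0), (10.2, 5.1), (9.9, 4.9), (11.0, 6.0), (9.0, 4.0)]
acc = [0.1, 0.1, 0.1, 0.5, 0.5]
pos = weighted_position(est, acc)
```

The less accurate references 4 and 5 still contribute (robustness), but their pull on the fused result is five times weaker than that of references 1, 2 and 3.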
In one embodiment, after determining the accuracy of the position of the reference object, the method further includes:
updating the map according to the accuracy of the position of the reference object and the position of the reference object;
correspondingly, acquiring the accuracy of the position of the reference object comprises:
and acquiring the accuracy of the position of the reference object from the updated map.
Specifically, after the accuracy of the position of the reference object is obtained, by calculation from the formula or by looking up the correspondence table, the map may be updated according to that accuracy: the position of the reference object in the map data is updated and its accuracy is added. In other embodiments, whether to update the position of the reference object in the map may itself be determined according to the accuracy; for example, when the accuracy is low, i.e. when the error is large, the update may be skipped.
Alternatively, the final position may be obtained by further processing the positions of the reference objects provided by crowd-sourcing devices together with the accuracies of those positions, for example through weighting, or through a deep learning model or the like.
In this embodiment, the map can be updated once the accuracy of the position of the reference object is obtained, so the map stays up to date, the map data is accurate, and its reference value is improved.
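One way to sketch this accuracy-gated map update and the crowd-sourced weighted fusion; the threshold value, the one-dimensional positions, and the data layout are assumptions made for illustration:

```python
MAX_ERROR_M = 0.3  # assumed threshold: skip updates with a larger error

def fuse_crowd_reports(reports):
    """Fuse crowd-sourced (position, error) reports for one reference
    into a single map position, weighting each report by inverse error."""
    weights = [1.0 / err for _, err in reports]
    total = sum(weights)
    return sum(w * p for w, (p, _) in zip(weights, reports)) / total

def maybe_update_map(map_data, ref_id, position, error):
    """Update the reference position and record its accuracy, but only
    when the accuracy is good enough (error below the threshold)."""
    if error > MAX_ERROR_M:
        return False  # accuracy too low, i.e. error too large: keep old entry
    map_data[ref_id] = {"position": position, "error": error}
    return True
```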
As shown in fig. 9, in this embodiment, the three-dimensional points of the reference object on the road are reconstructed from images acquired by the vehicle's image acquisition device, for example by SFM three-dimensional reconstruction, to obtain the positions of the reconstructed three-dimensional points. The accuracy of the positions of the three-dimensional points is predicted by calculating fDOP, uDOP, k1DOP and k2DOP with the method of the foregoing embodiments, and the predicted accuracy value is obtained from fDOP, uDOP, k1DOP and k2DOP, for example by weighting, where the weights can be obtained from the back-projection error.
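The combination step can be sketched as a weighted sum of per-factor error contributions, with weights normalised from back-projection errors. The inverse normalisation is an assumption; the text only states that the weights come from the back-projection error:

```python
def predict_position_accuracy(dops, param_errors, backproj_errors):
    """Combine per-factor error contributions into one accuracy value.

    dops            -- influence factors, e.g. [fDOP, uDOP, k1DOP, k2DOP]
    param_errors    -- errors of the corresponding shooting parameters
    backproj_errors -- back-projection error associated with each factor;
                       weights are normalised so that a larger
                       back-projection error gives a smaller weight
                       (an assumed scheme).
    """
    raw = [1.0 / b for b in backproj_errors]
    total = sum(raw)
    weights = [r / total for r in raw]
    return sum(w * d * e for w, d, e in zip(weights, dops, param_errors))
```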
Fig. 10 is a block diagram of an apparatus for achieving vehicle positioning according to an embodiment of the present application. As shown in fig. 10, the vehicle positioning device 100 provided in the present embodiment includes:
an acquisition module 1001 for acquiring accuracy of a position of a reference object on a road;
the processing module 1002 is configured to locate a vehicle according to the accuracy of the position of the reference object and the position of the reference object.
In one possible implementation, the processing module 1002 is configured to:
acquiring an influence factor of at least one influencing factor, wherein an influencing factor is a factor that affects the accuracy of the position of the reference object;
and determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor.
In one possible implementation, the processing module 1002 is configured to:
and acquiring the influence factor of each influencing factor according to the shooting parameters of the image acquisition device and the position of the reference object.
In one possible implementation, the shooting parameters include at least one of the following: the focal length, distortion model, distortion coefficient, frame rate and motion speed of the image acquisition device, and the number of image frames in which the reference object is visible.
In one possible implementation, the processing module 1002 is configured to:
acquiring the influence factor of each influencing factor according to the shooting parameters of the image acquisition device, the position of the reference object, and a preset correspondence; the correspondence maps the shooting parameters of the image acquisition device and the position of the reference object to the influence factors.
In one possible implementation, if the number of the influencing factors is at least two, the processing module 1002 is configured to:
weighting the influence factor of each influencing factor and the accuracy of each influencing factor to determine the accuracy of the position of the reference object; the weight corresponding to the influence factor of each influencing factor is determined according to the back-projection error of the reference object in the image.
In one possible implementation, the processing module 1002 is configured to:
determining the influencing factors according to a camera imaging model; the influencing factors comprise at least one of the following: the focal length of the image acquisition device, the pixel point position of the reference object in the image and the distortion coefficient of the image acquisition device.
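A sketch of a camera imaging model from which these influencing factors arise, using a pinhole projection with radial distortion coefficients k1 and k2; this is a standard model assumed for illustration, since the patent does not spell out its exact distortion model here:

```python
def project(point_3d, fx, fy, cx, cy, k1, k2):
    """Project a 3D point in camera coordinates to a pixel position
    using a pinhole model with radial distortion (k1, k2).

    Each input (focal length, resulting pixel position, distortion
    coefficients) is one of the influencing factors listed in the text:
    an error in any of them propagates into the reconstructed 3D position.
    """
    x, y, z = point_3d
    xn, yn = x / z, y / z                # normalised image coordinates
    r2 = xn * xn + yn * yn
    d = 1.0 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    u = fx * xn * d + cx
    v = fy * yn * d + cy
    return u, v
```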
In one possible implementation, the processing module 1002 is configured to:
positioning the vehicle according to the accuracy of the positions of at least two references, the positions of the at least two references and the weights corresponding to the references; the weight corresponding to the reference object is determined according to the accuracy of the position of the reference object.
In one possible implementation, the lower the accuracy of the position of the reference object, the lower the weight corresponding to the reference object.
In one possible implementation, the processing module 1002 is configured to:
updating a map according to the accuracy of the position of the reference object and the position of the reference object;
correspondingly, the obtaining module 1001 is configured to:
and acquiring the accuracy of the position of the reference object from the updated map.
In one possible implementation, the processing module 1002 is configured to:
and acquiring the position of the reference object by utilizing a three-dimensional reconstruction algorithm according to the image data of the reference object.
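A minimal sketch of the triangulation at the heart of such a three-dimensional reconstruction, using the linear (DLT) method with known 3×4 projection matrices; the known matrices are an assumption for illustration, since a full SFM pipeline also estimates the camera poses:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Triangulate one 3D point from two views by the linear (DLT) method.

    P1, P2   -- 3x4 camera projection matrices
    uv1, uv2 -- the reference object's pixel position in each image
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X; stack them and take the SVD null vector.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenise
```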
The vehicle positioning device provided in the embodiment of the present application may execute the technical solution in any of the above method embodiments, and its implementation principle and technical effect are similar, and will not be repeated here.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
Fig. 11 is a block diagram of an electronic device for the vehicle positioning method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 11, the electronic device includes: one or more processors 1101, memory 1102, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). In fig. 11, one processor 1101 is taken as an example.
Memory 1102 is a non-transitory computer-readable storage medium provided herein. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of vehicle positioning provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of vehicle positioning provided herein.
The memory 1102 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the acquisition module 1001, the processing module 1002 shown in fig. 10) corresponding to the method of vehicle positioning in the embodiments of the present application. The processor 1101 executes various functional applications of the server and data processing, i.e., a method of achieving vehicle positioning in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 1102.
Memory 1102 may include a storage program area and a storage data area; the storage program area may store an operating system and at least one application program required for functionality, and the storage data area may store data created according to the use of the electronic device for vehicle positioning, etc. In addition, memory 1102 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 1102 optionally includes memory located remotely from processor 1101, which may be connected to the electronic device for vehicle positioning via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of vehicle positioning may further include: an input device 1103 and an output device 1104. The processor 1101, memory 1102, input device 1103 and output device 1104 may be connected by a bus or other means, for example in fig. 11.
The input device 1103 may receive input numeric or character information, as well as generate key signal inputs related to user settings and function control of the vehicle-positioned electronic device, such as a touch screen, keypad, mouse, trackpad, touchpad, pointer stick, one or more mouse buttons, trackball, joystick, and like input devices. The output device 1104 may include a display device, auxiliary lighting (e.g., LEDs), and haptic feedback (e.g., a vibration motor), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, the accuracy of the position of the reference object on the road is obtained, and the vehicle is positioned according to that accuracy and the position of the reference object; because the accuracy of the position of the reference object is taken into account, the positioning accuracy is high.
An embodiment of the present application further provides a data processing method, including:
acquiring an influence factor of at least one influencing factor, wherein an influencing factor is a factor that affects the accuracy of the position of the reference object;
and determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor.
The method of this embodiment is similar to the implementation principle and technical effect of the method in the foregoing embodiment, and will not be described herein.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (12)

1. A vehicle positioning method, characterized by comprising:
obtaining the accuracy of the position of the reference object on the road;
positioning a vehicle according to the accuracy of the position of the reference object and the position of the reference object;
the positioning the vehicle according to the accuracy of the position of the reference object and the position of the reference object comprises the following steps:
positioning the vehicle according to the accuracy of the positions of at least two references, the positions of the at least two references and the weights corresponding to the references; the weight corresponding to the reference object is determined according to the accuracy of the position of the reference object;
before the accuracy of the position of the reference object is obtained, the method comprises the following steps:
acquiring an influence factor of at least one influencing factor, wherein an influencing factor is a factor that affects the accuracy of the position of the reference object; determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor;
if the number of the influencing factors is at least two, determining the accuracy of the position of the reference object includes:
weighting the influence factor of each influencing factor and the accuracy of each influencing factor to determine the accuracy of the position of the reference object; the weight corresponding to the influence factor of each influencing factor is determined according to the back-projection error of the reference object in the image.
2. The method of claim 1, wherein the obtaining the influence factor of the at least one influence factor comprises:
and acquiring the influence factor of each influencing factor according to the shooting parameters of the image acquisition device and the position of the reference object.
3. The method of claim 2, wherein the shooting parameters include at least one of the following: the focal length, distortion model, distortion coefficient, frame rate and motion speed of the image acquisition device, and the number of image frames in which the reference object is visible.
4. A method according to claim 2 or 3, wherein said obtaining the influence factor of each influencing factor comprises:
acquiring the influence factor of each influencing factor according to the shooting parameters of the image acquisition device, the position of the reference object, and a preset correspondence; the correspondence maps the shooting parameters of the image acquisition device and the position of the reference object to the influence factors.
5. A method according to any one of claims 1-3, characterized in that before said obtaining the influence factor of at least one influence factor, further comprises:
determining the influencing factors according to a camera imaging model; the influencing factors comprise at least one of the following: the focal length of the image acquisition device, the pixel point position of the reference object in the image and the distortion coefficient of the image acquisition device.
6. A method according to any one of claims 1-3, characterized in that the lower the accuracy of the position of the reference, the lower the weight the reference corresponds to.
7. A method according to any one of claims 1-3, wherein after said determining the accuracy of the position of the reference object, further comprising:
updating a map according to the accuracy of the position of the reference object and the position of the reference object;
correspondingly, the acquiring the accuracy of the position of the reference object comprises the following steps:
and acquiring the accuracy of the position of the reference object from the updated map.
8. A method according to any one of claims 1-3, wherein prior to locating the vehicle, further comprising:
and acquiring the position of the reference object by utilizing a three-dimensional reconstruction algorithm according to the image data of the reference object.
9. A vehicle positioning device, characterized by comprising:
the acquisition module is used for acquiring the accuracy of the position of the reference object on the road;
the processing module is used for positioning the vehicle according to the accuracy of the position of the reference object and the position of the reference object;
the processing module is specifically configured to:
positioning the vehicle according to the accuracy of the positions of at least two references, the positions of the at least two references and the weights corresponding to the references; the weight corresponding to the reference object is determined according to the accuracy of the position of the reference object;
the processing module is further configured to:
acquiring an influence factor of at least one influencing factor, wherein an influencing factor is a factor that affects the accuracy of the position of the reference object; determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor;
if the number of the influencing factors is at least two, the processing module is further configured to:
weight the influence factor of each influencing factor and the accuracy of each influencing factor to determine the accuracy of the position of the reference object; the weight corresponding to the influence factor of each influencing factor is determined according to the back-projection error of the reference object in the image.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
11. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-8.
12. A method of data processing, comprising:
acquiring an influence factor of at least one influencing factor, wherein an influencing factor is a factor that affects the accuracy of the position of a reference object;
determining the accuracy of the position of the reference object according to the influence factor of each influencing factor and the accuracy of each influencing factor, so as to position a vehicle according to the accuracies of the positions of at least two reference objects, the positions of the at least two reference objects, and the weights corresponding to the reference objects, wherein the weight corresponding to a reference object is determined according to the accuracy of the position of that reference object;
if the number of the influencing factors is at least two, determining the accuracy of the position of the reference object includes:
weighting the influence factor of each influencing factor and the accuracy of each influencing factor to determine the accuracy of the position of the reference object; the weight corresponding to the influence factor of each influencing factor is determined according to the back-projection error of the reference object in the image.
CN202010050505.4A 2020-01-17 2020-01-17 Vehicle positioning method, device and storage medium Active CN111260722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010050505.4A CN111260722B (en) 2020-01-17 2020-01-17 Vehicle positioning method, device and storage medium


Publications (2)

Publication Number Publication Date
CN111260722A CN111260722A (en) 2020-06-09
CN111260722B true CN111260722B (en) 2023-12-26

Family

ID=70950630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010050505.4A Active CN111260722B (en) 2020-01-17 2020-01-17 Vehicle positioning method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111260722B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708857B (en) * 2020-06-10 2023-10-03 北京百度网讯科技有限公司 Processing method, device, equipment and storage medium for high-precision map data
CN112157642B (en) * 2020-09-16 2022-08-23 上海电机学院 A unmanned robot that patrols and examines for electricity distribution room

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894366A (en) * 2009-05-21 2010-11-24 北京中星微电子有限公司 Method and device for acquiring calibration parameters and video monitoring system
CN103185578A (en) * 2011-12-30 2013-07-03 上海博泰悦臻电子设备制造有限公司 Mobile positioning device and navigation device
CN106842269A (en) * 2017-01-25 2017-06-13 北京经纬恒润科技有限公司 Localization method and system
CN206479647U (en) * 2017-01-25 2017-09-08 北京经纬恒润科技有限公司 Alignment system and automobile
CN107339996A (en) * 2017-06-30 2017-11-10 百度在线网络技术(北京)有限公司 Vehicle method for self-locating, device, equipment and storage medium
CN107643086A (en) * 2016-07-22 2018-01-30 北京四维图新科技股份有限公司 A kind of vehicle positioning method, apparatus and system
CN109815300A (en) * 2018-12-13 2019-05-28 北京邮电大学 A kind of vehicle positioning method
CN110287276A (en) * 2019-05-27 2019-09-27 百度在线网络技术(北京)有限公司 High-precision map updating method, device and storage medium
CN110595459A (en) * 2019-09-18 2019-12-20 百度在线网络技术(北京)有限公司 Vehicle positioning method, device, equipment and medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Jiali Bao et al..Vehicle self-localization using 3D building map and stereo camera.《2016 IEEE Intelligent Vehicles Symposium (IV)》.2016,第928-930页. *
李祎承.面向智能车的道路场景建模与高精度定位研究.《中国博士学位论文全文数据库 工程科技Ⅱ辑》.2019,(第7期),正文第60-71页. *
王博宇.动基座下双目鱼眼视觉系统目标定位算法研究.《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》.2019,(第8期),正文第7-18页. *
许宇能等.基于单目摄像头的车辆前方道路三维重建.《汽车安全》.2014,(第2期),第48-51页. *

Also Published As

Publication number Publication date
CN111260722A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
US11615605B2 (en) Vehicle information detection method, electronic device and storage medium
CN111274343B (en) Vehicle positioning method and device, electronic equipment and storage medium
CN111220154A (en) Vehicle positioning method, device, equipment and medium
CN111462029B (en) Visual point cloud and high-precision map fusion method and device and electronic equipment
CN108279670B (en) Method, apparatus and computer readable medium for adjusting point cloud data acquisition trajectory
CN111612852B (en) Method and apparatus for verifying camera parameters
US20220270289A1 (en) Method and apparatus for detecting vehicle pose
CN111553844B (en) Method and device for updating point cloud
CN111739005B (en) Image detection method, device, electronic equipment and storage medium
CN111767853B (en) Lane line detection method and device
US11587332B2 (en) Method, apparatus, system, and storage medium for calibrating exterior parameter of on-board camera
CN111401251B (en) Lane line extraction method, lane line extraction device, electronic equipment and computer readable storage medium
CN111079079B (en) Data correction method, device, electronic equipment and computer readable storage medium
CN113989450A (en) Image processing method, image processing apparatus, electronic device, and medium
JP7337121B2 (en) Roundabout navigation method, apparatus, device and storage medium
EP3886045A1 (en) Three-dimensional reconstruction method, three-dimensional reconstruction apparatus and storage medium
EP3904829B1 (en) Method and apparatus for generating information, device, medium and computer program product
CN112581533B (en) Positioning method, positioning device, electronic equipment and storage medium
CN111260722B (en) Vehicle positioning method, device and storage medium
CN110675635A (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
CN112147632A (en) Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
CN111597987B (en) Method, apparatus, device and storage medium for generating information
CN111652113A (en) Obstacle detection method, apparatus, device, and storage medium
CN111311743B (en) Three-dimensional reconstruction precision testing method and device and electronic equipment
CN111597986B (en) Method, apparatus, device and storage medium for generating information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant