CN116503482B - Vehicle position acquisition method and device and electronic equipment - Google Patents


Publication number: CN116503482B
Authority: CN (China)
Prior art keywords: image, road, determining, vehicle, constraint
Legal status: Active
Application number: CN202310756847.1A
Other languages: Chinese (zh)
Other versions: CN116503482A (en)
Inventors: 郭嘉斌, 周新杰, 徐佳飞, 李志伟
Current assignee: Xiaomi Automobile Technology Co Ltd
Original assignee: Xiaomi Automobile Technology Co Ltd
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202310756847.1A
Publication of CN116503482A; application granted; publication of CN116503482B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Abstract

The application provides a vehicle position acquisition method and device and electronic equipment, relating to the technical field of automatic driving. The vehicle position acquisition method comprises the following steps: collecting a first image ahead of the currently travelling vehicle, and identifying one or more road elements in the first image; determining a local coordinate system of the first image, and determining the constraint dimensions of the road elements in the first image under the local coordinate system; acquiring the sliding window containing the first image, and acquiring the second images in the sliding window other than the first image; and acquiring the target vehicle position corresponding to the first image according to the constraint dimensions of the road elements in the first image and the second images. In the embodiment of the application, the constraint dimension of each road element is acquired through the local coordinate system, which reduces the influence of the constraint dimension of the road element on the vehicle positioning error; multi-frame joint observation reduces the error of single-frame analysis and effectively improves the accuracy of positioning the target vehicle position.

Description

Vehicle position acquisition method and device and electronic equipment
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a method and an apparatus for acquiring a vehicle position, and an electronic device.
Background
Compared with traditional positioning schemes that use sensors such as lidar, an IMU and wheel-speed sensors, visual positioning that uses a camera, an IMU, wheel-speed sensors and a vectorized high-precision map offers higher hardware stability and lower cost, and is the preferred scheme for automatic-driving positioning.
Visual positioning combined with a high-precision map relies on road elements for constraints. On an actual road, however, different road elements may appear in arbitrary combinations, which affects the observation dimension of the road elements and causes vehicle position positioning errors.
Disclosure of Invention
The embodiment of the application provides a vehicle position acquisition method and device and electronic equipment.
An embodiment of a first aspect of the present application provides a method for acquiring a vehicle position, including:
collecting a first image in front of the current vehicle running, and identifying one or more road elements in the first image;
determining a local coordinate system of the first image, and determining constraint dimensions of road elements in the first image under the local coordinate system;
acquiring a sliding window where the first image is located, and acquiring a second image except the first image in the sliding window;
And acquiring a target vehicle position corresponding to the first image according to the constraint dimension of the road element in the first image and the second image.
In one embodiment of the present application, the determining the local coordinate system of the first image includes:
identifying a lane line in the first image, and acquiring a local lane line in a road map corresponding to the first image;
determining that a target lane line which is overlapped with the first image exists in the road map;
and determining a local coordinate system of the first image according to the target lane line.
In one embodiment of the present application, the determining the local coordinate system of the first image according to the target lane line includes:
determining the direction of the target lane line in the local coordinate system, and determining the main direction of the lane line according to the direction of the target lane line;
acquiring an origin of the local coordinate system;
a local coordinate system of the first image is determined based on the lane line main direction and the origin.
In one embodiment of the present application, the determining the local coordinate system of the first image based on the lane line main direction and the origin comprises:
And constructing the local coordinate system at the origin by taking the lane line main direction as the first coordinate direction and the direction perpendicular to the lane line main direction as the second coordinate direction.
In one embodiment of the present application, the acquiring the origin of the local coordinate system includes:
determining a target characteristic position point according to the position of the target lane line;
and determining the origin of the local coordinate system based on the target characteristic position point.
In one embodiment of the present application, the determining the origin of the local coordinate system based on the target feature position point includes:
when the number of the target lane lines is two or more, obtaining target characteristic position points of each target lane line, and fusing the target characteristic position points to obtain global target characteristic position points serving as the origin of the local coordinate system.
In one embodiment of the present application, the acquiring the road map corresponding to the first image includes:
determining adjacent second images of the first images, and acquiring vehicle running parameters of the adjacent second images;
determining a target vehicle position of the adjacent second image, and determining a predicted vehicle position of the first image according to the target vehicle position of the adjacent second image and the vehicle driving parameter;
And determining a road map according to the predicted vehicle position.
In one embodiment of the present application, the determining the constraint dimension of the road element in the first image under the local coordinate system includes:
and acquiring the type of the road element, and determining the constraint dimension of the road element under the local coordinate system based on the type of the road element.
In one embodiment of the present application, the obtaining the target vehicle position corresponding to the first image according to the constraint dimension of the road element in the first image and the second image further includes:
acquiring constraint dimensions of road elements of the second image under respective local coordinate systems;
acquiring relative position information of a vehicle and vehicle running parameters between two adjacent frames of images in the sliding window;
and determining the target vehicle position corresponding to the first image according to the constraint dimension of the road element of each frame of image in the sliding window, the relative position information of two adjacent frames of images and the vehicle running parameter.
In one embodiment of the present application, the determining the target vehicle position corresponding to the first image according to the constraint dimension of the road element of each frame of image in the sliding window, the relative position information of two adjacent frames of images, and the vehicle driving parameter includes:
Acquiring map road elements in the road map, determining, for each frame of image in the sliding window, the map road elements matched with the road elements in that frame of image, and acquiring the offset between the road elements and the matched map road elements according to the constraint dimensions of the road elements;
determining the vehicle driving distance between two adjacent frames of images according to the vehicle driving parameters of the two adjacent frames of images;
and determining a target vehicle position corresponding to the first image based on the offset and the vehicle driving distance of each frame of image in the sliding window and the relative position information.
In one embodiment of the present application, the determining the target vehicle position corresponding to the first image based on the offset and the vehicle travel distance of each frame of image in the sliding window, and the relative position information includes:
taking the difference between the relative position information of two adjacent frames of images and the vehicle driving distance as position constraint; and carrying out optimization solution on the offset of each frame of image, and obtaining the vehicle position when the position constraint and the offset are minimum as the target vehicle position corresponding to the first image.
In one embodiment of the present application, the obtaining the offset between the road element and the matched map road element according to the constraint dimension of the road element includes:
determining the constraint direction of the road element according to the constraint dimension of the road element;
and determining the distance between the road element and the matched map road element as the offset under the constraint direction of the road element.
An embodiment of a second aspect of the present application provides a vehicle position acquiring device, including:
the first acquisition module is used for acquiring a first image in front of the current vehicle running and identifying one or more road elements in the first image;
the second acquisition module is used for determining a local coordinate system of the first image and determining constraint dimensions of road elements in the first image under the local coordinate system;
the third acquisition module is used for acquiring a sliding window where the first image is located and acquiring a second image except the first image in the sliding window;
and the position determining module is used for acquiring the position of the target vehicle corresponding to the first image according to the constraint dimension of the road element in the first image and the second image.
In one embodiment of the present application, the second obtaining module is configured to:
identifying a lane line in the first image, and acquiring a local lane line in a road map corresponding to the first image;
determining that a target lane line which is overlapped with the first image exists in the road map;
and determining a local coordinate system of the first image according to the target lane line.
In one embodiment of the present application, the second obtaining module is configured to:
determining the direction of the target lane line in the local coordinate system, and determining the main direction of the lane line according to the direction of the target lane line;
acquiring an origin of the local coordinate system;
a local coordinate system of the first image is determined based on the lane line main direction and the origin.
In one embodiment of the present application, the second obtaining module is configured to:
and constructing the local coordinate system at the origin point by taking the main direction of the lane line as a first coordinate direction and taking the main direction of the lane line as a second coordinate direction perpendicular to the main direction of the lane line.
In one embodiment of the present application, the second obtaining module is configured to:
determining a target characteristic position point according to the position of the target lane line;
And determining the origin of the local coordinate system based on the target characteristic position point.
In one embodiment of the present application, the second obtaining module is configured to:
when the number of the target lane lines is two or more, obtaining target characteristic position points of each target lane line, and fusing the target characteristic position points to obtain global target characteristic position points serving as the origin of the local coordinate system.
In one embodiment of the present application, the second obtaining module is configured to:
determining adjacent second images of the first images, and acquiring vehicle running parameters of the adjacent second images;
determining a target vehicle position of the adjacent second image, and determining a predicted vehicle position of the first image according to the target vehicle position of the adjacent second image and the vehicle driving parameter;
and determining a road map according to the predicted vehicle position.
In one embodiment of the present application, the second obtaining module is configured to:
and acquiring the type of the road element, and determining the constraint dimension of the road element under the local coordinate system based on the type of the road element.
In one embodiment of the application, the location determination module is configured to:
Acquiring constraint dimensions of road elements of the second image under respective local coordinate systems;
acquiring relative position information of a vehicle and vehicle running parameters between two adjacent frames of images in the sliding window;
and determining the target vehicle position corresponding to the first image according to the constraint dimension of the road element of each frame of image in the sliding window, the relative position information of two adjacent frames of images and the vehicle running parameter.
In one embodiment of the application, the location determination module is configured to:
acquiring map road elements in the road map, determining, for each frame of image in the sliding window, the map road elements matched with the road elements in that frame of image, and acquiring the offset between the road elements and the matched map road elements according to the constraint dimensions of the road elements;
determining the vehicle driving distance between two adjacent frames of images according to the vehicle driving parameters of the two adjacent frames of images;
and determining a target vehicle position corresponding to the first image based on the offset and the vehicle driving distance of each frame of image in the sliding window and the relative position information.
In one embodiment of the application, the location determination module is configured to:
Taking the difference between the relative position information of two adjacent frames of images and the vehicle driving distance as position constraint; and carrying out optimization solution on the offset of each frame of image, and obtaining the vehicle position when the position constraint and the offset are minimum as the target vehicle position corresponding to the first image.
In one embodiment of the application, the location determination module is configured to:
determining the constraint direction of the road element according to the constraint dimension of the road element;
and determining the distance between the road element and the matched map road element as the offset under the constraint direction of the road element.
An embodiment of a third aspect of the present application provides an electronic device, including the vehicle position acquiring device provided by the embodiment of the second aspect of the present application.
An embodiment of a fourth aspect of the present application provides an electronic device, including: a processor; a memory for storing the processor-executable instructions; the processor is configured to execute the instructions to implement the method for acquiring the vehicle position according to the embodiment of the first aspect of the present application.
An embodiment of a fifth aspect of the present application proposes a non-transitory computer-readable storage medium, instructions in which, when executed by a processor of an electronic device, enable the electronic device to perform the method proposed by the embodiment of the first aspect of the present application.
An embodiment of the sixth aspect of the application proposes a computer program product comprising a computer program which, when executed by a processor in a communication device, implements the method proposed by the embodiment of the first aspect of the application.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
according to the embodiment of the application, the first image in front of the vehicle running is acquired, the road elements in the first image are identified, the local coordinate system corresponding to the first image is determined in order to avoid the problem of observation errors, the constraint dimension of the road elements is determined under the local coordinate system corresponding to the first image, multi-frame joint observation is carried out based on the sliding window, and the target vehicle position in the first image is determined through the second image except the first image in the sliding window and the constraint dimension of the road elements in the first image. In the embodiment of the application, the constraint dimension of the road element is acquired through the local coordinate system of the first image, so that the constraint is performed only according to the constraint under the current local coordinate system when the first image is analyzed, and the influence of the constraint dimension of the road element on the vehicle positioning error is reduced; meanwhile, when the position of the target vehicle is determined, errors of single-frame analysis are reduced through multi-frame joint observation, the position of the target vehicle is determined based on constraint dimensions of road elements under multi-frame joint observation, and the accuracy of positioning the position of the target vehicle is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flowchart of a method for acquiring a vehicle position according to an embodiment of the present application;
fig. 2 is a flowchart of another method for acquiring a vehicle position according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating another method for acquiring a vehicle position according to an embodiment of the present application;
fig. 4 is a flowchart of another method for acquiring a vehicle position according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a vehicle position acquiring device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another electronic device according to an embodiment of the present application;
fig. 9 is a schematic functional block diagram of a vehicle according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with embodiments of the application. Rather, they are merely examples of apparatus and methods consistent with aspects of embodiments of the application as detailed in the accompanying claims.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, the first information may also be referred to as second information and, similarly, the second information may be referred to as first information, without departing from the scope of embodiments of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the like or similar elements throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
It should be noted that, the method for acquiring the vehicle position according to any one of the embodiments of the present application may be performed alone or in combination with possible implementation methods in other embodiments, and may also be performed in combination with any one of the technical solutions in the related art.
The method and device for acquiring the vehicle position and the electronic device according to the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for acquiring a vehicle position according to an embodiment of the present application. As shown in fig. 1, the method includes, but is not limited to, the steps of:
s101, acquiring a first image in front of the current vehicle driving, and identifying one or more road elements in the first image.
With the development of artificial intelligence, research on automatic driving and intelligent automobiles has become a major focus, and the positioning of the vehicle is a fundamental research problem in automatic driving and intelligent automobiles.
In some implementations, a positioning analysis may be performed on the vehicle from an image of the front of the vehicle in motion. Alternatively, an image in front of the vehicle running may be acquired as the first image from the in-vehicle camera.
When analyzing the vehicle position, the road elements in the acquired first image need to be processed jointly. Optionally, the road elements may include elements such as lane lines, ground marks, lamp posts and guideboards, which are highly stable and not prone to change.
In some implementations, feature extraction is performed on the first image, and image recognition and classification is performed based on the extracted features to extract road elements from the first image.
Alternatively, the recognition of the road elements may be based on a pre-trained image recognition network, and may also be performed using a template matching method.
S102, determining a local coordinate system of the first image, and determining constraint dimensions of road elements in the first image under the local coordinate system.
When analysing the vehicle position, different road elements in the first image may provide different constraint dimensions. A constraint dimension restricts the direction in which a road element participates in the vehicle position solution; that is, only the direction under the corresponding constraint dimension of the road element enters the calculation. For example, lane lines generally provide only a lateral constraint, while ground signs, light poles, guideboards and the like generally provide a longitudinal constraint. In some special scenes during actual road driving, analysing the vehicle position according to the inherent constraints of road elements can produce an incorrect positioning result. For example, when the vehicle travels through a turn, the first image ahead of the vehicle contains both road elements of the current road and road elements of the road after the turn; if road elements of the same type on the two roads are processed with the same observation dimension, the vehicle position is located incorrectly. The observation dimensions of road elements therefore need to be considered to improve the accuracy of vehicle positioning.
In some implementations, the road elements may be constrained by constructing a coordinate system. And for the acquired first image, determining a corresponding local coordinate system according to the road information in the first image, and determining constraint information of the road elements based on the local coordinate system corresponding to the first image. Alternatively, the road information may be stable information such as road edges or lane lines, so as to facilitate construction of a local coordinate system.
The constraint dimensions of the road elements in the first image are then determined under the local coordinate system corresponding to the first image: for example, a lane line determines a lateral constraint under the local coordinate system, and a lamp post determines a longitudinal constraint under the local coordinate system, thereby obtaining the constraint dimensions of all road elements in the first image. The local coordinate system of each frame's first image may differ, which ensures that each frame of image is observed according to the constraint dimensions of its road elements under the corresponding local coordinate system, giving higher accuracy.
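As a minimal sketch of the type-to-constraint mapping described above, the following hypothetical table assigns each road-element type the constraint dimension it provides in the image's local coordinate system (the type names and the lateral/longitudinal split follow the examples in the text; the table itself is an assumption, not the patent's actual data structure):

```python
from enum import Enum, auto

class Constraint(Enum):
    LATERAL = auto()       # constrains position across the lane
    LONGITUDINAL = auto()  # constrains position along the lane

# Assumed type -> constraint table, following the examples in the text:
# lane lines give lateral constraints; ground signs, light poles and
# guideboards give longitudinal constraints.
CONSTRAINT_BY_TYPE = {
    "lane_line": Constraint.LATERAL,
    "ground_sign": Constraint.LONGITUDINAL,
    "light_pole": Constraint.LONGITUDINAL,
    "guideboard": Constraint.LONGITUDINAL,
}

def constraint_dimension(element_type: str) -> Constraint:
    """Return the constraint dimension for a recognized road-element type."""
    return CONSTRAINT_BY_TYPE[element_type]
```

Because the mapping is looked up per frame under that frame's own local coordinate system, the same element type can constrain a different world-frame direction in different frames.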
S103, acquiring a sliding window where the first image is located, and acquiring a second image except the first image in the sliding window.
Single-frame observation using only the first image is affected by noise and similar factors, and the positioning result fluctuates greatly; the embodiment of the application therefore performs joint observation over multiple frames of images.
Optionally, the multi-frame images may be selected by setting a sliding window; to balance observation efficiency and observation accuracy, the length of the sliding window may take a value of 5-20, and the sliding step of the window is 1.
The second images other than the first image are acquired from the sliding window containing the first image. For example, when the length of the sliding window is 10, the 9 frames other than the first image in the window are the second images; analysing all of them jointly improves the accuracy of analysing and observing the first image.
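The window behaviour above can be sketched with a small hypothetical helper (the class name and methods are illustrative; the window length of 10 and step of 1 follow the example in the text — the newest frame plays the role of the first image, the rest are second images):

```python
from collections import deque

class FrameWindow:
    """Sliding window of camera frames, oldest to newest."""

    def __init__(self, length: int = 10):
        self.frames = deque(maxlen=length)  # old frames drop off automatically

    def push(self, frame):
        self.frames.append(frame)  # sliding step of 1: one new frame per push

    def first_image(self):
        """The newest frame: the currently acquired 'first image'."""
        return self.frames[-1]

    def second_images(self):
        """All earlier frames in the window: the 'second images'."""
        return list(self.frames)[:-1]

window = FrameWindow(length=10)
for i in range(12):
    window.push(f"frame{i}")
# after 12 pushes the window holds frame2 .. frame11
```

Using `deque(maxlen=...)` means advancing the window by one frame requires no explicit eviction logic.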
It is understood that the first image is the currently acquired image ahead of the moving vehicle, and each second image is a frame ahead of the moving vehicle acquired before the first image.
S104, acquiring a target vehicle position corresponding to the first image according to the constraint dimension of the road element in the first image and the second image.
The first image acquired during the running process of the vehicle contains road elements of different types, and correspondingly, the second image in the sliding window also contains road elements of different types. In some implementations, the target vehicle location in the first image may be obtained from a constraint dimension of the road element in the first image and a constraint dimension of the road element in the second image.
Optionally, an actual road map may be obtained as a reference, and a comparison analysis is performed according to the actual road map and the acquired first image and second image, so as to obtain an offset between a road element in the first image and a map road element in the corresponding road map.
Alternatively, to improve the accuracy of the analysis, a high-precision road map may be selected as the road map; features are extracted from the high-precision road map, and image recognition and classification are performed based on the extracted features to extract the map road elements from the road map.
Alternatively, the road elements may be recognized by a pre-trained image recognition network, or by a template matching method.
The first image acquired during the running of the vehicle also contains road elements of different types, and these road elements correspond to actual map road elements; a map road element matching each road element in the first image can therefore be determined. Correspondingly, for each second image in the sliding window, matched map road elements can also be acquired for its road elements.
When the vehicle is accurately positioned, the position of a road element identified in the vehicle-acquired image should coincide with that of its matched map road element. That is, the smaller the offset between a road element and its matched map road element, the more likely the current vehicle position is the target vehicle position.
Alternatively, the offset between the road element and the map road element may be reflected by a euclidean distance or cosine distance, or the like.
It should be noted that the road element coordinates are determined in the local coordinate system of the image, while the map road element coordinates are determined in the world coordinate system. Therefore, when determining the offset between a road element and a map road element, the coordinates of the road element in the local coordinate system need to be converted into coordinates in the world coordinate system. A transformation matrix reflecting the mapping between the local coordinate system and the world coordinate system may be constructed, and the conversion is performed through this transformation matrix.
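The transformation-matrix idea above can be sketched in 2-D with a homogeneous transform; the function name and the restriction to 2-D are illustrative assumptions:

```python
import numpy as np

def local_to_world(points_local, R, t):
    """Transform an Nx2 array of local-frame points into the world frame.

    R is the 2x2 rotation of the local axes expressed in world coordinates,
    t is the world position of the local origin. This builds the single
    transformation matrix T mentioned in the text and applies
    p_world = R @ p_local + t via homogeneous coordinates."""
    T = np.eye(3)
    T[:2, :2] = R
    T[:2, 2] = t
    homo = np.hstack([points_local, np.ones((len(points_local), 1))])
    return (T @ homo.T).T[:, :2]
```

For example, a local frame rotated 90 degrees and centered at world point (3, 4) maps local (1, 0) to world (3, 5).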
In some implementations, the offset of the road element in the first image may be a cumulative sum of the offsets of all road elements, or may be an average of the offsets of all road elements.
Alternatively, since each road element in the first image has its own constraint dimension, the offset between a road element and its matched map road element may be calculated under that constraint dimension. For example, the constraint dimension of a lane line is a lateral constraint, so when determining the offset between a lane line and the actual map lane line, only the offset along the x-axis of the local coordinate system is calculated; this improves the rationality and accuracy of determining the target vehicle position based on road elements. Correspondingly, the road elements in each frame of second image in the sliding window also correspond to respective constraint dimensions, and the offset analysis of those road elements must likewise be performed under the corresponding constraint dimensions.
During the running of the vehicle, in addition to collecting images of the road ahead, the vehicle driving data at the current moment can be obtained, for example from measurements of an inertial measurement unit (IMU) carried by the vehicle; the vehicle driving data may include angle, speed, acceleration, and so on.
Since the positioning of the vehicle is analyzed in real time, that is, the first image of the road ahead is acquired in real time with a fixed time interval between frames, the driving distance between the vehicle position of the current frame and that of the next frame can be determined from the vehicle driving data and this fixed interval. That is, in the embodiment of the application, the target vehicle position of the first image is determined through the double constraints of the driving distance and the road elements, so that the positioning of the target vehicle position is more accurate.
It should be noted that, since both the first image and the second images exist in the sliding window, the target vehicle position of the first image may be affected by the target vehicle positions of the adjacent second images; that is, under the driving-distance constraint, the first image and all the second images in the sliding window are associated with each other. The target vehicle position of each frame in the sliding window should therefore satisfy two conditions: the offset of the road elements in each frame is as small as possible, and the relative position between the target vehicle positions of every two adjacent frames is as close as possible to the vehicle driving distance.
According to the embodiment of the application, the first image of the road ahead of the vehicle is acquired and the road elements in it are identified. To avoid observation errors, the local coordinate system corresponding to the first image is determined, and the constraint dimensions of the road elements are determined under that local coordinate system. Multi-frame joint observation is then carried out based on the sliding window, and the target vehicle position of the first image is determined through the second images in the sliding window together with the constraint dimensions of the road elements in the first image. Because the constraint dimension of each road element is acquired in the local coordinate system of the first image, the analysis of the first image is constrained only under the current local coordinate system, which reduces the influence of road-element constraint dimensions on vehicle positioning errors. Meanwhile, multi-frame joint observation reduces the errors of single-frame analysis, so determining the target vehicle position based on the constraint dimensions of road elements under multi-frame joint observation improves the accuracy of positioning the target vehicle position.
Fig. 2 is a flowchart of another method for acquiring a vehicle position according to an embodiment of the present application. As shown in fig. 2, the method includes, but is not limited to, the steps of:
S201, acquiring a first image in front of the current vehicle running, and identifying one or more road elements in the first image.
In the embodiment of the present application, the implementation method of step S201 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S202, lane lines in the first image are identified, and local lane lines in the road image corresponding to the first image are acquired.
After the first image is acquired, in order to better constrain the road elements in it, a local coordinate system corresponding to the first image is acquired so that the constraint dimensions of the road elements can be determined. This local coordinate system can be constructed based on relatively stable road elements in the first image.
Alternatively, the local coordinate system may be acquired based on the lane lines in the first image. The lane lines in the first image may be determined based on a pre-trained neural network.
To ensure that the local coordinate system is constructed reasonably, the lane lines used to construct it must themselves be reasonable. Therefore, the road image of the actual map corresponding to the first image can be obtained, and with this road image as a reference, a more reasonable lane line is selected from the lane lines of the first image.
S203, determining target lane lines in the first image that overlap with the road image.
Alternatively, since the road image is an image from the actual map, a lane line in the first image that overlaps with the road image is determined as a target lane line; the local coordinate system of the first image is then constructed based on the target lane line, which improves the rationality of its construction.
Alternatively, when acquiring the road image to be compared with the first image, an adjacent second image of the first image may be determined, and the vehicle running parameters of that adjacent second image acquired; the target vehicle position of the adjacent second image is determined, and the predicted vehicle position of the first image is determined from that target vehicle position and the vehicle running parameters; the road image corresponding to the first image is then determined according to the predicted vehicle position. That is, determining the road image to be referenced from the predicted vehicle position reduces the computation and cost of analyzing an excessively large road image, improving efficiency.
Optionally, preliminary positioning may be performed on a global map based on the predicted vehicle position, where the global map may be a world map, a provincial map, a city map, and so on; after the predicted vehicle position is initially located on the global map, a rectangular area may be constructed centered on the predicted vehicle position, and the map within this rectangular area used as the road image of the first image.
S204, determining the direction of the target lane line, and determining the lane line main direction according to the direction of the target lane line.
After the target lane line is determined, a lane line main direction is determined from the target lane line.
In some implementations, when there is only one target lane line, the direction of that target lane line is determined and taken as the lane line main direction.
In another implementation, when there are two or more target lane lines, the local straight-line direction of each target lane line is obtained, and the lane line main direction is determined from the directions of all the target lane lines. That is, the direction of each target lane line is acquired, the principal component direction of all these directions is calculated, and this principal component direction is taken as the lane line main direction.
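The principal-component computation above can be sketched with a small eigen-decomposition; the function name, the 2-D restriction, and the sign-alignment step are illustrative assumptions:

```python
import numpy as np

def lane_main_direction(directions):
    """Principal-component direction of a set of 2-D lane-line direction
    vectors (Nx2), as a sketch of the 'main direction' step above."""
    D = np.asarray(directions, dtype=float)
    # sign-align the vectors so nearly opposite directions reinforce, not cancel
    D = np.where((D @ D[0]).reshape(-1, 1) < 0, -D, D)
    # eigenvector of the largest eigenvalue of the scatter matrix
    # (np.linalg.eigh returns eigenvalues in ascending order)
    _, vecs = np.linalg.eigh(D.T @ D)
    v = vecs[:, -1]
    return v / np.linalg.norm(v)
```

For three lane lines pointing roughly along the x-axis with small angular noise, the returned main direction is close to the x-axis (up to sign, which is irrelevant for an axis direction).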
S205, acquiring an origin of the local coordinate system.
After determining the main direction of the lane lines, the origin position at the time of construction of the local coordinate system needs to be determined.
In some implementations, when there is only one target lane line, the target feature position point of that target lane line is determined and taken as the origin.
In other implementations, when there are two or more target lane lines, the target feature position point of each target lane line is obtained, and these points are fused into a global target feature position point, which serves as the origin of the local coordinate system.
Alternatively, the target feature location point may be a midpoint or centroid of the target lane line.
For example, when only one target lane line is identified, the midpoint of the target lane line may be determined: the coordinates of the start point and end point of the target lane line are determined, and the midpoint position is determined from them. Further, the local coordinate system is constructed with the midpoint of the target lane line as the origin.
When the number of the target lane lines is two or more, the midpoint of each target lane line is extracted, the midpoints of all the target lane lines are further fused to obtain a global midpoint, and further, the global midpoint is used as an origin to construct a local coordinate system.
Optionally, the midpoints of all target lane lines may be fused into a global midpoint in several ways: connect the midpoints of all target lane lines to obtain a connecting line and determine the midpoint of that line as the global midpoint; or obtain the coordinates of the midpoints of all target lane lines, fit a straight line to them using the least squares method, and take the midpoint of the fitted line as the global midpoint; or determine the center position of all the midpoints from their coordinates.
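The midpoint extraction and the center-position fusion option above can be sketched as follows (the function names are hypothetical, and only one of the listed fusion variants is shown):

```python
import numpy as np

def lane_midpoint(start, end):
    """Midpoint of a lane line from its start and end coordinates."""
    return (np.asarray(start, float) + np.asarray(end, float)) / 2.0

def global_midpoint(midpoints):
    """Fuse the midpoints of several target lane lines into one global
    midpoint by taking their center position; this is one of the fusion
    options described above, not the only valid choice."""
    return np.mean(np.asarray(midpoints, float), axis=0)
```

For a lane line from (0, 0) to (2, 4) the midpoint is (1, 2); three lane-line midpoints at (0, 0), (2, 0) and (4, 3) fuse to the global midpoint (2, 1).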
S206, determining a local coordinate system of the first image based on the main direction and the origin of the lane line.
Alternatively, the local coordinate system may be constructed at the origin with the lane line main direction as the first coordinate direction and the direction perpendicular to it as the second coordinate direction. The first coordinate direction may be the longitudinal direction, and the second coordinate direction the lateral direction. The local coordinate system of the first image is thereby determined from the lane line main direction and the position of the origin.
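Building this frame from the main direction and origin can be sketched as follows; the function name and the convention that the lateral axis is the main direction rotated 90 degrees counter-clockwise are illustrative assumptions:

```python
import numpy as np

def build_local_frame(main_direction, origin):
    """Construct the local coordinate system described above: the first
    (longitudinal) axis is the lane-line main direction, the second
    (lateral) axis is perpendicular to it, centered at `origin`.
    Returns a function mapping world points into this local frame."""
    d = np.asarray(main_direction, float)
    d = d / np.linalg.norm(d)
    lateral = np.array([-d[1], d[0]])   # main direction rotated 90 degrees CCW
    R = np.stack([d, lateral])          # rows project onto the two local axes

    o = np.asarray(origin, float)

    def to_local(p):
        return R @ (np.asarray(p, float) - o)

    return to_local
```

With the main direction along the world y-axis and the origin at (0, 0), a world point (0, 3) becomes (3, 0) locally: 3 units along the longitudinal axis, 0 laterally.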
S207, determining constraint dimensions of the road elements in the first image under the local coordinate system.
Optionally, the type of the road element is obtained, and the constraint dimension of the road element in the local coordinate system is determined based on the type of the road element.
Optionally, each road element is required to have at least one constraint dimension.
By way of example, the types of road elements may include lane lines, ground signs, lamp posts, signboards, and so on. The types of the road elements in the first image are obtained; the types may be labeled while the road elements are identified by the neural network. For each road element, the constraint dimension is then determined according to its type. For example, in visual high-precision positioning a lane line can only provide a lateral constraint, so when a road element is determined to be a lane line, its constraint dimension is lateral; ground signs, lamp posts, signboards, and the like can generally only provide a longitudinal constraint, so when a road element is determined to be a ground sign, lamp post, or signboard, its constraint dimension is longitudinal.
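The type-to-dimension assignment above amounts to a lookup table; the identifier names below are hypothetical, and the mapping simply restates the examples in the text:

```python
# Hypothetical mapping from road-element type to constraint dimension,
# following the text: lane lines constrain laterally; ground signs,
# lamp posts and signboards constrain longitudinally.
CONSTRAINT_BY_TYPE = {
    "lane_line": "lateral",
    "ground_sign": "longitudinal",
    "lamp_post": "longitudinal",
    "signboard": "longitudinal",
}

def constraint_dimension(element_type):
    """Return the constraint dimension for a recognized element type."""
    return CONSTRAINT_BY_TYPE[element_type]
```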
S208, acquiring a sliding window where the first image is located, and acquiring a second image except the first image in the sliding window.
In the embodiment of the present application, the implementation method of step S208 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S209, acquiring a target vehicle position corresponding to the first image according to the constraint dimension of the road element in the first image and the second image.
Alternatively, road elements may be further classified into line types and point types; line-type road elements include, for example, lane lines and stop lines, while point-type road elements include, for example, lamp posts, signboards, and ground signs.
It should be noted that the line/point classification does not affect the determination of a road element's constraint dimension. That is, the constraint dimension is still determined by the semantic type of the road element, such as lane line, ground sign, lamp post, or signboard.
When actually determining the target vehicle position based on the constraint dimensions of the road elements, the offsets must be calculated under those constraint dimensions. For a point-type road element, the offset is calculated between the point of the identified road element and the point of the matched map road element; for a line-type road element, the offset is calculated between the point of the identified road element and the line of the actual map road element, that is, the distance from the point to the line.
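The two offset variants above (point-to-point for point-type elements, point-to-line for line-type elements) can be sketched as follows; the function names are hypothetical:

```python
import numpy as np

def point_offset(p, q):
    """Point-type element: Euclidean offset between detected point p
    and matched map point q."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def point_to_line_offset(p, a, b):
    """Line-type element: perpendicular distance from detected point p
    to the map line through points a and b."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    d = b - a
    # |2-D cross product| / line length = perpendicular point-to-line distance
    return float(abs(d[0] * (p - a)[1] - d[1] * (p - a)[0]) / np.linalg.norm(d))
```

For example, a detected lamp post at (0, 0) matched to a map lamp post at (3, 4) has an offset of 5, while a detected lane-line point at (0, 1) is at distance 1 from a map lane line running along the x-axis.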
In the embodiment of the present application, the implementation method of step S209 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described in detail.
According to the embodiment of the application, the first image of the road ahead of the vehicle is acquired, the road elements and lane lines in it are identified, and the road image corresponding to the first image is obtained as a reference. Target lane lines in the first image that overlap with the road image are selected, the lane line main direction is determined from the directions of the target lane lines, the local coordinate system of the first image is determined from the lane line main direction and the origin, and the target vehicle position of the first image is acquired under this local coordinate system. Determining the lane line main direction with the road image as a basis reduces analysis errors, and because the main direction is determined from the target lane lines in the first image itself, the local coordinate system can adapt to the actual lane line conditions of each image. When more than one target lane line is obtained, the main direction is determined from the directions of all target lane lines, and the origin is determined from their target feature position points. Since the local coordinate system of each frame is built from that frame's own lane line main direction and target feature position points, its construction fits the corresponding image, and solving the target vehicle position based on the constraint dimensions of road elements under each frame's own local coordinate system yields higher accuracy.
Fig. 3 is a flowchart of another method for acquiring a vehicle position according to an embodiment of the present application. As shown in fig. 3, the method includes, but is not limited to, the steps of:
S301, acquiring a first image in front of the current vehicle driving, and identifying one or more road elements in the first image.
In the embodiment of the present application, the implementation method of step S301 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S302, determining a local coordinate system of the first image, and determining constraint dimensions of road elements in the first image under the local coordinate system.
In the embodiment of the present application, the implementation method of step S302 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S303, acquiring a sliding window where the first image is located, and acquiring a second image except the first image in the sliding window.
In the embodiment of the present application, the implementation method of step S303 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S304, the constraint dimension of the road element of the second image under the respective local coordinate system is obtained.
Alternatively, since each second image is also an image of the road ahead of the running vehicle, it also contains corresponding road elements. Likewise, the road elements in a second image can be obtained using the neural network, and the lane line main direction and origin position can be obtained from the target lane lines in the second image, thereby determining the local coordinate system corresponding to each frame of second image. Further, based on the types of the road elements in the second image, the corresponding constraint dimensions are determined under each second image's own local coordinate system.
And S305, acquiring the relative position information of the vehicle between two adjacent frames of images in the sliding window and the vehicle running parameters.
Optionally, when the multi-frame images in the sliding window are analyzed jointly, a link can be established between every two adjacent frames, so that the images in the sliding window are correlated with each other, improving the accuracy of the finally determined target vehicle position in each frame.
In some implementations, the relative position information of the vehicle between two adjacent frames in the sliding window can be acquired as the link between those two frames, where the relative position information may include a relative distance and a relative direction. For example, if the sliding window contains 10 frames, the relative distance and direction of the vehicle between frame 1 and frame 2 are acquired, then between frame 2 and frame 3, and so on, up to the relative distance and direction between frame 9 and frame 10.
It should be noted that the relative position information of the vehicle between two adjacent frames is determined by the target vehicle positions of those frames; that is, while the target vehicle position of each frame in the sliding window is being optimally solved, the relative position information between the target vehicle positions of every two adjacent frames must also be acquired.
In other implementations, in addition to acquiring the image of the road ahead, the vehicle running parameters at the current moment may be acquired, where the vehicle running parameters may include angle, speed, acceleration, and so on.
S306, acquiring map road elements in the road image, determining map road elements matched with the road elements in each frame of image aiming at each frame of image in the sliding window, and acquiring the offset between the road elements and the matched map road elements according to constraint dimensions of the road elements.
Alternatively, a road image corresponding to each frame of image in the sliding window is acquired as a reference, and map road elements in the road image are acquired.
Further, the road elements in each frame of image in the sliding window are matched with the map road elements in the corresponding road image, and the map road elements matched with the road elements in each frame of image are determined.
When the vehicle is accurately positioned, the position of a road element identified in the vehicle-acquired image should coincide with that of its matched map road element, so the target vehicle position can be determined by acquiring the offset between road elements and their matched map road elements. That is, in the vehicle position optimization, the smaller the offset between a road element and its matched map road element, the more likely the current vehicle position is the target vehicle position.
Alternatively, the offset between a road element and a map road element may be measured by the Euclidean distance. Likewise, before calculating the offset, the coordinates of the map road element need to be converted into coordinates in the local coordinate system.
Optionally, the constraint direction of a road element is determined from its constraint dimension, and the distance between the road element and its matched map road element along that constraint direction is determined as the offset. That is, since the road elements in each frame have corresponding constraint dimensions, the offset between a road element and its matched map road element must be calculated under the road element's constraint dimension.
In the embodiment of the present application, the implementation method of step S306 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S307, determining the vehicle running distance between the two adjacent frame images according to the vehicle running parameters of the two adjacent frame images.
The vehicle running parameters of each frame in the sliding window are determined, and the vehicle driving distance between two adjacent frames is determined. That is, for any frame, its vehicle running parameters are acquired; since these parameters include speed and similar quantities, the distance driven by the vehicle between that frame and the next frame can be obtained from the running parameters and the time interval between the two frames.
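Under a constant-acceleration assumption (an illustrative simplification; the patent does not specify the motion model), the inter-frame driving distance follows from the speed, acceleration, and the fixed frame interval:

```python
def travel_distance(speed, acceleration, dt):
    """Distance driven between two consecutive frames, from the vehicle
    running parameters (speed, acceleration) and the frame interval dt.
    Constant-acceleration sketch: s = v*dt + 0.5*a*dt^2."""
    return speed * dt + 0.5 * acceleration * dt * dt
```

At 10 m/s with zero acceleration and a 0.5 s frame interval, the vehicle drives 5 m between frames.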
S308, determining the target vehicle position corresponding to the first image based on the offset of each frame of image in the sliding window, the vehicle driving distance and the relative position information.
In the process of solving the target vehicle position of each frame, the offset of each frame and the relative position information of the vehicle between every two adjacent frames are obtained. Meanwhile, the vehicle driving distance between every two adjacent frames is determined from the vehicle running parameters of each frame.
Alternatively, the difference between the relative position information of two adjacent frames and the vehicle driving distance is used as the position constraint; the offsets of all frames are then optimized, and the vehicle position at which both the position constraint and the offsets are minimized is acquired as the target vehicle position corresponding to the first image.
In the embodiment of the application, the offset of each frame is used as the first constraint condition, and the difference between the relative position information of the vehicle between two adjacent frames and the vehicle driving distance is used as the second constraint condition; the optimization is solved based on these two constraint conditions to obtain the target vehicle position of each frame in the sliding window. That is, when the offset of each frame and the difference between the relative position information of adjacent frames and the vehicle driving distance are jointly minimized (that is, their combined value is minimized), the corresponding vehicle position is acquired as the target vehicle position of the first image.
Illustratively, the first constraint condition is the offset of each frame: the smaller the offset, the closer the road elements in the image are to the map road elements, and the more likely the current vehicle position is the target vehicle position. The second constraint condition is the difference between the relative position information of the vehicle between two adjacent frames and the vehicle driving distance: the smaller this difference, the closer the relative position information is to the actual driving distance, and the more likely the current vehicle position is the target vehicle position. Therefore, the vehicle position for which the difference between the relative position information of adjacent frames and the driving distance is minimal, and the offset of each frame is also minimal, is the target vehicle position of the current frame.
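The joint minimization of the two constraint conditions can be sketched as a toy 1-D linear least-squares problem; the 1-D simplification, the function name, and the weighting scheme are illustrative assumptions, since the patent's actual problem is multi-dimensional and solved over full poses:

```python
import numpy as np

def solve_window_positions(map_obs, travel_dists, w_obs=1.0, w_odo=1.0):
    """Toy 1-D version of the joint solve: find per-frame positions x_i
    minimising
        w_obs^2 * sum_i (x_i - map_obs[i])^2                # offset term
      + w_odo^2 * sum_i (x_{i+1} - x_i - travel_dists[i])^2  # distance term
    Linear, so it is solved directly with least squares."""
    n = len(map_obs)
    rows, rhs = [], []
    for i in range(n):                       # first constraint: element offsets
        r = np.zeros(n); r[i] = w_obs
        rows.append(r); rhs.append(w_obs * map_obs[i])
    for i in range(n - 1):                   # second constraint: driving distance
        r = np.zeros(n); r[i] = -w_odo; r[i + 1] = w_odo
        rows.append(r); rhs.append(w_odo * travel_dists[i])
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

With consistent inputs the solution reproduces the observations exactly; with conflicting map observations and odometry, the solver balances the two residual groups, which mirrors the "combined value is minimized" behavior described above. The last entry of the result corresponds to the first image's target vehicle position.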
In summary, in the embodiment of the application, the first image of the road ahead is acquired, the road elements in it are identified, the local coordinate system of the first image is constructed, and the constraint dimensions of the road elements are determined under that coordinate system. Each frame in the sliding window is analyzed to determine the constraint dimensions of its road elements and the offset between those road elements and the matched map road elements; meanwhile, the vehicle driving distance is determined from each frame's vehicle running parameters, and the target vehicle position of each frame in the sliding window is determined from the driving distance and the relative position information of the vehicle between adjacent frames. Joint analysis over the multi-frame images in the sliding window determines the constraint dimensions of road elements through each frame's own local coordinate system; the vehicle driving distance is determined from each frame's running parameters, and the relative position information between adjacent frames is acquired. Under the constraint dimensions of the local coordinate system, the offset between road elements and map road elements serves as the first constraint condition, and whether the relative position information matches the vehicle driving distance serves as the second constraint condition; the target vehicle position of each frame in the sliding window is obtained by optimization based on these two constraint conditions. This reduces the errors and fluctuations of single-frame analysis and improves the accuracy of acquiring the target vehicle position, and solving the target vehicle position by combining the two constraint conditions makes the obtained result more reliable.
Fig. 4 is a flowchart of another method for acquiring a vehicle position according to an embodiment of the present application. As shown in fig. 4, the method includes, but is not limited to, the steps of:
S401, acquiring a first image in front of the current vehicle driving, and identifying one or more road elements in the first image.
In the embodiment of the present application, the implementation method of step S401 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S402, lane lines in the first image are identified, and local lane lines in the road image corresponding to the first image are acquired.
In the embodiment of the present application, the implementation method of step S402 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S403, determining that the first image and the road image have overlapped target lane lines.
In the embodiment of the present application, the implementation method of step S403 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S404, determining the direction of the target lane line, and determining the main direction of the lane line according to the direction of the target lane line.
In the embodiment of the present application, the implementation method of step S404 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S405, acquiring an origin of the local coordinate system.
In the embodiment of the present application, the implementation method of step S405 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S406, determining a local coordinate system of the first image based on the main direction and the origin of the lane line.
In the embodiment of the present application, the implementation method of step S406 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
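As a concrete illustration of step S406, a point can be expressed in the lane-aligned local frame by projecting it onto the lane line main direction and the direction perpendicular to it. The function and variable names below are assumptions for illustration, not the patent's notation.

```python
import math

def to_local(point, origin, main_dir):
    """Express a world point in the lane-aligned local coordinate system.

    First axis: the lane line main direction; second axis: perpendicular
    to it. main_dir is a (dx, dy) vector along the target lane line.
    """
    norm = math.hypot(main_dir[0], main_dir[1])
    ux, uy = main_dir[0] / norm, main_dir[1] / norm   # unit main direction
    px, py = point[0] - origin[0], point[1] - origin[1]
    along = px * ux + py * uy          # first coordinate (along the lane)
    across = -px * uy + py * ux        # second coordinate (perpendicular)
    return along, across

# A lane running due north: a point 2 m ahead and 0.5 m to one side.
coords = to_local((0.5, 2.0), (0.0, 0.0), (0.0, 1.0))
```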
S407, determining constraint dimensions of the road elements in the first image under the local coordinate system.
In the embodiment of the present application, the implementation method of step S407 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S408, acquiring a sliding window where the first image is located, and acquiring a second image except the first image in the sliding window.
In the embodiment of the present application, the implementation method of step S408 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described in detail.
S409, obtaining constraint dimensions of road elements of the second image under the respective local coordinate system.
In the embodiment of the present application, the implementation method of step S409 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S410, acquiring the relative position information of the vehicle between two adjacent frames of images in the sliding window and the vehicle driving parameters.
In the embodiment of the present application, the implementation method of step S410 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S411, acquiring map road elements in the road image, determining map road elements matched with the road elements in each frame of image aiming at each frame of image in the sliding window, and acquiring the offset between the road elements and the matched map road elements according to constraint dimensions of the road elements.
In the embodiment of the present application, the implementation method of step S411 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S412, determining the vehicle driving distance between the two adjacent frame images according to the vehicle driving parameters of the two adjacent frame images.
In the embodiment of the present application, the implementation method of step S412 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described in detail.
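One common way to realize step S412 is to integrate sampled vehicle speed over the inter-frame interval. The sketch below assumes speed samples as the driving parameter; the embodiment may equally rely on wheel odometry or inertial data.

```python
def travel_distance(speeds, dt):
    """Approximate the vehicle travel distance between two frames by
    trapezoidal integration of sampled speeds (m/s) at interval dt (s).
    speeds/dt are assumed inputs chosen for illustration."""
    return sum((speeds[i] + speeds[i + 1]) * 0.5 * dt
               for i in range(len(speeds) - 1))

# 10 m/s constant speed, two 0.05 s sub-intervals between frames -> 1 m.
d = travel_distance([10.0, 10.0, 10.0], 0.05)
```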
S413, determining the target vehicle position corresponding to the first image based on the offset of each frame of image in the sliding window, the vehicle travel distance, and the relative position information.
In the embodiment of the present application, the implementation method of step S413 may be implemented in any manner in each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
In summary, the embodiment of the application acquires a first image ahead of the traveling vehicle, identifies the road elements and lane lines in the first image, and acquires the road image corresponding to the first image as a reference. A target lane line in the first image that coincides with the road image is selected, the lane line main direction is determined from the direction of the target lane line, and the local coordinate system of the first image is determined from the lane line main direction and the origin. The constraint dimensions of the road elements are determined under the local coordinate system corresponding to the first image, multi-frame joint observation is performed based on the sliding window, and the constraint dimension of the road elements in each frame of image and the offset between those road elements and the matched map road elements are determined. Meanwhile, the vehicle travel distance is determined from the driving parameters of each frame of image, and the target vehicle position of each frame of image in the sliding window is determined according to the travel distance and the relative position information of the vehicle between two adjacent frames of images. In the embodiment of the application, determining the local coordinate system of each image from the target lane lines adapts better to different scenes and makes the construction of the local coordinate system more convenient and accurate, and analyzing the constraint dimensions of the road elements under the local coordinate system further improves accuracy.
Multi-frame joint observation based on the sliding window reduces the error and fluctuation of single-frame analysis. For each frame of image in the sliding window, under the constraint dimension of the local coordinate system, the offset between the road element and the map road element serves as a first constraint condition, and whether the relative position information is consistent with the vehicle travel distance serves as a second constraint condition; the target vehicle position of each frame of image in the sliding window is obtained by an optimization solution based on both constraint conditions, making the acquired target vehicle position more reliable and more accurate.
Fig. 5 is a schematic structural diagram of a vehicle position acquiring device according to an embodiment of the present application. As shown in fig. 5, the vehicle position acquisition device 500 includes:
a first acquisition module 501, configured to acquire a first image ahead of the currently running vehicle, and identify one or more road elements in the first image;
a second acquisition module 502, configured to determine a local coordinate system of the first image, and determine constraint dimensions of road elements in the first image under the local coordinate system;
a third acquisition module 503, configured to obtain a sliding window in which the first image is located, and obtain a second image in the sliding window except for the first image;
the position determining module 504 is configured to obtain a target vehicle position corresponding to the first image according to the constraint dimension of the road element in the first image and the second image.
In some implementations, the second acquisition module 502 is configured to:
identifying lane lines in the first image, and acquiring local lane lines in a road map corresponding to the first image;
determining that a target lane line which is overlapped with the first image exists in the road map;
and determining a local coordinate system of the first image according to the target lane line.
In some implementations, the second acquisition module 502 is configured to:
Determining the direction of a target lane line in a local coordinate system, and determining the main direction of the lane line according to the direction of the target lane line;
acquiring an origin of a local coordinate system;
a local coordinate system of the first image is determined based on the lane line main direction and the origin.
In some implementations, the second acquisition module 502 is configured to:
and constructing the local coordinate system at the origin, with the lane line main direction as a first coordinate direction and the direction perpendicular to the lane line main direction as a second coordinate direction.
In some implementations, the second acquisition module 502 is configured to:
determining a target characteristic position point according to the position of the target lane line;
the origin of the local coordinate system is determined based on the target feature location points.
In some implementations, the second acquisition module 502 is configured to:
when the number of target lane lines is two or more, obtaining the target characteristic position point of each target lane line, and fusing the target characteristic position points to obtain a global target characteristic position point serving as the origin of the local coordinate system.
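A minimal sketch of this fusion, assuming a simple mean of the per-lane-line feature points; the patent does not prescribe a specific fusion method, so the averaging here is an illustrative choice.

```python
def fuse_origin(feature_points):
    """Fuse the target characteristic position points of several target
    lane lines into one global point used as the local-frame origin.
    A plain centroid is used here purely as an illustration."""
    n = len(feature_points)
    return (sum(p[0] for p in feature_points) / n,
            sum(p[1] for p in feature_points) / n)

# Feature points of two lane lines straddling the ego lane, 3.6 m apart.
origin = fuse_origin([(-1.8, 5.0), (1.8, 5.0)])
```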
In some implementations, the second acquisition module 502 is configured to:
determining adjacent second images of the first images, and acquiring vehicle running parameters of the adjacent second images;
determining a target vehicle position of the adjacent second image, and determining a predicted vehicle position of the first image according to the target vehicle position of the adjacent second image and the vehicle running parameters;
A road image is determined based on the predicted vehicle location.
In some implementations, the second acquisition module 502 is configured to:
and obtaining the type of the road element, and determining the constraint dimension of the road element under the local coordinate system based on the type of the road element.
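A type-to-dimension lookup of this kind could be sketched as follows. The element types and the dimensions assigned to them are illustrative assumptions (for example, a lane line running along the road constrains only lateral position, while a stop line constrains only longitudinal position); the patent does not enumerate a fixed mapping.

```python
# Assumed mapping from road-element type to its constraint dimensions
# in the lane-aligned local coordinate system.
CONSTRAINT_DIMS = {
    "lane_line": ("lateral",),                 # sideways error only
    "stop_line": ("longitudinal",),            # along-lane error only
    "pole":      ("lateral", "longitudinal"),  # constrains both directions
}

def constraint_dims(element_type):
    """Return the constraint dimensions for a road-element type; an
    unknown type contributes no constraint."""
    return CONSTRAINT_DIMS.get(element_type, ())
```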
In some implementations, the position determining module 504 is configured to:
acquiring constraint dimensions of road elements of the second image under respective local coordinate systems;
acquiring relative position information of a vehicle between two adjacent frames of images in a sliding window and vehicle running parameters;
and determining the target vehicle position corresponding to the first image according to the constraint dimension of the road element of each frame of image in the sliding window, the relative position information of two adjacent frames of images and the vehicle running parameters.
In some implementations, the position determining module 504 is configured to:
map road elements in the road images are obtained, the map road elements matched with the road elements in each frame of image in the sliding window are determined, and the offset between the road elements and the matched map road elements is obtained according to the constraint dimension of the road elements;
determining the vehicle driving distance between two adjacent frames of images according to the vehicle driving parameters of the two adjacent frames of images;
And determining the target vehicle position corresponding to the first image based on the offset of each frame of image in the sliding window, the vehicle driving distance and the relative position information.
In some implementations, the position determining module 504 is configured to:
taking the difference between the relative position information of two adjacent frames of images and the vehicle travel distance as a position constraint; and performing an optimization solution on the offset of each frame of image, and taking the vehicle position at which the position constraint and the offset are minimized as the target vehicle position corresponding to the first image.
In some implementations, the position determining module 504 is configured to:
determining the constraint direction of the road element according to the constraint dimension of the road element;
and determining the distance between the road element and the matched map road element as offset under the constraint direction of the road element.
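Measuring the offset only along the constraint direction amounts to projecting the element-to-map displacement onto a unit vector. A minimal sketch, with assumed names:

```python
def offset_along(road_pt, map_pt, constraint_dir):
    """Offset between a detected road element and its matched map road
    element, measured only along the element's constraint direction
    (a unit vector). For a lane line this would be the lateral direction;
    displacement along the unconstrained direction is ignored."""
    dx, dy = map_pt[0] - road_pt[0], map_pt[1] - road_pt[1]
    return dx * constraint_dir[0] + dy * constraint_dir[1]

# 0.3 m lateral error and 2 m longitudinal error; a lateral-only
# constraint direction keeps just the 0.3 m component.
off = offset_along((0.0, 0.0), (0.3, 2.0), (1.0, 0.0))
```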
According to the embodiment of the application, a first image ahead of the traveling vehicle is acquired, the road elements in the first image are identified, and, to avoid observation errors, a local coordinate system corresponding to the first image is determined; the constraint dimensions of the road elements are determined under that local coordinate system, multi-frame joint observation is performed based on the sliding window, and the target vehicle position of the first image is determined through the second images in the sliding window other than the first image together with the constraint dimensions of the road elements in the first image. In the embodiment of the application, the constraint dimensions of the road elements are acquired through the local coordinate system of the first image, so that when the first image is analyzed, only the constraints under the current local coordinate system are applied, reducing the influence of the road elements' constraint dimensions on the vehicle positioning error. Meanwhile, when the target vehicle position is determined, multi-frame joint observation reduces the error of single-frame analysis; the travel distance between two adjacent frames of images serves as an additional constraint, and the target vehicle position is determined in combination with the offset between the road elements and the actual map road elements under the corresponding constraint dimensions, effectively improving the accuracy of target vehicle positioning.
Fig. 6 is a block diagram of an electronic device, according to an example embodiment. As shown in fig. 6, the electronic apparatus 600 includes the vehicle position acquisition device 500. The electronic device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited in this regard.
There is also provided, in accordance with an embodiment of the present application, an electronic device including: a processor; a memory for storing the processor-executable instructions, wherein the processor is configured to execute the instructions to implement the vehicle location acquisition method as described above.
In order to implement the above embodiment, the present application also proposes a storage medium.
Wherein the instructions in the storage medium, when executed by the processor of the electronic device, enable the electronic device to perform the method of acquiring a vehicle position as described above.
To achieve the above embodiments, the present application also provides a computer program product.
Wherein the computer program product, when executed by a processor of an electronic device, enables the electronic device to perform the method as described above.
Fig. 7 is a block diagram of an electronic device, according to an example embodiment. The electronic device shown in fig. 7 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments of the application.
As shown in fig. 7, the electronic device 700 includes a processor 701 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a memory 706 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: a memory 706, including a hard disk and the like; and a communication section 707, including a network interface card such as a local area network (LAN) card, a modem, or the like, the communication section 707 performing communication processing via a network such as the Internet. A drive 708 is also connected to the I/O interface 705 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program embodied on a computer readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication section 707. The above-described functions defined in the method of the present application are performed when the computer program is executed by the processor 701.
In an exemplary embodiment, a storage medium is also provided, e.g., a memory, comprising instructions executable by the processor 701 of the electronic device 700 to perform the above-described method. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Fig. 8 is a block diagram illustrating a configuration of an electronic device according to an exemplary embodiment. The electronic device shown in fig. 8 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments of the application. As shown in fig. 8, the electronic device 800 includes a processor 801 and a memory 802. The memory 802 is used for storing program codes, and the processor 801 is connected to the memory 802 and is used for reading the program codes from the memory 802 to implement the method for acquiring the vehicle position in the above embodiment.
Alternatively, the number of processors 801 may be one or more.
Optionally, the electronic device may further include an interface 803, and the number of the interfaces 803 may be plural. The interface 803 may be connected to an application program, and may receive data of an external device such as a sensor, or the like.
Fig. 9 is a block diagram of a vehicle 900, the electronic device being the vehicle 900, according to an example embodiment. For example, vehicle 900 may be a hybrid vehicle, but may also be a non-hybrid vehicle, an electric vehicle, a fuel cell vehicle, or other type of vehicle. The vehicle 900 may be an autonomous vehicle, a semi-autonomous vehicle, or a non-autonomous vehicle.
Referring to fig. 9, a vehicle 900 may include various subsystems, such as an infotainment system 910, a perception system 920, a decision control system 930, a drive system 940, and a computing platform 950. Vehicle 900 may also include more or fewer subsystems, and each subsystem may include multiple components. In addition, interconnections between each subsystem and between each component of the vehicle 900 may be achieved by wired or wireless means.
In some embodiments, the infotainment system 910 may include a communication system, an entertainment system, a navigation system, and the like.
The sensing system 920 may include several sensors for sensing information about the environment surrounding the vehicle 900. For example, the sensing system 920 may include a global positioning system (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU), a lidar, a millimeter-wave radar, an ultrasonic radar, and a camera device.
Decision control system 930 may include a computing system, a vehicle controller, a steering system, a throttle, and a braking system.
The drive system 940 may include components that provide powered movement of the vehicle 900. In one embodiment, the drive system 940 may include an engine, an energy source, a transmission, and wheels. The engine may be one or a combination of an internal combustion engine, an electric motor, an air compression engine. The engine is capable of converting energy provided by the energy source into mechanical energy.
Some or all of the functions of the vehicle 900 are controlled by the computing platform 950. Computing platform 950 may include at least one processor 951 and memory 952, and processor 951 may execute instructions 953 stored in memory 952.
The processor 951 may be any conventional processor, such as a commercially available CPU. The processor may also include, for example, a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SOC), an application-specific integrated circuit (ASIC), or a combination thereof.
The memory 952 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
In addition to instructions 953, the memory 952 may also store data such as road maps, route information, vehicle position, direction, speed, and the like. The data stored by memory 952 may be used by computing platform 950.
In an embodiment of the present disclosure, the processor 951 may execute the instructions 953 to complete all or part of the steps of the vehicle position acquisition method described above.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (14)

1. A method of acquiring a vehicle position, comprising:
collecting a first image ahead of the currently running vehicle, and identifying one or more road elements in the first image;
determining a local coordinate system of the first image, and determining constraint dimensions of road elements in the first image under the local coordinate system of the first image, wherein the constraint dimensions of the road elements are used for constraining directions of the road elements when participating in vehicle position solving;
acquiring a sliding window where the first image is located, and acquiring a second image except the first image in the sliding window;
acquiring a target vehicle position corresponding to the first image according to the constraint dimension of the road element in the first image and the second image;
the determining a local coordinate system of the first image includes:
identifying a lane line in the first image, and acquiring a local lane line in a road map corresponding to the first image;
Determining that a target lane line which is overlapped with the first image exists in the road map;
determining a local coordinate system of the first image according to the target lane line;
the obtaining the target vehicle position corresponding to the first image according to the constraint dimension of the road element in the first image and the second image, further includes:
acquiring constraint dimensions of road elements of the second image under respective local coordinate systems;
acquiring relative position information of a vehicle and vehicle running parameters between two adjacent frames of images in the sliding window;
determining a target vehicle position corresponding to the first image according to the constraint dimension of the road element of each frame of image in the sliding window, the relative position information of two adjacent frames of images and the vehicle running parameter;
and taking the offset of each frame of image as a first constraint condition, taking the difference between the relative position information of the vehicle between the two adjacent frames of images and the vehicle driving distance as a second constraint condition, and carrying out optimization solving based on the first constraint condition and the second constraint condition to obtain the position of the target vehicle in each frame of image in the sliding window.
2. The method of claim 1, wherein the determining the local coordinate system of the first image from the target lane line comprises:
determining the direction of the target lane line in the local coordinate system, and determining the main direction of the lane line according to the direction of the target lane line;
acquiring an origin of the local coordinate system;
a local coordinate system of the first image is determined based on the lane line main direction and the origin.
3. The method of claim 2, wherein the determining the local coordinate system of the first image based on the lane line main direction and the origin comprises:
and constructing the local coordinate system at the origin, with the lane line main direction as a first coordinate direction and the direction perpendicular to the lane line main direction as a second coordinate direction.
4. The method of claim 2, wherein the obtaining the origin of the local coordinate system comprises:
determining a target characteristic position point according to the position of the target lane line;
and determining the origin of the local coordinate system based on the target characteristic position point.
5. The method of claim 4, wherein the determining the origin of the local coordinate system based on the target feature location point comprises:
When the number of the target lane lines is two or more, obtaining the target characteristic position point of each target lane line, and fusing the target characteristic position points to obtain a global target characteristic position point serving as the origin of the local coordinate system.
6. The method of claim 1, wherein the acquiring the road map corresponding to the first image comprises:
determining adjacent second images of the first images, and acquiring vehicle running parameters of the adjacent second images;
determining a target vehicle position of the adjacent second image, and determining a predicted vehicle position of the first image according to the target vehicle position of the adjacent second image and the vehicle driving parameter;
and determining a road image corresponding to the first image according to the predicted vehicle position.
7. The method of any of claims 1-6, wherein the determining constraint dimensions of road elements in the first image under the local coordinate system of the first image comprises:
and acquiring the type of the road element, and determining the constraint dimension of the road element under the local coordinate system based on the type of the road element.
8. The method according to claim 1, wherein determining the target vehicle position corresponding to the first image according to the constraint dimension of the road element of each frame of image in the sliding window, the relative position information of two adjacent frames of images, and the vehicle driving parameter comprises:
acquiring map road elements in a road image, determining map road elements matched with the road elements in each frame of image aiming at each frame of image in the sliding window, and acquiring offset between the road elements and the matched map road elements according to constraint dimensions of the road elements;
determining the vehicle driving distance between two adjacent frames of images according to the vehicle driving parameters of the two adjacent frames of images;
and determining a target vehicle position corresponding to the first image based on the offset and the vehicle driving distance of each frame of image in the sliding window and the relative position information.
9. The method of claim 8, wherein the determining the target vehicle position corresponding to the first image based on the offset and the vehicle travel distance for each frame of image within the sliding window, and the relative position information, comprises:
Taking the difference between the relative position information of two adjacent frames of images and the vehicle driving distance as position constraint; and carrying out optimization solution on the offset of each frame of image, and obtaining the vehicle position when the position constraint and the offset are minimum as the target vehicle position corresponding to the first image.
10. The method of claim 9, wherein the obtaining an offset between the road element and the matching map road element according to the constraint dimension of the road element comprises:
determining the constraint direction of the road element according to the constraint dimension of the road element;
and determining the distance between the road element and the matched map road element as the offset under the constraint direction of the road element.
11. A vehicle position acquisition apparatus characterized by comprising:
the first acquisition module is used for acquiring a first image of the road ahead of the current vehicle while driving and identifying one or more road elements in the first image;
the second acquisition module is used for determining a local coordinate system of the first image, and determining constraint dimensions of road elements in the first image under the local coordinate system of the first image, wherein the constraint dimension of a road element is used to constrain the direction in which that road element participates in solving for the vehicle position;
the third acquisition module is used for acquiring a sliding window where the first image is located and acquiring a second image except the first image in the sliding window;
the position determining module is used for acquiring a target vehicle position corresponding to the first image according to the constraint dimension of the road element in the first image and the second image;
wherein the determining a local coordinate system of the first image comprises:
identifying a lane line in the first image, and acquiring a local lane line in a road map corresponding to the first image;
determining that a target lane line overlapping the lane line identified in the first image exists in the road map;
and determining the local coordinate system of the first image according to the target lane line;
the obtaining the target vehicle position corresponding to the first image according to the constraint dimension of the road element in the first image and the second image, further includes:
acquiring constraint dimensions of road elements of the second image under respective local coordinate systems;
acquiring relative position information of a vehicle and vehicle running parameters between two adjacent frames of images in the sliding window;
determining a target vehicle position corresponding to the first image according to the constraint dimension of the road element of each frame of image in the sliding window, the relative position information of two adjacent frames of images and the vehicle running parameter;
and taking the offset of each frame of image as a first constraint condition, taking the difference between the relative position information of the vehicle between the two adjacent frames of images and the vehicle driving distance as a second constraint condition, and performing an optimization solution based on the first constraint condition and the second constraint condition to obtain the target vehicle position of each frame of image in the sliding window.
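The joint optimization recited in claims 9 and 11 — map offsets as the first constraint, the gap between inter-frame relative position and driven distance as the second — can be sketched as a small least-squares problem. Everything below is an illustrative assumption, not the patented implementation: positions are reduced to 1-D, both constraints are weighted equally, and a plain gradient-descent solver stands in for whatever optimizer the method actually uses:

```python
def solve_window_positions(map_obs, odo_dist, iters=5000, lr=0.05):
    """Jointly solve per-frame 1-D positions x_i in a sliding window by
    minimizing the sum of squared residuals of two constraint sets:
      first constraint:  x_i - map_obs[i]            (offset to the map)
      second constraint: (x_{i+1} - x_i) - odo_dist[i] (odometry gap)
    """
    x = list(map_obs)  # initialize at the map-derived positions
    n = len(x)
    for _ in range(iters):
        # Gradient of the map-offset terms.
        grad = [2.0 * (x[i] - map_obs[i]) for i in range(n)]
        # Gradient of the odometry terms couples adjacent frames.
        for i, d in enumerate(odo_dist):
            r = (x[i + 1] - x[i]) - d
            grad[i + 1] += 2.0 * r
            grad[i] -= 2.0 * r
        for i in range(n):
            x[i] -= lr * grad[i]
    return x

# Three frames: noisy map fixes at 0, 1.1, 1.9 m and odometry steps of 1.0 m.
pos = solve_window_positions([0.0, 1.1, 1.9], [1.0, 1.0])
```

The solved positions land between the raw map fixes and pure dead reckoning, which is exactly the smoothing effect of balancing the two constraint sets; a production system would use a sparse nonlinear least-squares solver instead of gradient descent.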
12. An electronic device, comprising: the apparatus of claim 11.
13. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 10.
14. A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of claims 1 to 10.
CN202310756847.1A 2023-06-26 2023-06-26 Vehicle position acquisition method and device and electronic equipment Active CN116503482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310756847.1A CN116503482B (en) 2023-06-26 2023-06-26 Vehicle position acquisition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN116503482A CN116503482A (en) 2023-07-28
CN116503482B true CN116503482B (en) 2023-10-20

Family

ID=87328687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310756847.1A Active CN116503482B (en) 2023-06-26 2023-06-26 Vehicle position acquisition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116503482B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113075716A (en) * 2021-03-19 2021-07-06 地平线(上海)人工智能技术有限公司 Image-based vehicle positioning method and device, storage medium and electronic equipment
CN114018274A (en) * 2021-11-18 2022-02-08 阿波罗智能技术(北京)有限公司 Vehicle positioning method and device and electronic equipment
WO2022147924A1 (en) * 2021-01-05 2022-07-14 广州汽车集团股份有限公司 Method and apparatus for vehicle positioning, storage medium, and electronic device
CN114820769A (en) * 2022-05-09 2022-07-29 安徽蔚来智驾科技有限公司 Vehicle positioning method and device, computer equipment, storage medium and vehicle
CN115112125A (en) * 2022-07-15 2022-09-27 智道网联科技(北京)有限公司 Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN115661299A (en) * 2022-12-27 2023-01-31 安徽蔚来智驾科技有限公司 Method for constructing lane line map, computer device and storage medium



Similar Documents

Publication Publication Date Title
EP3505869B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
EP3875985B1 (en) Method, apparatus, computing device and computer-readable storage medium for positioning
Suhr et al. Sensor fusion-based low-cost vehicle localization system for complex urban environments
CN109767637B (en) Method and device for identifying and processing countdown signal lamp
US11144770B2 (en) Method and device for positioning vehicle, device, and computer readable storage medium
US10996072B2 (en) Systems and methods for updating a high-definition map
CN111742326A (en) Lane line detection method, electronic device, and storage medium
US10996337B2 (en) Systems and methods for constructing a high-definition map based on landmarks
CN115205803A (en) Automatic driving environment sensing method, medium and vehicle
CN116997771A (en) Vehicle, positioning method, device, equipment and computer readable storage medium thereof
CN116486377B (en) Method and device for generating drivable area
CN116503482B (en) Vehicle position acquisition method and device and electronic equipment
WO2020113425A1 (en) Systems and methods for constructing high-definition map
CN111351497A (en) Vehicle positioning method and device and map construction method and device
CN115112125A (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN112880692B (en) Map data labeling method and device and storage medium
CN116762094A (en) Data processing method and device
WO2021056185A1 (en) Systems and methods for partially updating high-definition map based on sensor data matching
CN114761830A (en) Global positioning system positioning method and computer program product
CN116630923B (en) Marking method and device for vanishing points of roads and electronic equipment
CN116678423B (en) Multisource fusion positioning method, multisource fusion positioning device and vehicle
CN116740681B (en) Target detection method, device, vehicle and storage medium
US11620831B2 (en) Register sets of low-level features without data association
CN116664680A (en) Rod piece detection method and device and electronic equipment
CN117893634A (en) Simultaneous positioning and map construction method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant