CN115235493A - Method and device for automatic driving positioning based on vector map - Google Patents

Method and device for automatic driving positioning based on vector map

Info

Publication number
CN115235493A
Authority
CN
China
Prior art keywords
feature
identifier
vehicle
processing
feature identifier
Prior art date
Legal status
Pending
Application number
CN202210848452.XA
Other languages
Chinese (zh)
Inventor
陶绍源
Current Assignee
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd filed Critical Hozon New Energy Automobile Co Ltd
Priority to CN202210848452.XA
Publication of CN115235493A

Classifications

    • G01C21/343 Route searching; route guidance specially adapted for specific applications: calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G01C21/1652 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C21/1656 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01C21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G01S19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary inertial measurement, e.g. tightly coupled inertial

Abstract

The invention discloses a method and a device for automatic driving positioning based on a vector map, relating to the technical field of vehicle automatic driving. The scheme balances processing cost against positioning precision and obtains a more accurate vehicle pose. The main technical scheme of the invention is as follows: acquiring an initial pose corresponding to a vehicle; processing a captured current image corresponding to the vehicle based on the initial pose to obtain first feature elements contained in first feature identifiers in the current image; acquiring, from a vector map and based on the initial pose, second feature elements of second feature identifiers corresponding to the vehicle; and processing the first feature elements and the second feature elements that have a matching relationship by using a preset cost function, so as to correct the initial pose based on the processing result to obtain a target pose.

Description

Method and device for automatic driving positioning based on vector map
Technical Field
The invention relates to the technical field of automatic driving of vehicles, in particular to a method and a device for automatic driving positioning based on a vector map.
Background
High-precision positioning is a key technology of high-level automatic driving: during automatic driving, the vehicle needs to estimate its current pose in the world or in a map at all times.
Currently, high-precision positioning for vehicle automatic driving mainly relies on two classical schemes: one performs positioning based on a lidar and a laser point cloud map; the other performs positioning based on vision and a visual feature point map. These two classical schemes are briefly explained as follows:
For the former scheme, in the mapping phase, a point cloud map of the scene may be constructed using a lidar and other devices (for example, devices applying inertial navigation and Real-Time Kinematic (RTK) carrier-phase differential technology); in the positioning phase, the point cloud currently scanned by the lidar is matched against the point cloud in the point cloud map to obtain the current vehicle pose. However, in this scheme the lidar device is expensive, the point cloud map contains a large number of raw scan points of the scene and is therefore bulky, and the point cloud matching operation is also computationally costly.
For the latter scheme, in the mapping phase, a visual feature point map of the scene may be constructed using a visual sensor (camera) and other devices (e.g., devices applying inertial navigation and Real-Time Kinematic (RTK) carrier-phase differential technology); in the positioning phase, the feature points detected in the current image frame are matched against the visual feature point map to obtain the current vehicle pose. However, in this scheme, large-scale outdoor conditions such as changes in illumination and weather may adversely affect the extraction of feature points from the image frame, which in turn affects positioning accuracy.
Although the former scheme achieves high-precision positioning of the vehicle, the required hardware cost is high, the computational cost is also high, and implementation is difficult; the latter scheme, while less costly, cannot easily guarantee positioning accuracy. A better solution that balances implementation cost and high-precision positioning is still needed.
Disclosure of Invention
In view of this, the present invention provides a method and an apparatus for automatic driving positioning based on a vector map. Its main purpose is to provide an optimized automatic driving positioning method that uses a vector map, is inexpensive to implement, and meets the high-precision requirement of positioning.
In order to achieve the above purpose, the present invention mainly provides the following technical solutions:
A first aspect of the invention provides a method for automatic driving positioning based on a vector map, which comprises the following steps:
acquiring an initial pose corresponding to a vehicle;
processing a captured current image corresponding to the vehicle based on the initial pose to obtain a first feature element contained in a first feature identifier in the current image;
acquiring a second feature element of a second feature identifier corresponding to the vehicle from a vector map based on the initial pose;
and processing the first feature element and the second feature element that have a matching relationship by using a preset cost function, so as to correct the initial pose based on a processing result to obtain a target pose.
In some modified embodiments of the first aspect of the present invention, the processing, based on the initial pose, a current image corresponding to the vehicle to obtain a first feature element included in a first feature identifier in the current image includes:
processing the current image by using a preset image semantic segmentation model, and extracting at least one first feature identifier from the current image;
according to the number of the first feature identifications, creating a first image layer corresponding to the current image;
putting each first feature identifier into the uniquely corresponding first image layer to obtain pixel points correspondingly covered by the first feature identifiers in the first image layer;
performing distance conversion processing on the pixel points correspondingly covered by the first characteristic identification to obtain target pixel points;
and forming a first characteristic element corresponding to the first characteristic identification by using the target pixel points.
In some modified embodiments of the first aspect of the present invention, the obtaining, from a vector map, a second feature element that a second feature identifier corresponding to the vehicle has, based on the initial pose, includes:
extracting a second feature identifier within a preset range from the vehicle from a vector map based on the initial pose;
creating a second image layer corresponding to the current image;
projecting the second feature identifier to a unique corresponding second image layer to obtain pixel points correspondingly covered by the second feature identifier in the second image layer;
and forming a second characteristic element corresponding to the second characteristic identifier by using the pixel points correspondingly covered by the second characteristic identifier.
In some modified embodiments of the first aspect of the present invention, before the projecting the second feature identifier to a uniquely corresponding second image layer to obtain a pixel point correspondingly covered by the second feature identifier in the second image layer, the method further includes:
if a line-type identifier exists in the second feature identifier, performing equal-interval sampling on the line-type identifier to obtain a plurality of corresponding discrete points;
and using the plurality of discrete points in place of the line-type identifier to characterize it.
In some modified embodiments of the first aspect of the present invention, the processing the first feature element and the second feature element having a matching relationship by using a preset cost function to correct the initial pose to obtain a target pose based on a processing result includes:
based on the same feature identifier, searching the first feature element matched with the second feature element to obtain a feature element combination corresponding to the same feature identifier, wherein the feature element combination comprises the first feature element and the second feature element which have a matching relationship;
processing each feature element combination by using a preset cost function to obtain a cost function value for measuring the matching degree between the second feature element and the first feature element based on each feature element combination;
and correcting the initial pose to obtain a target pose based on the minimum cost function value.
A second aspect of the invention provides a device for automatic driving positioning based on a vector map, which comprises:
the first acquisition unit is used for acquiring an initial pose corresponding to the vehicle;
the first processing unit is used for processing a captured current image corresponding to the vehicle based on the initial pose to obtain a first feature element contained in a first feature identifier in the current image;
a second obtaining unit, configured to obtain, from a vector map, a second feature element that a second feature identifier corresponding to the vehicle has, based on the initial pose;
and the second processing unit is used for processing the first characteristic element and the second characteristic element with the matching relation by using a preset cost function so as to correct the initial pose based on a processing result to obtain a target pose.
In some modified embodiments of the second aspect of the present invention, the first processing unit includes:
the first extraction module is used for processing the current image by utilizing a preset image semantic segmentation model and extracting at least one first feature identifier from the current image;
a first creating module, configured to create a first image layer corresponding to the current image according to the number of the first feature identifiers;
the placement module is used for placing each first feature identifier into the uniquely corresponding first image layer to obtain pixel points correspondingly covered by the first feature identifiers in the first image layer;
the processing module is used for carrying out distance conversion processing on the pixel points correspondingly covered by the first characteristic identifier to obtain target pixel points;
and the first composing module is used for composing a first feature element corresponding to the first feature identifier by using the target pixel point.
In some modified embodiments of the second aspect of the present invention, the second acquisition unit includes:
the second extraction module is used for extracting a second feature identifier within a preset range of the vehicle from a vector map based on the initial pose;
a second creating module, configured to create a second layer corresponding to the current image;
the projection module is used for projecting the second feature identifier to a unique corresponding second image layer to obtain pixel points correspondingly covered by the second feature identifier in the second image layer;
and the second composing module is used for composing a second characteristic element corresponding to the second characteristic identifier by using the pixel points correspondingly covered by the second characteristic identifier.
In some modified embodiments of the second aspect of the present invention, the second acquiring unit further includes:
the sampling processing module is configured to, before the second feature identifier is projected to a uniquely corresponding second image layer and a pixel point correspondingly covered by the second feature identifier in the second image layer is obtained, if a linear identifier exists in the second feature identifier, perform equal-interval sampling processing on the linear identifier to obtain a plurality of corresponding discrete points;
and the replacing module is used for using the plurality of discrete points in place of the line-type identifier to characterize it.
In some modified embodiments of the second aspect of the present invention, the second processing unit includes:
the searching module is used for searching the first feature element matched with the second feature element based on the same feature identifier to obtain a feature element combination corresponding to the same feature identifier, wherein the feature element combination comprises the first feature element and the second feature element which have a matching relationship;
the processing module is used for processing each characteristic element combination by utilizing a preset cost function to obtain a cost function value for measuring the matching degree between the second characteristic element and the first characteristic element based on each characteristic element combination;
and the implementation module is used for implementing the correction of the initial pose based on the minimum cost function value so as to obtain a target pose.
A third aspect of the invention provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the method for automatic driving localization based on a vector map as described above.
A fourth aspect of the present invention provides an electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor when executing the computer program implementing the method for automatic driving positioning based on vector map as described above.
By the technical scheme, the technical scheme provided by the invention at least has the following advantages:
the invention provides a method and a device for automatic driving positioning based on a vector map. On the premise of the same initial pose, the first feature identifier shot on the actual road surface and the second feature identifier obtained from the vector map have the same condition in practice, so that the first feature element and the second feature element have the matching relation based on the same feature identifier. The first characteristic element and the second characteristic element with the matching relation are processed by the aid of the preset cost function, so that a cost function value of the matching degree between the first characteristic element and the second characteristic element with the matching relation is calculated to serve as a calculation processing result, the initial pose is corrected based on the processing result to obtain the target pose, and high-precision positioning of the automatic driving vehicle is achieved.
Compared with the prior art, the method provided by the invention has the advantages that the required algorithm is not complex, the calculation cost is low, and the high-precision vector map is easy to obtain, so that the implementation cost of the scheme is low, the requirement on high precision of positioning is also met, and the problem that the implementation cost and high precision positioning are difficult to realize in the prior art is solved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various additional advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart of a method for automatic driving positioning based on a vector map according to an embodiment of the present invention;
FIG. 2 is a flowchart of another method for automatic driving positioning based on a vector map according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an exemplary distance transform performed on Ick to obtain the distance-transformed image Dck according to an embodiment of the present invention;
fig. 4 is a block diagram of an apparatus for automatic driving positioning based on a vector map according to an embodiment of the present invention;
fig. 5 is a block diagram of another apparatus for automatic driving positioning based on a vector map according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The embodiment of the invention provides a method for automatic driving positioning based on a vector map, which comprises the following specific steps as shown in figure 1:
101. and acquiring the corresponding initial pose of the vehicle.
Vehicle pose: the vehicle pose refers to the position and attitude of the vehicle in the world or in a map; the position is typically expressed in Euclidean coordinates (x, y, z), and the attitude is typically expressed in Euler angles (rotation angles about the x/y/z axes) or as a quaternion (x, y, z, w).
In a vehicle automatic driving application, the initial pose of the vehicle is mainly acquired in two different example scenarios: in example scenario 1, the current vehicle pose is acquired as the initial pose when the vehicle has just started; in example scenario 2, the vehicle pose is acquired at some moment while the vehicle is running and is used as the initial pose. In the embodiment of the present invention, the initial pose of the vehicle is calculated from a large amount of measurement data and corresponds to a guessed pose of the current vehicle.
Based on different example scenes, the specific implementation method for respectively adopting different acquisition initial poses is as follows:
example scenario 1: for the initial pose corresponding to the current time of the vehicle just after the vehicle starts, the initial pose corresponding to the current time of the vehicle can be obtained through a global pose sensor, for example, the initial pose can be directly obtained through a Global Navigation Satellite System (GNSS).
Example scenario 2: while the vehicle is running, the initial pose of the vehicle at the current moment can be predicted from the target pose calculated in step 104 at the previous moment together with data from other sensors such as an Inertial Measurement Unit (IMU) or a wheel speed sensor.
For example, once the target pose of the vehicle at the previous moment is available, the acceleration and angular velocity of the vehicle obtained by the IMU are integrated over the time difference between the previous moment and the current moment to obtain the position and attitude increment of the current moment relative to the previous moment; this increment is superimposed on the target pose of the previous moment to obtain the initial pose at the current moment. The pose increment can also be obtained by integrating the wheel speed provided by the wheel speed sensor of the vehicle over time.
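A minimal sketch of this prediction step is given below. It is not part of the original disclosure: the planar (x, y, yaw) state, the variable names and the simple Euler integration are illustrative assumptions; a production system would integrate the full 3-D attitude and additionally fuse the wheel speed sensor.

```python
import numpy as np

def predict_initial_pose(prev_position, prev_yaw, accel, gyro_z, velocity, dt):
    """Predict the current initial pose from the previous target pose.

    A planar (x, y, yaw) sketch: the IMU yaw rate and the measured forward
    acceleration are integrated over the time difference dt, and the
    resulting increment is superimposed on the previous target pose.
    """
    # Integrate angular velocity to obtain the attitude increment.
    yaw = prev_yaw + gyro_z * dt

    # Integrate acceleration to update speed, then speed to obtain the
    # position increment along the (updated) heading direction.
    velocity = velocity + accel * dt
    dx = velocity * np.cos(yaw) * dt
    dy = velocity * np.sin(yaw) * dt

    position = prev_position + np.array([dx, dy])
    return position, yaw, velocity

# Example: previous target pose at the origin heading east, 0.1 s later.
pos, yaw, v = predict_initial_pose(np.zeros(2), 0.0, accel=0.2, gyro_z=0.01,
                                   velocity=10.0, dt=0.1)
```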
102. Processing the captured current image corresponding to the vehicle based on the initial pose to obtain the first feature elements contained in the first feature identifiers in the current image.
A first feature element is a feature element of a feature identifier contained in the captured current image corresponding to the vehicle. The feature identifier may be, but is not limited to: traffic markings and signals such as lane lines, stop lines and traffic lights; and road-condition features such as road edges, green isolation belts and street lamps.
For example, the first feature element may be a representation of the pixel level of the corresponding feature identifier in an image, namely: which pixel points are covered in the image, so that the corresponding feature identifier is visually displayed in the image by using the pixel points. For example, if a lane line is shot in the current image, the lane line is the first feature identifier, and the lane line covers corresponding pixel points in the current image, where the pixel points are the first feature elements corresponding to the lane line.
It should be noted that, in the technical solution of the present invention, the "feature identifier" and the "feature element" are mentioned several times, and for convenience of clear explanation of the technical solution of the present invention, the "feature identifier" obtained based on capturing the current image is referred to as a "first feature identifier", and the "feature element" included in the "first feature identifier" is referred to as a "first feature element".
And acquiring a 'feature identifier' which is associated with the vehicle periphery from the vector map, marking the 'feature identifier' as a 'second feature identifier', and marking a 'feature element' of the 'second feature identifier'.
However, it should be noted that, because the pose covers both the position and the attitude of the vehicle, the imaging effect of the first feature identifier in the current image may differ slightly from its actual shape on the road surface depending on the pose. For example, if the vehicle is not parallel to the lane line but at an angle to it, the lane line imaged by a camera sensor at the left or right end of the vehicle appears as a trapezoid rather than a long rectangle, and the imaging effect may also differ depending on where the camera sensor is mounted on the vehicle (such as the front or the rear of the left end). Accordingly, for different poses, the first feature elements of the same first feature identifier in the current image also differ.
In the embodiment of the present invention, at least one camera sensor is mounted on the vehicle in advance, and the mounting positions of the camera sensors may be, but are not limited to, the front end, the rear end and the left/right end of the vehicle. On the premise that the initial pose of the vehicle is determined, the corresponding current images can be shot by utilizing the camera sensors respectively.
When the current image is captured, it is desirable to capture as many first feature identifiers as possible, so that more first feature elements of the first feature identifiers can be used in the subsequent steps and a more precise target pose is finally obtained. To this end, the mounting position of the camera sensor, its field of view and its imaging clarity may, for example but without limitation, be adjusted.
103. And acquiring second feature elements of a second feature identifier corresponding to the vehicle from the vector map based on the initial pose.
The second feature element refers to a feature element of a second feature identifier contained in the vector map. It should be noted that the second feature identifier may also be, but is not limited to: traffic markings and signals such as lane lines, stop lines and traffic lights; and road-condition features such as road edges, green isolation belts and street lamps. By comparison, the first feature identifier and the second feature identifier actually refer to the same data type and range but have different sources: the first feature identifier is acquired from the captured current image of the vehicle based on the initial pose of the vehicle, while the second feature identifier is acquired from the vector map as a feature identifier existing near the vehicle, based on the initial pose of the vehicle.
The vector map may, for example but without limitation, be downloaded from a third-party application, and the high-precision requirement on the vector map can be met through different source channels.
In the embodiment of the invention, once the initial pose of the vehicle is determined, the second feature identifiers existing within a range near the vehicle are obtained from the vector map, and the second feature elements of each second feature identifier are then derived. For example, the second feature element may be the pixel-level representation of the second feature identifier after projection into an image, namely: which pixel points it covers in the image, so that the corresponding feature identifier is visually displayed in the image by those pixel points. For example, when a lane line near the vehicle is obtained from the vector map, the lane line is projected into an image as a second feature identifier, and all pixel points covered by the lane line in that image are taken as its second feature elements.
It should be noted that, although the pose covers both the position and the attitude of the vehicle, obtaining the second feature identifier from the vector map depends only on the vehicle position and is not affected by the vehicle attitude. Therefore, when the second feature identifier obtained from the vector map is projected onto an image, its imaging effect is substantially the same as the shape of the actual feature identifier on the road surface and is affected only by the accuracy of the vector map.
Therefore, in the embodiment of the invention, for a vehicle at the same position but with different attitudes, the first feature element and the second feature element of the same feature identifier differ.
104. And processing the first characteristic element and the second characteristic element which have the matching relationship by using a preset cost function so as to correct the initial pose based on a processing result to obtain a target pose.
In the embodiment of the present invention, the first feature identifier is obtained by capturing the current image of the vehicle, and the second feature identifier corresponding to the vehicle is obtained from the vector map. Since both are obtained based on the same initial pose, the feature identifiers respectively characterized by the first feature identifier and the second feature identifier may in fact be the same, even though they come from two different sources.
Illustratively, for a running vehicle, the vehicle pose acquired at a given moment is selected as the initial pose; a camera sensor deployed at the left end of the vehicle then captures the left dashed lane line identifier, which is the first feature identifier acquired from the captured current image based on the initial pose.
Based on the initial pose, feature identifiers existing in the vicinity of the vehicle, such as a left dotted lane line identifier and a right dotted lane line identifier of a driving lane in which the vehicle is located, lane line identifiers of adjacent driving lanes and the like, can be acquired by using the vector map. Accordingly, the left dotted lane line identifier (i.e., this same feature identifier) is obtained based on two different channels.
However, under the influence of the vehicle pose included in the initial pose, the first feature element and the second feature element of the same feature identifier may be the same or different, but the two may be determined to have a matching relationship based on the same feature identifier.
Therefore, in the embodiment of the present invention, the second feature element of the second feature identifier obtained from the vector map is taken as the reference, and the preset cost function is used to calculate a cost value that measures the degree of matching between the two feature elements. This value serves as the processing result, based on which the initial pose (i.e., the guessed pose of the current vehicle) is further corrected to locate a more accurate current pose (i.e., the target pose).
The embodiment of the invention provides a method for automatic driving positioning based on a vector map. Given the same initial pose, the feature identifier captured on the actual road surface and the feature identifier obtained from the vector map may in fact be the same, so the first feature element and the second feature element have a matching relationship based on that same feature identifier. The first feature element and the second feature element that have the matching relationship are processed with the preset cost function, so that a cost function value measuring the degree of matching between them is calculated as the processing result; the initial pose is then corrected based on this processing result to obtain the target pose, thereby achieving high-precision positioning of the automatically driven vehicle.
Compared with the prior art, the method provided by the embodiment of the invention requires no complex algorithm, has a low computational cost, and relies on a high-precision vector map that is easy to obtain. The implementation cost of the scheme is therefore low while the high-precision requirement of positioning is still met, which solves the problem in the prior art that implementation cost and high-precision positioning cannot both be achieved.
In order to explain the above embodiment in more detail, the embodiment of the present invention further provides another method for automatic driving positioning based on a vector map, and as shown in fig. 2, the embodiment of the present invention provides the following specific steps:
first, it should be noted that, for convenience and clarity of explaining the method for automatic driving positioning based on a vector map provided in the embodiment of the present invention, a word "first" is used to identify a feature identifier, a feature element, and a created layer obtained by capturing a current image; the feature identifier, the feature element and the created layer obtained from the vector map are identified by the word "second", so that the related data information obtained from two different channels can be distinguished conveniently.
201. And acquiring the corresponding initial pose of the vehicle.
In the embodiment of the present invention, for the explanation of this step, refer to step 101, which is not described herein again. Illustratively, the embodiment of the present invention represents the initial pose as a guess pose of the vehicle at the current time, for example, as the vehicle pose Tk at time k.
With reference to steps 202a to 206a, the processing of the captured current image corresponding to the vehicle based on the initial pose to obtain the first feature elements contained in the first feature identifiers in the current image is explained in detail:
202a, processing the current image by using a preset image semantic segmentation model, and extracting at least one first feature identifier from the current image.
The preset image semantic segmentation model is a model trained in advance based on a deep learning network, and the embodiment of the invention mainly utilizes the model to identify the characteristic identification existing in the image. In order to distinguish the feature identifiers of different acquisition channels between the feature identifiers obtained by capturing an image and the feature identifiers obtained from the vector map, the feature identifier obtained from the former channel is referred to as a first feature identifier, and the feature identifier obtained from the latter channel is referred to as a second feature identifier.
203a, creating a first image layer corresponding to the current image according to the number of the first feature identifiers.
204a, putting each first feature identifier into a uniquely corresponding first image layer to obtain pixel points correspondingly covered by the first feature identifiers in the first image layer.
In the embodiment of the present invention, layers are created based on the current image, so that each layer has the same attribute information as the current image, where the attribute information includes, but is not limited to, resolution, size, and the like.
For convenience and clarity of reference, the layer created according to the first feature identifier is referred to as a first layer in the embodiments of the present invention. Illustratively, if the current image is shot to include three first feature identifiers, namely, a lane line, a road edge and a traffic light, three first image layers are correspondingly created, and each first feature identifier is placed in a uniquely corresponding first image layer.
For the first feature identifier placed in the first image layer, it should be noted that the pixel coordinate of the first feature identifier in the first image layer is the same as the pixel coordinate of the first feature identifier in the current image corresponding to the vehicle.
Illustratively, the following formula (1) is used to characterize which pixel points are covered by the first feature identifier in the first layer.
Ick(P_I) = 1, if the pixel point P_I is covered by the first feature identifier c in the first image layer Ick; Ick(P_I) = 0, otherwise.    Formula (1)
Where k indicates which frame the captured current image is; c describes the category of the first feature identifier. This category is not a "classification category" in the popular sense: in the embodiment of the invention each distinct first feature identifier is treated as its own category (for example, two lane lines at different positions, a stop line and a road edge are treated as four categories), which makes it convenient to determine which feature identifier a given pixel point belongs to. Ick denotes a first image layer, specifically the first image layer of the first feature identifier c of the k-th frame current image; taking a lane line as an example, Ick is the first image layer of the lane line of the k-th frame current image. P_I denotes a pixel point contained in the first image layer Ick. The embodiment of the invention uses a "1" or "0" decision for "yes" and "no", specifically to decide whether the pixel point P_I is covered by the first feature identifier c on the first image layer Ick.
Specifically, taking a lane line as an example, certain designated pixel points are covered in the first image layer Ick so that these designated pixel points image the lane line. Then, for each pixel point P_I contained in the first image layer Ick, the decision ("1" or "0") is applied one by one: if P_I is decided to be "1", P_I is a designated pixel point; otherwise P_I is decided to be "0". The corresponding visualization effect on the first image layer is: a pixel point decided as "1" is black in the first image layer, and a pixel point decided as "0" is white in the first image layer. Based on deciding "1" or "0" for every P_I in the first image layer, the imaging effect of the lane line (i.e., the first feature identifier) is exhibited within the first image layer. The first image layer is thus converted into a binary (0/1) image, that is, the first feature identifier contained in the first image layer is expressed by this binary (0/1) image.
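The per-identifier binary layers Ick of formula (1) can be sketched as follows; the format of the segmentation output (an integer label image in which each distinct first feature identifier carries its own id) is an assumption made for illustration.

```python
import numpy as np

def build_binary_layers(label_image):
    """Build one binary (0/1) layer Ick per first feature identifier c.

    `label_image` is assumed to be an H x W integer array produced by the
    image semantic segmentation model, where 0 is background and every
    distinct first feature identifier (e.g. each individual lane line,
    stop line, road edge) has its own id, as formula (1) requires.
    """
    layers = {}
    for c in np.unique(label_image):
        if c == 0:          # skip background
            continue
        # Ick(P_I) = 1 where the pixel is covered by identifier c, else 0.
        layers[int(c)] = (label_image == c).astype(np.uint8)
    return layers
```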
205a, performing distance transformation processing on the pixel points correspondingly covered by the first feature identifier to obtain target pixel points.
In the embodiment of the present invention, in addition to obtaining the pixel points covered by the first feature identifier on the corresponding first image layer by using formula (1), a distance transform is further applied to these pixel points, so that in the first image layer the pixel value grows with the distance from the first feature identifier: pixel points on the first feature identifier take the minimum value, and pixel points farther away take larger values. The effect presented by the first feature identifier is therefore: in the first image layer, the first feature identifier itself is very dark, and the farther a pixel is from the first feature identifier, the more blurred or white it becomes, giving a gradual-change effect. Specifically, the distance transform is implemented with the following formula (2):

Dck(P_I) = 0, if Ick(P_I) = 1; Dck(P_I) = min over all pixels Q with Ick(Q) = 1 of dist(P_I, Q), if Ick(P_I) = 0,    Formula (2)

where dist(P_I, Q) is the distance from P_I to the covered pixel Q, defined as described below.
for equation (2), the specific explanation is as follows:
(1) P_I denotes any pixel point in the first image layer. Its pixel coordinate is generally expressed as (u, v), with the upper-left corner of the image as the origin (0, 0), where u denotes the column of the pixel point and v denotes the row of the pixel point;
(2) Ick is the first image layer, and Ick(P_I) denotes the pixel value of the Ick image at the position of the pixel point P_I;
(3) Dck is the reconstructed image, and the pixel value of each of its pixel points follows these rules:
i. when Ick(P_I) = 1, i.e. the pixel value of Ick at position P_I is 1, the pixel value of Dck at position P_I is set to 0;
ii. when Ick(P_I) = 0, i.e. the pixel value of Ick at position P_I is 0, the pixel with pixel value 1 in Ick that is nearest to P_I is found, and the minimum of the differences between the u and v coordinates of that pixel and of P_I is set as the pixel value of Dck at position P_I.
Specifically, with reference to the schematic diagram in fig. 3 of performing the distance transform on Ick to obtain the distance-transformed image Dck, the embodiment of the present invention applies the distance transform to the pixel points in each first image layer, so as to obtain a Dck image reconstructed from each first image layer.
Through the above transformation, the original binary (0/1) image is converted into an image with continuous brightness, i.e. an image in which a pixel becomes brighter the farther it is from a pixel with value 1 in the original Ick. The transformed result Dck can then be used for the nonlinear optimization in the subsequent steps.
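A sketch of the distance transform that turns a binary layer Ick into the continuous-brightness image Dck is shown below. The patent describes its own pixel-coordinate-difference distance; the Euclidean distance transform from SciPy is used here merely as an illustrative stand-in.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_transform_layer(ick):
    """Compute Dck from a binary layer Ick (approximating formula (2)).

    Dck is 0 wherever Ick == 1 and grows with the distance to the nearest
    covered pixel elsewhere. distance_transform_edt measures the distance
    to the nearest zero-valued pixel, so Ick is inverted first.
    """
    dck = distance_transform_edt(1 - ick)
    return dck.astype(np.float32)
```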
206a, forming a first feature element corresponding to the first feature identifier by using the target pixel points.
In the embodiment of the present invention, the first feature identifier placed in the first image layer covers the corresponding pixel points in that layer according to the captured imaging effect, and applying the distance transform to those pixel points amounts to enhancing that imaging effect. Accordingly, the pixel points with the enhanced imaging effect on the first image layer are taken as the first feature element of the first feature identifier.
It should be noted that, because the pose covers both the position and the attitude of the vehicle, the imaging effect of a feature identifier obtained by capturing the current image of the vehicle may differ slightly from its actual shape on the road surface depending on the attitude; therefore, the imaging effect of the first feature identifier composed of the first feature elements also differs from the actual shape on the road surface.
As a parallel implementation process of "acquiring a first feature element possessed by a first feature identifier from a current image corresponding to a captured vehicle based on an initial pose", the embodiment of the present invention further acquires a second feature element possessed by a second feature identifier corresponding to the vehicle from a vector map based on the initial pose, which is specifically explained with reference to steps 202b to 205b as follows.
And 202b, extracting a second feature identifier within a preset range from the vehicle from the vector map based on the initial pose.
203b, creating a second image layer corresponding to the current image based on the number of the second feature identifications.
In the embodiment of the invention, the second feature identifier within a preset range from the vehicle is extracted from the vector map based on the vehicle positioning contained in the initial pose of the vehicle.
Further, based on the number of the second feature identifiers, the embodiment of the present invention creates a corresponding number of second image layers, and the attribute information of each second image layer is the same as the current image obtained by capturing the current driving road condition of the vehicle, where the attribute information includes, but is not limited to, resolution, size, and the like.
204b, projecting the second feature identifier to a uniquely corresponding second image layer to obtain pixel points correspondingly covered by the second feature identifier in the second image layer.
In the embodiment of the present invention, this is equivalent to projecting the second feature identifiers obtained from the vector map onto the image captured by the camera sensor; specifically, each second feature identifier is projected onto its uniquely corresponding second image layer. This amounts to a coordinate conversion from the vector-map frame to the camera-sensor frame, followed by a conversion to pixel coordinates, which is implemented with the following formula (3):

Z P_I = K T_k P_m    Formula (3)
specifically, the projection process is explained by using the formula (3) as follows:
(1) P_m is a point in the map, and T_k is the pose of the camera sensor in the map at time k. Since the camera sensor here is the same camera sensor used in step 202a to capture the current image, T_k is also equivalent to the vehicle initial pose mentioned in step 201. With P_c = T_k P_m, the calculated P_c is the coordinate of P_m in the camera sensor coordinate system;
(2) K is the intrinsic parameter matrix of the camera sensor (i.e. the camera), whose general form is:

K = [ fx, 0, cx; 0, fy, cy; 0, 0, 1 ]

where fx and fy are the focal lengths in pixels and (cx, cy) is the principal point.
With P_p = K T_k P_m, P_p is then obtained as the position of the point P_c in the pixel coordinate system before normalization;
(3) Z is the third coordinate of P_p. Dividing all coordinates of P_p by Z yields the normalized pixel coordinate P_I, whose third component is 1; this normalized pixel coordinate is the final pixel coordinate onto which P_m is projected.
It should be noted that, for line-type identifiers (as second feature identifiers), the embodiment of the present invention may perform equal-interval sampling on them in advance, so that each such second feature identifier is characterized in advance by a plurality of discrete points; the projection is then performed on these discrete points, which helps improve the efficiency of the projection operation for line-type identifiers. A sketch of this sampling is given below.
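The following is an illustrative sketch of the equal-interval sampling of a line-type identifier (e.g. a lane line stored as a polyline in the vector map) into discrete points; the polyline representation and the 0.5 m spacing are assumptions, not values prescribed by the patent.

```python
import numpy as np

def resample_polyline(vertices, spacing=0.5):
    """Resample a polyline at (approximately) equal intervals.

    `vertices` is an (N, 3) array of map points describing a line-type
    identifier; the returned discrete points replace the polyline in the
    subsequent projection step.
    """
    vertices = np.asarray(vertices, dtype=float)
    seg = np.linalg.norm(np.diff(vertices, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # arc length
    targets = np.arange(0.0, s[-1] + 1e-9, spacing)      # equal intervals
    # Interpolate each coordinate along the arc length.
    return np.stack([np.interp(targets, s, vertices[:, i]) for i in range(3)],
                    axis=1)
```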
205b, forming a second feature element corresponding to the second feature identifier by using the pixel points correspondingly covered by the second feature identifier.
In the embodiment of the invention, the second feature identifier obtained from the vector map is projected onto its uniquely corresponding second image layer, covering the corresponding pixel points in that image; these pixel points are then used to form the second feature element corresponding to the second feature identifier.
207. And processing the first characteristic element and the second characteristic element which have the matching relationship by using a preset cost function so as to correct the initial pose based on a processing result to obtain a target pose.
In the embodiment of the present invention, the detailed explanation of this step is as follows:
firstly, based on the same feature identifier, searching a first feature element matched with a second feature element to obtain a feature element combination corresponding to the same feature identifier, wherein each feature element combination comprises the first feature element and the second feature element which have a matching relationship.
For example, based on the initial pose of the vehicle, a first feature identifier is obtained from a current image taken and a second feature identifier (i.e., two source channels) is obtained by searching the vicinity of the vehicle from a vector map, and it may occur that a certain first feature identifier and a certain second feature identifier actually express the same feature identifier in the road condition, for example, the same lane line in the road condition.
In addition, because the embodiment of the invention processes the feature identifier in a layer mode, based on the same feature identifier, a certain first layer and a certain second layer have a matching relationship. Accordingly, based on such a same feature identifier, a first feature element corresponding to the feature identifier in the first layer having the matching relationship and a second feature element corresponding to the feature identifier in the corresponding second layer are combined to form a feature element combination corresponding to the feature identifier. And the first feature element and the second feature element stored in each feature element combination also have a matching relationship.
Secondly, processing each characteristic element combination by using a preset cost function to obtain a cost function value for correspondingly measuring the matching degree between a second characteristic element and a first characteristic element based on each characteristic element combination, and correcting the initial pose based on the minimum cost function value to obtain a target pose.
In the embodiment of the invention, processing each feature element combination with the preset cost function is in effect processing, for each same feature identifier, the first image layer and the second image layer that have the matching relationship. The technical scheme of the invention is thus realized by processing the different feature identifiers through multiple pairs of matched layers, handling each feature identifier one by one, and correcting the initial pose of the vehicle according to the processing result of each feature identifier.
Illustratively, the preset cost function constructed by the embodiment of the present invention is:
J(T_k) = Σ over all map points m of [ Dck(P_I) ]^2    Formula (4)

where, for each map point m, P_I is its pixel coordinate obtained with T_k by formula (3), and Dck is the distance-transformed image of the category c to which m belongs.
specifically, the principle of the method implemented by using the formula (4) is as follows:
(1) For every point m in the map (i.e. P_m), its pixel coordinate P_I is obtained using T_k according to formula (3) above;
(2) For the category c to which the point m belongs (the category c refers to a certain feature identifier), the pixel value of Dck corresponding to each point m is read, at the pixel coordinate P_I, from the Dck image generated according to formula (2);
for example, for a first image layer and a second image layer that have a matching relationship based on the same feature identifier, the second feature element projected onto the second image layer (i.e. the pixel coordinates P_I) is used: the Dck image corresponding to the first image layer is looked up at P_I (see fig. 3) to obtain the Dck pixel value of each P_I; since each P_I actually corresponds to a point m on the map, the Dck pixel value corresponding to the point m is thus obtained;
(3) The pixel values obtained at all points m (i.e. the corresponding Dck pixel values) are squared and summed to obtain J(T_k). J(T_k) is in fact a cost function value that measures, for the same feature identifier, the degree of matching between the imaging effect obtained from the vector map and projected into the second image layer and the imaging effect obtained by capturing the current image in the first image layer; the smaller the cost function value, the higher the degree of matching.
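Items (1)-(3) above, written out as a short sketch; the data structures (a dict of Dck images keyed by category and a list of (category, map point) pairs) are illustrative assumptions, and `project_map_point` is the projection sketch given earlier.

```python
import numpy as np

def cost_J(T_k, K, dck_images, map_points):
    """Evaluate J(T_k) of formula (4).

    dck_images : dict mapping category c -> distance-transformed image Dck.
    map_points : list of (c, P_m) pairs, the second feature identifiers
                 (already sampled into discrete points) with their category.
    """
    total = 0.0
    for c, P_m in map_points:
        uv = project_map_point(P_m, T_k, K)     # formula (3), sketch above
        if uv is None:
            continue
        u, v = int(round(uv[0])), int(round(uv[1]))
        dck = dck_images[c]
        h, w = dck.shape
        if 0 <= v < h and 0 <= u < w:
            total += dck[v, u] ** 2             # square and sum, formula (4)
    return total
```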
Further, in the embodiment of the present invention, a nonlinear optimization algorithm is used to find, through formula (5), the pose that minimizes the value of J(T_k), and the solution is taken as the target pose:

T̂_k = argmin over T_k of J(T_k)    Formula (5)

where T_k is the current vehicle initial pose and T̂_k is the target pose obtained by correcting the initial pose.
It should be noted that, if the method provided by the embodiment of the present invention is executed only by using the current image obtained by one camera sensor of the vehicle, the formulas (4) and (5) may be adopted. However, if the method provided in the embodiment of the present invention is executed by using the current image obtained by each of the multiple camera sensors provided in the vehicle, it is necessary to obtain the corresponding second feature identifier having a matching relationship from the vector map based on the first feature identifier included in each current image, and participate in the subsequent cost function operation processing to obtain a more accurate vehicle pose, which is specifically implemented by using the following formula (6):
$$J(T_k)=\sum_{v\in V}\;\sum_{m\in P_m,\ m\ \text{visible in}\ v}\left[D_{ck}^{v}\!\left(P_I^{v}\right)\right]^{2} \tag{6}$$
where V denotes all the camera sensors; that is, D_ck images are generated for all the camera sensors, and every camera sensor whose field of view contains a point m projects that point to its pixel coordinates so that the corresponding D_ck values are taken and summed.
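A minimal sketch of the multi-camera accumulation of formula (6), reusing cost_J from above; the per-camera field names and the omission of an explicit field-of-view mask (the clip inside cost_J stands in for it) are simplifying assumptions.

```python
def cost_J_multi(T_k, cameras, points_by_class):
    """Formula (6): accumulate the single-camera cost over all camera sensors V.

    `cameras` is assumed to be a list of dicts holding each camera's intrinsics `K`,
    its camera-from-vehicle transform `T_cam_from_vehicle`, and its per-class
    distance images `dck_by_class` built from that camera's current image."""
    total = 0.0
    for cam in cameras:
        T_cam = cam["T_cam_from_vehicle"] @ T_k   # chain the vehicle pose with the camera extrinsics
        total += cost_J(T_cam, points_by_class, cam["dck_by_class"], cam["K"])
    return total
```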
In the embodiment of the invention, the pose of the vehicle also changes continuously while the vehicle is driving automatically. After the target pose of step 207 is obtained from the initial pose acquired in step 201, the target pose can then be used as the guessed pose of the vehicle at the next adjacent unit moment, and steps 202a to 206a, 202b to 205b and 207 are executed repeatedly, so that a more accurate pose is obtained at the next adjacent unit moment.
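As a sketch of this repetition, assuming a hypothetical generator `frames` that yields, at each unit moment, the per-class distance images built from the current image together with the map points retrieved around the vehicle:

```python
def run_localization(initial_pose, frames):
    """Carry each corrected target pose forward as the guessed pose for the next unit moment."""
    pose = initial_pose
    for points_by_class, dck_by_class, K in frames:
        pose = refine_pose(pose, points_by_class, dck_by_class, K)  # steps 202a-206a, 202b-205b, 207
        yield pose   # the target pose at this moment, used as the next initial guess
```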
Further, as an implementation of the methods shown in fig. 1 and fig. 2, an embodiment of the present invention provides an apparatus for automatic driving positioning based on a vector map. The apparatus embodiment corresponds to the method embodiment, and for ease of reading, the details of the method embodiment are not repeated one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents of the method embodiment. The apparatus is applied to obtain a more accurate vehicle pose and, as shown in fig. 4, comprises:
a first acquiring unit 31 configured to acquire an initial pose corresponding to the vehicle;
the first processing unit 32 is configured to process, based on the initial pose, the captured current image corresponding to the vehicle, so as to obtain a first feature element contained in a first feature identifier in the current image;
a second obtaining unit 33, configured to obtain, from a vector map, a second feature element that a second feature identifier corresponding to the vehicle has, based on the initial pose;
and the second processing unit 34 is configured to process the first feature element and the second feature element having the matching relationship by using a preset cost function, so as to implement correction on the initial pose based on a processing result to obtain a target pose.
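Purely as an illustration of how these units fit together, the following sketch wires the earlier helper functions into one object; the class name, the constructor arguments, and the assumption that the second feature elements are pre-grouped map points are all hypothetical.

```python
class VectorMapLocalizer:
    """Sketch of the apparatus of FIG. 4, delegating to the helper sketches above."""

    def __init__(self, map_points_by_class, K):
        self.map_points_by_class = map_points_by_class  # second feature identifiers from the vector map
        self.K = K                                      # camera intrinsics used for projection

    def locate(self, dck_by_class, initial_pose):
        # first acquiring unit 31: initial_pose is supplied by the caller;
        # first processing unit 32: dck_by_class are the distance images of the first layers;
        # second obtaining unit 33 and second processing unit 34: match per class and minimize the cost.
        return refine_pose(initial_pose, self.map_points_by_class, dck_by_class, self.K)
```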
Further, as shown in fig. 5, the first processing unit 32 includes:
a first extraction module 321, configured to process the current image by using a preset image semantic segmentation model, and extract at least one first feature identifier from the current image;
a first creating module 322, configured to create a first image layer corresponding to the current image according to the number of the first feature identifiers;
a placement module 323, configured to place each first feature identifier in the uniquely corresponding first layer, so as to obtain a pixel point correspondingly covered by the first feature identifier in the first layer;
the processing module 324 is configured to perform distance conversion processing on the pixel point correspondingly covered by the first feature identifier to obtain a target pixel point;
a first composing module 325, configured to compose a first feature element corresponding to the first feature identifier by using the target pixel point.
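Reading the "distance conversion processing" of the processing module 324 as a standard image distance transform (an assumption), the first layers could be built as follows with OpenCV, where `class_map` is a hypothetical per-pixel class-id image produced by the semantic segmentation model:

```python
import cv2
import numpy as np

def build_first_layers(class_map, class_ids):
    """For each first feature identifier, place its covered pixels in a dedicated layer and
    distance-transform that layer: each target pixel stores the distance to the nearest covered pixel."""
    layers = {}
    for c in class_ids:
        mask = (class_map == c).astype(np.uint8)                  # pixels covered by feature identifier c
        # cv2.distanceTransform measures the distance to the nearest zero pixel,
        # so the covered pixels are set to zero by inverting the mask.
        layers[c] = cv2.distanceTransform(1 - mask, cv2.DIST_L2, 3)
    return layers
```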
Further, as shown in fig. 5, the second acquiring unit 33 includes:
a second extraction module 331, configured to extract, from a vector map, a second feature identifier within a preset range from the vehicle based on the initial pose;
a second creating module 332, configured to create a second layer corresponding to the current image;
the projection module 333 is configured to project the second feature identifier to a uniquely corresponding second image layer, so as to obtain a pixel point correspondingly covered by the second feature identifier in the second image layer;
a second composing module 334, configured to compose a second feature element corresponding to the second feature identifier by using the pixel points correspondingly covered by the second feature identifier.
Further, as shown in fig. 5, the second obtaining unit 33 further includes:
a sampling processing module 335, configured to, before the second feature identifier is projected into a uniquely corresponding second image layer to obtain a pixel point correspondingly covered by the second feature identifier in the second image layer, if a linear identifier exists in the second feature identifier, perform sampling processing at equal intervals on the linear identifier to obtain a plurality of corresponding discrete points;
a replacing module 336 for replacing the line type identifier with the plurality of discrete points.
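The equal-interval sampling performed by the sampling processing module 335 could look like the following sketch; the 0.5 spacing is an arbitrary illustrative value, and the line-type identifier is assumed to be given as an ordered array of vertices.

```python
import numpy as np

def sample_polyline(vertices, spacing=0.5):
    """Replace a line-type identifier (an N x 3 array of vertices) by discrete points
    sampled at roughly equal intervals along each segment."""
    samples = [vertices[0]]
    for a, b in zip(vertices[:-1], vertices[1:]):
        n = max(int(np.linalg.norm(b - a) // spacing), 1)   # number of steps on this segment
        for i in range(1, n + 1):
            samples.append(a + (b - a) * (i / n))           # equally spaced points between a and b
    return np.asarray(samples)
```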
Further, as shown in fig. 5, the second processing unit 34 includes:
a searching module 341, configured to search, based on the same feature identifier, the first feature element matched with the second feature element to obtain a feature element combination corresponding to the same feature identifier, where the feature element combination includes the first feature element and the second feature element that have a matching relationship;
a processing module 342, configured to process each feature element combination by using a preset cost function, so as to obtain a cost function value for measuring a matching degree between the second feature element and the first feature element based on each feature element combination;
and the implementing module 343 is configured to implement, based on the minimum cost function value, correction on the initial pose to obtain a target pose.
In summary, the embodiments of the present invention provide a method and an apparatus for automatic driving positioning based on a vector map. An initial pose of a vehicle is first acquired; under the vehicle state of this initial pose, a captured current image is processed to obtain the first feature elements of the first feature identifiers in the current image, and the second feature elements of the second feature identifiers corresponding to the vehicle are obtained from the vector map. Under the same initial pose, the first feature identifiers captured on the actual road surface and the second feature identifiers obtained from the vector map are in practice the same, so that the first feature elements and the second feature elements have a matching relationship based on the same feature identifier. The first feature elements and the second feature elements having the matching relationship are processed with the preset cost function, so that the cost function value of the matching degree between them is calculated as the processing result, and the initial pose is corrected based on this result to obtain the target pose, thereby realizing high-precision positioning of the automatic driving vehicle. The algorithm required by the method provided by the embodiment of the invention is not complex, the calculation cost is low, and a high-precision vector map is easy to obtain, so that the implementation cost of the scheme is low while the high-precision positioning requirement is met.
The device for automatic driving positioning based on the vector map comprises a processor and a memory, wherein the first acquisition unit, the first processing unit, the second acquisition unit, the second processing unit and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided; by adjusting the kernel parameters, the optimized automatic driving positioning method based on the vector map is provided, with low implementation cost while meeting the high-precision positioning requirement.
An embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for automatic driving positioning based on a vector map as described above is implemented.
An embodiment of the present invention provides an electronic device, including: memory, processor and computer program stored on the memory and executable on the processor, the processor when executing the computer program implementing the method for automatic driving positioning based on vector map as described above.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a device includes one or more processors (CPUs), memory, and a bus. The device may also include input/output interfaces, network interfaces, and the like.
The memory may include volatile memory in a computer readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM), and includes at least one memory chip. The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus comprising the element.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present invention, and are not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present invention shall be included in the scope of the claims of the present invention.

Claims (10)

1. A method for automatic driving positioning based on a vector map, the method comprising:
acquiring an initial pose corresponding to a vehicle;
processing a captured current image corresponding to the vehicle based on the initial pose to obtain a first feature element contained in a first feature identifier in the current image;
acquiring a second feature element of a second feature identifier corresponding to the vehicle from a vector map based on the initial pose;
and processing the first feature element and the second feature element having the matching relationship by using a preset cost function, so as to correct the initial pose based on a processing result to obtain a target pose.
2. The method according to claim 1, wherein the processing a captured current image corresponding to the vehicle based on the initial pose to obtain a first feature element contained in a first feature identifier in the current image comprises:
processing the current image by using a preset image semantic segmentation model, and extracting at least one first feature identifier from the current image;
according to the number of the first feature identifiers, creating a first image layer corresponding to the current image;
putting each first feature identifier into the uniquely corresponding first image layer to obtain pixel points correspondingly covered by the first feature identifiers in the first image layer;
performing distance conversion processing on the pixel points correspondingly covered by the first characteristic identifier to obtain target pixel points;
and forming a first characteristic element corresponding to the first characteristic identifier by using the target pixel points.
3. The method according to claim 1, wherein the obtaining of the second feature element of the second feature identifier corresponding to the vehicle from the vector map based on the initial pose comprises:
extracting a second feature identifier within a preset range from the vehicle from a vector map based on the initial pose;
creating a second image layer corresponding to the current image;
projecting the second feature identifier to a unique corresponding second image layer to obtain pixel points correspondingly covered by the second feature identifier in the second image layer;
and forming a second characteristic element corresponding to the second characteristic identifier by using the pixel points correspondingly covered by the second characteristic identifier.
4. The method according to claim 3, wherein before the projecting the second feature identifier to a uniquely corresponding second image layer to obtain a pixel point correspondingly covered by the second feature identifier in the second image layer, the method further comprises:
if the second characteristic mark has a linear mark, performing equal-interval sampling processing on the linear mark to obtain a plurality of corresponding discrete points;
and replacing the line type identification by using the plurality of discrete points.
5. The method according to claim 1, wherein the processing the first feature element and the second feature element having the matching relationship by using a preset cost function to implement correction of the initial pose to obtain a target pose based on a processing result comprises:
based on the same feature identifier, searching the first feature element matched with the second feature element to obtain a feature element combination corresponding to the same feature identifier, wherein the feature element combination comprises the first feature element and the second feature element which have a matching relationship;
processing each feature element combination by using a preset cost function to obtain a cost function value for measuring the matching degree between the second feature element and the first feature element based on each feature element combination;
and correcting the initial pose to obtain a target pose based on the minimum cost function value.
6. An apparatus for automatic driving positioning based on a vector map, the apparatus comprising:
the first acquisition unit is used for acquiring an initial pose corresponding to the vehicle;
the first processing unit is used for processing a captured current image corresponding to the vehicle based on the initial pose to obtain a first feature element contained in a first feature identifier in the current image;
a second obtaining unit, configured to obtain, from a vector map, a second feature element that a second feature identifier corresponding to the vehicle has, based on the initial pose;
and the second processing unit is used for processing the first characteristic element and the second characteristic element with the matching relation by using a preset cost function so as to correct the initial pose based on a processing result to obtain a target pose.
7. The apparatus of claim 6, wherein the first processing unit comprises:
the first extraction module is used for processing the current image by utilizing a preset image semantic segmentation model and extracting at least one first feature identifier from the current image;
a first creating module, configured to create a first image layer corresponding to the current image according to the number of the first feature identifiers;
the placement module is used for placing each first feature identifier into the uniquely corresponding first image layer to obtain pixel points correspondingly covered by the first feature identifiers in the first image layer;
the processing module is used for carrying out distance conversion processing on the pixel points correspondingly covered by the first characteristic identifier to obtain target pixel points;
and the first composing module is used for composing a first feature element corresponding to the first feature identifier by using the target pixel point.
8. The apparatus of claim 6, wherein the second obtaining unit comprises:
the second extraction module is used for extracting a second feature identifier within a preset range from the vehicle from a vector map based on the initial pose;
the second creating module is used for creating a second image layer corresponding to the current image;
the projection module is used for projecting the second feature identifier to a unique corresponding second image layer to obtain pixel points correspondingly covered by the second feature identifier in the second image layer;
and the second composing module is used for composing a second characteristic element corresponding to the second characteristic identifier by using the pixel points correspondingly covered by the second characteristic identifier.
9. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method for automatic driving localization based on a vector map according to any one of claims 1-5.
10. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor when executing the computer program implementing a method for vector map based autonomous driving positioning according to any of claims 1-5.
CN202210848452.XA 2022-07-19 2022-07-19 Method and device for automatic driving positioning based on vector map Pending CN115235493A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210848452.XA CN115235493A (en) 2022-07-19 2022-07-19 Method and device for automatic driving positioning based on vector map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210848452.XA CN115235493A (en) 2022-07-19 2022-07-19 Method and device for automatic driving positioning based on vector map

Publications (1)

Publication Number Publication Date
CN115235493A true CN115235493A (en) 2022-10-25

Family

ID=83674418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210848452.XA Pending CN115235493A (en) 2022-07-19 2022-07-19 Method and device for automatic driving positioning based on vector map

Country Status (1)

Country Link
CN (1) CN115235493A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117471513A (en) * 2023-12-26 2024-01-30 合众新能源汽车股份有限公司 Vehicle positioning method, positioning device, electronic equipment and storage medium
CN117471513B (en) * 2023-12-26 2024-03-15 合众新能源汽车股份有限公司 Vehicle positioning method, positioning device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108805934B (en) External parameter calibration method and device for vehicle-mounted camera
CN110501018B (en) Traffic sign information acquisition method for high-precision map production
CN111830953B (en) Vehicle self-positioning method, device and system
CN111340877B (en) Vehicle positioning method and device
CN110969592B (en) Image fusion method, automatic driving control method, device and equipment
CN114088114B (en) Vehicle pose calibration method and device and electronic equipment
CN111930877B (en) Map guideboard generation method and electronic equipment
JP6278790B2 (en) Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system
CN114663852A (en) Method and device for constructing lane line graph, electronic equipment and readable storage medium
CN109115232B (en) Navigation method and device
Kruber et al. Vehicle position estimation with aerial imagery from unmanned aerial vehicles
CN113255578B (en) Traffic identification recognition method and device, electronic equipment and storage medium
CN115235493A (en) Method and device for automatic driving positioning based on vector map
CN112580489A (en) Traffic light detection method and device, electronic equipment and storage medium
CN117079238A (en) Road edge detection method, device, equipment and storage medium
Lee et al. Semi-automatic framework for traffic landmark annotation
CN114898321A (en) Method, device, equipment, medium and system for detecting road travelable area
CN113869440A (en) Image processing method, apparatus, device, medium, and program product
Li et al. Lane detection and road surface reconstruction based on multiple vanishing point & symposia
CN111238524B (en) Visual positioning method and device
KR102425346B1 (en) Apparatus for determining position of vehicle and method thereof
KR102540636B1 (en) Method for create map included direction information and computer program recorded on record-medium for executing method therefor
KR102540629B1 (en) Method for generate training data for transportation facility and computer program recorded on record-medium for executing method therefor
KR102540632B1 (en) Method for create a colormap with color correction applied and computer program recorded on record-medium for executing method therefor
KR102540634B1 (en) Method for create a projection-based colormap and computer program recorded on record-medium for executing method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang

Applicant after: United New Energy Automobile Co.,Ltd.

Address before: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang

Applicant before: Hezhong New Energy Vehicle Co.,Ltd.
