CN111831771A - Map fusion method and vehicle - Google Patents

Map fusion method and vehicle

Info

Publication number
CN111831771A
CN111831771A (application CN202010659216.4A)
Authority
CN
China
Prior art keywords
information
map data
matching
queue
determining
Prior art date
Legal status
Granted
Application number
CN202010659216.4A
Other languages
Chinese (zh)
Other versions
CN111831771B (en)
Inventor
刘中元
李红军
黄亚
广学令
孙崇尚
Current Assignee
Guangzhou Xiaopeng Internet of Vehicle Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Internet of Vehicle Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Internet of Vehicle Technology Co Ltd filed Critical Guangzhou Xiaopeng Internet of Vehicle Technology Co Ltd
Priority to CN202010659216.4A priority Critical patent/CN111831771B/en
Publication of CN111831771A publication Critical patent/CN111831771A/en
Application granted granted Critical
Publication of CN111831771B publication Critical patent/CN111831771B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/29: Geographical information databases
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26: Navigation specially adapted for navigation in a road network
    • G01C 21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30: Map- or contour-matching
    • G01C 21/32: Structuring or formatting of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

Embodiments of the invention provide a map fusion method and a vehicle. The method comprises: acquiring at least two sets of map data for a target area; encoding each set of map data to obtain an encoded information set; matching the encoded information sets corresponding to the map data to obtain a matching result; and fusing the map data according to the matching result. The embodiments thereby optimize map fusion: encoding and matching different map data does not depend on rotating the roads in the map data, which avoids missed matches and improves the accuracy of map fusion.

Description

Map fusion method and vehicle
Technical Field
The invention relates to the field of map fusion, in particular to a map fusion method and a vehicle.
Background
An electronic map is a map stored and consulted in digital form, and it has become an essential tool for daily travel. Different map data may be collected while a map is generated; fusing these data yields a more accurate map and avoids the inconsistencies that separate data sets would otherwise introduce.
In existing map fusion, the roads in the map data must first be rotated into alignment before the data are fused. This rotation causes mismatches and missed matches between map data and reduces the accuracy of map fusion.
Disclosure of Invention
In view of the above, a map fusion method and a vehicle are proposed that overcome, or at least partially solve, the above problems, comprising:
a method of map fusion, the method comprising:
acquiring at least two map data for a target area;
respectively encoding the at least two map data to obtain an encoded information set;
matching the coding information sets corresponding to the at least two map data to obtain a matching result;
and performing map fusion on the at least two pieces of map data according to the matching result.
Optionally, the encoding the at least two pieces of map data respectively to obtain an encoded information set includes:
determining a plurality of semantic elements for each map data;
respectively determining position information and type information corresponding to the semantic elements;
and coding the plurality of semantic elements according to the position information and the type information to obtain a coding information set.
Optionally, the position information is position information based on a target coordinate system, and before the determining the position information corresponding to the plurality of semantic elements respectively, the method further includes:
determining a plurality of target semantic elements on the same side of a road from the map data;
determining a projection straight line according to the distribution of the plurality of target semantic elements;
and constructing a target coordinate system by taking the projection straight line as a coordinate axis.
Optionally, the at least two map data include first map data and second map data, the first map data corresponds to a first encoded information set, the second map data corresponds to a second encoded information set, and the matching of the encoded information sets corresponding to the at least two map data is performed to obtain a matching result, including:
determining a plurality of first encoded information queues from the first encoded information set; the encoding information at the head of each first encoding information queue is different, and the encoding information in each first encoding information queue is arranged according to the position information corresponding to the semantic elements;
determining a plurality of second encoded information queues from the second encoded information set; the encoding information at the head of each second encoding information queue is different, and the encoding information in each second encoding information queue is arranged according to the position information corresponding to the semantic elements;
respectively determining matching scores between the multiple first coded information queues and the multiple second coded information queues, and determining a target first coded information queue and a target second coded information queue corresponding to the highest matching scores;
and obtaining a matching result according to the target first coding information queue and the target second coding information queue.
Optionally, the determining matching scores between the plurality of first encoded information queues and the plurality of second encoded information queues, respectively, includes:
determining first coding information in the first coding information queue and determining second coding information corresponding to the first coding information in the second coding information queue;
judging whether the type information corresponding to the first coding information and the type information corresponding to the second coding information are the same type information;
when the type information corresponding to the first coding information and the type information corresponding to the second coding information are judged to be the same type information, determining the sub-matching scores of the first coding information and the second coding information as a first preset score;
and combining all the sub-matching scores to obtain the matching score between the first coded information queue and the second coded information queue.
Optionally, before the determining whether the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information, the method further includes:
determining first relative distance information corresponding to the first coding information and the first matching coding information; the first matching coded information is coded information of which the latest matching score in the first coded information queue is a first preset score;
determining second relative distance information corresponding to the second coding information and second matching coding information; the second matching coded information is coded information of which the latest matching score in the second coded information queue is a first preset score;
determining a distance difference between the first relative distance information and the second relative distance information;
and when the distance difference is smaller than a preset distance difference, executing the judgment of whether the type information corresponding to the first coding information and the type information corresponding to the second coding information are the same type information.
Optionally, the matching result includes a correspondence between a plurality of semantic elements in the at least two map data.
Optionally, the target coordinate system is a one-dimensional coordinate system.
Optionally, the target area is a parking lot, and the target semantic element is a semantic element corresponding to a parking space in the parking lot.
A vehicle, the vehicle comprising:
the map data acquisition module is used for acquiring at least two map data aiming at the target area;
the encoding information set obtaining module is used for respectively encoding the at least two map data to obtain an encoding information set;
a matching result obtaining module, configured to match the coding information sets corresponding to the at least two pieces of map data to obtain a matching result;
and the map fusion module is used for carrying out map fusion on the at least two pieces of map data according to the matching result.
The embodiment of the invention has the following advantages:
in the embodiments of the invention, at least two sets of map data for a target area are acquired and each is encoded to obtain an encoded information set; the encoded information sets corresponding to the map data are matched to obtain a matching result, and the map data are fused according to that result. Map fusion is thereby optimized: encoding and matching different map data does not depend on rotating the roads in the map data, which avoids missed matches and improves the accuracy of map fusion.
Drawings
To illustrate the technical solution of the present invention more clearly, the drawings used in the description are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart illustrating steps of a method for map fusion, according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of another method for map fusion, according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps of yet another map fusion method, according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating steps of still another map fusion method, according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a map fusion example provided by an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a vehicle according to an embodiment of the present invention.
Detailed Description
To make the aforementioned objects, features and advantages of the present invention more comprehensible, embodiments are described in further detail below with reference to the accompanying figures. The described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person skilled in the art from the embodiments herein without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of a method for map fusion according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 101, acquiring at least two map data aiming at a target area;
in the process of map fusion, a target area can be determined, and then at least two pieces of map data for the target area can be acquired, wherein the at least two pieces of map data can be acquired in real time or from a server side.
As an example, the target area may be a parking lot, and the at least two map data may be map data for the same parking lot.
During map fusion, the area where the vehicle is located can be determined to be a parking lot through a positioning function of the vehicle, such as a Global Positioning System (GPS) function; that parking lot then serves as the target area. At least two sets of map data for the parking lot can be acquired by downloading them from a server or by building them with the vehicle's GPS function.
Step 102, encoding the at least two map data respectively to obtain an encoded information set;
after acquiring the at least two map data, in order to be able to compare the map data, the at least two map data may be encoded, and thus an encoded information set may be obtained.
Step 103, matching the encoded information sets corresponding to the at least two map data to obtain a matching result;
after the coded information sets are obtained, the coded information sets corresponding to at least two map data can be matched, and then a matching result can be obtained.
Step 104, performing map fusion on the at least two map data according to the matching result.
After the matching result is obtained, map fusion can be performed on at least two pieces of map data according to the matching result.
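The four steps above can be sketched end to end. The (type, 1-D position) element representation, the in-order type matching, and the position-averaging fusion below are simplifying assumptions for illustration, not the patent's exact encoding or fusion rule.

```python
# Minimal sketch of the four claimed steps; the element representation
# (type, position) and the matching rule are simplifying assumptions.

def encode_map(elements):
    # Step 2: encode each semantic element as (type, position),
    # sorted along the road so order is comparable across maps.
    return sorted(elements, key=lambda e: e[1])

def match_encodings(enc_a, enc_b):
    # Step 3: pair up encodings with the same type information, in order.
    return [(a, b) for a, b in zip(enc_a, enc_b) if a[0] == b[0]]

def fuse_maps(map_a, map_b):
    # Steps 1-4: encode, match, then fuse by averaging matched positions.
    pairs = match_encodings(encode_map(map_a), encode_map(map_b))
    return [(a[0], (a[1] + b[1]) / 2.0) for a, b in pairs]

# Two maps of the same parking lot; each element = (type, 1-D position).
m1 = [("space", 0.0), ("space", 2.1), ("barrier", 4.0)]
m2 = [("space", 0.2), ("space", 1.9), ("barrier", 4.2)]
fused = fuse_maps(m1, m2)
```

Note that no rotation of roads is involved: the comparison happens entirely on the encoded representations.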
In this way, the embodiment acquires at least two sets of map data for a target area, encodes each to obtain an encoded information set, matches the encoded information sets to obtain a matching result, and fuses the map data accordingly. Map fusion is optimized: because encoding and matching do not depend on rotating the roads in the map data, missed matches are avoided and the accuracy of map fusion is improved.
Referring to fig. 2, a flowchart illustrating steps of another map fusion method according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 201, acquiring at least two map data aiming at a target area;
in the process of map fusion, at least two map data for a target area may be acquired.
Step 202, determining a plurality of semantic elements for each map data;
wherein the map data may comprise semantic elements.
After the map data is acquired, a plurality of semantic elements in the map data may be determined for each map data.
As an example, a semantic element may be parking space information, speed bump information, roadblock information, road information, boundary information, or the like; determining the semantic elements thus means identifying such items, such as parking spaces and roadblocks, in the map data.
Step 203, respectively determining position information and type information corresponding to the semantic elements;
after determining the semantic elements, position information and type information corresponding to the plurality of semantic elements may be determined, respectively.
For example, when a semantic element is parking space information, its type can be determined to be the parking space type, which then serves as the type information of the element; the position of the parking space within the map data can likewise be determined as its position information.
In an embodiment of the present invention, the position information may be position information based on a target coordinate system, and before the determining the position information and the type information corresponding to the semantic elements respectively, the method may further include:
determining a plurality of target semantic elements on the same side of a road from the map data; determining a projection straight line according to the distribution of the plurality of target semantic elements; and constructing a target coordinate system by taking the projection straight line as a coordinate axis.
The target coordinate system may be a one-dimensional coordinate system; the target area may be a parking lot, and the target semantic elements may be the semantic elements corresponding to parking spaces in the parking lot.
After the semantic elements are determined, since the road can be divided into two sides, a plurality of target semantic elements on the same side of the road can be determined from the map data.
For example, when any semantic element is determined to be the parking space information, the vector direction of the parking space information pointing to the road may be determined, the road may be divided into two sides according to the vector direction, and all the target semantic elements located on the same side of the road may be determined from the map data.
After the target semantic elements are determined, a projection straight line can be determined according to the distribution of a plurality of target semantic elements, such as three-dimensional space distribution, and then a target coordinate system can be constructed by taking the projection straight line as a coordinate axis.
For example, continuing the example above, after all target semantic elements are determined, the relative positions of the target semantic elements and the parking space information may be determined, giving the three-dimensional spatial distribution of the target semantic elements on the same side of the road. A projection straight line fitting all the target semantic elements can then be computed by RANSAC (Random Sample Consensus), and the target coordinate system constructed with that line as a coordinate axis.
In practice, all semantic elements on the same side of the road can be projected onto the target coordinate system, with any one semantic element taken as its origin; the coordinate of each semantic element in the target coordinate system can then be determined and used as its position information.
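The construction of the one-dimensional target coordinate system can be sketched as follows. For brevity this sketch fits the projection line with a plain principal-axis (least-squares) fit rather than the RANSAC fit described above, and takes the first element as the origin; both are simplifying assumptions.

```python
import math

# Sketch of building the 1-D target coordinate system: fit a straight
# line through same-side elements and project each onto it. A plain
# principal-axis (PCA) fit stands in for the patent's RANSAC fit.

def projection_axis(points):
    # Centroid and direction of the best-fit line through 2-D points.
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    sxx = sum((x - cx) ** 2 for x, _ in points)
    sxy = sum((x - cx) * (y - cy) for x, y in points)
    syy = sum((y - cy) ** 2 for _, y in points)
    # Principal direction of the 2x2 covariance matrix.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (cx, cy), (math.cos(theta), math.sin(theta))

def to_target_coord(points):
    # 1-D coordinate of each element = signed distance along the axis,
    # shifted so the first element is the origin (as in the text above).
    (cx, cy), (dx, dy) = projection_axis(points)
    coords = [(x - cx) * dx + (y - cy) * dy for x, y in points]
    origin = coords[0]
    return [c - origin for c in coords]

# Parking spaces lying roughly along a line.
spaces = [(0.0, 0.0), (2.0, 1.1), (4.0, 1.9), (6.0, 3.0)]
coords = to_target_coord(spaces)
```

The resulting coordinates preserve the along-road ordering of the parking spaces, which is what the queue construction later relies on.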
Step 204, encoding the semantic elements according to the position information and the type information to obtain an encoded information set;
after the position information and the type information corresponding to the semantic elements are determined, the plurality of semantic elements may be encoded according to the position information and the type information.
For example, a semantic element may be encoded from its position information alone, from its type information alone, or from the two combined; this yields the encoded information for each semantic element, and hence the encoded information set covering all semantic elements.
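As a concrete sketch, each semantic element can be encoded from its type and position jointly. The single-letter type codes and the `type@position` string format below are hypothetical choices; the patent does not fix a concrete code.

```python
# Illustrative encoding of semantic elements; the (type, 1-D position)
# input and the "T@pos" code format are assumptions, not the patent's.

TYPE_CODES = {"parking_space": "P", "speed_bump": "B", "barrier": "R"}

def encode_elements(elements):
    # elements: list of (type_name, 1-D position in the target
    # coordinate system). Each element becomes a code such as "P@-1";
    # the set of all codes is the encoded information set for one map.
    return {f"{TYPE_CODES[t]}@{pos:g}" for t, pos in elements}

codes = encode_elements([("parking_space", -1.0),
                         ("speed_bump", 2.0),
                         ("parking_space", 3.0)])
```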
Step 205, matching the coding information sets corresponding to the at least two map data to obtain a matching result;
wherein the matching result comprises a correspondence between a plurality of semantic elements in the at least two map data;
after the encoded information sets are obtained, the encoded information sets corresponding to the at least two map data may be matched to obtain a matching result.
For example, the coded information may be matched according to the position information and/or the type information, and then the coded information with the same position information and/or type information may be determined as the matched coded information, so as to obtain the matching result according to the matched coded information.
And step 206, performing map fusion on the at least two map data according to the matching result.
After the matching result is obtained, the position information and arrangement order of the encoded information corresponding to each map data can be determined from it; when the map data are fused, the positions of the at least two map data can be adjusted according to this arrangement order, and map fusion performed on the adjusted data.
For example, the corresponding positions of the at least two map data may be adjusted by means of alignment and/or translation.
In this embodiment of the invention, at least two sets of map data for a target area are acquired; for each, a plurality of semantic elements are determined together with their position and type information; the semantic elements are encoded from this information to obtain encoded information sets, which are matched to obtain a matching result; and the map data are fused according to that result. Map fusion is thereby optimized: encoding and matching different map data does not depend on rotating the roads in the map data, which avoids missed matches and improves the accuracy of map fusion.
Referring to fig. 3, a flowchart illustrating steps of another map fusion method according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 301, acquiring at least two map data for a target area; wherein the at least two map data include first map data and second map data;
in the process of map fusion, at least two pieces of map data for a target area can be acquired, and then the first map data and the second map data can be determined.
Step 302, determining a plurality of semantic elements for each map data;
wherein the map data may comprise semantic elements.
After the map data is acquired, a plurality of semantic elements in the map data may be determined for each map data.
Step 303, respectively determining position information and type information corresponding to the plurality of semantic elements;
after determining the semantic elements, position information and type information corresponding to the plurality of semantic elements may be determined, respectively.
Step 304, encoding the semantic elements according to the position information and the type information to obtain an encoded information set; wherein the first map data corresponds to a first encoded information set and the second map data corresponds to a second encoded information set;
after the position information and the type information corresponding to the semantic elements are determined, the plurality of semantic elements may be encoded according to the position information and the type information.
After encoding is completed, the encoded information sets are obtained; since the first map data and the second map data have been determined, the first encoded information set corresponding to the first map data and the second encoded information set corresponding to the second map data can also be determined.
Step 305, determining a plurality of first encoded information queues from the first encoded information set; the encoding information at the head of each first encoding information queue is different, and the encoding information in each first encoding information queue is arranged according to the position information corresponding to the semantic elements;
after the first encoded information set is obtained, the encoded information may be arranged according to the position information corresponding to the semantic element, and any one encoded information may be used as the head of the queue, so that a plurality of encoded information queues may be determined.
For example, the first encoded information set may include encoded information A1, A2, A3, A4 and A5, corresponding to the type information of different semantic elements, and the position information of the corresponding semantic elements may be -1, 3, 2, and 5; arranging by this position information yields the order A1, A2, A4, A3, A5.
When A2 is taken as the head of the queue, the encoded information queue A2, A4, A3, A5, A1 is obtained; when A1 is the head, the queue is A1, A2, A4, A3, A5. In this way a plurality of encoded information queues can be determined.
Step 306, determining a plurality of second encoded information queues from the second encoded information sets; the encoding information at the head of each second encoding information queue is different, and the encoding information in each second encoding information queue is arranged according to the position information corresponding to the semantic elements;
after the second encoded information set is obtained, the encoded information may be arranged according to the position information corresponding to the semantic element, and any one encoded information may be used as the head of the queue, so that a plurality of encoded information queues may be determined.
For example, the second encoded information set may include encoded information B1, B2, B3, B4 and B5, corresponding to the type information of different semantic elements, and the position information of the corresponding semantic elements may be -2, 1, 2, 3, and 5; arranging by this position information yields the order B1, B2, B3, B4, B5.
When B2 is taken as the head of the queue, the encoded information queue B2, B3, B4, B5, B1 is obtained; when B1 is the head, the queue is B1, B2, B3, B4, B5. In this way a plurality of encoded information queues can be determined.
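The queue construction above (sort by position, then take every rotation so each piece of encoded information appears once at the head) can be sketched as follows; the B1..B5 positions reuse the example values above.

```python
# Sketch of building the candidate queues: sort the encoded information
# by position, then generate one rotation per possible queue head,
# mirroring the A1..A5 / B1..B5 example in the text.

def build_queues(encoded):
    # encoded: list of (code, 1-D position) pairs for one map.
    ordered = [c for c, _ in sorted(encoded, key=lambda e: e[1])]
    return [ordered[i:] + ordered[:i] for i in range(len(ordered))]

# Positions from the second-set example: -2, 1, 2, 3, 5.
second_set = [("B1", -2), ("B2", 1), ("B3", 2), ("B4", 3), ("B5", 5)]
queues = build_queues(second_set)
```

Each rotation gives a different queue head, so matching against all rotations of the other map's queues removes the dependence on a shared starting element.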
Step 307, determining matching scores between the plurality of first encoded information queues and the plurality of second encoded information queues, and determining a target first encoded information queue and a target second encoded information queue corresponding to the highest matching scores;
after the plurality of encoded information queues are determined, the plurality of first encoded information queues and the plurality of second encoded information queues may be matched and scored, and then matching scores between the plurality of first encoded information queues and the plurality of second encoded information queues may be determined respectively.
For example, one first encoded information queue may be A2, A4, A3, A5, A1 and one second encoded information queue may be B2, B3, B4, B5, B1; each first queue can be matched and scored against each second queue, and the target first encoded information queue and target second encoded information queue corresponding to the highest matching score determined.
Step 308, obtaining a matching result according to the target first encoded information queue and the target second encoded information queue;
after the target first encoded information queue and the target second encoded information queue are determined, a matching result can be obtained according to the target first encoded information queue and the target second encoded information queue.
For example, the arrangement order of the target first encoded information queue and the target second encoded information queue may be determined, and this matching order used as the matching result.
Step 309, according to the matching result, map fusion is performed on the at least two map data.
After the matching result is obtained, the position information and arrangement order of the encoded information corresponding to each map data can be determined from it; when the map data are fused, the positions of the at least two map data can be adjusted according to this arrangement order, and map fusion performed on the adjusted data.
For example, once the arrangement order of the target first encoded information queue and the target second encoded information queue is determined, the positions of the at least two map data can be adjusted by alignment and translation so that the map data line up as closely as possible; map fusion can then be performed on the aligned data.
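A minimal sketch of this position adjustment, assuming a pure translation along the one-dimensional coordinate: shift the second map by the mean offset between matched element positions. The mean-offset estimate is an illustrative choice; the text above only specifies alignment and translation in general.

```python
# Sketch of the final adjustment: once the best-matching queues fix the
# correspondence, translate the second map by the mean offset of the
# matched element positions so the two maps align before fusing.
# The mean-offset translation is an assumption, not the patent's method.

def align_translation(matched_pairs):
    # matched_pairs: list of (pos_in_map1, pos_in_map2) for matched elements.
    n = len(matched_pairs)
    return sum(p1 - p2 for p1, p2 in matched_pairs) / n

# Hypothetical matched positions from two maps of the same lot.
pairs = [(0.0, 0.4), (2.0, 2.6), (5.0, 5.4)]
shift = align_translation(pairs)  # translate map 2 by this amount
```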
In this embodiment of the invention, at least two sets of map data for a target area are acquired, including first map data and second map data. For each map data, a plurality of semantic elements are determined together with their position and type information, and the semantic elements are encoded from this information, giving a first encoded information set for the first map data and a second encoded information set for the second map data. A plurality of first encoded information queues and a plurality of second encoded information queues are then determined from the respective sets, matching scores between them are computed, and the target first and second encoded information queues corresponding to the highest matching score are determined. A matching result is obtained from these target queues, and the at least two map data are fused according to it, thereby optimizing map fusion.
Referring to fig. 4, a flowchart illustrating steps of a still another map fusion method according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 401, acquiring at least two map data for a target area; wherein the at least two map data include first map data and second map data;
step 402, determining a plurality of semantic elements for each map data;
step 403, respectively determining position information and type information corresponding to the multiple semantic elements;
step 404, encoding the plurality of semantic elements according to the position information and the type information to obtain an encoded information set; wherein the first map data corresponds to a first set of encoded information and the second map data corresponds to a second set of encoded information;
step 405, determining a plurality of first encoded information queues from the first encoded information set; the encoding information at the head of each first encoding information queue is different, and the encoding information in each first encoding information queue is arranged according to the position information corresponding to the semantic elements;
step 406, determining a plurality of second encoded information queues from the second encoded information sets; the encoding information at the head of each second encoding information queue is different, and the encoding information in each second encoding information queue is arranged according to the position information corresponding to the semantic elements;
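The queue construction in steps 405-406 can be sketched as follows. This is a minimal Python sketch under an assumption: each queue is taken to be a cyclic rotation of the codes sorted by their 1-D position, so that every code appears once at the head and the internal order follows the position information; the patent does not spell out the construction, so this is one plausible reading.

```python
def build_queues(codes, pos):
    """Build one encoded-information queue per possible head code.

    Assumes (per steps 405/406) that every queue has a different head
    and that codes are arranged by their 1-D position information;
    here each queue is a cyclic rotation of the position-sorted list.
    """
    ordered = sorted(codes, key=lambda c: pos[c])
    return [ordered[k:] + ordered[:k] for k in range(len(ordered))]

# hypothetical codes and 1-D positions
queues = build_queues(["A1", "A2", "A3"], {"A1": 5, "A2": -1, "A3": 3})
# three queues, each starting with a different code
```

The same routine would be applied to the second encoded information set to obtain the second queues.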
step 407, determining first encoded information in the first encoded information queue, and determining second encoded information corresponding to the first encoded information in the second encoded information queue;
After the first encoded information queue and the second encoded information queue are determined, the encoded information in the two queues can be matched. During matching, the first encoded information in the first encoded information queue may be determined, and the second encoded information corresponding to it in the second encoded information queue may be determined.
For example, the sequence of the first encoded information queue may be A2, A4, A3, A5, A1, and the sequence of the second encoded information queue may be B2, B3, B4, B5, B1; then the second code in the first encoded information queue, A4, may be determined as the first encoded information, and correspondingly the second code in the second encoded information queue, B3, may be determined as the second encoded information.
Step 408, determining first relative distance information corresponding to the first encoded information and the first matching encoded information; the first matching encoded information is the encoded information in the first encoded information queue whose most recent matching score is the first preset score;
wherein the first preset score may be a value set by a user.
In the matching process, the encoded information in the encoded information queues is matched continuously. The matching score of encoded information that is not successfully matched may be determined as a non-first preset score, the matching score of encoded information that is successfully matched may be determined as the first preset score, and the most recently successfully matched encoded information in the first encoded information queue may be determined as the first matching encoded information.
After determining the first encoding information and the first matching encoding information, first relative distance information corresponding to the first encoding information and the first matching encoding information may be determined.
For example, the order of the first encoded information queue may be A2, A4, A3, A5, and A1, A4 may be determined as the first encoded information, and A2 may be determined as the first matching encoded information. The position information corresponding to these codes may be -1, 1, 3, 2, and 5, so the position information of the first encoded information may be determined as 1 and the position information of the first matching encoded information as -1, and the first relative distance information may therefore be determined as 2.
Step 409, determining second relative distance information corresponding to the second coding information and second matching coding information; the second matching coded information is coded information of which the latest matching score in the second coded information queue is a first preset score;
in the matching process, the encoded information in the encoded information queues is matched continuously. The matching score of encoded information that is not successfully matched may be determined as a non-first preset score, the matching score of encoded information that is successfully matched may be determined as the first preset score, and the most recently successfully matched encoded information in the second encoded information queue may be determined as the second matching encoded information.
After determining the second encoding information and the second matching encoding information, second relative distance information corresponding to the second encoding information and the second matching encoding information may be determined.
For example, the order of the second encoded information queue may be B2, B3, B4, B5, and B1, it may be determined that B3 is the second encoded information, and it may be determined that B2 is the second matching encoded information, where the position information corresponding to the encoded information may be-2, 1, 2, 3, and 5, and further, it may be determined that the position information of the second encoded information is 1, and it may be determined that the position information of the second matching encoded information is-2, and then it may be determined that the second relative distance information is 3.
Step 4010, determining a distance difference between the first relative distance information and the second relative distance information;
after determining the first relative distance information and the second relative distance information, a distance difference between the first relative distance information and the second relative distance information may be determined.
For example, when the first relative distance information is 2 and the second relative distance information is 3, it may be determined that the distance difference between the first relative distance information and the second relative distance information is 1.
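The arithmetic of steps 408-4010 for this worked example can be written out directly (positions taken from the example above):

```python
# 1-D positions of the codes from the worked example
pos = {"A4": 1, "A2": -1, "B3": 1, "B2": -2}

d1 = pos["A4"] - pos["A2"]      # first relative distance:  1 - (-1) = 2
d2 = pos["B3"] - pos["B2"]      # second relative distance: 1 - (-2) = 3
diff = abs(d1 - d2)             # distance difference:      |2 - 3|  = 1
```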
Step 4011, when the distance difference is smaller than or equal to a preset distance difference, performing the operation of judging whether the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information;
the preset distance difference value may be preset by a user.
After the distance difference is determined, the magnitude of the distance difference may be compared with a preset distance difference, and when the distance difference is smaller than or equal to the preset distance difference, an operation of determining whether the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information may be performed.
For example, a preset distance difference value may be preset to be 1, and when the distance difference value is 1, the distance difference value may be compared with the preset distance difference value, that is, the distance difference value is smaller than the preset distance difference value, so that an operation of determining whether the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information may be performed.
When the distance difference is greater than the preset distance difference, the operation may not be executed and the process returns to step 407 to determine that the next encoded information in the first encoded information queue is the first encoded information, and further steps such as step 408 may be continuously executed.
For example, a preset distance difference value may be preset to be 0.8, and when the distance difference value is 1, the magnitude of the distance difference value and the preset distance difference value may be compared, that is, the distance difference value is greater than the preset distance difference value, and then the operation of determining whether the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information may not be performed, and it may be determined that the next encoded information in the first encoded information queue, that is, a4 is the first encoded information, so as to continue to perform steps such as step 408.
Step 4012, determining whether the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information;
since the coding information set can be obtained according to the position information and the type information in step 404, when the distance difference is smaller than or equal to the preset distance difference, the type information of the first coding information and the type information of the second coding information can be determined, and it can be determined whether the type information corresponding to the first coding information and the type information corresponding to the second coding information are the same type information.
For example, the sequence of the first encoded information queue may be A2, A4, A3, A5, A1, and the sequence of the second encoded information queue may be B2, B3, B4, B5, B1, where the type information of a vertical parking slot may be represented as 2, that of a parallel parking slot as 3, and that of a deceleration strip as 4.
When the first encoded information is A4 and the second encoded information is B3, it may be determined that the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are not the same type information.
When the first encoded information is A2 and the second encoded information is B2, it may be determined that the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information.
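In code, the type judgment of this step reduces to an equality test on the type codes. The concrete type assignments below are hypothetical, chosen only to reproduce the two cases in the example above:

```python
# hypothetical type codes (e.g. 2 = vertical slot, 3 = parallel slot, 4 = deceleration strip)
typ = {"A2": 2, "A4": 3, "B2": 2, "B3": 4}

same_type_a4_b3 = typ["A4"] == typ["B3"]   # A4 vs B3: not the same type
same_type_a2_b2 = typ["A2"] == typ["B2"]   # A2 vs B2: same type
```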
Step 4013, when it is determined that the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information, determining a sub-matching score of the first encoded information and the second encoded information as a first preset score;
when it is determined that the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information, the sub-matching score of the first encoded information and the second encoded information may be determined to be the first preset score.
For example, the sequence of the first encoded information queue may be A2, A4, A3, A5, and A1, and the sequence of the second encoded information queue may be B2, B3, B4, B5, and B1. When the first encoded information is A2 and the second encoded information is B2, it may be determined that the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information, and the sub-matching score of the first encoded information and the second encoded information may therefore be determined to be the first preset score.
When it is determined that the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are not the same type information, the sub-matching score of the first encoded information and the second encoded information may be determined to be a non-first preset score.
Step 4014, combining all the sub-matching scores to obtain a matching score between the first encoded information queue and the second encoded information queue.
After determining the sub-match scores of the first encoded information and the second encoded information, the next encoded information in the second encoded information queue may be determined to be the second encoded information, the sub-match scores of the first encoded information and the second encoded information may be determined continuously by using the above-described steps, and further, the sub-match scores of all the first encoded information and the second encoded information may be determined.
For example, the order of the first encoded information queue may be A2, A4, A3, A5, A1, with corresponding position information -1, 1, 3, 2, 5; the order of the second encoded information queue may be B2, B3, B4, B5, B1, with corresponding position information -2, 1, 2, 3, 5; and the preset distance difference may be 1. Part of the matching process may then proceed as follows:
the a2 may correspond to the B2 and the distance difference is 1, it may be determined that the distance difference is equal to the preset distance difference, it may further be determined that the type information is the same, the matching score is added by 1, and it is determined that the next second encoded information is B3, the a2 may correspond to the B3 and the distance difference is 2, it may be determined that the distance difference is greater than the preset distance difference, and it may be determined that the next first encoded information is a 4.
A4 may be compared with B3, with a distance difference of 1; it may be determined that the distance difference is equal to the preset distance difference but that the type information is different, so the next second encoded information is determined to be B4. A4 may then be compared with B4, with a distance difference of 1; it may be determined that the distance difference is equal to the preset distance difference and that the type information is the same, so 1 is added to the matching score, and the next second encoded information is determined to be B5. A4 may then be compared with B5, with a distance difference greater than the preset distance difference, so the next first encoded information is determined to be A3.
After all sub-match scores are determined, all sub-match scores may be combined to obtain a match score between the first encoded information queue and the second encoded information queue.
In practice, the sum of all sub-match scores may be calculated and may be determined as the match score between the first and second queues of encoded information.
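Steps 407-4014 can be combined into a single scoring routine. The sketch below is one consistent reading of the procedure, under stated assumptions: relative distances are measured from the most recently matched code in each queue (position 0 before any match), a distance difference within the preset threshold triggers the type check, an in-range comparison advances the second-queue pointer, and an out-of-range comparison advances the first-queue pointer. The type assignments are hypothetical, and the patent's own worked trace is not fully reproducible from its numbers, so the result here is illustrative only, not the patent's exact algorithm.

```python
def queue_match_score(q1, q2, pos, typ, max_gap=1):
    """Score one pair of encoded-information queues (one reading of steps 407-4014).

    pos maps each code to its 1-D position, typ to its type code.
    Relative distances are taken from the most recently matched code
    in each queue; before any match the baseline position is 0.
    """
    score, i, j = 0, 0, 0
    last1 = last2 = None                       # most recently matched codes
    while i < len(q1) and j < len(q2):
        d1 = pos[q1[i]] - (pos[last1] if last1 is not None else 0)
        d2 = pos[q2[j]] - (pos[last2] if last2 is not None else 0)
        if abs(d1 - d2) <= max_gap:
            if typ[q1[i]] == typ[q2[j]]:
                score += 1                     # sub-matching score: first preset score
                last1, last2 = q1[i], q2[j]
            j += 1                             # in range: try the next second code
        else:
            i += 1                             # out of range: advance the first queue
    return score

# queues and 1-D positions from the worked example; types are hypothetical
pos = {"A2": -1, "A4": 1, "A3": 3, "A5": 2, "A1": 5,
       "B2": -2, "B3": 1, "B4": 2, "B5": 3, "B1": 5}
typ = {"A2": 2, "A4": 3, "A3": 4, "A5": 2, "A1": 3,
       "B2": 2, "B3": 4, "B4": 3, "B5": 2, "B1": 3}
score = queue_match_score(["A2", "A4", "A3", "A5", "A1"],
                          ["B2", "B3", "B4", "B5", "B1"], pos, typ)
```

Under this reading, A2↔B2 and A1↔B1 match and the total score is 2; the patent's own trace instead reports matches at A2↔B2 and A4↔B4, so the exact pointer-advance rules would need to be checked against the original disclosure.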
Step 4015, determining a target first encoded information queue and a target second encoded information queue corresponding to the highest matching score;
in practical application, since a plurality of first encoded information queues and second encoded information queues can be determined, the matching scores between all the first encoded information queues and all the second encoded information queues can be further determined.
After the matching scores between all the first encoded information queues and all the second encoded information queues are determined, the first encoded information queue and the second encoded information queue with the highest matching score can be identified and determined to be the target first encoded information queue and the target second encoded information queue.
For example, after determining the matching scores between all the first encoded information queues and the second encoded information queues, the matching score between the first encoded information queue with head A2 and the second encoded information queue with head B2 may be determined to be the highest, and that first encoded information queue and that second encoded information queue may then be determined to be the target first encoded information queue and the target second encoded information queue.
Step 4016, obtaining a matching result according to the target first encoded information queue and the target second encoded information queue;
after the target first encoded information queue and the target second encoded information queue are determined, a matching result can be obtained according to the target first encoded information queue and the target second encoded information queue.
For example, a target first encoded information queue with head A2 and a target second encoded information queue with head B2 may be determined, and the order of the encoded information in the first encoded information queue and the second encoded information queue may be determined as the matching result.
Step 4017, performing map fusion on the at least two map data according to the matching result.
After the matching result is obtained, the position information and the arrangement order of the encoded information corresponding to each map data can be determined from the matching result. When the map data are fused, the positions corresponding to the at least two map data can then be adjusted according to this arrangement order, and map fusion can be performed on the adjusted map data.
For example, a first encoded information queue with head A2 and a second encoded information queue with head B2 may be determined. After the arrangement order of the first encoded information queue and the second encoded information queue is determined, the positions corresponding to the at least two map data may be adjusted by methods such as alignment and translation, with the queue heads A2 and B2 as references, so that the map data are aligned as closely as possible, and map fusion may then be performed on the aligned map data.
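The translation step can be sketched in 1-D as follows, assuming a simple alignment that shifts the second map so the matched queue heads coincide (positions taken from the worked example; a real fusion would apply the analogous 2-D or 3-D transform):

```python
def align_second_map(positions_b, head_pos_a, head_pos_b):
    """Translate the second map's 1-D positions so the matched
    queue heads (e.g. A2 and B2) land on the same coordinate."""
    offset = head_pos_a - head_pos_b
    return [p + offset for p in positions_b]

# A2 sits at -1 in the first map, B2 at -2 in the second
aligned = align_second_map([-2, 1, 2, 3, 5], head_pos_a=-1, head_pos_b=-2)
# B2 now coincides with A2 at position -1
```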
In the embodiment of the invention, at least two map data for a target area are acquired, the at least two map data are respectively encoded to obtain encoded information sets, the encoded information sets corresponding to the at least two map data are matched to obtain a matching result, and map fusion is performed on the at least two map data according to the matching result, thereby realizing the optimization of map fusion. Because the encoding and matching of different map data do not need to depend on the rotation of roads in the map data, missed matches are avoided and the accuracy of map fusion is improved.
An embodiment of the invention is illustrated below with reference to fig. 5:
1. at least two pieces of map data for a target area can be acquired, a plurality of target semantic elements on the same side of a road can be determined from the map data, and calculation can be performed according to distribution of the plurality of target semantic elements to determine a projection straight line;
2. after the projection straight line is determined, the spatial three-dimensional coordinates of the plurality of target semantic elements can be projected onto the projection straight line, so that the one-dimensional coordinates of the plurality of target semantic elements can be obtained, namely the position information of the target semantic elements can be determined;
3. after the position information of the target semantic element is determined, the type information of the target semantic element can be determined, and then at least two map data can be respectively encoded according to the type information and the position information of the semantic element to obtain an encoded information set;
4. after the coded information sets are obtained, the coded information sets corresponding to at least two map data can be matched, and then a matching result can be obtained;
5. after the matching result is obtained, the optimal matching result can be screened out according to the matching result, and then the map fusion can be carried out on at least two map data according to the optimal matching result.
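The projection in items 1-2 above can be sketched with a principal-axis fit. Fitting the projection straight line via PCA/SVD is an assumption on our part, since the patent only says the line is determined from the distribution of the target semantic elements:

```python
import numpy as np

def project_to_line(points_3d):
    """Fit a projection line through 3-D semantic-element positions
    (principal axis of the point cloud) and return each element's
    1-D coordinate along that line, measured from the centroid."""
    pts = np.asarray(points_3d, dtype=float)
    centroid = pts.mean(axis=0)
    # rows of vt are the principal directions; vt[0] is the dominant one
    _, _, vt = np.linalg.svd(pts - centroid)
    return (pts - centroid) @ vt[0]

# hypothetical slot corners lying roughly along a straight kerb
coords = project_to_line([[0.0, 0.1, 0.0], [2.0, -0.1, 0.0], [4.0, 0.0, 0.0]])
```

The sign of the returned axis is arbitrary, so the 1-D coordinates are defined up to an overall flip; ordering the codes by these coordinates is unaffected up to reversal.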
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 6, a schematic structural diagram of a vehicle according to an embodiment of the present invention is shown, which may specifically include the following modules:
a map data acquisition module 601, configured to acquire at least two pieces of map data for a target area;
a coding information set obtaining module 602, configured to respectively code the at least two pieces of map data to obtain a coding information set;
a matching result obtaining module 603, configured to match the coding information sets corresponding to the at least two pieces of map data to obtain a matching result;
and a map fusion module 604, configured to perform map fusion on the at least two pieces of map data according to the matching result.
In an embodiment of the present invention, the encoding information set obtaining module 602 further includes:
a semantic element determination submodule for determining a plurality of semantic elements for each map data;
the semantic element information determining submodule is used for respectively determining the position information and the type information corresponding to the plurality of semantic elements;
and the coding information set obtaining submodule is used for coding the semantic elements according to the position information and the type information to obtain a coding information set.
In an embodiment of the present invention, the position information is position information based on a target coordinate system, and the vehicle further includes:
the target semantic element determining module is used for determining a plurality of target semantic elements which are positioned on the same side of a road from the map data;
the projection straight line determining module is used for determining a projection straight line according to the distribution of the plurality of target semantic elements;
and the target coordinate system construction module is used for constructing a target coordinate system by taking the projection straight line as a coordinate axis.
In an embodiment of the present invention, the at least two map data include a first map data and a second map data, the first map data corresponds to a first set of encoded information, the second map data corresponds to a second set of encoded information, and the matching result obtaining module 603 includes:
a first encoded information queue determining submodule for determining a plurality of first encoded information queues from the first encoded information set; the encoding information at the head of each first encoding information queue is different, and the encoding information in each first encoding information queue is arranged according to the position information corresponding to the semantic elements;
a second encoded information queue determining sub-module for determining a plurality of second encoded information queues from the second encoded information set; the encoding information at the head of each second encoding information queue is different, and the encoding information in each second encoding information queue is arranged according to the position information corresponding to the semantic elements;
the matching score determining sub-module is used for respectively determining matching scores between the plurality of first coded information queues and the plurality of second coded information queues and determining a target first coded information queue and a target second coded information queue corresponding to the highest matching scores;
and the matching result obtaining submodule is used for obtaining a matching result according to the target first coding information queue and the target second coding information queue.
In an embodiment of the present invention, the matching score determining sub-module further includes:
the encoding information determining unit is used for determining first encoding information in the first encoding information queue and determining second encoding information corresponding to the first encoding information in the second encoding information queue;
the same type information judging unit is used for judging whether the type information corresponding to the first coding information and the type information corresponding to the second coding information are the same type information or not;
a sub-matching score determining unit, configured to determine a sub-matching score of the first encoded information and the second encoded information as a first preset score when it is determined that the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same type information;
and the sub-matching score combining unit is used for combining all the sub-matching scores to obtain the matching score between the first coded information queue and the second coded information queue.
In an embodiment of the present invention, the vehicle further includes:
the first relative distance information determining module is used for determining first relative distance information corresponding to the first coded information and the first matching coded information; the first matching coded information is coded information of which the latest matching score in the first coded information queue is a first preset score;
the second relative distance information determining module is used for determining second relative distance information corresponding to the second coding information and the second matching coding information; the second matching coded information is coded information of which the latest matching score in the second coded information queue is a first preset score;
a distance difference determination module for determining a distance difference between the first relative distance information and the second relative distance information;
and the same type information execution judgment module is used for calling a same type information judgment unit when the distance difference value is smaller than a preset distance difference value.
In the embodiment of the invention, at least two map data for a target area are acquired, the at least two map data are respectively encoded to obtain encoded information sets, the encoded information sets corresponding to the at least two map data are matched to obtain a matching result, and map fusion is performed on the at least two map data according to the matching result, thereby realizing the optimization of map fusion. Because the encoding and matching of different map data do not need to depend on the rotation of roads in the map data, missed matches are avoided and the accuracy of map fusion is improved.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The map fusion method and vehicle provided above have been described in detail. Specific examples are used herein to explain the principles and embodiments of the present invention, and the above description of the embodiments is intended only to aid understanding of the method and its core idea. At the same time, those skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method of map fusion, the method comprising:
acquiring at least two sets of map data for a target area;
encoding each of the at least two sets of map data to obtain corresponding encoded information sets;
matching the encoded information sets corresponding to the at least two sets of map data to obtain a matching result;
and performing map fusion on the at least two sets of map data according to the matching result.
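The claimed flow lends itself to a compact sketch. Below is a minimal illustration, assuming (purely for demonstration) that each map is a list of (1-D position, type) semantic elements, that matching pairs elements of equal type in position order, and that fusion averages matched positions; none of these helper names or data shapes come from the patent:

```python
# Minimal sketch of the claimed pipeline (claim 1).
# Assumptions: a map is a list of (position, type) semantic elements with
# 1-D positions; fusion averages the positions of matched elements.

def encode(map_data):
    # Encode each semantic element as (position, type), ordered by position.
    return sorted(map_data)

def match(codes_a, codes_b):
    # Walk both ordered sets and pair up elements whose types agree.
    pairs, i, j = [], 0, 0
    while i < len(codes_a) and j < len(codes_b):
        if codes_a[i][1] == codes_b[j][1]:
            pairs.append((i, j))
            i, j = i + 1, j + 1
        elif codes_a[i][0] < codes_b[j][0]:
            i += 1
        else:
            j += 1
    return pairs

def fuse(codes_a, codes_b, pairs):
    # Merge matched elements by averaging their positions.
    return [((codes_a[i][0] + codes_b[j][0]) / 2, codes_a[i][1])
            for i, j in pairs]

map_a = [(0.0, "space"), (2.5, "pillar"), (5.0, "space")]
map_b = [(0.2, "space"), (2.4, "pillar"), (5.1, "space")]
codes_a, codes_b = encode(map_a), encode(map_b)
fused = fuse(codes_a, codes_b, match(codes_a, codes_b))
```

With the two toy maps above, all three elements pair off and the fused map carries averaged positions, e.g. approximately (2.45, "pillar") for the matched pillar.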
2. The method of claim 1, wherein encoding each of the at least two sets of map data to obtain an encoded information set comprises:
determining a plurality of semantic elements for each set of map data;
determining position information and type information for each of the semantic elements;
and encoding the plurality of semantic elements according to the position information and the type information to obtain the encoded information set.
3. The method according to claim 2, wherein the position information is based on a target coordinate system, and before determining the position information corresponding to the semantic elements, the method further comprises:
determining, from the map data, a plurality of target semantic elements located on the same side of a road;
determining a projection straight line according to the distribution of the plurality of target semantic elements;
and constructing the target coordinate system with the projection straight line as a coordinate axis.
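Claim 3's target coordinate system can be illustrated by fitting a line to roughly collinear elements (for example, a row of parking spaces along one side of a lane) and using scalar projection onto that line as the 1-D coordinate of claim 8. The least-squares principal-axis fit and the (x, y) point format below are assumptions for illustration, not the patent's prescribed method:

```python
# Hedged sketch of claim 3: fit a projection line to semantic elements on
# the same side of the road, then use the scalar projection onto that line
# as each element's coordinate in the 1-D target coordinate system.

import math

def projection_line(points):
    # Principal axis through the centroid of a nearly collinear point set;
    # assumes each point is an (x, y) tuple.
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points)
    syy = sum((p[1] - cy) ** 2 for p in points)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # principal direction
    return (cx, cy), (math.cos(theta), math.sin(theta))

def to_target_coord(point, origin, direction):
    # Scalar projection onto the line gives the element's 1-D coordinate.
    return ((point[0] - origin[0]) * direction[0]
            + (point[1] - origin[1]) * direction[1])

# Four parking-space corners lying almost on a straight line.
spaces = [(0.0, 0.1), (2.0, 0.0), (4.0, -0.1), (6.0, 0.0)]
origin, direction = projection_line(spaces)
coords = [to_target_coord(p, origin, direction) for p in spaces]
```

For the four nearly collinear points above, the recovered coordinates come out approximately as -3, -1, 1 and 3, i.e. positions along the row of spaces.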
4. The method according to claim 2 or 3, wherein the at least two sets of map data comprise first map data corresponding to a first encoded information set and second map data corresponding to a second encoded information set, and matching the encoded information sets corresponding to the at least two sets of map data to obtain a matching result comprises:
determining a plurality of first encoded information queues from the first encoded information set, wherein the head encoded information of each first encoded information queue is different, and the encoded information in each first encoded information queue is ordered according to the position information of the corresponding semantic elements;
determining a plurality of second encoded information queues from the second encoded information set, wherein the head encoded information of each second encoded information queue is different, and the encoded information in each second encoded information queue is ordered according to the position information of the corresponding semantic elements;
determining matching scores between the plurality of first encoded information queues and the plurality of second encoded information queues, and determining the target first encoded information queue and the target second encoded information queue corresponding to the highest matching score;
and obtaining the matching result according to the target first encoded information queue and the target second encoded information queue.
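One way to read claim 4 is that each "queue" is the position-ordered suffix of an encoded set starting at a different head element, and the best-scoring pair of suffixes localizes the overlap between the two maps. The sketch below makes exactly those assumptions and uses a simple count of type-equal aligned pairs as the matching score (standing in for the sub-match scores of claim 5):

```python
# Sketch of claim 4 under assumptions: a "queue" is the position-ordered
# suffix starting at each possible head element, and the matching score is
# the number of aligned elements whose type information agrees.

def queues(codes):
    # One queue per possible head element, each ordered by position.
    ordered = sorted(codes)
    return [ordered[i:] for i in range(len(ordered))]

def score(qa, qb):
    # One point per aligned pair with the same type information.
    return sum(1 for (_, ta), (_, tb) in zip(qa, qb) if ta == tb)

def best_match(codes_a, codes_b):
    # Exhaustively score every queue pair and keep the highest.
    scored = ((score(qa, qb), qa, qb)
              for qa in queues(codes_a) for qb in queues(codes_b))
    return max(scored, key=lambda t: t[0])

codes_a = [(0.0, "space"), (1.0, "space"), (2.0, "pillar"), (3.0, "space")]
codes_b = [(5.0, "pillar"), (6.0, "space")]  # overlaps the tail of map A
best_score, target_qa, target_qb = best_match(codes_a, codes_b)
```

Here map B covers only the tail of map A, and the highest-scoring pair of queues is the one whose first queue starts at A's pillar element, aligning pillar with pillar and space with space.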
5. The method of claim 4, wherein determining the matching scores between the plurality of first encoded information queues and the plurality of second encoded information queues comprises:
determining first encoded information in the first encoded information queue, and determining second encoded information corresponding to the first encoded information in the second encoded information queue;
determining whether the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same;
when they are the same, setting the sub-matching score of the first encoded information and the second encoded information to a first preset score;
and combining all the sub-matching scores to obtain the matching score between the first encoded information queue and the second encoded information queue.
6. The method according to claim 5, further comprising, before determining whether the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same:
determining first relative distance information between the first encoded information and first matching encoded information, wherein the first matching encoded information is the encoded information in the first encoded information queue whose most recent sub-matching score is the first preset score;
determining second relative distance information between the second encoded information and second matching encoded information, wherein the second matching encoded information is the encoded information in the second encoded information queue whose most recent sub-matching score is the first preset score;
determining a distance difference between the first relative distance information and the second relative distance information;
and when the distance difference is smaller than a preset distance difference, performing the determination of whether the type information corresponding to the first encoded information and the type information corresponding to the second encoded information are the same.
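Claims 5 and 6 together describe a gated scoring loop: a sub-match is only awarded when the element types agree and both elements sit at a similar relative distance from their queues' most recent confirmed match. The sketch below follows that reading under assumed values for the "first preset score" and the "preset distance difference", which the claims leave unspecified:

```python
# Sketch of the gated sub-match scoring of claims 5-6.
# FIRST_PRESET_SCORE and MAX_DISTANCE_DIFF are illustrative assumptions.

FIRST_PRESET_SCORE = 1    # score granted to a confirmed sub-match
MAX_DISTANCE_DIFF = 0.5   # the "preset distance difference"

def queue_score(queue_a, queue_b):
    # Each queue element is (position, type), ordered by position.
    total = 0
    last_a = last_b = None  # positions of the latest matched elements
    for (pos_a, type_a), (pos_b, type_b) in zip(queue_a, queue_b):
        if last_a is not None:
            # Gate: relative distances to the previous confirmed match
            # must agree before types are even compared (claim 6).
            if abs((pos_a - last_a) - (pos_b - last_b)) >= MAX_DISTANCE_DIFF:
                continue
        if type_a == type_b:
            total += FIRST_PRESET_SCORE  # sub-matching score (claim 5)
            last_a, last_b = pos_a, pos_b
    return total

qa = [(0.0, "space"), (2.0, "space"), (9.0, "pillar")]
qb = [(0.1, "space"), (2.2, "space"), (4.0, "pillar")]
```

On this pair, the first two spaces match (relative distances 2.0 vs 2.1 pass the gate), but the pillars do not: their relative distances to the previous match differ by more than the threshold, so no third sub-match is awarded.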
7. The method of claim 2, wherein the matching result comprises correspondences between the plurality of semantic elements in the at least two sets of map data.
8. The method of claim 3, wherein the target coordinate system is a one-dimensional coordinate system.
9. The method according to claim 3, wherein the target area is a parking lot, and the target semantic elements are semantic elements corresponding to parking spaces in the parking lot.
10. A vehicle, characterized in that the vehicle comprises:
a map data acquisition module, configured to acquire at least two sets of map data for a target area;
an encoded information set obtaining module, configured to encode each of the at least two sets of map data to obtain corresponding encoded information sets;
a matching result obtaining module, configured to match the encoded information sets corresponding to the at least two sets of map data to obtain a matching result;
and a map fusion module, configured to perform map fusion on the at least two sets of map data according to the matching result.
CN202010659216.4A (priority and filing date: 2020-07-09): Map fusion method and vehicle. Status: Active; granted as CN111831771B.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010659216.4A CN111831771B (en) 2020-07-09 2020-07-09 Map fusion method and vehicle


Publications (2)

Publication Number Publication Date
CN111831771A (application) 2020-10-27
CN111831771B (grant) 2024-03-12

Family ID: 72901360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010659216.4A Active CN111831771B (en) 2020-07-09 2020-07-09 Map fusion method and vehicle

Country Status (1)

Country Link
CN (1) CN111831771B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567492A (en) * 2011-12-22 2012-07-11 哈尔滨工程大学 Method for sea-land vector map data integration and fusion
CN107369373A (en) * 2017-07-13 2017-11-21 武汉大学 A kind of method that composite mapping is carried out using multi-scale line feature map
KR20190064218A (en) * 2017-11-30 2019-06-10 현대엠엔소프트 주식회사 Apparatus for generating precise map and method thereof
CN110986969A (en) * 2019-11-27 2020-04-10 Oppo广东移动通信有限公司 Map fusion method and device, equipment and storage medium
CN111174799A (en) * 2019-12-24 2020-05-19 Oppo广东移动通信有限公司 Map construction method and device, computer readable medium and terminal equipment


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308810A (en) * 2020-11-05 2021-02-02 广州小鹏自动驾驶科技有限公司 Map fusion method and device, server and storage medium
CN112308810B (en) * 2020-11-05 2022-05-13 广州小鹏自动驾驶科技有限公司 Map fusion method and device, server and storage medium
CN112836003A (en) * 2021-02-04 2021-05-25 广州小鹏自动驾驶科技有限公司 Map processing method and device
CN113313038A (en) * 2021-06-02 2021-08-27 上海又为智能科技有限公司 Method, device and storage medium for identifying chart
WO2022253024A1 (en) * 2021-06-02 2022-12-08 Evoco Labs Co., Ltd. Method, device and storage medium for recognizing chart
CN114061564A (en) * 2021-11-01 2022-02-18 广州小鹏自动驾驶科技有限公司 Map data processing method and device
CN114111758A (en) * 2021-11-01 2022-03-01 广州小鹏自动驾驶科技有限公司 Map data processing method and device
CN114061564B (en) * 2021-11-01 2022-12-13 广州小鹏自动驾驶科技有限公司 Map data processing method and device
WO2023071029A1 (en) * 2021-11-01 2023-05-04 广州小鹏自动驾驶科技有限公司 Map data processing method and apparatus, and electronic device and storage medium
CN114111758B (en) * 2021-11-01 2024-06-04 广州小鹏自动驾驶科技有限公司 Map data processing method and device

Also Published As

Publication number Publication date
CN111831771B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN111831771B (en) Map fusion method and vehicle
CN111750878B (en) Vehicle pose correction method and device
CN106855415B (en) Map matching method and system
CN111862337B (en) Visual positioning method, visual positioning device, electronic equipment and computer readable storage medium
CN111422204B (en) Automatic driving vehicle passing judgment method and related equipment
CN110954112A (en) Method and device for updating matching relation between navigation map and perception image
CN111750882B (en) Method and device for correcting vehicle pose during initialization of navigation map
CN112308810B (en) Map fusion method and device, server and storage medium
CN109670516B (en) Image feature extraction method, device, equipment and readable storage medium
CN108734325A (en) The evaluation method and device of planning path
CN111207759B (en) Method and device for displaying vehicle position
CN112052807B (en) Vehicle position detection method, device, electronic equipment and storage medium
CN103376114A (en) Technique for generating from point data geometric data that continuously describe a course of a geographic object
CN113609148A (en) Map updating method and device
CN114372068A (en) Map updating method and map updating device
CN116484036A (en) Image recommendation method, device, electronic equipment and computer readable storage medium
CN113011517A (en) Positioning result detection method and device, electronic equipment and storage medium
CN114111815B (en) Map data processing method and device
Wong et al. Single camera vehicle localization using feature scale tracklets
CN114743395A (en) Signal lamp detection method, device, equipment and medium
CN112507857A (en) Lane line updating method, device, equipment and storage medium
CN111326006B (en) Reminding method, reminding system, storage medium and vehicle-mounted terminal for lane navigation
CN114061564B (en) Map data processing method and device
CN115993124B (en) Virtual lane line generation method, device, equipment and computer readable storage medium
CN111044035B (en) Vehicle positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000
Applicant after: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Address before: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000
Applicant before: Guangzhou Xiaopeng Internet of vehicles Technology Co.,Ltd.

GR01 Patent grant