CN112880692A - Map data annotation method and device and storage medium


Info

Publication number: CN112880692A
Application number: CN201911205302.1A
Authority: CN (China)
Prior art keywords: traffic light, road, data, information, road connection
Inventors: 付万增, 王哲, 石建萍
Applicant/Assignee: Beijing Sensetime Technology Development Co Ltd
Other languages: Chinese (zh)
Other versions: CN112880692B (granted publication)
Events: application filed by Beijing Sensetime Technology Development Co Ltd; publication of CN112880692A; application granted; publication of CN112880692B
Legal status: Granted; Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data

Abstract

The disclosure provides a map data labeling method and device and a storage medium. The method includes: acquiring map data, positioning information of a vehicle, and environment perception data collected by a sensor deployed on the vehicle; performing traffic light detection on the environment perception data to obtain traffic light detection information; extracting, from the map data based on the positioning information, road connection information corresponding to the road on which the vehicle is located; matching the traffic light detection information against the road connection information to obtain the road connection information indicated by the traffic light; and, according to the traffic light detection information and the road connection information, labeling the traffic light data and the road connection information indicated by the traffic light in the map data, or labeling the road connection information indicated by the traffic light in the traffic light data that, in the map data, corresponds to the traffic light detection information.

Description

Map data annotation method and device and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a map data labeling method and apparatus, and a storage medium.
Background
An automatic driving system drives automatically based on map data. For the system to understand traffic light rules, the map data must contain traffic light rule information. At present, however, this information is added to the map data mainly by manual labeling, which is inefficient and error-prone.
Disclosure of Invention
The present disclosure provides a technical solution for labeling map data.
In a first aspect, a map data annotation method is provided, which includes:
acquiring map data, positioning information of a vehicle, and environment perception data collected by a sensor deployed on the vehicle;
performing traffic light detection on the environment perception data to obtain traffic light detection information;
extracting, from the map data based on the positioning information, road connection information corresponding to the road on which the vehicle is located;
matching the traffic light detection information against the road connection information to obtain the road connection information indicated by the traffic light;
and, according to the traffic light detection information and the road connection information, labeling the traffic light data and the road connection information indicated by the traffic light in the map data, or labeling the road connection information indicated by the traffic light in the traffic light data that, in the map data, corresponds to the traffic light detection information.
In one implementation, the acquiring environment perception data collected by a sensor deployed on the vehicle includes:
acquiring an environment image collected by a camera deployed on the vehicle, where the environment perception data includes the environment image;
and the performing traffic light detection on the environment perception data to obtain traffic light detection information includes:
performing traffic light detection on the environment image to obtain the traffic light detection information.
In yet another implementation, the acquiring environment perception data collected by a sensor deployed on the vehicle includes:
acquiring first environment point cloud data collected at a plurality of moments by a laser radar deployed on the vehicle;
and stitching the first environment point cloud data collected at the plurality of moments to obtain second environment point cloud data, where the density of the second environment point cloud data is greater than that of the first environment point cloud data;
and the performing traffic light detection on the environment perception data to obtain traffic light detection information includes:
filtering out the second environment point cloud data that projects to the periphery of the traffic light to obtain the traffic light detection information.
In another implementation, the extracting, from the map data based on the positioning information, road connection information corresponding to the road on which the vehicle is located includes:
querying the map data according to the positioning information to obtain information on all reachable roads and all unreachable roads;
and obtaining the road connection relationships among all reachable roads according to the information on all reachable roads.
In yet another implementation, the matching the traffic light detection information against the road connection information to obtain the road connection information indicated by the traffic light includes:
acquiring sorted traffic light detection information of at least one traffic light in a traffic light set, and acquiring sorted road connection information among all reachable roads, where the road connection information among all reachable roads and the traffic light detection information are sorted in the same manner;
and matching, one by one from the start of the ordering, the sorted traffic light detection information with the sorted road connection relationships among all reachable roads to obtain the road connection information indicated by each traffic light.
In yet another implementation, the method further comprises:
acquiring position information of at least one detected traffic light and distance information between traffic lights on the environment perception data;
grouping traffic lights whose distance on the environment perception data is less than or equal to a set threshold into one traffic light set;
and sorting the traffic light detection information of at least one traffic light in each traffic light set according to the position information of the traffic lights in that set to obtain the sorted traffic light detection information.
In yet another implementation, the method further comprises:
when the vehicle passes at the current time through the same positioning position it passed in a previous time period, comparing the environment perception data collected at the current time with the environment perception data collected in the previous time period, and, if the two are inconsistent, performing traffic light detection on the environment perception data collected at the current time to obtain updated traffic light detection information;
matching the updated traffic light detection information against the road connection information again to obtain updated road connection information indicated by the traffic light;
and, according to the updated traffic light detection information and the updated road connection information, labeling the updated traffic light data and the updated road connection information indicated by the traffic light in the map data, or labeling the updated road connection information indicated by the traffic light in the traffic light data that, in the map data, corresponds to the updated traffic light detection information.
In yet another implementation, the method further comprises:
when the vehicle passes at the current time through the same positioning position it passed in a previous time period, extracting, from the map data, road connection information corresponding to the road on which the vehicle is currently located;
comparing the road connection information corresponding to the road on which the vehicle is currently located with the road connection information corresponding to the same road in the previous time period;
if a road reachable in the previous time period has become unreachable, or a road unreachable in the previous time period has become reachable, matching the traffic light detection information against the road connection information corresponding to the road on which the vehicle is currently located to obtain updated road connection information indicated by the traffic light;
and labeling the traffic light data and the updated road connection information indicated by the traffic light in the map data according to the traffic light detection information and the updated road connection information, or labeling the updated road connection information indicated by the traffic light in the traffic light data that, in the map data, corresponds to the traffic light detection information.
In a second aspect, a map data labeling apparatus is provided, including:
a first acquiring unit, configured to acquire map data, positioning information of a vehicle, and environment perception data collected by a sensor deployed on the vehicle;
a detection unit, configured to perform traffic light detection on the environment perception data to obtain traffic light detection information;
an extraction unit, configured to extract, from the map data based on the positioning information, road connection information corresponding to the road on which the vehicle is located;
a first matching unit, configured to match the traffic light detection information against the road connection information to obtain the road connection information indicated by the traffic light;
and a labeling unit, configured to label, according to the traffic light detection information and the road connection information, the traffic light data and the road connection information indicated by the traffic light in the map data, or label the road connection information indicated by the traffic light in the traffic light data that, in the map data, corresponds to the traffic light detection information.
In one implementation, the first acquiring unit is configured to acquire an environment image collected by a camera deployed on the vehicle, where the environment perception data includes the environment image;
and the detection unit is configured to perform traffic light detection on the environment image to obtain the traffic light detection information.
In yet another implementation, the first acquiring unit includes:
a second acquiring unit, configured to acquire first environment point cloud data collected at a plurality of moments by a laser radar deployed on the vehicle;
and a stitching unit, configured to stitch the first environment point cloud data collected at the plurality of moments to obtain second environment point cloud data, where the density of the second environment point cloud data is greater than that of the first environment point cloud data;
and the detection unit is configured to filter out the second environment point cloud data that projects to the periphery of the traffic light to obtain the traffic light detection information.
In yet another implementation, the extraction unit is configured to query the map data according to the positioning information to obtain information on all reachable roads and all unreachable roads, and to obtain the road connection relationships among all reachable roads according to the information on all reachable roads.
In yet another implementation, the first matching unit includes:
a third acquiring unit, configured to acquire sorted traffic light detection information of at least one traffic light in a traffic light set, and to acquire sorted road connection information among all reachable roads, where the road connection information among all reachable roads and the traffic light detection information are sorted in the same manner;
and a second matching unit, configured to match, one by one from the start of the ordering, the sorted traffic light detection information with the sorted road connection relationships among all reachable roads to obtain the road connection information indicated by each traffic light.
In yet another implementation, the apparatus further comprises:
a fourth acquiring unit, configured to acquire position information of at least one detected traffic light and distance information between traffic lights on the environment perception data;
a dividing unit, configured to group traffic lights whose distance on the environment perception data is less than or equal to a set threshold into one traffic light set;
and a sorting unit, configured to sort the traffic light detection information of at least one traffic light in each traffic light set according to the position information of the traffic lights in that set to obtain the sorted traffic light detection information.
In yet another implementation, the apparatus further comprises:
a first comparing unit, configured to, when the vehicle passes at the current time through the same positioning position it passed in a previous time period, compare the environment perception data collected at the current time with the environment perception data collected in the previous time period, and, if the two are inconsistent, perform traffic light detection on the environment perception data collected at the current time to obtain updated traffic light detection information;
the first matching unit is further configured to match the updated traffic light detection information against the road connection information again to obtain updated road connection information indicated by the traffic light;
and the labeling unit is further configured to label, according to the updated traffic light detection information and the updated road connection information, the updated traffic light data and the updated road connection information indicated by the traffic light in the map data, or label the updated road connection information indicated by the traffic light in the traffic light data that, in the map data, corresponds to the updated traffic light detection information.
In yet another implementation, the apparatus further comprises:
the extraction unit is further configured to, when the vehicle passes at the current time through the same positioning position it passed in a previous time period, extract, from the map data, road connection information corresponding to the road on which the vehicle is currently located;
a second comparing unit, configured to compare the road connection information corresponding to the road on which the vehicle is currently located with the road connection information corresponding to the same road in the previous time period;
the first matching unit is further configured to, if a road reachable in the previous time period has become unreachable or a road unreachable in the previous time period has become reachable, match the traffic light detection information against the road connection information corresponding to the road on which the vehicle is currently located to obtain updated road connection information indicated by the traffic light;
and the labeling unit is further configured to label the traffic light data and the updated road connection information indicated by the traffic light in the map data according to the traffic light detection information and the updated road connection information, or label the updated road connection information indicated by the traffic light in the traffic light data that, in the map data, corresponds to the traffic light detection information.
In a third aspect, a map data labeling apparatus is provided, the apparatus including a memory and a processor, where the memory stores a set of program instructions and the processor is configured to call the program instructions stored in the memory to perform the method described in the first aspect or any implementation of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which computer program instructions are stored, the instructions being executed by a processor to implement the method described in the first aspect or any implementation of the first aspect.
In a fifth aspect, a computer program product containing instructions is provided which, when run on a computer, causes the computer to perform the method described in the first aspect or any implementation of the first aspect.
The disclosed solution provides the following beneficial effects:
The obtained traffic light detection information is matched against the road connection information to obtain the road connection information indicated by the traffic light, and the traffic light data and the road connection information indicated by the traffic light are labeled in the map data. Traffic light data and the road connection information indicated by each traffic light can therefore be labeled automatically, which improves labeling efficiency and accuracy and speeds up map generation and data updating.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Clearly, the drawings described below show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a map data annotation method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another map data annotation method according to an embodiment of the disclosure;
FIG. 3A is a schematic diagram of environment perception data;
FIG. 3B is a schematic diagram illustrating extraction of traffic light information from the environment perception data;
FIG. 3C is a schematic diagram of clustered traffic light detection frames;
FIG. 3D is a schematic diagram of reachable roads;
FIG. 3E is a schematic diagram of matched traffic light detection frames and road connection information;
fig. 4 is a schematic flowchart of another map data annotation method according to an embodiment of the disclosure;
FIG. 5A is a schematic diagram of a world global coordinate system;
FIG. 5B is a schematic diagram of a vehicle positioning inertial navigation coordinate system;
FIG. 5C is a schematic view of a camera coordinate system and a pixel coordinate system;
fig. 6 is a schematic flowchart of another map data annotation method according to an embodiment of the disclosure;
fig. 7 is a schematic flowchart of another map data annotation method according to an embodiment of the disclosure;
FIG. 8 is a schematic structural diagram of a map data annotation device according to an embodiment of the disclosure;
fig. 9 is a schematic structural diagram of yet another map data labeling apparatus provided in the embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the drawings. Clearly, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments derived by those skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
Fig. 1 is a schematic flowchart of a map data annotation method provided in an embodiment of the present disclosure. The method may include:
s101, obtaining map data, positioning information of the vehicle and environment perception data collected by a sensor deployed on the vehicle.
In the present disclosure, "vehicle" is to be understood broadly. It includes conventional vehicles with transport or operation functions, such as trucks, buses, and cars, as well as mobile robotic devices, such as blind-guiding devices, smart toys, and household devices like sweeping robots, and also industrial robots, service robots, toy robots, educational robots, and the like; the disclosure does not limit this.
The map data may be acquired from a server or a vehicle-mounted terminal. The map data may be a semantic map, a high-precision map, or the like, but is not limited thereto, and may be other types of maps. The map data includes rich road information.
The vehicle may also be equipped with a position sensor to obtain vehicle positioning information. The position sensor may include at least one of a global positioning system (GPS), an inertial measurement unit (IMU), and the like; those skilled in the art will understand that the position sensor is not limited to the above.
The vehicle positioning information may be synchronized positioning information obtained for each frame of image captured by the vehicle. It may be GPS positioning information, IMU positioning information, or a fusion of the two.
The fused information is a more reliable positioning result obtained from the GPS positioning information and the IMU positioning information. It can be obtained by Kalman filtering the GPS positioning information and the IMU positioning information, or by taking their mean or weighted average.
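As a minimal sketch of the weighted-average fusion option mentioned above (the weight value and the array layout are illustrative assumptions, not specified by this disclosure):

```python
import numpy as np

def fuse_positions(gps_xy: np.ndarray, imu_xy: np.ndarray,
                   gps_weight: float = 0.7) -> np.ndarray:
    """Weighted average of synchronized GPS and IMU position estimates.

    gps_xy, imu_xy: 2-D positions expressed in a common coordinate system.
    gps_weight: assumed relative confidence in the GPS fix (hypothetical value).
    """
    return gps_weight * gps_xy + (1.0 - gps_weight) * imu_xy

# Fuse one synchronized GPS/IMU sample (illustrative coordinates).
fused = fuse_positions(np.array([116.40, 39.90]), np.array([116.41, 39.91]))
```

A Kalman filter would replace the fixed weight with a covariance-driven gain; the sketch shows only the simplest of the fusion options the text lists.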
Various sensors are also deployed on the vehicle, including: cameras, laser radars, etc. The sensor can sense the surrounding environment of the vehicle to obtain environment sensing data. For example, an image of the surrounding environment may be acquired by a camera to obtain an environmental image; environmental point cloud data can be collected by a laser radar. The environment perception data may be the above-described environment image, environment point cloud data, or the like.
S102, performing traffic light detection on the environment perception data to obtain traffic light detection information.
The vehicle drives on a road, and the collected environment perception data includes traffic light data; traffic light detection can therefore be performed on the environment perception data to obtain traffic light detection information. The traffic light detection information includes the traffic light type and the traffic light position. Traffic light types include straight-ahead lights and turn lights. The traffic light position is the exact position of the traffic light in the map data.
Specifically, the environment image can be detected by a neural network for traffic light detection to obtain the traffic light detection information; alternatively, the environment point cloud data can be filtered, removing the points that project to the periphery of the traffic lights, to obtain the traffic light detection information.
S103, extracting, from the map data based on the positioning information, road connection information corresponding to the road on which the vehicle is located.
Rich road information and road connection information are stored in the map data. Road connection information refers to the connection relationship between one road and the other roads that can be reached directly from it. For example, if a vehicle traveling to the intersection of road A can turn left onto road B, go straight onto road C, or turn right onto road D, then road A has a connection relationship with road B, road C, and road D. If reaching road E requires first turning left from road A onto road B and then turning right onto road E, road A and road E do not have the road connection relationship described in the embodiments of the disclosure.
When the vehicle arrives at an intersection, in addition to the traffic light detection information, the road connection information corresponding to the road on which the vehicle is located can be extracted from the map data based on the positioning information; with the vehicle's positioning information, this road connection information can be located accurately in the map data.
S104, matching the traffic light detection information against the road connection information to obtain the road connection information indicated by the traffic light.
Generally, roads are connected in various ways, and traffic lights are set up to indicate the road connections at an intersection; that is, a traffic light can indicate a road B that has a connection relationship with the current road A, which is equivalent to indicating the turning information corresponding to that traffic light, such as left turn, right turn, going straight, or U-turn. The road connection information indicated by each traffic light follows certain rules, so the obtained traffic light detection information can be matched against the road connection information to obtain the road connection information indicated by each traffic light.
S105, according to the traffic light detection information and the road connection information, labeling the traffic light data and the road connection information indicated by the traffic light in the map data, or labeling the road connection information indicated by the traffic light in the traffic light data that, in the map data, corresponds to the traffic light detection information.
After the traffic light detection information and the road connection information are acquired, the map data can be labeled.
Depending on whether traffic light data is already stored in the map data, two labeling modes can be adopted:
First mode: if no traffic light data is stored in the map data, the acquired traffic light data and the road connection information indicated by the traffic lights can be labeled in the map data.
Second mode: if traffic light data is stored in the map data, the road connection information indicated by the traffic lights can be labeled in that traffic light data.
The traffic light data includes the traffic light type and the traffic light position. Traffic light types include straight-ahead lights and turn lights. The traffic light position is the exact position of the traffic light in the map data.
With map data labeled with traffic light information and the road connection information indicated by each traffic light, an automatic driving system can accurately understand the meaning of each traffic light and the road connection it indicates, and can thus safely and reliably execute one or more driving instructions such as stopping, going straight, and turning.
According to the map data labeling method provided by this embodiment of the disclosure, the obtained traffic light detection information is matched against the road connection information to obtain the road connection information indicated by the traffic light, and the traffic light data and the road connection information indicated by the traffic light are labeled in the map data. Traffic light data and the road connection information indicated by each traffic light can therefore be labeled automatically, which improves labeling efficiency and accuracy and speeds up map generation and data updating.
Fig. 2 is a schematic flowchart of another map data annotation method provided in an embodiment of the present disclosure. The method may include:
S201, acquiring an environment image collected by a camera deployed on the vehicle, where the environment perception data includes the environment image.
The vehicle may be equipped with a vision sensor to capture images of its surroundings in real time; the images obtained are the environment images. Because an image captured by a vision sensor on the vehicle corresponds to the driving control system's "perception" of the surroundings of the vehicle, such data is also referred to as environment perception data.
The vision sensor may include at least one of: cameras, video cameras, and the like. It should be understood by those skilled in the art that the vision sensor is not limited to the above.
S202, performing traffic light detection on the environment image to obtain traffic light detection information.
A trained neural network for traffic light detection can perform traffic light detection on the environment perception data. Specifically, the environment perception data is input to the neural network to obtain traffic signal light (traffic light for short) detection information. In this embodiment, the traffic light detection information may be represented by traffic light detection frames, and each traffic light may correspond to one or more detection frames; here, each traffic light is taken to correspond to one detection frame. The information of a detection frame may include the pixel coordinates of one corner (e.g., the upper-left corner) and the length and width of the frame; that is, this information specifies a detection frame of a certain size. The traffic light detection information includes the traffic light type and the traffic light position. Traffic light types include straight-ahead lights and turn lights. The traffic light position is the exact position of the traffic light in the map data.
The neural network may be trained on road images labeled with traffic light detection information (referred to below as sample road images). Training on the sample road images gives the model the ability to detect traffic lights in an input road image; for environment perception data input to the network, the traffic light detection information in the image can be output.
Fig. 3A shows a schematic diagram of environment perception data that includes two lampposts with traffic lights.
Performing traffic light detection on the environment perception data shown in fig. 3A yields the four traffic light detection frames shown in fig. 3B: detection frame 1, detection frame 2, detection frame 3, and detection frame 4.
S203, acquiring position information of at least one detected traffic light and distance information between traffic lights on the environment perception data.
S204, grouping traffic lights whose distance on the environment perception data is less than or equal to a set threshold into one traffic light set.
S205, sorting the traffic light detection information of at least one traffic light in each traffic light set according to the position information of the traffic lights in that set to obtain the sorted traffic light detection information.
The above-described steps S203 to S205 describe clustering and sorting of traffic light detection information.
Several adjacent traffic light detection frames in the environment perception data are generally located on the same traffic light post and indicate the turning directions of different roads. The obtained traffic light detection frames can therefore be clustered according to their pixel distances, so that the detection frames on the same traffic light post form one set.
Specifically, the pixel distances between different traffic light detection frames on the same environment perception data are computed, and detection frames whose pixel distance is smaller than a set pixel distance are grouped into one class.
As shown in the schematic diagram of clustered traffic light detection frames in fig. 3C, the detection frames of fig. 3B are clustered: detection frame 1 and detection frame 2 are both located on lamppost 1 and their pixel distance is smaller than the set pixel distance, so they are grouped into one class; detection frame 3 and detection frame 4 are both located on lamppost 2 and their pixel distance is smaller than the set pixel distance, so they are grouped into another class.
Further, the clustered traffic light detection frames are sorted according to their relative positions in the environment perception data to obtain the sorted detection frames. Specifically, detection frames of the same class are sorted from left to right and from top to bottom by their relative positions, and the sorted frames are assigned identifiers, for example ascending sequence IDs 1, 2, and so on; the disclosure does not limit how the sequence numbers are assigned.
As shown in fig. 3C, sorting the two clustered detection frames on lamppost 1 from left to right gives the clustered and sorted detection frame group (ID 1 of detection frame 1, ID 2 of detection frame 2); sorting the two clustered detection frames on lamppost 2 from left to right gives the group (ID 3 of detection frame 3, ID 4 of detection frame 4).
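A minimal sketch of the clustering and sorting described above (the box structure, the pixel-distance threshold, and the greedy grouping strategy are illustrative assumptions, not taken from this disclosure):

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # pixel x of the upper-left corner
    y: float  # pixel y of the upper-left corner
    w: float  # frame width in pixels
    h: float  # frame height in pixels

def cluster_and_sort(boxes: list[Box], max_dist: float = 150.0) -> list[list[Box]]:
    """Greedily group detection frames whose corner distance is <= max_dist,
    then sort each group left-to-right, top-to-bottom."""
    groups: list[list[Box]] = []
    for box in boxes:
        for group in groups:
            if any(((box.x - b.x) ** 2 + (box.y - b.y) ** 2) ** 0.5 <= max_dist
                   for b in group):
                group.append(box)
                break
        else:  # no existing group is close enough: start a new one
            groups.append([box])
    for group in groups:
        group.sort(key=lambda b: (b.x, b.y))  # left-to-right, then top-to-bottom
    return groups

# Two frames on one lamppost plus one distant frame -> two groups.
groups = cluster_and_sort([Box(100, 50, 30, 80), Box(160, 50, 30, 80),
                           Box(900, 60, 30, 80)])
ids = [list(range(1, len(g) + 1)) for g in groups]  # ascending IDs 1, 2, ...
```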
S206, querying the map data according to the positioning information to obtain information on all reachable roads and all unreachable roads.
S207, obtaining the road connection relationships among all reachable roads according to the information on all reachable roads.
Steps S206 and S207 obtain the road connection relationships.
Querying the map data according to the vehicle positioning information yields the identifier of the road on which the vehicle is currently located. By continuing to query the map data along the vehicle's direction of travel, the identifiers of all roads reachable and unreachable by the vehicle can be obtained. A reachable road is one the vehicle can reach from the current road, i.e., the way from the current road to it is clear; an unreachable road is one the vehicle cannot reach from the current road, for example because there is no road to reach when turning right from the current road, or because the road there is closed.
The road on which the vehicle is currently located is then combined with each reachable road to obtain the road connection relationships. The connection relationship of any two roads can be represented by a tuple: (current road ID, reachable target road ID).
Referring to the reachable-roads diagram of fig. 3D, the vehicle is currently on road_1; querying the map data along the vehicle's direction of travel yields the identifiers of all roads reachable from road_1: road_2 and road_3. road_1, road_2, and road_3 form a T-shaped intersection. The road connection relationship tuples of all reachable roads are: (road_1, road_2) and (road_1, road_3).
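The query and tuple construction can be sketched as follows (the dictionary-based map structure is an assumption for illustration; real map data would be queried through its own API):

```python
def reachable_connections(map_data: dict[str, list[str]],
                          current_road: str) -> list[tuple[str, str]]:
    """Build (current road ID, reachable target road ID) tuples from a map
    structure that lists the roads directly reachable from each road."""
    return [(current_road, target) for target in map_data.get(current_road, [])]

# The T-shaped example from the text: road_1 connects to road_2 and road_3.
demo_map = {"road_1": ["road_2", "road_3"]}
print(reachable_connections(demo_map, "road_1"))
# [('road_1', 'road_2'), ('road_1', 'road_3')]
```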
S208, acquiring the sorted road connection information among all reachable roads, where the road connection information among all reachable roads and the sorted traffic light detection information are sorted in the same manner.
All reachable roads are sorted in the same manner as the sorted traffic light detection information, for example from left to right and from top to bottom, to obtain the sorted road connection relationships of the road on which the vehicle is currently located.
Still referring to fig. 3D, if the traffic light in detection frame 1 indicates a left turn, the corresponding road connection relationship tuple, taken from left to right, is (road_1, road_2), meaning that the vehicle may turn left from road_1 to road_2; if the traffic light in detection frame 2 indicates going straight, the corresponding tuple, taken from bottom to top, is (road_1, road_3), meaning that the vehicle may proceed straight from road_1 to road_3.
S209, matching, one by one from the start of the ordering, the sorted traffic light detection information with the sorted road connection relationships among all reachable roads to obtain the road connection information indicated by each traffic light.
The traffic light detection frames corresponding to different road connection information are clustered together and sorted in a set order, and the reachable-road information in the map data is arranged in the same order. The road connection information indicated by the traffic lights therefore corresponds one-to-one with the traffic light position information, and correspondingly the traffic light detection frames correspond one-to-one with the reachable roads; matching each traffic light detection frame with its road connection thus yields the road connection information corresponding to each traffic light. For example, the traffic light detection frame numbered 1 is assigned road-ID tuple 1 (current road ID, reachable target road 1 ID), the detection frame numbered 2 is assigned road-ID tuple 2 (current road ID, reachable target road 2 ID), and so on.
For example, for the road connection relationships (road_1, road_2) and (road_1, road_3) obtained in fig. 3D, if the detection frame tuple is (ID 1 of detection frame 1, ID 2 of detection frame 2), matching the detection frames with the road connection relationships gives: ID 1 of detection frame 1 matches (road_1, road_2), and ID 2 of detection frame 2 matches (road_1, road_3).
As another example, for the clustered and sorted detection frame group on lamppost 1 shown in fig. 3C, (ID 1 of detection frame 1, ID 2 of detection frame 2), the road connection information corresponding to the traffic light detection frames shown in fig. 3E is: ID 1 of detection frame 1 matches (road_1, road_2), and ID 2 of detection frame 2 matches (road_1, road_3).
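Because both sequences are sorted in the same way, the one-by-one matching reduces to pairing by rank, as in this sketch (identifiers follow the example above; the function name is hypothetical):

```python
def match_lights_to_connections(sorted_frame_ids: list[int],
                                sorted_connections: list[tuple[str, str]]
                                ) -> dict[int, tuple[str, str]]:
    """Pair each sorted detection-frame ID with the road-connection tuple
    at the same rank; both inputs must share the same ordering."""
    return dict(zip(sorted_frame_ids, sorted_connections))

# Detection frames 1 and 2 on lamppost 1, matched to the T-intersection tuples:
print(match_lights_to_connections([1, 2],
                                  [("road_1", "road_2"), ("road_1", "road_3")]))
# {1: ('road_1', 'road_2'), 2: ('road_1', 'road_3')}
```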
S210, according to the traffic light detection information and the road connection information, labeling the traffic light data and the road connection information indicated by the traffic light in the map data, or labeling the road connection information indicated by the traffic light in the traffic light data that, in the map data, corresponds to the traffic light detection information.
For the specific implementation of this step, refer to step S105 of the embodiment shown in fig. 1.
According to the map data labeling method provided by this embodiment of the disclosure, the obtained traffic light detection information is matched against the road connection information to obtain the road connection information indicated by the traffic light, and the traffic light data and the road connection information indicated by the traffic light are labeled in the map data. Traffic light data and the road connection information indicated by each traffic light can therefore be labeled automatically, which improves labeling efficiency and accuracy and speeds up map generation and data updating.
Fig. 4 is a schematic flowchart of another map data annotation method provided in an embodiment of the present disclosure. The method may include:
S301, acquiring first environment point cloud data collected at a plurality of moments by a laser radar deployed on the vehicle.
The set of points measured on the external surface of an object is called a point cloud. In this embodiment, environment point cloud data around the vehicle can be collected by a laser radar deployed on the vehicle; the first environment point cloud data collected by the laser radar at a plurality of moments is acquired.
S302, stitching the first environment point cloud data collected at the plurality of moments to obtain second environment point cloud data, where the density of the second environment point cloud data is greater than that of the first environment point cloud data.
The point cloud collected at each moment by a low-cost, low-beam-count laser radar is sparse, and the discrete frame-by-frame laser point cloud data is stored in a relative coordinate system. The first environment point cloud data collected by the laser radar at multiple moments can be converted from the laser radar coordinate system into a unified map global coordinate system by coordinate transformation, which completes the stitching of the laser point clouds and yields the second environment point cloud data with an enriched point density; that is, the collected first environment point cloud data is stitched onto the map data.
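The stitching step amounts to transforming each sparse frame into the shared global frame and concatenating, as in this sketch (4 x 4 homogeneous pose matrices per frame are assumed; how the poses are estimated is outside this snippet):

```python
import numpy as np

def stitch_point_clouds(frames: list[np.ndarray],
                        poses: list[np.ndarray]) -> np.ndarray:
    """Transform per-moment laser points (each N_i x 3, sensor frame) into the
    map global frame using 4x4 sensor-to-global poses, then concatenate."""
    stitched = []
    for points, pose in zip(frames, poses):
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
        stitched.append((homogeneous @ pose.T)[:, :3])
    return np.vstack(stitched)  # denser second environment point cloud
```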
The stitched second environment point cloud data is then converted from the map global coordinate system to the vehicle positioning inertial navigation coordinate system via the transformation matrices between coordinate systems, further converted to the camera coordinate system, and finally projected to the camera pixel coordinate system (i.e., the coordinate system of the traffic light detection frames).
Fig. 5A is a schematic diagram of the world global coordinate system, a right-handed Cartesian rectangular coordinate system: the positive x-axis points from the origin toward the intersection of the prime meridian and the equator (the 0-degree parallel), and the positive z-axis points from the origin toward the North Pole. Lengths are in meters. Any point in this coordinate system corresponds to a unique location on the earth, such as a longitude and latitude.
As shown in fig. 5B, the vehicle positioning inertial navigation coordinate system is also a right-handed Cartesian rectangular coordinate system, with the vehicle-mounted high-precision inertial navigation center as the origin, the heading direction of the vehicle as the positive x-axis, and the left side of the vehicle body as the positive y-axis. Lengths are in meters. Since the world global coordinate system and the vehicle positioning inertial navigation coordinate system are both right-handed Cartesian rectangular coordinate systems, conversion between them requires only one rotation-and-translation matrix. The rotation angle and translation of this matrix can be determined from the position of the vehicle positioning information in the vehicle positioning inertial navigation coordinate system and its position in the world global coordinate system. The laser point cloud in the world global coordinate system can therefore be converted into the vehicle positioning inertial navigation coordinate system according to the rotation-and-translation matrix, giving the laser point cloud in the vehicle positioning inertial navigation coordinate system.
To project the laser point cloud of the map image onto environment perception data expressed in the camera coordinate system or the pixel coordinate system, the map image in the vehicle coordinate system must be converted into the camera coordinate system or the pixel coordinate system. Fig. 5C is a schematic diagram of the camera coordinate system and the pixel coordinate system, where the camera coordinate system o-x-y is three-dimensional and the pixel coordinate system o'-x'-y' is two-dimensional.
In one implementation, if the map image is a two-dimensional map, a homography matrix between the pixel coordinate system and the vehicle positioning inertial navigation coordinate system is acquired; the map image in the vehicle positioning inertial navigation coordinate system is expressed in homogeneous coordinates; and the map image, expressed in homogeneous coordinates, is converted into the pixel coordinate system according to the homography matrix, giving the laser point cloud projected into the pixel coordinate system of the environment perception data. In specific implementation, for a two-dimensional map image, the conversion from the vehicle positioning inertial navigation coordinate system to the pixel coordinate system can be completed by a homography transformation. A three-dimensional object can be projected onto several two-dimensional planes, and a homography matrix converts its projection on one two-dimensional plane into its projection on another. Following the principle that three points determine a plane, at least three points on the three-dimensional object are selected and their corresponding projection points on the two two-dimensional projection planes are computed; the transformation matrix between the two groups of projection points is the homography matrix, which can be solved algebraically. Specifically, the homography matrix between the pixel coordinate system and the vehicle positioning inertial navigation coordinate system can be calibrated in advance from manual calibration data. In an optional implementation, the matrix is a 3 x 3 matrix with 8 degrees of freedom that performs a projective transformation from one plane to another. The laser point clouds on the map image are then expressed in homogeneous coordinates, and the coordinates of each laser point are multiplied by the homography matrix to obtain the laser point cloud in the pixel coordinate system.
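A sketch of the homography projection step (the matrix H is assumed to be pre-calibrated as described above):

```python
import numpy as np

def project_with_homography(H: np.ndarray, points_xy: np.ndarray) -> np.ndarray:
    """Map N x 2 points from the vehicle inertial-navigation plane into pixel
    coordinates using a 3x3 homography, with perspective division."""
    homogeneous = np.hstack([points_xy, np.ones((points_xy.shape[0], 1))])
    projected = homogeneous @ H.T
    return projected[:, :2] / projected[:, 2:3]
```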
In another implementation, if the map image is a three-dimensional map, the map image in the vehicle positioning inertial navigation coordinate system is converted into the camera coordinate system according to the rotation-and-translation matrix between the vehicle positioning inertial navigation coordinate system and the camera coordinate system, giving the laser point cloud projected into the camera coordinate system; the point cloud is then converted into the pixel coordinate system according to the projection matrix between the camera coordinate system and the pixel coordinate system, giving the laser point cloud projected into the pixel coordinate system of the environment perception data. In specific implementation, for a three-dimensional map image, the conversions among the vehicle positioning inertial navigation coordinate system, the camera coordinate system, and the pixel coordinate system can be completed using the camera's intrinsic and extrinsic parameters. The camera images by the pinhole principle; the intrinsic parameters are the focal length of the camera lens and the coordinates of the optical center in the pixel coordinate system, while the extrinsic parameters are the rotation-and-translation matrix between the camera coordinate system and the vehicle positioning inertial navigation coordinate system. The camera coordinate system is a right-handed Cartesian coordinate system with the camera optical center as the origin and the positive y-axis and z-axis pointing above and in front of the camera, respectively. After the intrinsic and extrinsic parameters are calibrated in advance from manual calibration data, the laser point cloud on the map image is transformed into the camera coordinate system by the extrinsic parameters, and is then projected into the pixel coordinate system according to the scaling of pinhole imaging and the intrinsic parameters, giving the laser point cloud projected onto the environment perception data.
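The three-dimensional case can be sketched as an extrinsic transform followed by a pinhole projection (R, t, and the intrinsic matrix K are assumed pre-calibrated; points behind the camera are not handled here):

```python
import numpy as np

def project_to_pixels(points_vehicle: np.ndarray, R: np.ndarray,
                      t: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project N x 3 points from the vehicle inertial-navigation frame into
    pixel coordinates: extrinsics (R, t), then intrinsics K, then division
    by depth."""
    points_camera = points_vehicle @ R.T + t  # vehicle frame -> camera frame
    pixels = points_camera @ K.T              # pinhole projection
    return pixels[:, :2] / pixels[:, 2:3]     # perspective division by depth
```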
S303, filtering out the second environment point cloud data that projects to the periphery of the traffic lights to obtain traffic light detection information.
The stitched laser point cloud data projected into the camera pixel coordinate system is filtered based on the traffic light detection information, specifically based on the traffic light detection frames, to obtain the laser point cloud data of each detection frame. Concretely, the traffic light detection frames are traversed, the laser points projecting to the periphery of each frame are filtered out, the sequence ID and the matched road connection information of the frame are bound to the laser points projecting inside the frame, and labeled laser point cloud data is output. The point cloud data retains its three-dimensional spatial coordinates in the map global coordinate system.
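The per-frame filtering can be sketched as a mask over the projected pixels (the frame layout matches the corner-plus-size format described earlier; names are illustrative):

```python
import numpy as np

def points_in_frame(pixels: np.ndarray, points_3d: np.ndarray,
                    frame: tuple[float, float, float, float]) -> np.ndarray:
    """Keep the 3-D points whose pixel projection falls inside one detection
    frame (x, y, w, h); points projecting to the periphery are discarded."""
    x, y, w, h = frame
    inside = ((pixels[:, 0] >= x) & (pixels[:, 0] <= x + w) &
              (pixels[:, 1] >= y) & (pixels[:, 1] <= y + h))
    return points_3d[inside]
```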
S304, acquiring position information of at least one detected traffic light and distance information between traffic lights on the environment perception data.
S305, grouping traffic lights whose distance on the environment perception data is less than or equal to a set threshold into one traffic light set.
S306, sorting the traffic light detection information of at least one traffic light in each traffic light set according to the position information of the traffic lights in that set to obtain the sorted traffic light detection information.
The laser point cloud data of the traffic light detection frames is clustered to obtain clustered laser point cloud data corresponding to the detection frames. The labeled laser points are clustered based on the sequence IDs of their detection frames, the road connection information, and the distances between points in three-dimensional space under the map global coordinate system; the clustering principle is that points of the same class have the same sequence ID and the same road connection information, and that the distance between them is smaller than a set threshold. This finally yields the clustered laser point cloud data, in which each class of points corresponds to one traffic light in the actual scene.
S307, querying the map data according to the positioning information to obtain information on all reachable roads and all unreachable roads.
For the specific implementation of this step, refer to step S206 of the embodiment shown in fig. 2.
S308, obtaining the road connection relationships among all reachable roads according to the information on all reachable roads.
For the specific implementation of this step, refer to step S207 of the embodiment shown in fig. 2.
S309, acquiring the sorted road connection information among all reachable roads.
For the specific implementation of this step, refer to step S208 of the embodiment shown in fig. 2.
S310, matching, one by one from the start of the ordering, the sorted traffic light detection information with the sorted road connection relationships among all reachable roads to obtain the road connection information indicated by each traffic light.
The centroid coordinates of the clustered laser point cloud data corresponding to each traffic light detection frame are computed and used as the position information of each traffic light. The clustered laser point cloud data is processed to compute the centroid of each class of points, i.e., the coordinate obtained by averaging the three-dimensional position coordinates of all points of that class in the map global coordinate system. The centroid of each class is taken as the three-dimensional position coordinate of the traffic light corresponding to that class, and the sequence ID and road connection information of the class are assigned to the traffic light, finally giving a group of traffic lights containing three-dimensional position coordinates in the map global coordinate system and road connection information.
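The centroid computation is a per-cluster mean over the global-frame coordinates, as in this sketch:

```python
import numpy as np

def traffic_light_position(cluster_points: np.ndarray) -> np.ndarray:
    """Centroid of one clustered laser point cloud (N x 3, map global frame),
    used as the 3-D position of the corresponding traffic light."""
    return cluster_points.mean(axis=0)
```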
S311, according to the traffic light detection information and the road connection information, labeling the traffic light data and the road connection information indicated by the traffic light in the map data, or labeling the road connection information indicated by the traffic light in the traffic light data that, in the map data, corresponds to the traffic light detection information.
For a specific implementation of this step, refer to step S210 of the embodiment shown in fig. 2.
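As a hedged sketch of the first labeling variant of S311, with the map data modeled as a plain dict carrying a 'traffic_lights' layer; this structure is an assumption for illustration, and real high-precision map formats will differ:

```python
def annotate_map(map_data, lights):
    """Label the traffic light data and the road connection information it
    indicates in the map data (one of the two labeling variants of S311)."""
    layer = map_data.setdefault("traffic_lights", [])
    for light in lights:
        layer.append({
            "position": [float(v) for v in light["position"]],  # map-frame x, y, z
            "indicates": light["road_connection"],
        })
    return map_data
```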
According to the map data labeling method provided by the embodiment of the disclosure, the obtained traffic light detection information and the obtained road connection information are matched to obtain the road connection information indicated by the traffic light, and the traffic light data and the road connection information indicated by the traffic light are labeled in the map data, so that the traffic light data and the road connection information indicated by the traffic light can be automatically labeled, the labeling efficiency and accuracy are improved, and the map generation and data updating efficiency is improved.
Fig. 6 is a schematic flow chart of another map data annotation method provided in an embodiment of the present disclosure; the method may include:
S401, when the vehicle passes through the same positioning position at the current time as in the previous time period, comparing the environmental perception data collected at the current time with the environmental perception data collected in the previous time period. If the comparison results are inconsistent, go to S402; otherwise, no processing is performed.
As cities develop, the arrangement of traffic lights may change; for example, a traffic light for a left-turn road may be removed, or a traffic light for a newly added road may be installed. The vehicle may traverse the same road in multiple time periods, and each time it passes the intersection of the road (the same positioning position), the environmental perception data are collected anew. Suppose the vehicle passes through the road at the current moment; the environmental perception data collected at the current moment are then acquired. For the specific acquisition mode, refer to the embodiments shown in fig. 1, fig. 2 or fig. 4.
After the environmental perception data collected at the current moment are obtained, and when the vehicle passes through the same positioning position at the current moment as in the previous time period, the environmental perception data collected at the current moment are compared with those collected in the previous time period. The environmental perception data collected in the previous time period may be stored in a memory of the map data annotation device or in a cloud server, and may be acquired from that memory or cloud server. If the setting of the traffic light has not changed compared with the previous time period, the comparison results are consistent and no processing need be performed; if the setting of the traffic light has changed, the comparison results are inconsistent and the following operations are performed.
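A possible form of the comparison in S401, assuming the stored and current detections are given as traffic light records like those in the sketches above; the tolerance-based criterion is an assumption, since the disclosure fixes no specific test:

```python
def perception_changed(current_lights, previous_lights, tol=0.5):
    """Decide whether the traffic light setting at a positioning position has
    changed between the previous time period and the current moment (S401).
    Criterion assumed here: same number of lights, and each current light has
    a stored counterpart within tol metres indicating the same road connection."""
    if len(current_lights) != len(previous_lights):
        return True
    for cur in current_lights:
        if not any(
                all(abs(a - b) <= tol
                    for a, b in zip(cur["position"], prev["position"]))
                and cur["road_connection"] == prev["road_connection"]
                for prev in previous_lights):
            return True
    return False

# If perception_changed(...) returns True, proceed to S402-S404;
# otherwise no processing is performed.
```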
S402, carrying out traffic light detection according to the environment perception data collected at the current moment to obtain updated traffic light detection information.
If the setting of the traffic light has changed compared with the previous time period, so that the environmental perception data collected at the current moment are inconsistent with those collected in the previous time period, traffic light detection is performed according to the environmental perception data collected at the current moment to obtain updated traffic light detection information.
For the specific traffic light detection method, refer to the embodiments shown in fig. 1, fig. 2 or fig. 4.
S403, matching the updated traffic light detection information with the road connection information again to obtain the updated road connection information indicated by the traffic light.
If the updated traffic light detection information no longer matches the road connection information of the previous time period, the matching processing is performed again on the updated traffic light detection information and the road connection information to obtain the updated road connection information indicated by the traffic light.
For the specific matching manner, refer to the embodiments shown in fig. 1, fig. 2 or fig. 4.
S404, according to the updated traffic light detection information and the updated road connection information, marking the updated traffic light data and the updated road connection information indicated by the traffic light in the map data, or marking the updated road connection information indicated by the traffic light in the traffic light data corresponding to the updated traffic light detection information in the map data.
Since the traffic light detection information is updated and the road connection information indicated by the traffic light is also updated through the re-matching, the map data need to be labeled again. For the specific labeling manner, refer to the embodiments shown in fig. 1, fig. 2 or fig. 4.
According to the map data labeling method provided by this embodiment of the disclosure, when the setting of a traffic light changes, the traffic light detection information and the road connection information indicated by the traffic light are updated, and the map data are re-labeled in time, so that the accuracy of the map data labeling is improved.
Fig. 7 is a schematic flow chart of another map data annotation method provided in an embodiment of the present disclosure; the method may include:
S501, when the vehicle passes through the same positioning position at the current moment as in the previous time period, extracting the road connection information corresponding to the road where the vehicle is located from the map data.
As cities develop, road connection relations may change; for example, a road reachable in a previous time period may become unreachable due to road construction or the like, or a previously unreachable road may become reachable. The vehicle may pass through the same road in multiple time periods, and each time it passes the intersection of the road (the same positioning position), the road connection information corresponding to the road where the vehicle is currently located may be extracted from the map data. For the specific way of extracting the road connection information, refer to the embodiments shown in fig. 1, fig. 2 or fig. 4.
S502, comparing the road connection information corresponding to the road where the vehicle is currently located with the road connection information corresponding to the road where the vehicle was located in the previous time period. If a road reachable in the previous time period has become unreachable, or a road unreachable in the previous time period has become reachable, go to S503; otherwise, no processing is performed.
After the road connection information corresponding to the road where the vehicle is currently located is re-extracted from the map data, it is compared with the road connection information corresponding to the road where the vehicle was located in the previous time period. The road connection information from the previous time period may be stored in a memory of the map data annotation device or in a cloud server, and may be acquired from that memory or cloud server.
If a road reachable in the previous time period has become unreachable, or a road unreachable in the previous time period has become reachable, the road connection information corresponding to the road where the vehicle is currently located is inconsistent with the road connection information corresponding to the road where the vehicle was located in the previous time period, and the following operations are performed; if the two are consistent, no processing is performed.
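The trigger condition of S502-S503 reduces to a set difference over road identifiers; the following sketch assumes reachable roads are represented as sets of IDs, which is illustrative only:

```python
def reachability_changed(current_reachable, previous_reachable):
    """Check the trigger of S502/S503: some road reachable in the previous
    time period has become unreachable, or vice versa."""
    became_unreachable = previous_reachable - current_reachable
    became_reachable = current_reachable - previous_reachable
    return bool(became_unreachable or became_reachable)

# e.g. road "R3" closed for construction since the previous pass:
assert reachability_changed({"R1", "R2"}, {"R1", "R2", "R3"})
```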
S503, matching the traffic light detection information and the road connection information corresponding to the road where the vehicle is located currently to obtain the updated road connection information indicated by the traffic light.
If the road connection information has been updated and no longer matches the traffic light detection information acquired in the previous time period, the traffic light detection information and the road connection information corresponding to the road where the vehicle is currently located are matched again to obtain the updated road connection information indicated by the traffic light. For the specific matching manner, refer to the embodiments shown in fig. 1, fig. 2 or fig. 4.
S504, according to the traffic light detection information and the updated road connection information, marking the traffic light data and the updated road connection information indicated by the traffic light in the map data, or marking the updated road connection information indicated by the traffic light in the traffic light data corresponding to the traffic light detection information in the map data.
Since the updated road connection information indicated by the traffic light is obtained through the re-matching, the map data are labeled again. For the specific labeling manner, refer to the embodiments shown in fig. 1, fig. 2 or fig. 4.
According to the map data labeling method provided by this embodiment of the disclosure, when the road connection relation changes, the road connection information indicated by the traffic light is updated, and the map data are re-labeled in time, so that the accuracy of the map data labeling is improved.
It can be understood that the setting change of the traffic light is often associated with the change of the road connection relationship, and therefore, the method flows of the embodiments shown in fig. 6 and 7 may be implemented independently or in association.
Based on the same concept of the map data annotation method in the foregoing embodiment, as shown in fig. 8, the embodiment of the present disclosure further provides a map data annotation device 1000, which can be applied to the methods shown in fig. 1, fig. 2, fig. 4, fig. 6, and fig. 7. The apparatus 1000 comprises:
the first acquisition unit 101 is used for acquiring map data, positioning information of a vehicle and environmental perception data acquired by a sensor deployed on the vehicle;
the detection unit 102 is configured to perform traffic light detection on the environment sensing data to obtain traffic light detection information;
an extraction unit 103, configured to extract road connection information corresponding to a road where a vehicle is located from map data based on the positioning information;
the first matching unit 104 is configured to perform matching processing on the traffic light detection information and the road connection information to obtain road connection information indicated by the traffic light;
and a labeling unit 105, configured to label, according to the traffic light detection information and the road connection information, the traffic light data and the road connection information indicated by the traffic light in the map data, or label, in the traffic light data corresponding to the traffic light detection information in the map data, the road connection information indicated by the traffic light.
In one implementation, the first obtaining unit 101 is configured to obtain an environment image collected by a camera deployed on a vehicle, where the environment sensing data includes the environment image;
the detecting unit 102 is configured to perform traffic light detection on the environment image to obtain traffic light detection information.
In yet another implementation, the first obtaining unit 101 includes:
the second acquisition unit is used for acquiring first environment point cloud data which are acquired by a laser radar deployed on a vehicle at a plurality of moments respectively;
the splicing unit is used for splicing the first environment point cloud data acquired at multiple moments to obtain second environment point cloud data, and the density of the second environment point cloud data is greater than that of the first environment point cloud data;
the detecting unit 102 is configured to filter the second environmental point cloud data projected to the periphery of the traffic light to obtain traffic light detection information.
In yet another implementation, the extracting unit 103 is configured to query the map data according to the positioning information to obtain all reachable road information and unreachable road information, and obtain the road connection relationship between all reachable roads according to all reachable road information.
In yet another implementation, the first matching unit 104 includes:
a third obtaining unit, configured to obtain the sorted traffic light detection information of at least one traffic light in a traffic light set, and to obtain the road connection information among all the sorted reachable roads, where the road connection information among all the sorted reachable roads and the sorted traffic light detection information are sorted in the same manner;
and the second matching unit is used for matching the sorted traffic light detection information with the road connection relations among all the sorted reachable roads one by one according to the sorting starting direction to obtain the road connection information indicated by each traffic light.
In yet another implementation, the apparatus further comprises (shown in dashed lines in fig. 8):
a fourth obtaining unit 106, configured to obtain the detected position information of the at least one traffic light and the distance information between the at least one traffic light on the environment perception data;
a dividing unit 107, configured to divide at least one traffic light, whose distance on the environment perception data is smaller than or equal to a set threshold, into a traffic light set;
the sorting unit 108 is configured to sort the traffic light detection information of at least one traffic light in each traffic light set according to the position information of at least one traffic light in each traffic light set, so as to obtain the sorted traffic light detection information.
In yet another implementation, the apparatus further comprises (shown in dashed lines in fig. 8):
the first comparing unit 109 is configured to compare the environmental awareness data acquired at the current time with the environmental awareness data acquired at the previous time when the vehicle passes through the same positioning location at the current time and the previous time, and if a comparison result is inconsistent, perform traffic light detection according to the environmental awareness data acquired at the current time to obtain updated traffic light detection information;
the first matching unit 104 is further configured to perform matching processing again on the updated traffic light detection information and the road connection information to obtain the updated road connection information indicated by the traffic light;
the labeling unit 105 is further configured to label the updated traffic light data and the updated road connection information indicated by the traffic light in the map data according to the updated traffic light detection information and the updated road connection information, or to label the updated road connection information indicated by the traffic light in the traffic light data corresponding to the updated traffic light detection information in the map data.
In yet another implementation, the extracting unit 103 is further configured to extract the road connection information corresponding to the road where the vehicle is currently located from the map data when the vehicle passes through the same positioning position at the current time as in the previous time period;
the device still includes:
the second comparing unit is configured to compare the road connection information corresponding to the road where the vehicle is currently located with the road connection information corresponding to the road where the vehicle was located in the previous time period;
the first matching unit 104 is further configured to, if a road reachable in the previous time period has become unreachable or a road unreachable in the previous time period has become reachable, match the traffic light detection information with the road connection information corresponding to the road where the vehicle is currently located to obtain the updated road connection information indicated by the traffic light;
the labeling unit 105 is further configured to label the traffic light data and the updated road connection information indicated by the traffic light in the map data according to the traffic light detection information and the updated road connection information, or to label the updated road connection information indicated by the traffic light in the traffic light data corresponding to the traffic light detection information in the map data.
For specific descriptions of the functions of the above units, refer to the map data labeling methods shown in fig. 1, fig. 2, fig. 4, fig. 6, and fig. 7.
According to the map data labeling device provided by the embodiment of the disclosure, the obtained traffic light detection information and the obtained road connection information are matched to obtain the road connection information indicated by the traffic light, and the traffic light data and the road connection information indicated by the traffic light are labeled in the map data, so that the traffic light data and the road connection information indicated by the traffic light can be automatically labeled, the labeling efficiency and accuracy are improved, and the map generation and data updating efficiency is improved.
The embodiment of the disclosure also provides a map data labeling device, which is used for executing the map data labeling method. Some or all of the above methods may be implemented by hardware, or may be implemented by software or firmware.
Alternatively, the apparatus may be a chip or an integrated circuit when embodied.
Alternatively, when part or all of the map data labeling method of the above embodiment is implemented by software or firmware, it can be implemented by a map data labeling apparatus 1100 provided in fig. 9. As shown in fig. 9, the apparatus 1100 may include:
an input device 111, an output device 112, a memory 113, and a processor 114 (the processor 114 in the device may be one or more, and one processor is taken as an example in fig. 9). In the present embodiment, the input device 111, the output device 112, the memory 113 and the processor 114 may be connected by a bus or other means, wherein the bus connection is taken as an example in fig. 9.
The processor 114 is configured to execute the method steps executed in fig. 1, fig. 2, fig. 4, fig. 6, and fig. 7.
Alternatively, the program of the above-described map data labeling method may be stored in the memory 113. The memory 113 may be a physically separate unit or may be integrated with the processor 114. The memory 113 may also be used to store data.
Alternatively, when part or all of the map data annotation method of the above embodiment is implemented by software, the apparatus may include only a processor. The memory for storing the program is located outside the device, and the processor is connected with the memory through a circuit or a wire and used for reading and executing the program stored in the memory.
The processor may be a Central Processing Unit (CPU), a Network Processor (NP), or a WLAN device.
The processor may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The memory may include volatile memory (volatile memory), such as random-access memory (RAM); the memory may also include a non-volatile memory (non-volatile memory), such as a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD); the memory may also comprise a combination of memories of the kind described above.
According to the map data labeling device provided by the embodiment of the disclosure, the obtained traffic light detection information and the obtained road connection information are matched to obtain the road connection information indicated by the traffic light, and the traffic light data and the road connection information indicated by the traffic light are labeled in the map data, so that the traffic light data and the road connection information indicated by the traffic light can be automatically labeled, the labeling efficiency and accuracy are improved, and the map generation and data updating efficiency is improved.
One skilled in the art will appreciate that one or more embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program may be stored; when the program is executed by a processor, the steps of the map data labeling method described in any embodiment of the present disclosure are implemented.
The embodiments in the disclosure are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. Particularly, for the embodiment of the map data annotation device, since it is basically similar to the embodiment of the method, the description is simple, and for the relevant points, refer to the partial description of the embodiment of the method.
The foregoing has described specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
Embodiments of the subject matter and functional operations described in this disclosure may be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this disclosure and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this disclosure can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to direct the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this disclosure can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC).
Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily have such a device. Moreover, a computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this disclosure contains many specific implementation details, these should not be construed as limiting the scope of any invention or of what may be claimed, but rather as merely describing the features of particular embodiments of particular inventions. Certain features that are described in this disclosure in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description covers only preferred embodiments of the present disclosure and is not intended to limit its scope of protection, which is defined by the appended claims.

Claims (10)

1. A map data labeling method is characterized by comprising the following steps:
obtaining map data, positioning information of a vehicle and environmental perception data collected by a sensor deployed on the vehicle;
carrying out traffic light detection on the environment perception data to obtain traffic light detection information;
extracting road connection information corresponding to the road where the vehicle is located from the map data based on the positioning information;
matching the traffic light detection information and the road connection information to obtain the road connection information indicated by the traffic light;
and according to the traffic light detection information and the road connection information, marking the traffic light data and the road connection information indicated by the traffic light in the map data, or marking the road connection information indicated by the traffic light in the traffic light data corresponding to the traffic light detection information in the map data.
2. The method of claim 1, wherein the acquiring environmental awareness data collected by sensors deployed on the vehicle comprises:
acquiring an environment image acquired by a camera deployed on the vehicle, wherein the environment perception data comprises the environment image;
the performing of traffic light detection on the environment perception data to obtain the traffic light detection information comprises:
and carrying out traffic light detection on the environment image to obtain the traffic light detection information.
3. The method of claim 1, wherein the acquiring environmental awareness data collected by sensors deployed on the vehicle comprises:
acquiring first environment point cloud data respectively collected at a plurality of moments by a laser radar deployed on the vehicle;
splicing the first environment point cloud data acquired at multiple moments to obtain second environment point cloud data, wherein the density of the second environment point cloud data is greater than that of the first environment point cloud data;
the performing of traffic light detection on the environment perception data to obtain the traffic light detection information comprises:
and filtering the second environmental point cloud data projected to the periphery of the traffic light to obtain the traffic light detection information.
4. The method according to any one of claims 1 to 3, wherein the extracting road connection information corresponding to the road where the vehicle is located from the map data based on the positioning information comprises:
inquiring the map data according to the positioning information to obtain all reachable road information and unreachable road information;
and obtaining the road connection relation among all reachable roads according to the information of all reachable roads.
5. The method according to any one of claims 1 to 4, wherein the matching the traffic light detection information and the road connection information to obtain the road connection information indicated by the traffic light comprises:
acquiring the sorted traffic light detection information of at least one traffic light in a traffic light set, and acquiring the road connection information among all the sorted reachable roads, wherein the road connection information among all the sorted reachable roads and the sorted traffic light detection information are sorted in the same manner;
and matching the sorted traffic light detection information with the road connection relations among all the sorted reachable roads one by one according to the sorting starting direction to obtain the road connection information indicated by each traffic light.
6. The method of claim 5, further comprising:
acquiring position information of the detected at least one traffic light and distance information between the at least one traffic light on the environment perception data;
dividing at least one traffic light, the distance of which is less than or equal to a set threshold value on the environment perception data, into a traffic light set;
and sequencing the traffic light detection information of at least one traffic light in each traffic light set according to the position information of at least one traffic light in each traffic light set to obtain the sequenced traffic light detection information.
7. The method according to any one of claims 1 to 6, further comprising:
when the vehicle passes through the same positioning position at the current time as in the previous time period, comparing the environmental perception data collected at the current time with the environmental perception data collected in the previous time period, and if the comparison results are inconsistent, performing traffic light detection according to the environmental perception data collected at the current time to obtain updated traffic light detection information;
carrying out matching processing on the updated traffic light detection information and the road connection information again to obtain updated road connection information indicated by the traffic light;
and according to the updated traffic light detection information and the updated road connection information, marking the updated traffic light data and the updated road connection information indicated by the traffic light in the map data, or marking the updated road connection information indicated by the traffic light in the traffic light data corresponding to the updated traffic light detection information in the map data.
8. The method according to any one of claims 1 to 7, further comprising:
when the vehicle passes through the same positioning position at the current moment as in the previous time period, extracting the road connection information corresponding to the road where the vehicle is located from the map data;
comparing the road connection information corresponding to the road where the vehicle is currently located with the road connection information corresponding to the road where the vehicle was located in the previous time period;
if a road reachable in the previous time period has become unreachable or a road unreachable in the previous time period has become reachable, matching the traffic light detection information with the road connection information corresponding to the road where the vehicle is currently located to obtain updated road connection information indicated by the traffic light;
and marking the traffic light data and the updated road connection information indicated by the traffic light in the map data according to the traffic light detection information and the updated road connection information, or marking the updated road connection information indicated by the traffic light in the traffic light data corresponding to the traffic light detection information in the map data.
9. A map data labeling apparatus, comprising:
the system comprises a first acquisition unit, a second acquisition unit and a control unit, wherein the first acquisition unit is used for acquiring map data, positioning information of a vehicle and environment perception data acquired by a sensor deployed on the vehicle;
the detection unit is used for performing traffic light detection on the environment perception data to obtain traffic light detection information;
the extraction unit is used for extracting road connection information corresponding to the road where the vehicle is located from the map data based on the positioning information;
the first matching unit is used for matching the traffic light detection information and the road connection information to obtain the road connection information indicated by the traffic light;
and the marking unit is used for marking the traffic light data and the road connection information indicated by the traffic light in the map data according to the traffic light detection information and the road connection information, or for marking the road connection information indicated by the traffic light in the traffic light data corresponding to the traffic light detection information in the map data.
10. A map data labeling apparatus, comprising: a memory and a processor; wherein the memory stores a set of program instructions and the processor is configured to call the program instructions stored in the memory to perform the method of any one of claims 1 to 8.


GR01 Patent grant