WO2020042348A1 - Method, system, vehicle-mounted terminal and server for generating an automatic driving navigation map - Google Patents


Info

Publication number
WO2020042348A1
Authority
WO
WIPO (PCT)
Prior art keywords
road
target object
road feature
feature
electronic map
Application number
PCT/CN2018/113665
Other languages
English (en)
French (fr)
Inventor
杜志颖
单乐
Original Assignee
初速度(苏州)科技有限公司
北京初速度科技有限公司
Application filed by 初速度(苏州)科技有限公司 and 北京初速度科技有限公司
Publication of WO2020042348A1

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00, specially adapted for navigation in a road network
    • G01C21/28 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00, specially adapted for navigation in a road network, with correlation of data from several navigational instruments
    • G01C21/30 — Map- or contour-matching
    • G01C21/32 — Structuring or formatting of map data

Definitions

  • the invention relates to the technical field of automatic driving, and in particular, to a method, a system, a vehicle-mounted terminal, and a server for generating an automatic driving navigation map.
  • The autonomous driving navigation map is an important part of the autonomous driving technical scheme and the basis for realizing autonomous driving navigation.
  • the condition of the road may change at any time, such as the damage, addition and replacement of road signs, and the closure of road forks caused by temporary construction.
  • the electronic map needs to be updated in time to adapt to actual road conditions changes and reduce the probability of accidents.
  • the embodiment of the invention discloses a method, a system, a vehicle-mounted terminal and a server for generating an automatic driving navigation map, which can improve the updating efficiency and the updating speed of an automatic driving navigation electronic map.
  • a first aspect of the embodiments of the present invention discloses a method for generating an automatic driving navigation map, and the method includes:
  • the target object is a first road feature that does not match any second road feature, or a second road feature that does not match any first road feature; a plurality of road images captured by the camera are acquired through the vehicle-mounted terminal; for a given frame of road image, the vehicle-mounted terminal recognizes the first road feature in it, calculates the relative distance between the first road feature and the vehicle according to the position of the first road feature in the previous frame of road image, and thereby extracts the depth information of the first road feature.
  • the matching the first road feature and the second road feature in the first electronic map to identify a mismatched target object includes:
  • mapping the first road feature into the first electronic map to obtain a first position of the first road feature in the first electronic map, determining whether a second road feature matching the first road feature exists at the first position, and if not, determining the first road feature mapped to the first position as a mismatched target object;
  • the reporting the target object to a server includes:
  • when the target object is a first road feature that does not match any second road feature, the position of the target object in the first electronic map is the position to which the mismatched first road feature is mapped in the first electronic map; when the target object is a second road feature that does not match any first road feature, the position of the target object in the first electronic map is the position of the mismatched second road feature in the first electronic map.
  • the identifying a first road feature in a road image captured by a camera includes:
  • when performing a positioning calculation on the vehicle, a first road feature in a road image captured by the camera is identified.
  • the second aspect of the embodiments of the present invention discloses another method for generating an autonomous driving navigation map, and the method includes:
  • the target object is a first road feature identified by the vehicle terminal that does not match any second road feature or a second road feature that does not match any of the first road features;
  • the first road feature is a road feature identified from a road image, and the second road feature is a road feature in the first electronic map; the mismatch includes missing road features, added road features, and changed road features.
  • the updating the target object in the first electronic map includes:
  • a third aspect of the embodiments of the present invention discloses a vehicle-mounted terminal, including:
  • a recognition unit configured to identify a first road feature in a road image captured by a camera; the framing range of the camera includes at least the front environment of the vehicle in which the vehicle-mounted terminal is located;
  • a matching unit configured to match the first road feature and the second road feature in the first electronic map, and identify a target object that does not match;
  • the target object is a first road feature that does not match any second road feature, or a second road feature that does not match any first road feature;
  • a communication unit is configured to report the target object to a server to update the target object in the first electronic map through the server to obtain an updated second electronic map.
  • the matching unit includes:
  • a conversion subunit configured to map the first road feature into the first electronic map to obtain a first position of the first road feature in the first electronic map; or, to project the second road feature of the first electronic map onto the road image to obtain a second position of the second road feature in the road image;
  • a judging subunit configured to judge whether a second road feature matching the first road feature exists at the first position, or to judge whether a first road feature matching the second road feature exists at the second position;
  • a determining subunit configured to: when the judging subunit determines that no second road feature matching the first road feature exists at the first position, determine the first road feature mapped to the first position as a mismatched target object; or, when the judging subunit determines that no first road feature matching the second road feature exists at the second position, determine the second road feature projected to the second position as a mismatched target object.
  • the manner in which the communication unit reports the target object to the server is specifically:
  • the communication unit is configured to report the target object and the position of the target object in the first electronic map to a server;
  • when the target object is a first road feature that does not match any second road feature, the position of the target object in the first electronic map is the position to which the mismatched first road feature is mapped in the first electronic map; when the target object is a second road feature that does not match any first road feature, the position of the target object in the first electronic map is the position of the mismatched second road feature in the first electronic map.
  • the manner in which the identification unit is used to identify a first road feature in a road image captured by a camera is specifically:
  • the identification unit is configured to identify a first road feature in a road image captured by a camera when performing positioning calculation on the vehicle.
  • a fourth aspect of the embodiments of the present invention discloses a server, including:
  • a transceiver unit configured to receive a reported target object;
  • the target object is a first road feature identified by the vehicle terminal that does not match any second road feature or a second road feature that does not match any of the first road features
  • the first road feature is a road feature identified from a road image
  • the second road feature is a road feature in a first electronic map
  • multiple road images taken by a camera are obtained through a vehicle terminal;
  • a judging unit configured to judge whether the number of reporting times of the target object exceeds a specified threshold
  • An update unit is configured to update the target object in the first electronic map when the judgment unit determines that the number of reporting times of the target object exceeds the specified threshold to obtain an updated second electronic map.
  • the update unit includes:
  • a fusion subunit configured to perform data fusion on all received position data of the target object when the judging unit determines that the number of reporting times of the target object exceeds the specified threshold, so as to obtain a third position of the target object in the first electronic map; the position data consists of the positions of the target object in the first electronic map received in each report;
  • An update subunit is configured to update the target object at the third position to obtain an updated second electronic map.
  • a fifth aspect of the embodiments of the present invention discloses a system for generating an automatic driving navigation map, including the vehicle-mounted terminal disclosed in the third aspect of the embodiments of the present invention and the server disclosed in the fourth aspect of the embodiments of the present invention.
  • a sixth aspect of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute any one of the methods disclosed in the first aspect of the embodiments of the present invention.
  • a seventh aspect of the embodiment of the present invention discloses a computer program product, and when the computer program product runs on a computer, the computer is caused to execute any method disclosed in the first aspect of the embodiment of the present invention.
  • An eighth aspect of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute any one of the methods disclosed in the second aspect of the embodiments of the present invention.
  • a ninth aspect of the embodiment of the present invention discloses a computer program product, and when the computer program product is run on a computer, the computer is caused to execute any method disclosed in the second aspect of the embodiment of the present invention.
  • the in-vehicle terminal can identify the first road feature in the road image, and match the first road feature with the second road feature, can identify a mismatched target object and report the target object to the server.
  • When the server receives the same target object a sufficient number of times, it updates the target object into the first electronic map to obtain an updated second electronic map. It can be seen that the entire map update process does not need to rely on manual labor.
  • the vehicle terminal and server can automatically complete the identification and update of the target object, which can improve the update speed and update efficiency of the automatic driving navigation electronic map, and improve the stability and reliability of map updates.
  • the map update task can be distributed to each vehicle-mounted terminal that establishes a communication connection with the server through crowdsourcing, which further improves the update speed of the automatic driving navigation electronic map and Update efficiency.
  • the in-vehicle terminal can share the same image recognition result when identifying the target object to be updated and performing vehicle positioning calculation, which can save computing resources.
  • the server performs data fusion on the position data of the target object received over multiple reports to determine a more accurate position of the target object in the first electronic map, and updates the target object at that position, which can reduce the error of a single observation and improve the accuracy of the target object's position.
  • FIG. 1 is a schematic architecture diagram of a system architecture disclosed by an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a method for generating an automatic driving navigation map disclosed in an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of another method for generating an autonomous driving navigation map according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of an implementation manner of step 302 in FIG. 3 according to an embodiment of the present invention
  • FIG. 5 is a schematic flowchart of another implementation manner of step 302 in FIG. 3 according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a vehicle-mounted terminal disclosed by an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of another vehicle-mounted terminal disclosed by an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a server disclosed by an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a system for generating an automatic driving navigation map according to an embodiment of the present invention.
  • the embodiment of the invention discloses a method, a system, a vehicle-mounted terminal and a server for generating an automatic driving navigation map, which can improve the updating efficiency and the updating speed of the automatic driving navigation electronic map. Each of them will be described in detail below.
  • FIG. 1 is a schematic structural diagram of a system architecture disclosed by an embodiment of the present invention.
  • the system architecture may include a vehicle-mounted terminal (not shown) installed on a plurality of vehicles and a server having a communication connection with the vehicle-mounted terminal.
  • the vehicle terminal and the server can perform mobile communication (such as LTE, 5G) based on the operator's base station, and can also communicate through a wireless local area network (WLAN).
  • the system architecture can be deployed in any one or more combinations of LTE, 5G, and WLAN.
  • the server may be a device for maintaining an electronic map for autonomous driving navigation and providing users with map uploading, downloading, and updating services;
  • a vehicle-mounted terminal may be one that accepts a crowdsourced map update task, where a crowdsourced map update task refers to freely and voluntarily outsourcing map update tasks to non-specific vehicle-mounted terminals; a vehicle-mounted terminal that accepts a crowdsourced map update task uploads the sensor data collected during driving to the server for updating the automatic driving navigation electronic map.
  • FIG. 2 is a schematic flowchart of a method for generating an autonomous driving navigation map according to an embodiment of the present invention.
  • the method is applied to a vehicle-mounted computer, a vehicle-mounted industrial control computer (Industrial Personal Computer, IPC) and other vehicle-mounted terminals, which are not limited in the embodiment of the present invention.
  • the method for generating the autonomous driving navigation map may include the following steps:
  • Step 201: the vehicle-mounted terminal recognizes a first road feature in a road image captured by a camera.
  • the framing range of the camera includes at least the environment in front of the vehicle. There may be data transmission between the camera and the vehicle-mounted terminal, and the vehicle-mounted terminal acquires the road image captured by the camera during the vehicle driving in real time.
  • the in-vehicle terminal may recognize a first road feature in a road image through a pre-trained semantic feature detection model, and the semantic feature detection model may be a deep learning neural network. The deep learning neural network is trained by using a large number of sample images labeled with the first road feature as input to obtain the above-mentioned semantic feature detection model.
  • Compared with traditional image recognition methods such as image segmentation, using a deep learning neural network to identify first road features maintains good recognition performance under poor lighting conditions such as rain, snow, and dusk, or under special lighting conditions such as camera backlight; this improves the accuracy of road feature recognition under such conditions, reduces the missed-detection rate of road features, and improves the stability of the map update scheme based on visual information.
  • Step 202: the in-vehicle terminal matches the first road feature with the second road feature in the first electronic map, and identifies a mismatched target object.
  • the target object is a first road feature that does not match any second road feature or a second road feature that does not match any first road feature.
  • the first road feature may be a road feature identified by the vehicle terminal from the image
  • the second road feature may be a road feature in the first electronic map.
  • the above road features can be empirically screened objects on the road and in its surroundings that can serve as landmarks for position determination.
  • the road feature may be a traffic sign (such as a street sign, a speed limit sign, etc.), a lane line, a street light pole, a road point of interest (POI), and the like, which are not limited in the embodiment of the present invention.
  • the first electronic map is an automatic driving navigation electronic map;
  • the first road feature can be understood as a two-dimensional representation of a road feature in the road image captured by the camera;
  • the second road feature can be understood as a three-dimensional representation of the road feature in the first electronic map constructed in advance.
  • the matching of the first road feature with the second road feature includes at least matching of the feature type, matching of the feature position, and matching of the feature content. Accordingly, a mismatch between the first road feature and the second road feature may include but is not limited to the following three situations:
  • Missing road feature: a second road feature exists in the first electronic map, but no first road feature matching it exists in the road image; a missing road feature is expressed as the above-mentioned second road feature that does not match any first road feature;
  • Added road feature: a first road feature exists in the road image, but no second road feature matching it exists in the first electronic map; an added road feature is expressed as the above-mentioned first road feature that does not match any second road feature;
  • Changed road feature: includes changes in the feature position, feature type, and feature content of a road feature.
  • a change in the feature position of a road feature can indicate that the road feature moved from position A to position B, that is, a road feature is missing at position A and a road feature is added at position B; a road feature with such a position change can therefore be represented by two target objects: the above-mentioned second road feature that does not match any first road feature, and the first road feature that does not match any second road feature;
  • a feature type change means that the first road feature and the second road feature do not match in type; for example, a first road feature in the road image is a traffic sign while the second road feature at the corresponding position in the first electronic map is a light pole, in which case the first road feature and the second road feature do not match in feature type. A feature content change means that the first road feature and the second road feature do not match in feature content.
  • For example, the first road feature is a traffic sign whose content is a speed limit of 60 for the road section, while the second road feature at the corresponding position in the first electronic map is a traffic sign whose content is a speed limit of 80 for the road section; in this case, the first road feature and the second road feature do not match in feature content.
  • a road feature with a change in feature type or feature content can be represented as a first road feature that does not match any second road feature.
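  • The three mismatch situations above can be sketched in Python; the `RoadFeature` fields and the `match` helper below are illustrative assumptions, not structures defined in the patent:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical minimal representation of a road feature; field names are
# illustrative only (the patent does not define a data layout).
@dataclass
class RoadFeature:
    kind: str        # feature type, e.g. "traffic_sign", "light_pole"
    content: str     # feature content, e.g. "speed_limit_60"
    position: tuple  # position in the first electronic map

def match(first: Optional[RoadFeature], second: Optional[RoadFeature]) -> str:
    """Classify a pair of co-located features into the mismatch situations."""
    if first is None and second is not None:
        return "missing"   # in the map, but absent from the road image
    if first is not None and second is None:
        return "added"     # in the road image, but absent from the map
    if first.kind != second.kind or first.content != second.content:
        return "changed"   # feature type or feature content differs
    return "matched"
```

A position change is handled implicitly: the old location yields a "missing" result and the new location an "added" one, i.e. two target objects.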
  • Step 203: the in-vehicle terminal reports the target object to the server, so that the server updates the target object in the first electronic map to obtain an updated second electronic map.
  • the in-vehicle terminal may report the target object to the server in the form of a change report, and the change report may include the position of the target object in the first electronic map, the feature type of the target object, and the feature content of the target object. Further, the change report may also carry the road image in which the target object was identified, data collected by vehicle-mounted sensors such as the inertial measurement unit (IMU), the global positioning system (GPS), and the wheel speedometer, and information such as the status of each sensor, which are not limited in the embodiment of the present invention.
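  • The contents of such a change report can be sketched as a simple container; all field names here are assumptions, since the embodiment only enumerates the kinds of information a report may carry:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

# Illustrative change-report container (field names are not from the patent).
@dataclass
class ChangeReport:
    position: tuple                 # target object's position in the first electronic map
    feature_type: str               # e.g. "traffic_sign"
    feature_content: str            # e.g. "speed_limit_60"
    image: Optional[bytes] = None   # road image in which the target object was identified
    sensor_data: Dict[str, Any] = field(default_factory=dict)   # IMU / GPS / wheel-speed samples
    sensor_status: Dict[str, str] = field(default_factory=dict) # per-sensor health status

# A minimal report carrying only the mandatory fields:
report = ChangeReport(position=(120.5, 31.2),
                      feature_type="traffic_sign",
                      feature_content="speed_limit_60")
```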
  • Step 204: the server receives the reported target object.
  • the target object is a first road feature that is not matched with any second road feature, or a second road feature that is not matched with any first road feature, which is identified by the in-vehicle terminal performing step 202 described above.
  • the server may receive the above-mentioned change report reported by the in-vehicle terminal.
  • Step 205: the server determines whether the number of times the target object has been reported exceeds a specified threshold. If yes, step 206 is performed; if no, the process ends.
  • the number of reporting times of the target object is the number of reporting times of the same target object.
  • the server can identify, from all the received change reports, the reports that refer to the same target object by using information such as the position of the target object included in each change report, and count the number of times the target object has been reported.
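  • The server-side counting described above can be sketched as follows; grouping nearby reports by a quantised map position plus the feature type is one assumed way of deciding that two reports refer to the same target object (the grid size is an arbitrary illustration, not a value from the patent):

```python
from collections import Counter

GRID = 1.0  # metres; assumed quantisation step for grouping nearby reports

def object_key(report):
    """Key under which reports of the same target object collide."""
    x, y = report["position"]
    return (round(x / GRID), round(y / GRID), report["feature_type"])

def objects_to_update(reports, threshold):
    """Return the keys of target objects reported more than `threshold` times."""
    counts = Counter(object_key(r) for r in reports)
    return {key for key, n in counts.items() if n > threshold}
```

Only objects whose report count exceeds the specified threshold survive, which gives the fault tolerance against single misrecognitions described below.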
  • Step 206: the server updates the target object in the first electronic map to obtain an updated second electronic map.
  • In practice, the in-vehicle terminal may make errors when recognizing the first road feature. For example, if the vehicle-mounted terminal misses a lane line in a road image, then when step 202 is performed, the lane line at the corresponding position in the first electronic map may be identified as a second road feature that does not match any first road feature and be reported to the server as a target object; or, if the vehicle-mounted terminal misrecognizes a pedestrian in the road image as a light pole, then when step 202 is performed, no second road feature matching the misidentified "light pole" may be found, so the misidentified "light pole" may be identified as a first road feature that does not match any second road feature and be reported to the server as a target object. Therefore, the server needs to be fault-tolerant. When the number of reports of the same target object is sufficient (that is, when the number of reports exceeds the specified threshold), it indicates that multiple vehicles equipped with the vehicle-mounted terminal have passed the target object's location and recognized the target object, and at this time there is a high possibility that a map update is genuinely needed at that location. It can be seen that performing steps 205 to 206 can reduce the impact of image recognition errors and improve the stability and reliability of the automatic driving navigation electronic map update.
  • the server may use the information contained in the change report to update the target object to the first electronic map to obtain a new second electronic map.
  • the update operation includes, but is not limited to, adding, deleting, and replacing.
  • the server may publish the second electronic map to all vehicle-mounted terminals in communication with the server, so that all vehicle-mounted terminals can receive the updated second electronic map. That is to say, even for a vehicle that has never passed the target object's location, its in-vehicle terminal can obtain the updated second electronic map through the server; the information of the target object is provided by the vehicle-mounted terminals of other vehicles that have passed the target object's location.
  • the implementation of the above-mentioned embodiment can distribute the map update task to each vehicle-mounted terminal that establishes a communication connection with the server through crowdsourcing, thereby improving the efficiency of the map update and the real-time performance of the automatic driving navigation electronic map generated thereby, which further improves the safety of automatic driving based on the automatic driving navigation electronic map.
  • the in-vehicle terminal can identify mismatched target objects by identifying the first road feature in the road image and matching the first road feature with the second road feature.
  • the target object is reported to the server.
  • When the server receives the same target object a sufficient number of times, it updates the target object into the first electronic map to obtain an updated second electronic map. It can be seen that the entire map update process does not need to rely on humans.
  • the vehicle terminal and server can automatically complete the identification and update of the target object, and have certain fault tolerance, which can improve the update speed and update efficiency of the electronic map for automatic driving navigation, and at the same time improve the map update Stability and reliability.
  • the method described in FIG. 2 can be performed based on the system architecture shown in FIG. 1, so that the map update task can be distributed to each vehicle-mounted terminal that establishes a communication connection with the server through crowdsourcing, which further improves the autonomous driving navigation. Update speed and efficiency of electronic maps.
  • FIG. 3 is a schematic flowchart of another method for generating an automatic driving navigation map disclosed in an embodiment of the present invention.
  • the method for generating the autonomous driving navigation map may include the following steps:
  • Step 301: when the vehicle-mounted terminal performs a positioning calculation on the vehicle, it recognizes a first road feature in a road image captured by the camera.
  • the framing range of the camera includes at least the environment in front of the vehicle.
  • a possible implementation of the vehicle positioning calculation by the vehicle-mounted terminal is as follows: identify the first road feature in the road image, match it against the feature positions in the automatic driving navigation electronic map, and calculate the position of the vehicle in the electronic map, thereby completing the vehicle positioning.
  • map updates and vehicle positioning calculations can share the same image recognition results, which can save computing resources.
  • Step 302: the in-vehicle terminal matches the first road feature with the second road feature in the first electronic map, and identifies a mismatched target object.
  • step 302 may specifically be:
  • Step S401: the vehicle-mounted terminal maps the first road feature into the first electronic map, and obtains a first position of the first road feature in the first electronic map.
  • the vehicle-mounted terminal may acquire multiple road images captured by a camera during the running of the vehicle.
  • for a given frame of road image, the in-vehicle terminal recognizes the first road feature in it, and based on the position of the first road feature in the previous frame of road image, the in-vehicle terminal can calculate the relative distance between the first road feature and the vehicle, that is, extract the depth information of the first road feature.
  • the in-vehicle terminal may calculate the depth information of the first road feature using an algorithm such as an optical flow method.
  • After extracting the depth information, the in-vehicle terminal can calculate the position of the first road feature relative to the vehicle, and then, based on the conversion relationship between the world coordinate system (that is, the coordinate system used by the first electronic map) and the vehicle-centered vehicle coordinate system, calculate the position of the first road feature in the first electronic map, that is, the first position to which the first road feature is mapped in the first electronic map.
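  • The vehicle-to-world conversion described above can be sketched in two dimensions; the vehicle pose (heading and map position) is assumed to come from the positioning calculation, and the planar rigid transform is a simplification of the general conversion relationship:

```python
import numpy as np

def vehicle_to_world(p_vehicle, yaw, t_world):
    """Map a feature position from the vehicle frame into the world frame
    (i.e. the coordinate system used by the first electronic map).

    p_vehicle: (x, y) of the feature relative to the vehicle;
    yaw: vehicle heading in radians; t_world: vehicle position in the map."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s],
                  [s,  c]])                      # 2-D rotation of the vehicle frame
    return R @ np.asarray(p_vehicle) + np.asarray(t_world)
```

For example, a feature 5 m ahead of a vehicle heading "north" (yaw = π/2) at map position (10, 20) lands at map position (10, 25).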
  • Step S402: the in-vehicle terminal determines whether a second road feature matching the first road feature exists at the first position. If yes, step S403 is performed; if no, step S404 is performed.
  • matching the first road feature with the second road feature includes feature type matching and feature content matching.
  • Step S403: the vehicle-mounted terminal acquires the next first road feature identified from the road image, and returns to step S401.
  • In this way, for each first road feature identified from the road image, the in-vehicle terminal may execute the matching method shown in FIG. 4 to determine whether that first road feature is a mismatched target object.
  • Step S404: the in-vehicle terminal determines the first road feature mapped to the first position as a mismatched target object.
  • Afterwards, the in-vehicle terminal may also acquire the next first road feature identified from the road image and return to step S401.
  • the in-vehicle terminal may match multiple first road features in a parallel computing manner, that is, perform matching of multiple first road features simultaneously.
  • the in-vehicle terminal can identify a first road feature that does not match any of the second road features.
  • the first road feature may be a road feature missing from the first electronic map, that is, a road feature that needs to be added to the first electronic map.
  • step 302 may also specifically be:
  • Step S501: the in-vehicle terminal projects the second road feature of the first electronic map onto the road image, and obtains a second position of the second road feature in the road image.
  • the in-vehicle terminal can project the second road feature into the road image according to the conversion relationship between the world coordinate system (that is, the coordinate system used by the first electronic map) and the camera-centered camera coordinate system, so as to obtain the second position of the second road feature in the road image.
  • step S502 The in-vehicle terminal determines whether there is a first road feature matching the second road feature at the second location. If yes, step S503 is performed, and if no, step S504 is performed.
  • matching the first road feature with the second road feature includes feature type matching and feature content matching.
  • the vehicle-mounted terminal acquires the next second road feature identified from the first electronic map, and proceeds to step S501.
  • the in-vehicle terminal determines the second road feature projected to the second position as a mismatched target object.
  • the in-vehicle terminal may match multiple second road features in a parallel computing manner, that is, perform matching of multiple second road features simultaneously.
  • the in-vehicle terminal can identify a second road feature that does not match any of the first road features, and the second road feature may be a road feature that needs to be added to the first electronic map.
  • after the vehicle terminal acquires the road image, it can execute the matching methods shown in FIG. 4 and FIG. 5, so that all possible target objects can be identified from the road image and the first electronic map.
  • the vehicle terminal performs the following steps:
  • the vehicle-mounted terminal reports the target object and the position of the target object in the first electronic map to the server.
  • the position of the target object in the first electronic map may be the position to which the unmatched first road feature is mapped in the first electronic map.
  • the server receives the reported target object.
  • step 305: the server determines whether the number of times the target object has been reported exceeds a specified threshold. If yes, execute step 306; if no, end this process.
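The server-side threshold check might be organized as below; the threshold value and the object key are illustrative assumptions, since the patent specifies neither.

```python
from collections import defaultdict

class ChangeReportAggregator:
    """Count change reports per target object and signal an update only
    once the same object has been reported more than a threshold number
    of times, giving the server the fault tolerance described above."""

    def __init__(self, threshold=5):  # threshold value is illustrative
        self.threshold = threshold
        # object key -> list of reported map positions
        self.positions = defaultdict(list)

    def report(self, object_key, position):
        """Record one report; return True when the report count for this
        object exceeds the threshold and the map update is due."""
        self.positions[object_key].append(position)
        return len(self.positions[object_key]) > self.threshold
```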
  • the server performs data fusion on all position data of the target object to obtain a third position of the target object in the first electronic map.
  • all the position data of the target object is composed of the position of the target object in the first electronic map received each time.
  • the server may receive multiple locations of the target object in the first electronic map.
  • the server may perform data fusion based on the received multiple position data, thereby determining a relatively accurate position of the target object in the first electronic map (that is, the third position described above).
  • the data fusion method may include weighted averaging, clustering, and optimization methods, which are not limited in the embodiment of the present invention. Due to factors such as vehicle speed, sensor status, and light intensity, there may be a large error between the position of the target object in the first electronic map reported by a single vehicle-mounted terminal and the actual position of the target object. Performing data fusion on the position data received multiple times can reduce the error of a single observation and improve the accuracy of the target object's location.
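Of the fusion methods listed (weighted averaging, clustering, optimization), the weighted average is the simplest to sketch. The per-report confidence weights are an assumption for illustration; the patent does not specify how individual reports would be weighted.

```python
def fuse_positions(positions, weights=None):
    """Fuse repeated position reports for one target object by weighted
    averaging, yielding the 'third position' used for the map update.

    positions: list of (x, y) map coordinates reported for the object.
    weights: optional per-report confidences (e.g. derived from sensor
        state); defaults to equal weighting.
    """
    if weights is None:
        weights = [1.0] * len(positions)
    total = sum(weights)
    # Weighted mean of each coordinate across all received reports.
    x = sum(w * p[0] for w, p in zip(weights, positions)) / total
    y = sum(w * p[1] for w, p in zip(weights, positions)) / total
    return (x, y)
```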
  • the server updates the target object at the third position of the first electronic map.
  • the vehicle-mounted terminal and the server can automatically complete the identification and update of the target object.
  • the map update and the vehicle positioning calculation can share the same image recognition result, thereby saving computing resources.
  • the server uses the position data of the target object received multiple times to perform data fusion, thereby determining a more accurate position of the target object in the first electronic map, and updates the target object at that position, which can reduce the error of a single observation and improve the accuracy of the target object's location.
  • FIG. 6 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
  • the vehicle-mounted terminal may include:
  • the identification unit 601 is configured to identify a first road feature in a road image captured by a camera; wherein, the framing range of the camera includes at least the front environment of the vehicle in which the vehicle-mounted terminal is located.
  • the recognition unit 601 may recognize a first road feature in a road image by using a pre-trained semantic feature detection model, and the semantic feature detection model may be a deep learning neural network.
  • the above-mentioned semantic feature detection model is a deep learning neural network obtained by training with a large number of sample images labeled with the above-mentioned first road features as training input.
  • the recognition unit 601 can improve the accuracy of road feature recognition under special lighting conditions, thereby reducing the rate of missed detection of road features and improving the stability of vision-based map update schemes.
  • a matching unit 602 is configured to match the first road feature identified by the recognition unit 601 against the second road features in the first electronic map to identify a target object that does not match; the above-mentioned target object is a first road feature that does not match any second road feature, or a second road feature that does not match any first road feature.
  • the mismatch between the first road feature and the second road feature may include but is not limited to the following three cases: the absence of road features, the addition of road features, and the modification of road features.
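The three mismatch cases can be sketched as set operations over features keyed by (rounded) map position; the dict-based representation below is an illustrative assumption, not the patent's data model.

```python
def classify_mismatches(image_features, map_features):
    """Classify feature mismatches into the three cases named above.

    Both arguments are dicts mapping a (rounded) map position to a
    (feature_type, content) tuple. Returns (added, missing, changed):
    positions present only in the image (added features), only in the
    map (missing features), or in both but with a differing type or
    content (changed features).
    """
    added = [p for p in image_features if p not in map_features]
    missing = [p for p in map_features if p not in image_features]
    changed = [p for p in image_features
               if p in map_features and image_features[p] != map_features[p]]
    return added, missing, changed
```

For example, a speed-limit sign whose content changed from 60 to 80 at the same position would land in `changed`.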
  • the communication unit 603 is configured to report the target object identified by the matching unit 602 to the server, so as to update the target object in the first electronic map through the server to obtain an updated second electronic map.
  • the communication unit 603 may report the target object to the server in the form of a change report, and the change report may include information related to the target object, such as the position of the target object in the autonomous driving navigation map as recognized by the vehicle terminal, the feature type of the target object, and the content of the target object.
  • the change report may further include data collected by each sensor installed on the vehicle, and information such as the status of each sensor, which is not limited in the embodiment of the present invention.
  • the implementation of the vehicle-mounted terminal shown in FIG. 6 can automatically identify target objects that may need to be updated and report them to the server, so that the target objects can be updated through the server, improving the update speed and update efficiency of the autonomous driving navigation electronic map.
  • FIG. 7 is a schematic structural diagram of another vehicle-mounted terminal disclosed by an embodiment of the present invention.
  • the above-mentioned matching unit 602 may include:
  • a conversion subunit 6021 configured to map the first road feature identified by the identification unit 601 to the first electronic map to obtain a first position of the first road feature in the first electronic map; or to project the second road feature of the first electronic map onto the road image to obtain a second position of the second road feature in the road image;
  • a judging subunit 6022 configured to judge whether a second road feature matching the first road feature exists at the first position determined by the conversion subunit 6021; or to judge whether a first road feature matching the second road feature exists at the second position determined by the conversion subunit 6021;
  • a determining subunit 6023 configured to determine the first road feature mapped to the first position as an unmatched target object when the judging subunit 6022 determines that no second road feature matching the first road feature exists at the first position; or to determine the second road feature projected to the second position as an unmatched target object when the judging subunit 6022 determines that no first road feature matching the second road feature exists at the second position.
  • the manner in which the above-mentioned communication unit 603 is used to report the target object to the server is specifically:
  • a communication unit 603, configured to report the target object and the position of the target object in the first electronic map to the server;
  • the position of the target object in the first electronic map is that the first road feature that does not match is mapped to the position in the first electronic map;
  • the position of the target object in the first electronic map is the position of the second road feature that does not match in the first electronic map.
  • the manner in which the above-mentioned identification unit 601 is used to identify the first road feature in the road image captured by the camera is specifically:
  • the identification unit 601 is configured to identify a first road feature in a road image captured by a camera when performing a positioning calculation on a vehicle.
  • the matching unit 602 and a positioning unit (not shown) that may be included in the in-vehicle terminal may share the image recognition result (that is, the first road feature) obtained by the recognition unit 601.
  • the matching unit 602 may be used to match the first road feature and the second road feature in the first electronic map to identify a target object that does not match.
  • the above-mentioned positioning unit may be used to identify a second road feature that matches the first road feature, and to determine the position of the vehicle in the first electronic map based on the position in the first electronic map of the second road feature that matches the first road feature.
  • the implementation of the vehicle-mounted terminal shown in FIG. 7 can automatically identify target objects that may need to be updated and can determine the position of each target object in the first electronic map, so that the target object and its position in the first electronic map can be reported to the server together, allowing the server to update the target object. Further, when the vehicle-mounted terminal shown in FIG. 7 is implemented, the image recognition result obtained by the recognition unit 601 can be shared by the matching unit 602 and a positioning unit that the vehicle-mounted terminal may include, which is beneficial to saving computing resources.
  • an embodiment of the present invention also discloses a computer-readable storage medium that stores a computer program, where the computer program causes a computer to execute the steps performed by the vehicle-mounted terminal in any of the methods for generating an autonomous driving navigation map disclosed in Embodiment 1 or Embodiment 2.
  • An embodiment of the present invention also discloses a computer program product.
  • the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute the steps performed by the vehicle-mounted terminal in any of the methods for generating an autonomous driving navigation map disclosed in Embodiment 1 or Embodiment 2.
  • FIG. 8 is a schematic structural diagram of a server disclosed by an embodiment of the present invention.
  • the server may include:
  • the transceiver unit 801 is configured to receive a target object reported by a vehicle-mounted terminal.
  • the target object is a first road feature identified by the vehicle terminal that does not match any second road feature, or a second road feature that does not match any first road feature; the first road feature is a road feature identified from a road image, and the second road feature is a road feature in the first electronic map.
  • a judging unit 802 configured to judge whether the number of reporting times of the target object received by the transceiver unit 801 exceeds a specified threshold
  • the number of reporting times of the target object is the number of reporting times of the same target object.
  • the updating unit 803 is configured to update the above-mentioned target object in the first electronic map when the determining unit 802 determines that the number of reporting times of the target object exceeds a specified threshold to obtain an updated second electronic map.
  • the update unit 803 will update the target object in the first electronic map only when the number of reporting times of the same target object is sufficient, thereby improving the stability and reliability of the automatic driving navigation electronic map update.
  • the operation of updating the target object in the first electronic map by the update unit 803 includes, but is not limited to, adding, deleting, and replacing the above target object in the first electronic map.
  • the above-mentioned update unit 803 may include:
  • a fusion subunit 8031 configured to, when it is determined that the number of reporting times of the target object exceeds the specified threshold, perform data fusion on all received position data of the target object to obtain a third position of the target object in the first electronic map; wherein all the position data of the target object is composed of the positions of the target object in the first electronic map received each time; the data fusion method may include weighted averaging, clustering, and optimization methods, which are not limited in the embodiment of the present invention.
  • the update subunit 8032 is configured to update the target object at the third position determined by the fusion subunit 8031 to obtain an updated second electronic map.
  • the target object reported by the in-vehicle terminal can be received, and the target object can be updated in the first electronic map, so that the automatic update of the target object can be completed, improving the update speed and update efficiency of the automatic driving navigation electronic map.
  • the server shown in FIG. 8 can also determine whether the number of reporting times of the same target object exceeds the specified threshold, and only when the number of reporting times exceeds the specified threshold does it perform the operation of updating the target object in the first electronic map, thereby improving the stability and reliability of automatic driving navigation electronic map updates. Further, when the server shown in FIG. 8 is implemented, the server uses the position data of the target object received multiple times to perform data fusion, thereby determining a more accurate position of the target object in the first electronic map, and updates the target object at that position, thereby reducing the error of a single observation and improving the accuracy of the target object's position.
  • an embodiment of the present invention also discloses a computer-readable storage medium that stores a computer program, where the computer program causes a computer to execute the steps performed by the server in any of the methods for generating an autonomous driving navigation map disclosed in Embodiment 1 or Embodiment 2.
  • An embodiment of the present invention also discloses a computer program product.
  • the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute the steps performed by the server in any of the methods for generating an autonomous driving navigation map disclosed in Embodiment 1 or Embodiment 2.
  • FIG. 9 is a schematic structural diagram of a system for generating an automatic driving navigation map according to an embodiment of the present invention.
  • the system for generating an autonomous driving navigation map may include:
  • the in-vehicle terminal 901 can be used to acquire data collected by sensors such as a camera, IMU, GPS, and tachometer installed on the vehicle, and to process the data collected by each sensor; specifically, the in-vehicle terminal 901 can be used to execute the steps performed by the vehicle-mounted terminal in the method for generating an automatic driving navigation map disclosed in Embodiment 1 or Embodiment 2.
  • the server 902 may be configured to execute steps performed by the server in the method for generating an automatic driving navigation map disclosed in the first embodiment or the second embodiment.
  • the map update task can be distributed through crowdsourcing to each vehicle-mounted terminal that establishes a communication connection with the server, which can improve the efficiency of map updates and the real-time nature of the resulting automatic driving navigation electronic map, thereby improving the safety of automatic driving based on the automatic driving navigation electronic map.
  • reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention.
  • the appearances of "in one embodiment” or “in an embodiment” appearing throughout the specification are not necessarily referring to the same embodiment.
  • the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present invention.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • the functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a memory and includes a number of instructions to cause a computer device (which may be a personal computer, a server, or a network device, specifically a processor in a computer device) to perform some or all of the steps of the foregoing methods of the various embodiments of the present invention.
  • the program may be stored in a computer-readable storage medium, and the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.


Abstract

A method, system, vehicle-mounted terminal, and server for generating an automatic driving navigation map. The method includes: the vehicle-mounted terminal identifies a first road feature in a road image captured by a camera (201); the vehicle-mounted terminal matches the first road feature against second road features in a first electronic map and identifies an unmatched target object (202); the vehicle-mounted terminal reports the target object to the server (203); the server receives the reported target object (204); the server determines whether the number of times the target object has been reported exceeds a specified threshold (205); when the threshold is exceeded, the server updates the target object in the first electronic map to obtain an updated second electronic map (206). With this method, the automatic driving navigation electronic map can be updated automatically without manual annotation, thereby improving the update efficiency and update speed of the map.

Description

Method, System, Vehicle-Mounted Terminal, and Server for Generating an Automatic Driving Navigation Map
Technical Field
The present invention relates to the technical field of automatic driving, and in particular to a method, system, vehicle-mounted terminal, and server for generating an automatic driving navigation map.
Background Art
An automatic driving navigation map is an important component of automatic driving technical solutions and is the basis for realizing automatic driving navigation. In real life, road conditions may change at any time; for example, road signs may be damaged, added, or replaced, and road junctions may be closed due to temporary construction. To ensure driving safety, the electronic map needs to be updated in time to adapt to actual road condition changes and reduce the probability of accidents.
Existing electronic map update solutions rely on map producers driving dedicated map data collection vehicles along road sections where changes may have occurred, manually annotating the changed target objects (such as signs, lamp posts, etc.) in the collected road images, and then regenerating the map using the road images containing these target objects to complete the map update. In practice, however, it has been found that such an update solution requires manual annotation to identify the changed target objects, resulting in low map update efficiency and slow update speed.
Summary of the Invention
Embodiments of the present invention disclose a method, system, and server for generating an automatic driving navigation map, which can improve the update efficiency and update speed of an automatic driving navigation electronic map.
A first aspect of the embodiments of the present invention discloses a method for generating an automatic driving navigation map, the method including:
identifying a first road feature in a road image captured by a camera, where the framing range of the camera includes at least the environment in front of the vehicle;
matching the first road feature against second road features in a first electronic map and identifying an unmatched target object, the target object being a first road feature that does not match any of the second road features or a second road feature that does not match any of the first road features; wherein a vehicle-mounted terminal acquires multiple road images captured by the camera; for a given frame of road image, the vehicle-mounted terminal identifies the first road feature therein, and based on the position of the first road feature in the previous frame of road image, the vehicle-mounted terminal calculates the relative distance between the first road feature and the vehicle and extracts the first road feature;
reporting the target object to a server so that the server updates the target object in the first electronic map to obtain an updated second electronic map.
As an optional implementation, in the first aspect of the embodiments of the present invention, matching the first road feature against the second road features in the first electronic map and identifying an unmatched target object includes:
mapping the first road feature to the first electronic map to obtain a first position of the first road feature in the first electronic map, and determining whether a second road feature matching the first road feature exists at the first position; if not, determining the first road feature mapped to the first position as an unmatched target object;
or, projecting the second road feature of the first electronic map onto the road image to obtain a second position of the second road feature in the road image, and determining whether a first road feature matching the second road feature exists at the second position; if not, determining the second road feature projected to the second position as an unmatched target object.
As an optional implementation, in the first aspect of the embodiments of the present invention, reporting the target object to the server includes:
reporting the target object and the position of the target object in the first electronic map to the server, so that the server updates the target object in the first electronic map to obtain an updated second electronic map;
wherein, when the target object is a first road feature that does not match any of the second road features, the position of the target object in the first electronic map is the position to which the unmatched first road feature is mapped in the first electronic map; when the target object is a second road feature that does not match any of the first road features, the position of the target object in the first electronic map is the position of the unmatched second road feature in the first electronic map.
As an optional implementation, in the first aspect of the embodiments of the present invention, identifying the first road feature in the road image captured by the camera includes:
identifying the first road feature in the road image captured by the camera while performing positioning calculation for the vehicle.
A second aspect of the embodiments of the present invention discloses another method for generating an automatic driving navigation map, the method including:
receiving a reported target object, the target object being a first road feature identified by a vehicle-mounted terminal that does not match any second road feature, or a second road feature that does not match any first road feature; the first road feature is a road feature identified from a road image, and the second road feature is a road feature in a first electronic map; the mismatch includes one or more of a missing road feature, an added road feature, and a changed road feature;
determining whether the number of times the target object has been reported exceeds a specified threshold; if so, updating the target object in the first electronic map to obtain an updated second electronic map.
As an optional implementation, in the second aspect of the embodiments of the present invention, updating the target object in the first electronic map includes:
performing data fusion on all received position data of the target object to obtain a third position of the target object in the first electronic map, where all the position data consists of the positions of the target object in the first electronic map received each time;
updating the target object at the third position of the first electronic map.
A third aspect of the embodiments of the present invention discloses a vehicle-mounted terminal, including:
an identification unit configured to identify a first road feature in a road image captured by a camera, where the framing range of the camera includes at least the environment in front of the vehicle in which the vehicle-mounted terminal is located;
a matching unit configured to match the first road feature against second road features in a first electronic map and identify an unmatched target object, the target object being a first road feature that does not match any of the second road features or a second road feature that does not match any of the first road features;
a communication unit configured to report the target object to a server so that the server updates the target object in the first electronic map to obtain an updated second electronic map.
As an optional implementation, in the third aspect of the embodiments of the present invention, the matching unit includes:
a conversion subunit configured to map the first road feature to the first electronic map to obtain a first position of the first road feature in the first electronic map; or to project the second road feature of the first electronic map onto the road image to obtain a second position of the second road feature in the road image;
a judging subunit configured to judge whether a second road feature matching the first road feature exists at the first position; or to judge whether a first road feature matching the second road feature exists at the second position;
a determining subunit configured to determine the first road feature mapped to the first position as an unmatched target object when the judging subunit determines that no second road feature matching the first road feature exists at the first position; or to determine the second road feature projected to the second position as an unmatched target object when the judging subunit determines that no first road feature matching the second road feature exists at the second position.
As an optional implementation, in the third aspect of the embodiments of the present invention, the manner in which the communication unit reports the target object to the server is specifically:
the communication unit is configured to report the target object and the position of the target object in the first electronic map to the server;
wherein, when the target object is a first road feature that does not match any of the second road features, the position of the target object in the first electronic map is the position to which the unmatched first road feature is mapped in the first electronic map; when the target object is a second road feature that does not match any of the first road features, the position of the target object in the first electronic map is the position of the unmatched second road feature in the first electronic map.
As an optional implementation, in the third aspect of the embodiments of the present invention, the manner in which the identification unit identifies the first road feature in the road image captured by the camera is specifically:
the identification unit is configured to identify the first road feature in the road image captured by the camera while positioning calculation is performed for the vehicle.
A fourth aspect of the embodiments of the present invention discloses a server, including:
a transceiver unit configured to receive a reported target object, the target object being a first road feature identified by a vehicle-mounted terminal that does not match any second road feature, or a second road feature that does not match any first road feature; the first road feature is a road feature identified from a road image, and the second road feature is a road feature in a first electronic map; a vehicle-mounted terminal acquires multiple road images captured by a camera; for a given frame of road image, the vehicle-mounted terminal identifies the first road feature therein, and based on the position of the first road feature in the previous frame of road image, the vehicle-mounted terminal calculates the relative distance between the first road feature and the vehicle and extracts the first road feature;
a judging unit configured to judge whether the number of times the target object has been reported exceeds a specified threshold;
an updating unit configured to update the target object in the first electronic map to obtain an updated second electronic map when the judging unit determines that the number of times the target object has been reported exceeds the specified threshold.
As an optional implementation, in the fourth aspect of the embodiments of the present invention, the updating unit includes:
a fusion subunit configured to perform data fusion on all received position data of the target object to obtain a third position of the target object in the first electronic map when the judging unit determines that the number of times the target object has been reported exceeds the specified threshold, where all the position data consists of the positions of the target object in the first electronic map received each time;
an updating subunit configured to update the target object at the third position to obtain an updated second electronic map.
A fifth aspect of the embodiments of the present invention discloses a system for generating an automatic driving navigation map, including: the vehicle-mounted terminal disclosed in the third aspect of the embodiments of the present invention;
and the server disclosed in the fourth aspect of the embodiments of the present invention.
A sixth aspect of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute any of the methods disclosed in the first aspect of the embodiments of the present invention.
A seventh aspect of the embodiments of the present invention discloses a computer program product that, when run on a computer, causes the computer to execute any of the methods disclosed in the first aspect of the embodiments of the present invention.
An eighth aspect of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute any of the methods disclosed in the second aspect of the embodiments of the present invention.
A ninth aspect of the embodiments of the present invention discloses a computer program product that, when run on a computer, causes the computer to execute any of the methods disclosed in the second aspect of the embodiments of the present invention.
Compared with the prior art, the inventive points and beneficial effects of the present invention are as follows:
1. The vehicle-mounted terminal can identify first road features in road images and match the first road features against second road features, identify unmatched target objects, and report them to the server. When the server has received the same target object a sufficient number of times, it updates the target object into the first electronic map to obtain an updated second electronic map. It can be seen that the entire map update process does not rely on manual work; the vehicle-mounted terminal and the server can automatically complete the identification and updating of target objects, which can improve the update speed and update efficiency of the automatic driving navigation electronic map while improving the stability and reliability of the map update.
2. Based on the system for generating an automatic driving navigation electronic map disclosed by the present invention, the map update task can be distributed through crowdsourcing to each vehicle-mounted terminal that establishes a communication connection with the server, further improving the update speed and update efficiency of the automatic driving navigation electronic map.
3. The vehicle-mounted terminal can share the same image recognition result when identifying target objects to be updated and when performing vehicle positioning calculation, which can save computing resources.
4. The server performs data fusion using the position data of a target object received multiple times to determine a more accurate position of the target object in the first electronic map and updates the target object at that position, which can reduce the error of a single observation and improve the accuracy of the target object's location.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic architecture diagram of a system architecture disclosed by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a method for generating an automatic driving navigation map disclosed by an embodiment of the present invention;
FIG. 3 is a schematic flowchart of another method for generating an automatic driving navigation map disclosed by an embodiment of the present invention;
FIG. 4 is a schematic flowchart of one manner of executing step 302 in FIG. 3 disclosed by an embodiment of the present invention;
FIG. 5 is a schematic flowchart of another manner of executing step 302 in FIG. 3 disclosed by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a vehicle-mounted terminal disclosed by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of another vehicle-mounted terminal disclosed by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a server disclosed by an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a system for generating an automatic driving navigation map disclosed by an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
It should be noted that the terms "including" and "having" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device including a series of steps or units is not limited to the listed steps or units, but optionally also includes unlisted steps or units, or optionally also includes other steps or units inherent to the process, method, product, or device.
Embodiments of the present invention disclose a method, system, vehicle-mounted terminal, and server for generating an automatic driving navigation map, which can improve the update efficiency and update speed of the automatic driving navigation electronic map. Each is described in detail below.
To better understand the method, system, vehicle-mounted terminal, and server for generating an automatic driving navigation map disclosed by the embodiments of the present invention, the system architecture applicable to the embodiments of the present invention is first described. Please refer to FIG. 1, which is a schematic architecture diagram of a system architecture disclosed by an embodiment of the present invention.
As shown in FIG. 1, the system architecture may include vehicle-mounted terminals (not shown) installed on multiple vehicles and a server in communication connection with the above vehicle-mounted terminals. The vehicle-mounted terminals and the server may communicate via mobile communication based on operator base stations (such as LTE or 5G) or via a wireless local area network (WLAN). The system architecture may be deployed in a system of any one or a combination of LTE, 5G, and WLAN. The server may be a device for maintaining the automatic driving navigation electronic map and providing users with services such as map uploading, downloading, and updating; the vehicle-mounted terminal may be one that accepts crowdsourced map update tasks, where a crowdsourced map update task refers to outsourcing the map update task to non-specific vehicle-mounted terminals on a free and voluntary basis; a vehicle-mounted terminal that accepts a crowdsourced map update task discloses the sensor data collected during its journey and/or uploads the above sensor data to the server for updating the automatic driving navigation electronic map.
Embodiment 1
Please refer to FIG. 2, which is a schematic flowchart of a method for generating an automatic driving navigation map disclosed by an embodiment of the present invention. The method is applied to vehicle-mounted terminals such as on-board computers and in-vehicle industrial personal computers (IPC), which is not limited in the embodiments of the present invention. As shown in FIG. 2, the method for generating an automatic driving navigation map may include the following steps:
201. The vehicle-mounted terminal identifies a first road feature in a road image captured by a camera.
In the embodiment of the present invention, the framing range of the camera includes at least the environment in front of the vehicle; data transmission may exist between the camera and the vehicle-mounted terminal, and the vehicle-mounted terminal acquires in real time the road images captured by the camera while the vehicle is traveling. As an optional implementation, the vehicle-mounted terminal may identify the first road feature in the road image using a pre-trained semantic feature detection model, which may be a deep learning neural network; the semantic feature detection model is obtained by training the deep learning neural network with a large number of sample images labeled with the above first road features as input. Compared with traditional image recognition methods such as image segmentation, using a deep learning neural network to identify the first road feature can maintain good image recognition performance under poor lighting conditions such as rain, snow, and dusk, or under special lighting conditions such as camera backlighting, improving the accuracy of road feature recognition under special lighting conditions, thereby reducing the missed detection rate of road features and improving the stability of vision-based map update solutions.
202. The vehicle-mounted terminal matches the first road feature against second road features in the first electronic map and identifies an unmatched target object.
In the embodiment of the present invention, the target object is a first road feature that does not match any second road feature or a second road feature that does not match any first road feature. The first road feature may be a road feature identified by the vehicle-mounted terminal from an image, and the second road feature may be a road feature in the first electronic map. Specifically, the above road features may be landmark objects on and around the road that have been empirically screened and can be used for position determination. For example, road features may be traffic signs (such as road signs and speed limit signs), lane lines, lamp posts, road points of interest (POI), etc., which are not limited in the embodiments of the present invention. For convenience of description, unless otherwise specified, "road feature" is used below as a general term for "first road feature" and "second road feature". In addition, the above first electronic map is an automatic driving navigation electronic map, which may be a three-dimensional vector map. Therefore, the first road feature can be understood as the two-dimensional representation of a road feature in the road image captured by the camera, and the second road feature can be understood as the three-dimensional representation of a road feature in the pre-built first electronic map.
A match between a first road feature and a second road feature may include at least a feature type match, a feature position match, and a feature content match. Correspondingly, a mismatch between a first road feature and a second road feature may include but is not limited to the following three cases:
Missing road feature: a certain second road feature exists in the first electronic map, but no first road feature matching that second road feature exists in the road image; the missing road feature can be represented as the above second road feature that does not match any first road feature.
Added road feature: a certain first road feature exists in the road image, but no second road feature matching that first road feature exists in the first electronic map; the added road feature can be represented as the above first road feature that does not match any second road feature.
Changed road feature: including changes to a road feature's position, type, and content. A change in a road feature's position may indicate that the road feature has moved from position A to position B; that is, it can be represented as the absence of the road feature at position A and the addition of the road feature at position B. Therefore, a road feature whose position has changed can be represented by two target objects: the above second road feature that does not match any first road feature and the above first road feature that does not match any second road feature. A feature type change may include a type mismatch between the first road feature and the second road feature. For example, a certain first road feature exists in the road image and is a traffic sign, while a second road feature exists at the corresponding position in the first electronic map and is a lamp post; in this case, the first road feature and the second road feature can be considered mismatched in feature type. A feature content change may include a content mismatch between the first road feature and the second road feature. For example, a certain first road feature exists in the road image and is a traffic sign whose content is a speed limit of 60 for the road section, while a second road feature exists at the corresponding position in the first electronic map and is a traffic sign whose content is a speed limit of 80 for the road section; in this case, the first road feature and the second road feature can be considered mismatched in feature content. In summary, road features with a changed feature type or feature content can both be represented as the above first road feature that does not match any second road feature.
203. The vehicle-mounted terminal reports the target object to the server, so that the server updates the target object in the first electronic map to obtain an updated second electronic map.
In the embodiment of the present invention, the vehicle-mounted terminal may report the target object to the server in the form of a change report. The change report may include information related to the target object, such as the position of the target object in the automatic driving navigation map as identified by the vehicle-mounted terminal, the feature type of the target object, and the content of the target object. Further, the change report may also include data collected at the time the target object was captured by sensors installed on the vehicle, such as an inertial measurement unit (IMU), a global positioning system (GPS), and a wheel speedometer, as well as the status of each of the above sensors, which is not limited in the embodiments of the present invention.
204. The server receives the reported target object.
In the embodiment of the present invention, the target object is a first road feature that does not match any second road feature or a second road feature that does not match any first road feature, as identified by the vehicle-mounted terminal in step 202. Specifically, the server may receive the change report reported by the vehicle-mounted terminal as described above.
205. The server determines whether the number of times the target object has been reported exceeds a specified threshold; if so, step 206 is executed; if not, this process ends.
In the embodiment of the present invention, the number of times the target object has been reported is the number of times the same target object has been reported. As an optional implementation, the server may identify, from all received change reports, target objects identical to the given target object based on information such as the target object's position contained in the change reports, and count the number of times the target object has been reported.
206. The server updates the target object in the first electronic map to obtain an updated second electronic map.
In the embodiment of the present invention, there may be some recognition error when the vehicle-mounted terminal identifies first road features. For example, if the vehicle-mounted terminal misses a lane line in the road image, then when step 202 is executed, the lane line in the first electronic map corresponding to the lane line in the image may be identified as a second road feature that does not match any road feature and reported to the server as a target object. Or, if the vehicle-mounted terminal mistakes a pedestrian in the road image for a lamp post, then when step 202 is executed, no lamp post object matching the misidentified "lamp post" may be found in the first electronic map, and the misidentified "lamp post" may be identified as a first road feature that does not match any second road feature and reported to the server as a target object. Therefore, the server needs a certain degree of fault tolerance. When the same target object has been reported a sufficient number of times (that is, when the number of reports exceeds the specified threshold), this may indicate that multiple vehicles equipped with the above vehicle-mounted terminal have all identified the target object when passing its location, in which case it is more likely that a map update is truly needed at that location. It can be seen that executing steps 205 to 206 can reduce the impact of image recognition errors and improve the stability and reliability of the automatic driving navigation electronic map update.
In addition, the server may update the target object into the first electronic map using the information contained in the change report to obtain the new second electronic map; the update operation includes but is not limited to adding, deleting, and replacing.
Please also refer to FIG. 1. Further optionally, after obtaining the second electronic map, the server may publish the second electronic map to all vehicle-mounted terminals in communication connection with the server, so that all vehicle-mounted terminals can receive the updated second electronic map. That is to say, even for a vehicle that has never passed the location of the target object, its vehicle-mounted terminal can obtain the updated second electronic map from the server, while the information about the target object was provided by the vehicle-mounted terminals of other vehicles that passed the target object's location. It can be seen that implementing the above implementation allows the map update task to be distributed through crowdsourcing to each vehicle-mounted terminal that establishes a communication connection with the server, thereby improving the efficiency of map updates and the real-time nature of the automatic driving navigation electronic map generated thereby, and in turn improving the safety of automatic driving based on that automatic driving navigation electronic map.
In summary, in the method described in FIG. 2, the vehicle-mounted terminal can identify first road features in road images and match them against second road features to identify unmatched target objects and report them to the server. When the server has received the same target object a sufficient number of times, it updates it into the first electronic map to obtain the updated second electronic map. It can be seen that the entire map update process does not rely on manual work; the vehicle-mounted terminal and the server can automatically complete the identification and updating of target objects with a certain degree of fault tolerance, which can improve the update speed and update efficiency of the automatic driving navigation electronic map while improving the stability and reliability of the map update. Further, the method described in FIG. 2 can be performed based on the system architecture shown in FIG. 1, so that the map update task can be distributed through crowdsourcing to each vehicle-mounted terminal that establishes a communication connection with the server, further improving the update speed and update efficiency of the automatic driving navigation electronic map.
Embodiment 2
Please refer to FIG. 3, which is a schematic flowchart of another method for generating an automatic driving navigation map disclosed by an embodiment of the present invention. As shown in FIG. 3, the method for generating an automatic driving navigation map may include the following steps:
301. The vehicle-mounted terminal identifies a first road feature in a road image captured by a camera while performing positioning calculation for the vehicle.
In the embodiment of the present invention, the framing range of the camera includes at least the environment in front of the vehicle. Those skilled in the art can understand that one possible implementation of the vehicle-mounted terminal performing positioning calculation for the vehicle may be as follows: identify the first road feature in the road image, and calculate the position of the vehicle in the automatic navigation electronic map based on the position of the feature in the automatic navigation electronic map that matches the first road feature, thereby completing vehicle positioning. That is to say, the map update and the vehicle positioning calculation can share the same image recognition result, which can save computing resources.
302. The vehicle-mounted terminal matches the first road feature against second road features in the first electronic map and identifies an unmatched target object.
In the embodiment of the present invention, please also refer to FIG. 4. As an optional implementation, the manner in which the vehicle-mounted terminal executes step 302 may specifically be:
S401. The vehicle-mounted terminal maps the first road feature to the first electronic map to obtain a first position of the first road feature in the first electronic map.
In the embodiment of the present invention, the vehicle-mounted terminal may acquire multiple road images captured by the camera while the vehicle is traveling. For a given frame of road image, the vehicle-mounted terminal identifies the first road feature therein, and based on the position of the first road feature in the previous frame of road image, the vehicle-mounted terminal can calculate the relative distance between the first road feature and the vehicle, that is, extract the depth information of the first road feature. Optionally, the vehicle-mounted terminal may calculate the depth information of the first road feature using algorithms such as the optical flow method. Once the depth information of the first road feature and the pose information of the vehicle are determined, the vehicle-mounted terminal can calculate the position of the first road feature relative to the vehicle, and then, using the conversion relationship between the world coordinate system (that is, the coordinate system used by the first electronic map) and the vehicle-centered vehicle coordinate system, calculate the position in the first electronic map of the first road feature in the road image, that is, the first position to which the first road feature is mapped in the first electronic map.
S402. The vehicle-mounted terminal determines whether a second road feature matching the first road feature exists at the first position; if so, step S403 is executed; if not, step S404 is executed.
In the embodiment of the present invention, a match between a first road feature and a second road feature includes a feature type match and a feature content match.
S403. The vehicle-mounted terminal acquires the next first road feature identified from the road image and continues to execute step S401.
In the embodiment of the present invention, multiple first road features may exist in the road image. For each identified first road feature, the vehicle-mounted terminal can execute the matching method shown in FIG. 4 to determine whether that first road feature is an unmatched target object.
S404. The vehicle-mounted terminal determines the first road feature mapped to the first position as an unmatched target object.
In the embodiment of the present invention, if multiple first road features exist in the road image, then after the vehicle-mounted terminal determines that a certain first road feature is a target object, the vehicle-mounted terminal may also acquire the next first road feature identified from the road image and continue to execute step S401.
It should be noted that when multiple first road features exist in the road image, the matching method shown in FIG. 4 completes the matching of one first road feature before proceeding to the matching of the next. In other possible embodiments, the vehicle-mounted terminal may match multiple first road features in a parallel computing manner, that is, perform the matching of multiple first road features simultaneously.
By implementing the implementation shown in FIG. 4, the vehicle-mounted terminal can identify a first road feature that does not match any second road feature; the first road feature may be a road feature missing from the first electronic map or a road feature that has been changed in the first electronic map.
In addition, please also refer to FIG. 5. As another optional implementation, the manner in which the vehicle-mounted terminal executes step 302 may specifically also be:
S501. The vehicle-mounted terminal projects the second road feature of the first electronic map onto the above road image to obtain a second position of the second road feature in the road image.
In the embodiment of the present invention, the vehicle-mounted terminal can project the second road feature into the road image according to the conversion relationship between the world coordinate system (that is, the coordinate system used by the first electronic map) and the camera-centered camera coordinate system, thereby obtaining the second position of the second road feature in the road image.
S502. The vehicle-mounted terminal determines whether a first road feature matching the second road feature exists at the second position; if so, step S503 is executed; if not, step S504 is executed.
In the embodiment of the present invention, a match between a first road feature and a second road feature includes a feature type match and a feature content match.
S503. The vehicle-mounted terminal acquires the next second road feature identified from the first electronic map and continues to execute step S501.
S504. The vehicle-mounted terminal determines the second road feature projected to the second position as an unmatched target object.
It should be noted that when multiple second road features exist in the first electronic map, the matching method shown in FIG. 5 completes the matching of one second road feature before proceeding to the matching of the next. In other possible embodiments, the vehicle-mounted terminal may match multiple second road features in a parallel computing manner, that is, perform the matching of multiple second road features simultaneously.
By implementing the implementation shown in FIG. 5, the vehicle-mounted terminal can identify a second road feature that does not match any first road feature; the second road feature may be a road feature that needs to be added to the first electronic map.
In addition, those skilled in the art can understand that in some possible embodiments, after acquiring the road image, the vehicle-mounted terminal can execute the matching methods shown in FIG. 4 and FIG. 5, so that all possible target objects can be identified from the road image and the first electronic map.
Please continue to refer to FIG. 3. After identifying the unmatched target object, the vehicle-mounted terminal executes the following steps:
303. The vehicle-mounted terminal reports the target object and the position of the target object in the first electronic map to the server.
In the embodiment of the present invention, if the reported target object is a first road feature that does not match any second road feature, the position of the target object in the first electronic map may be the position to which the unmatched first road feature is mapped in the first electronic map; if the reported target object is a second road feature that does not match any first road feature, the position of the target object in the first electronic map may be the position of the unmatched second road feature in the first electronic map.
304. The server receives the reported target object.
305. The server determines whether the number of times the target object has been reported exceeds a specified threshold; if so, step 306 is executed; if not, this process ends.
306. The server performs data fusion on all position data of the target object to obtain a third position of the target object in the first electronic map.
In the embodiment of the present invention, all the position data of the target object consists of the positions of the target object in the first electronic map received each time. For the same target object, the server may receive multiple positions of the target object in the first electronic map. The server can perform data fusion on the multiple received position data to determine a relatively accurate position of the target object in the first electronic map (that is, the above third position). The data fusion method may include weighted averaging, clustering, optimization methods, etc., which are not limited in the embodiments of the present invention. Due to factors such as vehicle speed, sensor status, and light intensity, there may be a large error between the position of the target object in the first electronic map reported by a single vehicle-mounted terminal and the actual position of the target object. Performing data fusion on the position data received multiple times can reduce the error of a single observation and improve the accuracy of the target object's location.
307. The server updates the target object at the third position of the first electronic map.
It can be seen that in the method for generating an automatic driving navigation map described in FIG. 3, the vehicle-mounted terminal and the server can automatically complete the identification and updating of target objects. Meanwhile, in the method described in FIG. 3, the map update and the vehicle positioning calculation can share the same image recognition result, which can save computing resources. Further, in the method described in FIG. 3, the server performs data fusion using the position data of the target object received multiple times to determine a more accurate position of the target object in the first electronic map and updates the target object at that position, thereby reducing the error of a single observation and improving the accuracy of the target object's location.
实施例三
请参阅图6,图6是本发明实施例公开的一种车载终端的结构示意图。如图6所示,该车载终端可以包括:
识别单元601,用于识别摄像头拍摄到的道路图像中的第一道路特征;其中,摄像头的取景范围至少包括车载终端所在车辆的前方环境。
本发明实施例中,作为一种可选的实施方式,识别单元601可以通过预先训练好的语义特征检测模型识别道路图像中的第一道路特征。该语义特征检测模型可以为深度学习神经网络,其采用大量标注有上述第一道路特征的样本图像作为训练输入训练得到。利用深度学习神经网络进行第一道路特征的识别,识别单元601可以提高特殊光照条件下的道路特征识别准确率,从而可以降低道路特征的漏检率,提高基于视觉信息的地图更新方案的稳定性。
匹配单元602,用于匹配识别单元601识别出的第一道路特征和第一电子地图中的第二道路特征,识别出不匹配的目标对象;上述的目标对象为与任一第二道路特征不匹配的第一道路特征或者与任一第一道路特征不匹配的第二道路特征。
本发明实施例中,第一道路特征与第二道路特征不匹配可以包括但不限于以下三种情况:道路特征的缺失、道路特征的添加以及道路特征的更改。
通信单元603,用于将匹配单元602识别出的目标对象上报至服务器,以通过服务器在第一电子地图中更新目标对象,得到更新后的第二电子地图。
本发明实施例中,通信单元603可以通过变更报告的形式将目标对象上报至服务器,变更报告可以包括车载终端识别到的该目标对象在自动驾驶导航地图中的位置、目标对象的特征类型、目标对象的内容等与目标对象相关的信息。变更报告还可以包括装设于车辆上的各个传感器采集到的数据以及各个传感器的状态等信息,本发明实施例不做限定。
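变更报告的一种可能组织形式可以用 JSON 示意如下。需要说明的是,其中的字段名均为示意性假设,并非本发明规定的报文格式:

```python
import json

def build_change_report(feature_type, content, map_pos, sensor_status):
    # 构造上报至服务器的变更报告(示意)
    report = {
        "target": {"type": feature_type, "content": content},
        # 目标对象在自动驾驶导航地图中的位置
        "position": {"x": map_pos[0], "y": map_pos[1]},
        # 各传感器的状态
        "sensors": sensor_status,
    }
    return json.dumps(report, ensure_ascii=False)
```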
可见,实施图6所示的车载终端,可以自动识别出可能需要进行更新的目标对象,并且将目标对象上报至服务器,从而可以通过服务器完成目标对象的更新,地图更新过程无需依赖人工,从而可以提高自动驾驶导航电子地图的更新速度和更新效率。
实施例四
请参阅图7,图7是本发明实施例公开的另一种车载终端的结构示意图。如图7所示,上述的匹配单元602可以包括:
转换子单元6021,用于将识别单元601识别出的第一道路特征映射到第一电子地图,获得第一道路特征在第一电子地图中的第一位置;或者,将第一电子地图的第二道路特征投影到道路图像,获得第二道路特征在道路图像中的第二位置;
判断子单元6022,用于判断转换子单元6021确定的第一位置是否存在与第一道路特征相匹配的第二道路特征;或者,判断转换子单元6021确定的第二位置是否存在与第二道路特征相匹配的第一道路特征;
确定子单元6023,用于在判断子单元6022判断出第一位置不存在与第一道路特征相匹配的第二道路特征时,将映射至第一位置的第一道路特征确定为不匹配的目标对象;或者,在判断子单元6022判断出第二位置不存在与第二道路特征相匹配的第一道路特征时,将投影至第二位置的第二道路特征确定为不匹配的目标对象。
可选的,在图7所示的车载终端中,上述的通信单元603用于将目标对象上报至服务器的方式具体为:
通信单元603,用于将目标对象以及目标对象在第一电子地图中的位置上报至服务器;
其中,当目标对象为与任一第二道路特征不匹配的第一道路特征时,目标对象在第一电子地图中的位置为不匹配的第一道路特征映射至第一电子地图中的位置;当目标对象为与任一第一道路特征不匹配的第二道路特征时,目标对象在第一电子地图中的位置为不匹配的第二道路特征在第一电子地图中的位置。
进一步可选的,在图7所示的车载终端中,上述的识别单元601用于识别摄像头拍摄到的道路图像中的第一道路特征的方式具体为:
识别单元601,用于在对车辆进行定位计算时,识别摄像头拍摄到的道路图像中的第一道路特征。实施该实施方式,匹配单元602和车载终端可能包括的定位单元(未图示)可以共用识别单元601获得的图像识别结果(即第一道路特征)。其中,匹配单元602可以用于匹配第一道路特征和第一电子地图中的第二道路特征,识别出不匹配的目标对象;上述的定位单元,可以用于识别与第一道路特征相匹配的第二道路特征,并基于该与第一道路特征相匹配的第二道路特征在第一电子地图中的位置,确定出车辆在第一电子地图中的位置。
可见,实施图7所示的车载终端,可以自动识别出可能需要进行更新的目标对象,并且可以确定出目标对象在第一电子地图中的位置,从而可以将目标对象和目标对象在第一电子地图中的位置一并上报至服务器,便于服务器进行目标对象的更新。进一步地,实施图7所示的车载终端,识别单元601获得的图像识别结果可以被匹配单元602以及车载终端可能包括的定位单元共用,有利于节省计算资源。
此外,本发明实施例还公开了一种计算机可读存储介质,其存储计算机程序,其中,该计算机程序使得计算机执行实施例一或实施例二公开的任一种自动驾驶导航地图的生成方法中车载终端执行的步骤。
本发明实施例还公开了一种计算机程序产品,该计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,且该计算机程序可操作来使计算机执行实施例一或实施例二公开的任一种自动驾驶导航地图的生成方法中车载终端执行的步骤。
实施例五
请参阅图8,图8是本发明实施例公开的一种服务器的结构示意图。如图8所示,该服务器可以包括:
收发单元801,用于接收车载终端上报的目标对象。其中,目标对象为车载终端识别出的与任一第二道路特征不匹配的第一道路特征或者与任一第一道路特征不匹配的第二道路特征;第一道路特征可以为从道路图像中识别出的道路特征,第二道路特征可以为第一电子地图中的道路特征;车载终端可以为上述图6或图7所示的任一种车载终端;
判断单元802,用于判断收发单元801接收到的目标对象的上报次数是否超出指定阈值;
本发明实施例中,目标对象的上报次数为同一个目标对象的上报次数。
更新单元803,用于在判断单元802判断出目标对象的上报次数超出指定阈值时,在第一电子地图中更新上述的目标对象,以获得更新后的第二电子地图。
本发明实施例中,更新单元803在同一个目标对象的上报次数足够多时,才会在第一电子地图中更新该目标对象,从而可以提高自动驾驶导航电子地图更新的稳定性和可靠性。此外,更新单元803在第一电子地图中更新目标对象的操作包括但不限于在第一电子地图中添加、删除、替换上述的目标对象。
可选的,在图8所示的服务器中,上述的更新单元803,可以包括:
融合子单元8031,用于在判断单元802判断出目标对象的上报次数超出指定阈值时,对接收到的目标对象的所有位置数据进行数据融合,得到目标对象在第一电子地图中的第三位置;其中,目标对象的所有位置数据由每次接收到的目标对象在第一电子地图中的位置组成;数据融合的方式可以包括加权平均、聚类、最优化方法等,本发明实施例不做限定。
更新子单元8032,用于在融合子单元8031确定出的第三位置更新目标对象,以获得更新后的第二电子地图。
可见,实施图8所示的服务器,可以接收车载终端上报的目标对象,并且在第一电子地图中更新该目标对象,从而可以完成目标对象的自动更新,可以提高自动驾驶导航电子地图的更新速度和更新效率。此外,图8所示的服务器还可以判断同一个目标对象的上报次数是否超过指定阈值,并且在上报次数超过指定阈值时,才执行在第一电子地图中更新目标对象的操作,从而可以提高自动驾驶导航电子地图更新的稳定性和可靠性。进一步地,图8所示的服务器在第一电子地图中更新目标对象时,使用多次接收到的目标对象的位置数据进行数据融合,从而确定出目标对象在第一电子地图中较为准确的位置,并在该位置更新目标对象,从而可以降低单次观测的误差,提高目标对象所在位置的精度。
此外,本发明实施例还公开了一种计算机可读存储介质,其存储计算机程序,其中,该计算机程序使得计算机执行实施例一或实施例二公开的任一种自动驾驶导航地图的生成方法中服务器执行的步骤。
本发明实施例还公开了一种计算机程序产品,该计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,且该计算机程序可操作来使计算机执行实施例一或实施例二公开的任一种自动驾驶导航地图的生成方法中服务器执行的步骤。
实施例六
请参阅图9,图9是本发明实施例公开的一种自动驾驶导航地图的生成系统的结构示意图。如图9所示,该自动驾驶导航地图的生成系统可以包括:
车载终端901和服务器902,车载终端901与服务器902之间存在通信连接;
其中,车载终端901,可以用于获取装设于车辆上的摄像头、IMU、GPS、轮速计等传感器采集到的数据,并对各个传感器采集到的数据进行处理;具体地,车载终端901可以用于执行实施例一或实施例二公开的任一种自动驾驶导航地图的生成方法中车载终端执行的步骤。
服务器902,可以用于执行实施例一或实施例二公开的任一种自动驾驶导航地图的生成方法中服务器执行的步骤。
基于图9所示的自动驾驶导航地图的生成系统,可以通过众包的方式,将地图更新任务分散至各个与服务器建立通信连接的车载终端,从而可以提高地图更新的效率,提高了由此生成的自动驾驶导航电子地图的实时性,进而提高了基于该自动驾驶导航电子地图进行的自动驾驶的安全性。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本发明的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定特征、结构或特性可以以任意适合的方式结合在一个或多个实施例中。本领域技术人员也应该知悉,说明书中所描述的实施例均属于可选实施例,所涉及的动作和模块并不一定是本发明所必须的。
在本发明的各种实施例中,应理解,上述各过程的序号的大小并不意味着执行顺序的必然先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本发明实施例的实施过程构成任何限定。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本发明各实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述集成的单元若以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可获取的存储器中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或者部分,可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储器中,包括若干指令用以使得一台计算机设备(可以为个人计算机、服务器或者网络设备等,具体可以是计算机设备中的处理器)执行本发明的各个实施例上述方法的部分或全部步骤。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质包括只读存储器(Read-Only Memory,ROM)、随机存储器(Random Access Memory,RAM)、可编程只读存储器(Programmable Read-only Memory,PROM)、可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、一次可编程只读存储器(One-time Programmable Read-Only Memory,OTPROM)、电子抹除式可复写只读存储器(Electrically-Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储器、磁盘存储器、磁带存储器、或者能够用于携带或存储数据的计算机可读的任何其他介质。
以上对本发明实施例公开的一种自动驾驶导航地图的生成方法、系统、车载终端及服务器进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想。同时,对于本领域的一般技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (13)

  1. 一种自动驾驶导航地图的生成方法,其特征在于,包括:
    识别摄像头拍摄到的道路图像中的第一道路特征;所述摄像头的取景范围至少包括车辆前方的环境;
    匹配所述第一道路特征和第一电子地图中的第二道路特征,识别出不匹配的目标对象;所述目标对象为与任一所述第二道路特征不匹配的第一道路特征或者与任一所述第一道路特征不匹配的第二道路特征;其中,通过一车载终端获取到所述摄像头拍摄的多张道路图像;针对某一帧道路图像,所述车载终端从中识别出所述第一道路特征,根据所述第一道路特征在该帧道路图像之前的上一帧道路图像的位置,所述车载终端计算出所述第一道路特征和车辆之间的相对距离,提取出所述第一道路特征的深度信息;
    将所述目标对象上报至服务器,以通过所述服务器在所述第一电子地图中更新所述目标对象,得到更新后的第二电子地图。
  2. 根据权利要求1所述的自动驾驶导航地图的生成方法,其特征在于,所述匹配所述第一道路特征和第一电子地图中的第二道路特征,识别出不匹配的目标对象,包括:
    将所述第一道路特征映射到所述第一电子地图,获得所述第一道路特征在所述第一电子地图中的第一位置,并判断所述第一位置是否存在与所述第一道路特征相匹配的第二道路特征,如果否,将映射至所述第一位置的所述第一道路特征确定为不匹配的目标对象;
    或者,将所述第一电子地图的所述第二道路特征投影到所述道路图像,获得所述第二道路特征在所述道路图像中的第二位置,并判断所述第二位置是否存在与所述第二道路特征相匹配的第一道路特征,如果否,将投影至所述第二位置的所述第二道路特征确定为不匹配的目标对象。
  3. 根据权利要求1或2所述的自动驾驶导航地图的生成方法,其特征在于,所述将所述目标对象上报至服务器,包括:
    将所述目标对象以及所述目标对象在所述第一电子地图中的位置上报至服务器,以通过所述服务器在所述第一电子地图中更新所述目标对象,得到更新后的第二电子地图;
    其中,当所述目标对象为与任一所述第二道路特征不匹配的第一道路特征时,所述目标对象在所述第一电子地图中的位置为所述不匹配的第一道路特征映射至所述第一电子地图中的位置;当所述目标对象为与任一所述第一道路特征不匹配的第二道路特征时,所述目标对象在所述第一电子地图中的位置为所述不匹配的第二道路特征在所述第一电子地图中的位置。
  4. 根据权利要求1~3任一项所述的自动驾驶导航地图的生成方法,其特征在于,所述识别摄像头拍摄到的道路图像中的第一道路特征,包括:
    在对所述车辆进行定位计算时,识别摄像头拍摄到的道路图像中的第一道路特征。
  5. 一种自动驾驶导航地图的生成方法,其特征在于,包括:
    接收上报的目标对象;所述目标对象为车载终端识别出的与任一第二道路特征不匹配的第一道路特征或者与任一第一道路特征不匹配的第二道路特征;所述第一道路特征为从道路图像中识别出的道路特征,所述第二道路特征为第一电子地图中的道路特征;所述不匹配包括道路特征的缺失、道路特征的添加以及道路特征的更改中的一种或几种;
    判断所述目标对象的上报次数是否超出指定阈值,如果是,在第一电子地图中更新所述目标对象,以获得更新后的第二电子地图。
  6. 根据权利要求5所述的自动驾驶导航地图的生成方法,其特征在于,所述在第一电子地图中更新所述目标对象,包括:
    对接收到的所述目标对象的所有位置数据进行数据融合,得到所述目标对象在所述第一电子地图中的第三位置;所述所有位置数据由每次接收到的所述目标对象在所述第一电子地图中的位置组成;
    在所述第一电子地图的所述第三位置更新所述目标对象。
  7. 一种车载终端,其特征在于,包括:
    识别单元,用于识别摄像头拍摄到的道路图像中的第一道路特征;所述摄像头的取景范围至少包括所述车载终端所在车辆的前方环境;
    匹配单元,用于匹配所述第一道路特征和第一电子地图中的第二道路特征,识别出不匹配的目标对象;所述目标对象为与任一所述第二道路特征不匹配的第一道路特征或者与任一所述第一道路特征不匹配的第二道路特征;其中,一车载终端获取到所述摄像头拍摄的多张道路图像;针对某一帧道路图像,所述车载终端从中识别出所述第一道路特征,根据所述第一道路特征在该帧道路图像之前的上一帧道路图像的位置,所述车载终端计算出所述第一道路特征和车辆之间的相对距离,提取出所述第一道路特征的深度信息;
    通信单元,用于将所述目标对象上报至服务器,以通过所述服务器在所述第一电子地图中更新所述目标对象,得到更新后的第二电子地图。
  8. 根据权利要求7所述的车载终端,其特征在于,所述匹配单元,包括:
    转换子单元,用于将所述第一道路特征映射到所述第一电子地图,获得所述第一道路特征在所述第一电子地图中的第一位置;或者,将所述第一电子地图的所述第二道路特征投影到所述道路图像,获得所述第二道路特征在所述道路图像中的第二位置;
    判断子单元,用于判断所述第一位置是否存在与所述第一道路特征相匹配的第二道路特征;或者,判断所述第二位置是否存在与所述第二道路特征相匹配的第一道路特征;
    确定子单元,用于在所述判断子单元判断出所述第一位置不存在与所述第一道路特征相匹配的第二道路特征时,将映射至所述第一位置的所述第一道路特征确定为不匹配的目标对象;或者,在所述判断子单元判断出所述第二位置不存在与所述第二道路特征相匹配的第一道路特征时,将投影至所述第二位置的所述第二道路特征确定为不匹配的目标对象。
  9. 根据权利要求7或8所述的车载终端,其特征在于,所述通信单元用于将所述目标对象上报至服务器的方式具体为:
    所述通信单元,用于将所述目标对象以及所述目标对象在所述第一电子地图中的位置上报至服务器;
    其中,当所述目标对象为与任一所述第二道路特征不匹配的第一道路特征时,所述目标对象在所述第一电子地图中的位置为所述不匹配的第一道路特征映射至所述第一电子地图中的位置;当所述目标对象为与任一所述第一道路特征不匹配的第二道路特征时,所述目标对象在所述第一电子地图中的位置为所述不匹配的第二道路特征在所述第一电子地图中的位置。
  10. 根据权利要求7~9任一项所述的车载终端,其特征在于,所述识别单元用于识别摄像头拍摄到的道路图像中的第一道路特征的方式具体为:
    所述识别单元,用于在对所述车辆进行定位计算时,识别摄像头拍摄到的道路图像中的第一道路特征。
  11. 一种服务器,其特征在于,包括:
    收发单元,用于接收上报的目标对象;所述目标对象为车载终端识别出的与任一第二道路特征不匹配的第一道路特征或者与任一第一道路特征不匹配的第二道路特征;所述第一道路特征为从道路图像中识别出的道路特征,所述第二道路特征为第一电子地图中的道路特征;通过一车载终端获取到摄像头拍摄的多张道路图像;针对某一帧道路图像,所述车载终端从中识别出所述第一道路特征,根据所述第一道路特征在该帧道路图像之前的上一帧道路图像的位置,所述车载终端计算出所述第一道路特征和车辆之间的相对距离,提取出所述第一道路特征的深度信息;
    判断单元,用于判断所述目标对象的上报次数是否超出指定阈值;
    更新单元,用于在所述判断单元判断出所述目标对象的上报次数超出所述指定阈值时,在第一电子地图中更新所述目标对象,以获得更新后的第二电子地图。
  12. 根据权利要求11所述的服务器,其特征在于,所述更新单元,包括:
    融合子单元,用于在所述判断单元判断出所述目标对象的上报次数超出所述指定阈值时,对接收到的所述目标对象的所有位置数据进行数据融合,得到所述目标对象在所述第一电子地图中的第三位置;所述所有位置数据由每次接收到的所述目标对象在所述第一电子地图中的位置组成;
    更新子单元,用于在所述第三位置更新所述目标对象,以获得更新后的第二电子地图。
  13. 一种电子地图的生成系统,其特征在于,包括:
    如权利要求7~10任一项所述的车载终端;
    以及,如权利要求11~12任一项所述的服务器。
PCT/CN2018/113665 2018-08-28 2018-11-02 自动驾驶导航地图的生成方法、系统、车载终端及服务器 WO2020042348A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810984491.6A CN110146097B (zh) 2018-08-28 2018-08-28 自动驾驶导航地图的生成方法、系统、车载终端及服务器
CN201810984491.6 2018-08-28

Publications (1)

Publication Number Publication Date
WO2020042348A1 true WO2020042348A1 (zh) 2020-03-05

Family

ID=67589400

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113665 WO2020042348A1 (zh) 2018-08-28 2018-11-02 自动驾驶导航地图的生成方法、系统、车载终端及服务器

Country Status (2)

Country Link
CN (1) CN110146097B (zh)
WO (1) WO2020042348A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183440A (zh) * 2020-10-13 2021-01-05 北京百度网讯科技有限公司 道路信息的处理方法、装置、电子设备和存储介质
CN112729336A (zh) * 2020-12-14 2021-04-30 北京航空航天大学 一种基于高精度矢量地图的车道级导航定位评价方法
CN112735136A (zh) * 2020-12-31 2021-04-30 深圳市艾伯通信有限公司 5g交通监测规划方法、移动终端、交通服务平台及系统
CN114219907A (zh) * 2021-12-08 2022-03-22 阿波罗智能技术(北京)有限公司 三维地图生成方法、装置、设备以及存储介质

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446915B (zh) * 2019-08-28 2024-03-29 北京初速度科技有限公司 一种基于图像组的建图方法及装置
CN112530270B (zh) * 2019-09-17 2023-03-14 北京初速度科技有限公司 一种基于区域分配的建图方法及装置
CN110888434A (zh) * 2019-11-14 2020-03-17 腾讯科技(深圳)有限公司 自动驾驶方法、装置、计算机设备和计算机可读存储介质
CN112991241B (zh) * 2019-12-13 2024-04-12 阿里巴巴集团控股有限公司 一种道路场景图像处理方法、装置、电子设备及存储介质
CN113048988B (zh) * 2019-12-26 2022-12-23 北京初速度科技有限公司 一种导航地图对应场景的变化元素检测方法及装置
CN113701767B (zh) * 2020-05-22 2023-11-17 杭州海康机器人股份有限公司 一种地图更新的触发方法和系统
CN111680596B (zh) * 2020-05-29 2023-10-13 北京百度网讯科技有限公司 基于深度学习的定位真值校验方法、装置、设备及介质
CN112069279B (zh) 2020-09-04 2022-11-08 北京百度网讯科技有限公司 地图数据更新方法、装置、设备及可读存储介质
CN112466005B (zh) * 2020-11-26 2022-08-09 重庆长安汽车股份有限公司 基于用户使用习惯的自动驾驶围栏的更新系统及方法
EP4242590A1 (en) * 2020-11-30 2023-09-13 Huawei Technologies Co., Ltd. Map verification method and related apparatus
CN113515536B (zh) * 2021-07-13 2022-12-13 北京百度网讯科技有限公司 地图的更新方法、装置、设备、服务器以及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107167149A (zh) * 2017-06-26 2017-09-15 上海与德科技有限公司 一种街景视图制作方法及系统
CN107241441A (zh) * 2017-07-28 2017-10-10 深圳普思英察科技有限公司 一种新能源无人车车载地图更新方法和系统
CN107515006A (zh) * 2016-06-15 2017-12-26 华为终端(东莞)有限公司 一种地图更新方法和车载终端
CN108074394A (zh) * 2016-11-08 2018-05-25 武汉四维图新科技有限公司 实景交通数据更新方法及装置
CN108416045A (zh) * 2018-03-15 2018-08-17 斑马网络技术有限公司 交通设施的位置获取方法、装置、终端设备及服务器
CN108413975A (zh) * 2018-03-15 2018-08-17 斑马网络技术有限公司 地图获取方法、系统、云处理器及车辆

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101694392B (zh) * 2009-09-29 2015-03-18 北京四维图新科技股份有限公司 一种导航终端的地图更新方法、导航终端及系统
US9630619B1 (en) * 2015-11-04 2017-04-25 Zoox, Inc. Robotic vehicle active safety systems and methods
CN106525057A (zh) * 2016-10-26 2017-03-22 陈曦 高精度道路地图的生成系统
EP3327669B1 (en) * 2016-11-26 2022-01-05 Thinkware Corporation Image processing apparatus, image processing method, computer program and computer readable recording medium
CN110832474B (zh) * 2016-12-30 2023-09-15 辉达公司 更新高清地图的方法
CN107339996A (zh) * 2017-06-30 2017-11-10 百度在线网络技术(北京)有限公司 车辆自定位方法、装置、设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107515006A (zh) * 2016-06-15 2017-12-26 华为终端(东莞)有限公司 一种地图更新方法和车载终端
CN108074394A (zh) * 2016-11-08 2018-05-25 武汉四维图新科技有限公司 实景交通数据更新方法及装置
CN107167149A (zh) * 2017-06-26 2017-09-15 上海与德科技有限公司 一种街景视图制作方法及系统
CN107241441A (zh) * 2017-07-28 2017-10-10 深圳普思英察科技有限公司 一种新能源无人车车载地图更新方法和系统
CN108416045A (zh) * 2018-03-15 2018-08-17 斑马网络技术有限公司 交通设施的位置获取方法、装置、终端设备及服务器
CN108413975A (zh) * 2018-03-15 2018-08-17 斑马网络技术有限公司 地图获取方法、系统、云处理器及车辆

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183440A (zh) * 2020-10-13 2021-01-05 北京百度网讯科技有限公司 道路信息的处理方法、装置、电子设备和存储介质
EP3922950A3 (en) * 2020-10-13 2022-04-06 Beijing Baidu Netcom Science Technology Co., Ltd. Road information processing method and apparatus, electronic device, storage medium and program
CN112729336A (zh) * 2020-12-14 2021-04-30 北京航空航天大学 一种基于高精度矢量地图的车道级导航定位评价方法
CN112729336B (zh) * 2020-12-14 2023-07-14 北京航空航天大学 一种基于高精度矢量地图的车道级导航定位评价方法
CN112735136A (zh) * 2020-12-31 2021-04-30 深圳市艾伯通信有限公司 5g交通监测规划方法、移动终端、交通服务平台及系统
CN114219907A (zh) * 2021-12-08 2022-03-22 阿波罗智能技术(北京)有限公司 三维地图生成方法、装置、设备以及存储介质

Also Published As

Publication number Publication date
CN110146097B (zh) 2022-05-13
CN110146097A (zh) 2019-08-20

Similar Documents

Publication Publication Date Title
WO2020042348A1 (zh) 自动驾驶导航地图的生成方法、系统、车载终端及服务器
US20220227394A1 (en) Autonomous Vehicle Operational Management
CN111695546B (zh) 用于无人车的交通信号灯识别方法和装置
JP6325806B2 (ja) 車両位置推定システム
JP6424761B2 (ja) 運転支援システム及びセンタ
US20180330610A1 (en) Traffic accident warning method and traffic accident warning apparatus
CN112991791B (zh) 交通信息识别和智能行驶方法、装置、设备及存储介质
CN109935077A (zh) 用于为自动驾驶车辆构建车辆与云端实时交通地图的系统
WO2015129045A1 (ja) 画像取得システム、端末、画像取得方法および画像取得プログラム
CN111780987B (zh) 自动驾驶车辆的测试方法、装置、计算机设备和存储介质
US20200167603A1 (en) Method, apparatus, and system for providing image labeling for cross view alignment
JP2012221291A (ja) データ配信システム、データ配信サーバ及びデータ配信方法
CN112543956B (zh) 提供道路拥堵原因的方法和装置
CN109903574B (zh) 路口交通信息的获取方法和装置
JP5522475B2 (ja) ナビゲーション装置
JP2020193954A (ja) 位置補正サーバ、位置管理装置、移動体の位置管理システム及び方法、位置情報の補正方法、コンピュータプログラム、車載装置並びに車両
US10949707B2 (en) Method, apparatus, and system for generating feature correspondence from camera geometry
US20200134844A1 (en) Method, apparatus, and system for generating feature correspondence between image views
CN110765224A (zh) 电子地图的处理方法、车辆视觉重定位的方法和车载设备
US20230360379A1 (en) Track segment cleaning of tracked objects
CN114639085A (zh) 交通信号灯识别方法、装置、计算机设备和存储介质
US20230109909A1 (en) Object detection using radar and lidar fusion
US10759449B2 (en) Recognition processing device, vehicle control device, recognition control method, and storage medium
Bhandari et al. Fullstop: A camera-assisted system for characterizing unsafe bus stopping
US11885640B2 (en) Map generation device and map generation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18931567

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18931567

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 18931567

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/01/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 18931567

Country of ref document: EP

Kind code of ref document: A1