CN108694882A - Method, apparatus and equipment for marking map - Google Patents

Method, apparatus and equipment for marking map

Info

Publication number
CN108694882A
Authority
CN
China
Prior art keywords
marked
region
map
point
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710231536.8A
Other languages
Chinese (zh)
Other versions
CN108694882B (en)
Inventor
徐胜攀
宋适宇
徐宝强
田海龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201710231536.8A priority Critical patent/CN108694882B/en
Publication of CN108694882A publication Critical patent/CN108694882A/en
Application granted granted Critical
Publication of CN108694882B publication Critical patent/CN108694882B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/003Maps
    • G09B29/005Map projections or methods associated specifically therewith

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

This application discloses a method and apparatus for marking a map. In one embodiment, the method includes: in response to detecting a user operation selecting a region to be marked on a map to be marked, looking up the geographical location information corresponding to the region to be marked in a stored geographical location information list corresponding to the map to be marked; obtaining a road image sequence collected by a vehicle-mounted camera and a track point set, where each track point in the track point set was recorded when the vehicle-mounted camera captured a corresponding frame of the road image sequence; extracting, from the road image sequence, the target image frame corresponding to the region to be marked according to the geographical location information of the region to be marked and the track point set corresponding to the road image sequence; and in response to detecting a labeling operation performed by the user on a mark object in the region to be marked with reference to the target image frame, generating a mark file storing the location parameters of the mark object. This embodiment can improve map labeling efficiency.

Description

Method, apparatus and equipment for marking map
Technical field
The present application relates to the field of computer technology, in particular to the field of electronic map technology, and more particularly to a method, apparatus and equipment for marking a map.
Background technology
With the development of science and technology, maps are used in an increasingly wide range of applications. In the field of autonomous driving, for example, a vehicle can use a high-precision map for positioning and route planning. To ensure safe driving, markers such as lane lines and road traffic signs need to be marked on the map so that an autonomous vehicle can accurately model the road environment, compute its driving path, and precisely control its travel.
Markers in current high-precision maps are usually annotated with manual assistance. Annotators find the corresponding street-view data based on experience and label the map against it. This approach requires coordinating several applications, such as the application that generates the map and the application that looks up street-view data, and places high demands on the annotators, who must be familiar with the region being annotated in order to retrieve the correct street-view data for reference. The annotation efficiency and accuracy of high-precision maps therefore still need to be improved.
Summary of the invention
The purpose of the present application is to propose an improved method and apparatus for marking a map, so as to solve the technical problems mentioned in the background section above.
In a first aspect, an embodiment of the present application provides a method for marking a map, the method including: in response to detecting a user operation selecting a region to be marked on a map to be marked, looking up the geographical location information corresponding to the region to be marked in a stored geographical location information list corresponding to the map to be marked, where the map to be marked includes a reflectance-value map formed by mapping laser point cloud data collected by a vehicle-mounted lidar onto a two-dimensional grid in a plane; obtaining a road image sequence collected by a vehicle-mounted camera and a track point set, where each track point in the track point set was recorded when the vehicle-mounted camera captured a corresponding frame of the road image sequence; extracting, from the road image sequence, the target image frame corresponding to the region to be marked according to the geographical location information of the region to be marked and the track point set corresponding to the road image sequence; and in response to detecting a labeling operation performed by the user on a mark object in the region to be marked with reference to the target image frame, generating a mark file for storing the location parameters of the mark object.
In some embodiments, each track point includes a timestamp of the collection of the corresponding road image and the geographical location information at the time the corresponding road image was collected; and extracting the target image frame corresponding to the region to be marked from the road image sequence according to the geographical location information of the region to be marked and the track point set corresponding to the road image sequence includes: matching the geographical location information of the region to be marked against the geographical location information of each track point in the track point set to determine a target track point; and extracting, from the road image sequence, the road image corresponding to the timestamp of the target track point as the target image frame.
In some embodiments, matching the geographical location information of the region to be marked against the geographical location information of each track point in the track point set to determine the target track point includes: obtaining the minimum shooting distance parameter of the vehicle-mounted camera; taking the center point of the region to be marked as the search center and constructing a circular search region whose radius is the sum of the minimum shooting distance parameter and a preset distance offset parameter; filtering out the track points located in the circular search region according to the geographical location information of the track points; grouping the track points in the circular search region according to a preset time interval threshold to obtain a plurality of track point groups; and sorting the track points in each track point group by timestamp, and determining the track point with the smallest timestamp in each group as a target track point.
In some embodiments, the location parameters of the mark object include first coordinate information of the mark object in a world coordinate system, and the method for marking a map further includes: mapping the mark object onto the target image frame according to the first coordinate information.
In some embodiments, mapping the mark object onto the target image frame according to the first coordinate information includes: obtaining a first calibration parameter of the world coordinate system relative to a vehicle-mounted inertial measurement unit coordinate system, a second calibration parameter of the vehicle-mounted inertial measurement unit coordinate system relative to a vehicle-mounted lidar coordinate system, a third calibration parameter of the vehicle-mounted lidar coordinate system relative to a vehicle-mounted camera coordinate system, and a fourth calibration parameter of the vehicle-mounted camera relative to the image coordinate system of the road image; mapping the first coordinate information to the vehicle-mounted inertial measurement unit coordinate system according to the first calibration parameter to obtain second coordinate information; mapping the second coordinate information to the vehicle-mounted lidar coordinate system according to the second calibration parameter to obtain third coordinate information; mapping the third coordinate information to the vehicle-mounted camera coordinate system according to the third calibration parameter to obtain fourth coordinate information; mapping the fourth coordinate information to the image coordinate system according to the fourth calibration parameter to obtain fifth coordinate information; and rendering the mark object in the target image frame using the fifth coordinate information.
In a second aspect, the present application provides an apparatus for marking a map, the apparatus including: a searching unit configured to, in response to detecting a user operation selecting a region to be marked on a map to be marked, look up the geographical location information corresponding to the region to be marked in a stored geographical location information list corresponding to the map to be marked, where the map to be marked includes a reflectance-value map formed by mapping laser point cloud data collected by a vehicle-mounted lidar onto a two-dimensional grid in a plane; an acquiring unit configured to obtain a road image sequence collected by a vehicle-mounted camera and a track point set, where each track point in the track point set was recorded when the vehicle-mounted camera captured a corresponding frame of the road image sequence; an extraction unit configured to extract, from the road image sequence, the target image frame corresponding to the region to be marked according to the geographical location information of the region to be marked and the track point set corresponding to the road image sequence; and a generation unit configured to, in response to detecting a labeling operation performed by the user on a mark object in the region to be marked with reference to the target image frame, generate a mark file for storing the location parameters of the mark object.
In some embodiments, each track point includes a timestamp of the collection of the corresponding road image and the geographical location information at the time the corresponding road image was collected; and the extraction unit is further configured to extract the target image frame corresponding to the region to be marked as follows: matching the geographical location information of the region to be marked against the geographical location information of each track point in the track point set to determine a target track point; and extracting, from the road image sequence, the road image corresponding to the timestamp of the target track point as the target image frame.
In some embodiments, the extraction unit is further configured to determine the target track point as follows: obtaining the minimum shooting distance parameter of the vehicle-mounted camera; taking the center point of the region to be marked as the search center and constructing a circular search region whose radius is the sum of the minimum shooting distance parameter and a preset distance offset parameter; filtering out the track points located in the circular search region according to the geographical location information of the track points; grouping the track points in the circular search region according to a preset time interval threshold to obtain a plurality of track point groups; and sorting the track points in each track point group by timestamp, and determining the track point with the smallest timestamp in each group as a target track point.
In some embodiments, the location parameters of the mark object include first coordinate information of the mark object in a world coordinate system, and the apparatus further includes: a mapping unit configured to map the mark object onto the target image frame according to the first coordinate information.
In some embodiments, the mapping unit is further configured to map the mark object onto the target image frame as follows: obtaining a first calibration parameter of the world coordinate system relative to a vehicle-mounted inertial measurement unit coordinate system, a second calibration parameter of the vehicle-mounted inertial measurement unit coordinate system relative to a vehicle-mounted lidar coordinate system, a third calibration parameter of the vehicle-mounted lidar coordinate system relative to a vehicle-mounted camera coordinate system, and a fourth calibration parameter of the vehicle-mounted camera relative to the image coordinate system of the road image; mapping the first coordinate information to the vehicle-mounted inertial measurement unit coordinate system according to the first calibration parameter to obtain second coordinate information; mapping the second coordinate information to the vehicle-mounted lidar coordinate system according to the second calibration parameter to obtain third coordinate information; mapping the third coordinate information to the vehicle-mounted camera coordinate system according to the third calibration parameter to obtain fourth coordinate information; mapping the fourth coordinate information to the image coordinate system according to the fourth calibration parameter to obtain fifth coordinate information; and rendering the mark object in the target image frame using the fifth coordinate information.
The third aspect, the embodiment of the present application provide a kind of equipment, including one or more processors;Storage device is used In the one or more programs of storage, when one or more programs are executed by one or more processors so that at one or more Reason device realizes the above-mentioned method for marking map.
Method and apparatus provided by the embodiments of the present application for marking map, first in response to detecting that user is waiting marking The operation for selecting region to be marked on map is noted, is looked into the corresponding geographical location information list of stored map to be marked The corresponding geographical location information in region to be marked is found out, the collected road image sequence of vehicle-mounted camera and rail are then obtained Mark point set, later according to the geographical location information in region to be marked and the corresponding track point set of road image sequence, from road The corresponding target image frame in region to be marked is extracted in the image sequence of road, finally in response to detecting user's reference target image Frame treats the labeling operation that the mark object in tab area carries out, and generates the mark of the location parameter for storing mark object File helps to promote map label efficiency to quickly and accurately determine the road image for assisting map label.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is a flowchart of one embodiment of the method for marking a map according to the present application;
Fig. 3 is a schematic flowchart of one implementation of extracting the target image frame corresponding to the region to be marked according to the present application;
Fig. 4 is a schematic diagram of a scenario of searching for target track points in the search region in the flow of extracting the target frame;
Fig. 5 is a flowchart of another embodiment of the method for marking a map according to the present application;
Fig. 6 is a schematic flowchart of mapping a mark object onto the target image frame according to the present application;
Fig. 7 is a structural schematic diagram of one embodiment of the apparatus for marking a map according to the present application;
Fig. 8 is a structural schematic diagram of a computer system suitable for implementing the device of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for marking a map or the apparatus for marking a map of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include a device 101 and a data collection vehicle 102. The data collection vehicle 102 may carry a vehicle-mounted lidar 103 and a vehicle-mounted camera 104. The vehicle-mounted lidar 103 can collect three-dimensional laser point cloud data of the vehicle's driving environment, and the vehicle-mounted camera 104 can collect road image data.
The data collected by the vehicle-mounted lidar 103 and the vehicle-mounted camera 104 can be sent to the device 101 over a network. The device 101 can analyze and process the data sent by the vehicle-mounted lidar 103 and the vehicle-mounted camera (for example, generating a map from the laser point cloud data). The device 101 may also have a display screen on which the processing results can be displayed.
A user 110 can interact with the device 101. For example, the user can input data through an input interface of the device 101, the device 101 can process the input data, and the processing results can be fed back to the user 110 through the display screen.
It should be noted that the method for marking a map provided by the embodiments of the present application is generally executed by the device 101, and accordingly, the apparatus for marking a map is generally provided in the device 101.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for marking a map according to the present application is shown. The method for marking a map includes the following steps:
Step 201: in response to detecting a user operation selecting a region to be marked on a map to be marked, look up the geographical location information corresponding to the region to be marked in a stored geographical location information list corresponding to the map to be marked.
In this embodiment, the electronic device on which the method for marking a map runs (such as the device 101 in Fig. 1) can detect the user's operation of selecting a region to be marked and, upon detecting the selection operation, determine the geographical location information corresponding to the region to be marked by searching the geographical location information list. Here, the geographical location information may be GPS (Global Positioning System) information, or other information indicating the geographical location of each region on the map, such as location parameters in a three-dimensional spatial coordinate system built with a predetermined position as its origin. The geographical location information of each region of the map to be marked can be pre-stored on the electronic device in the form of a list.
In this embodiment, the map to be marked includes a reflectance-value map formed by mapping laser point cloud data collected by a vehicle-mounted lidar onto a two-dimensional grid in a plane. The map to be marked may be generated by the electronic device from laser point cloud data collected by the vehicle-mounted lidar used to collect road data, or it may be generated by other equipment and transmitted to the electronic device over a network. The information of each data point in the laser point cloud collected by the vehicle-mounted lidar includes the three-dimensional spatial coordinates of the data point and its reflectance value, which is determined by the material of the spatial object to which the data point corresponds. The map to be marked can be generated as follows: a plane is constructed from the two coordinate axes parallel to the ground in the three-dimensional coordinate system of the laser point cloud data, the constructed plane is divided into multiple two-dimensional grid cells of identical size, and the data points of the laser point cloud are projected onto the constructed plane, which forms the reflectance-value map. Each two-dimensional grid cell of the reflectance-value map contains all the data points, together with their reflectance values, that fall inside the three-dimensional cylinder whose base is that grid cell.
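As an illustration of this projection, the following is a minimal Python sketch of accumulating a point cloud into such a grid; the 0.125 m cell size, the array layout, and the use of the mean reflectance per cell are assumptions made for the example rather than specifics of the patent.

    import numpy as np

    def build_reflectance_map(points, cell_size=0.125):
        """points: (N, 4) array of [x, y, z, reflectance] in a ground-parallel frame.

        Projects every laser point onto the x-y plane and accumulates reflectance
        per two-dimensional grid cell, yielding a reflectance-value image.
        """
        xy = points[:, :2]
        refl = points[:, 3]
        origin = xy.min(axis=0)                        # lower-left corner of the grid
        idx = ((xy - origin) / cell_size).astype(int)  # cell index of each point
        h, w = idx.max(axis=0) + 1
        acc = np.zeros((h, w))
        cnt = np.zeros((h, w))
        for (i, j), r in zip(idx, refl):
            acc[i, j] += r
            cnt[i, j] += 1
        grid = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
        return grid, origin                            # mean reflectance per cell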
Optionally, when the map to be marked is generated, the geographical location information corresponding to each data point can be determined from the geographical location information of the vehicle-mounted radar at the time the laser point cloud was collected, and the geographical location information corresponding to each grid cell can be recorded to form and store the geographical location information list.
In some optional implementations of this embodiment, different grid cell sizes can be set when generating the map to be marked, so as to build maps to be marked with different resolutions.
After the map to be marked is generated or obtained, the user can select a region to be marked on it. The region to be marked can correspond to a section of road or a group of buildings in the actual scene, and can be a region containing a mark object. The mark object can be a marker on the road, such as a lane line, a traffic sign or a toll station, or a building label, such as an office building, a hospital or a school.
In general, the reflectance-value map may contain the mark objects, but because the clarity of the reflectance-value map is limited, the mark objects are not easy to extract from it directly. The annotator can therefore select a region to be marked and label the mark objects in that region with reference to an auxiliary image. Optionally, the electronic device can divide the map to be marked into multiple blocks in advance, for example into blocks corresponding one-to-one to the two-dimensional grid cells; when the user selects one of the blocks as the region to be marked, the region to be marked selected by the user can be determined as the region of the map to be marked covered by that block.
Step 202: obtain the road image sequence collected by the vehicle-mounted camera and the track point set.
The vehicle-mounted camera and the vehicle-mounted lidar jointly collect environmental data while the vehicle is driving. In this embodiment, the vehicle-mounted camera may be mounted on the top, front end, rear end or side of the vehicle and can collect two-dimensional image data; the vehicle control unit can record the current geographical location information when the image data is collected, and the collected image frames form the road image sequence.
The track point set includes multiple track points recorded during the vehicle's travel, and each track point was recorded when the vehicle-mounted camera captured a corresponding frame of the road image sequence. A track point may include the geographical location of the vehicle when the road image was collected and environmental information about the surroundings of that location. In an actual scene, the vehicle-mounted camera may collect one or more road image frames at each track point, and the vehicle control unit can store each track point in correspondence with the road images collected at that track point.
The electronic device can obtain the road image sequence collected by the vehicle-mounted camera and the corresponding track point set in several ways, for example directly over a communication link established with the vehicle control unit, or from a storage unit of the vehicle-mounted camera; alternatively, the road image sequence and the track point set may be stored in advance locally on the electronic device or on a server, in which case they can be looked up locally, or requested from the server, which returns the data.
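For illustration only, the following sketch shows one way the track point set and road image sequence might be represented; the field names and the one-image-per-track-point pairing are assumptions for the example, not a structure prescribed by the patent.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TrackPoint:
        timestamp: float   # time the corresponding road image frame was captured
        lat: float         # GPS latitude recorded by the vehicle control unit
        lon: float         # GPS longitude recorded by the vehicle control unit

    @dataclass
    class RoadFrame:
        timestamp: float   # matches the timestamp of its track point
        image_path: str    # path to the stored two-dimensional road image

    # One collection run: frames and track points recorded side by side.
    road_image_sequence: List[RoadFrame] = []
    track_point_set: List[TrackPoint] = []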
Step 203: extract, from the road image sequence, the target image frame corresponding to the region to be marked according to the geographical location information of the region to be marked and the track point set corresponding to the road image sequence.
After the geographical location information of the region to be marked has been found and the track point set corresponding to the road image sequence has been obtained, the corresponding track point can be searched for in the track point set according to the geographical location information of the region to be marked, and the road image corresponding to the found track point can be extracted from the road image sequence as the target image frame corresponding to the region to be marked. Here, the geographical location of the track point corresponding to the target image frame is consistent with the geographical location of the region to be marked, so the target image frame can provide two-dimensional image information of the region to be marked.
In this embodiment, the region to be marked covers a certain area, so its geographical location information is usually the set of geographical location information of the area it covers, while a track point is usually a single geographical location point. When searching for the track point corresponding to the geographical location information of the region to be marked, the track point with the smallest average distance to the geographical location information in the set covering the region to be marked can be found; alternatively, the several track points closest to the geographical location information in that set can be found, in which case multiple target image frames can be extracted.
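A minimal sketch of the first of these strategies, choosing the track point with the smallest mean distance to the sampled locations of the region, is given below; the planar-distance approximation and the helper names are assumptions made for the example.

    import math

    def mean_distance(track_point, region_locations):
        # region_locations: iterable of (lat, lon) samples covering the region to be marked
        return sum(math.hypot(track_point.lat - la, track_point.lon - lo)
                   for la, lo in region_locations) / len(region_locations)

    def closest_track_point(track_points, region_locations):
        # Track point whose mean distance to the region's location set is smallest.
        return min(track_points, key=lambda p: mean_distance(p, region_locations))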
Step 204: in response to detecting a labeling operation performed by the user on a mark object in the region to be marked with reference to the target image frame, generate a mark file for storing the location parameters of the mark object.
The target image frame extracted in step 203 can serve as an image frame that assists labeling. The user can label the mark objects in the region to be marked selected on the map to be marked with reference to the target image frame. Specifically, the target image frame may contain information about the mark objects, for example the number and extension direction of lane lines, the position and content of traffic signs, and the names of buildings. The user here can be an annotator, who marks the mark objects on the map to be marked with reference to the information about them in the target image frame.
In this embodiment, the electronic device can detect the user's labeling operation and, after detecting it, generate a mark file for storing the location parameters of the mark object. The mark file can be a vector file; when the vector file is loaded into the file corresponding to the map to be marked, the mark object is presented at the corresponding position on the map to be marked. Optionally, the mark file can also store attribute information of the mark object, such as its category and name, and display-related information, for example the color and shape with which the mark object is presented on the map to be marked and the associated text content. The attribute information and display-related information can be pre-configured, selected by the annotator during labeling, and stored in the mark file in correspondence with the location parameters of the mark object.
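As a purely hypothetical illustration of what such a mark file could carry, the sketch below shows the content as a Python dictionary; the key names and values are invented for the example and are not specified by the patent.

    # Hypothetical contents of a mark (vector) file, shown as a Python dict.
    mark_file = {
        "mark_objects": [
            {
                "category": "lane_line",               # attribute information: category
                "name": "left boundary of lane 2",     # attribute information: name
                # First coordinate information: position in the world (UTM) coordinate system.
                "location": {"utm_x": 448251.3, "utm_y": 4417390.8, "utm_z": 12.1},
                # Display-related information shown on the map to be marked.
                "display": {"color": "#FFD700", "shape": "polyline"},
            }
        ]
    }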
Optionally, if multiple target image frames are extracted in step 203, the user can use them to label multiple mark objects in the region to be marked, and the electronic device can correspondingly generate multiple mark files. Further, if different target image frames contain the same mark object, the user can combine multiple target image frames to locate the mark object accurately. In this way, the mark objects on the map to be marked can be labeled and stored in the form of mark files.
The above method 200 for marking a map first finds the geographical location information of the region to be marked selected by the user on the map to be marked generated from the laser point cloud collected by the vehicle-mounted lidar, obtains the road image sequence and the track point set collected by the vehicle-mounted camera, then extracts the target image frame collected at the corresponding track point according to the geographical location information of the region to be marked, and finally generates a mark file containing the location parameters of the mark object when a labeling operation performed by the user on a mark object in the region to be marked with reference to the target image frame is detected. The user does not need to search for images of the region to be marked manually or coordinate software such as street-view maps and image databases; the road image used to assist map labeling can be determined quickly and accurately, which improves map labeling efficiency, allows the map to be labeled even when the user is unfamiliar with the region to be marked, and enables the production of high-precision maps.
In some embodiments, a track point includes a timestamp of the collection of the corresponding road image and the geographical location information at the time the corresponding road image was collected. In that case the electronic device can extract the corresponding target image frame from the road image sequence more precisely according to the timestamps and geographical location information of the track points. Referring to Fig. 3, a schematic flow 300 of one implementation of extracting the target image frame corresponding to the region to be marked according to the present application is shown, i.e. a schematic flow of one optional implementation of step 203 above.
As shown in Fig. 3, the flow 300 of extracting the target image frame corresponding to the region to be marked includes:
Step 301: match the geographical location information of the region to be marked against the geographical location information of each track point in the track point set to determine the target track point.
In this embodiment, a track point includes the geographical location information of the vehicle's position when the corresponding road image was collected, which may be GPS information. The geographical location information of the region to be marked found in step 201 can be matched against the geographical location information of each track point, and a track point that matches successfully is a target track point. Specifically, the relative distance between the geographical location information of the region to be marked and that of each track point in the track point set can be calculated, and a match can be determined to be successful when the relative distance falls within a set interval.
In some optional implementations, the target track point can be determined in the manner described in steps 3011 to 3015 below.
Step 3011: obtain the minimum shooting distance parameter of the vehicle-mounted camera.
A vehicle-mounted camera usually has a certain field of view and cannot capture images of the space within a certain range directly in front of it. When the vehicle is at a given position, an object whose distance to the vehicle is less than the minimum shooting distance cannot be captured by the vehicle-mounted camera. In this embodiment, the minimum shooting distance parameter of the vehicle-mounted camera can be obtained first, and the track points from which an image of the spatial objects in the region to be marked can be captured are then found according to this parameter.
Step 3012: take the center point of the region to be marked as the search center and construct a circular search region whose radius is the sum of the minimum shooting distance parameter and a preset distance offset parameter.
In this embodiment, the center of the region to be marked can be determined and set as the search center, and a circular search region can be constructed with the sum of the minimum shooting distance parameter of the vehicle-mounted camera obtained in step 3011 and a preset distance offset parameter as its radius. All track points in the circular search region are candidates to be matched against the geographical location information of the region to be marked. This reduces the number of candidates to be matched, which speeds up the extraction of the target frame and improves labeling efficiency.
Here, because the vehicle-mounted camera cannot capture an image of the center of the region to be marked from a track point inside the circle whose radius is only the minimum shooting distance parameter, the circular search region is constructed with the sum of the minimum shooting distance parameter and the preset distance offset parameter as its radius, which ensures that the circular search region contains track points from which the center of the region to be marked can be captured. The preset distance offset parameter can be a manually set distance parameter, or a distance parameter determined from the camera's minimum and maximum shooting distance parameters and the distance between adjacent track points, for example 5 meters or 10 meters.
Step 3013: filter out the track points located in the circular search region according to the geographical location information of the track points.
In this embodiment, the track points located in the circular search region can be determined according to their geographical location information. Specifically, the boundary of the geographical area covered by the circular region can be determined, and for each track point in the track point set it can be judged whether its geographical location information falls within that boundary; if so, the track point is determined to be inside the circular search region. Alternatively, the distance between each track point and the search center (i.e. the center of the region to be marked) can be calculated from its geographical location information, and if the distance is less than the radius of the circular search region, the track point is determined to be inside the circular search region.
Step 3014: group the track points in the circular search region according to a preset time interval threshold to obtain multiple track point groups.
The road image sequence obtained by the electronic device may consist of multiple road image sequences collected by the vehicle-mounted camera during several passes of the vehicle over the same road section. Assuming the vehicle-mounted camera collects road image frames at the same time interval on each pass, i.e. at a preset interval every time the vehicle drives through the section, the track points located in the circular search region form multiple track point groups: the track points in the same group correspond to one pass of the vehicle through the section inside the circular search region, and different groups correspond to passes through that section at different times.
In this embodiment, the track points in the circular search region can be grouped according to the preset time interval threshold using the timestamp of each track point. Here, the preset time interval threshold can be the time interval at which the vehicle-mounted camera collects two adjacent road image frames, which can be set in advance. In this way, the track points of passes through the circular search region at different times are assigned to different track point groups.
Step 3015: sort the track points in each track point group by timestamp, and determine the track point with the smallest timestamp in each group as the target track point.
In this embodiment, the road image frame corresponding to the track point with the smallest timestamp in each group is the first frame captured after the vehicle entered the circular search region; this frame contains image features of the center of the region to be marked and can serve as a reference image for assisting the labeling of mark objects in the region to be marked. The track points in each track point group can therefore be sorted by timestamp, and the track point with the smallest timestamp determined as the target track point.
Referring to Fig. 4, a schematic diagram of a scenario of searching for target track points in the search region in the flow of extracting the target frame is shown. O is the center of the circular search region, i.e. the center of the region to be marked, and R is its radius. F1, F2 and F3 are three segments of track along which the vehicle collecting the road images passed through the circular search region in three different periods; each segment contains multiple track points, and D is the driving direction of the vehicle when passing through the circular search region. As shown in Fig. 4, track points A, B and C belong to the track point groups corresponding to tracks F1, F2 and F3 respectively. In the track point group corresponding to F1, the timestamp of track point A is the smallest, so A is the first shooting point inside the circular region on track F1; similarly, B and C are the first shooting points inside the circular region on tracks F2 and F3. Track points A, B and C can therefore be determined as target track points.
Returning to Fig. 3, in step 302 the road image corresponding to the timestamp of the target track point is extracted from the road image sequence as the target image frame.
The vehicle-mounted camera usually saves the timestamp of each collected road image frame when saving its road image sequence. In this embodiment, the timestamp of the target track point can be determined, and the road image corresponding to that timestamp can be extracted from the road image sequence corresponding to the target track point as the target image frame. Optionally, when multiple target track points are determined in step 301, multiple target image frames can be extracted.
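Putting steps 3011-3015 and step 302 together, a minimal Python sketch of selecting target track points and their frames could look as follows; it assumes the track points carry planar metre coordinates (e.g. UTM easting/northing) rather than the raw latitude/longitude fields sketched after step 202, and the grouping logic and helper names are assumptions made for the example.

    def extract_target_frames(track_points, frames_by_timestamp,
                              region_center_xy, min_shoot_dist, dist_offset,
                              time_gap_threshold):
        """Steps 3011-3015 and 302: pick the first frame of each pass through the
        circular search region around the region to be marked.

        track_points: objects with .timestamp and planar .x/.y in metres;
        frames_by_timestamp: dict mapping timestamp -> road image frame.
        """
        radius = min_shoot_dist + dist_offset                        # step 3012
        cx, cy = region_center_xy

        inside = [p for p in track_points                            # step 3013
                  if (p.x - cx) ** 2 + (p.y - cy) ** 2 <= radius ** 2]
        inside.sort(key=lambda p: p.timestamp)

        groups, current = [], []                                     # step 3014
        for p in inside:
            if current and p.timestamp - current[-1].timestamp > time_gap_threshold:
                groups.append(current)
                current = []
            current.append(p)
        if current:
            groups.append(current)

        targets = [min(g, key=lambda p: p.timestamp) for g in groups]   # step 3015
        return [frames_by_timestamp[t.timestamp] for t in targets]      # step 302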
The above flow 300 of extracting the target image frame corresponding to the region to be marked builds the search region from the center of the region to be marked and the minimum shooting distance parameter of the vehicle-mounted camera. This ensures that the road images corresponding to the track points in the search region contain information relevant to the region to be marked, while reducing the range over which target track points are searched and the amount of computation, so that images for assisting labeling can be extracted automatically and accurately, which helps further improve labeling efficiency.
Referring to Fig. 5, a flowchart of another embodiment of the method for marking a map according to the present application is shown. The method flow 500 for marking a map includes the following steps:
Step 501: in response to detecting a user operation selecting a region to be marked on a map to be marked, look up the geographical location information corresponding to the region to be marked in a stored geographical location information list corresponding to the map to be marked.
In this embodiment, the map to be marked includes a reflectance-value map formed by mapping laser point cloud data collected by a vehicle-mounted lidar onto a two-dimensional grid in a plane. The map to be marked can be generated from the laser point cloud data obtained by the vehicle-mounted lidar used to collect road data.
The user can select a region to be marked on the map to be marked. The region to be marked can correspond to a section of road or a group of buildings in the actual scene and can be a region containing a mark object. The mark object can be a marker on the road, such as a lane line, a traffic sign or a toll station, or a building label, such as an office building, a hospital or a school.
In this embodiment, the electronic device on which the method for marking a map runs (such as the device 101 in Fig. 1) can detect the user's operation of selecting a region to be marked and, upon detecting it, determine the geographical location information corresponding to the region to be marked by searching the geographical location information list. Here, the geographical location information may be GPS information. The geographical location information of each region of the map to be marked can be pre-stored on the electronic device in the form of a list.
Step 502: obtain the road image sequence collected by the vehicle-mounted camera and the track point set.
The vehicle-mounted camera and the vehicle-mounted lidar jointly collect environmental data while the vehicle is driving. In this embodiment, the vehicle-mounted camera can collect two-dimensional image data of the vehicle's driving environment, and the collected image frames form the road image sequence.
Each track point in the track point set was recorded when the vehicle-mounted camera captured a corresponding frame of the road image sequence. A track point can be the geographical location of the vehicle when the road image was collected.
In some embodiments, a track point may include a timestamp of the collection of the corresponding road image and the geographical location information at the time the corresponding road image was collected.
Step 503: extract, from the road image sequence, the target image frame corresponding to the region to be marked according to the geographical location information of the region to be marked and the track point set corresponding to the road image sequence.
In this embodiment, the electronic device can use the geographical location information of the region to be marked and the track point set to find the track point from which an image of the region to be marked was captured, and take the road image corresponding to that track point as the target image frame. Specifically, the geographical location information of the region to be marked can be matched against the geographical location information of the track points, or a search region can be built around the region to be marked and the track points corresponding to road images containing information relevant to the region to be marked can be found within it.
Step 504: in response to detecting a labeling operation performed by the user on a mark object in the region to be marked with reference to the target image frame, generate a mark file for storing the location parameters of the mark object.
The target image frame may contain information about the mark object, and the user can label the mark object in the region to be marked selected on the map to be marked with reference to the target image frame.
In this embodiment, the electronic device can detect the user's labeling operation and, after detecting it, generate a mark file for storing the location parameters of the mark object.
It should be noted that steps 501, 502, 503 and 504 in this embodiment correspond one-to-one to steps 201, 202, 203 and 204 in the foregoing embodiment, and the descriptions and explanations of steps 201 to 204 above also apply to steps 501 to 504; they are not repeated here.
Step 505: map the mark object onto the target image frame according to the first coordinate information.
In this embodiment, after the user marks the mark object on the map to be marked, the location parameters stored in the mark file generated by the electronic device include the first coordinate information of the mark object in the world coordinate system. The world coordinate system is the UTM (Universal Transverse Mercator) coordinate system, and the position of each feature point in the map to be marked generated from the laser point cloud data obtained by the vehicle-mounted lidar is represented by coordinates in the UTM coordinate system.
In this embodiment, the first coordinate information can be converted to the image coordinate system using the conversion relationship between the UTM coordinate system and the image coordinates of the road images collected by the vehicle-mounted camera, so as to map the mark element onto the target image frame. Specifically, the conversion parameters between the UTM coordinate system and the image coordinate system can be calibrated in advance, and the first coordinate information can be converted to the image coordinate system according to the calibrated conversion parameters.
In some optional implementations, the POS (Position and Orientation System) composed of the vehicle-mounted IMU (Inertial Measurement Unit) and the vehicle GPS provides the pose parameters of the vehicle, and the first calibration parameter of the IMU coordinate system relative to the UTM coordinate system can be measured by the POS system. The first calibration parameter can be the coordinate transformation parameter between the IMU coordinate system and the UTM coordinate system.
The second calibration parameter between the Lidar (Light Detection And Ranging) coordinate system (i.e. the vehicle-mounted lidar coordinate system) and the IMU coordinate system can be calibrated in advance based on the position relationship of the vehicle-mounted lidar relative to the IMU system. The second calibration parameter can be the coordinate transformation parameter between the Lidar coordinate system and the IMU coordinate system.
The third calibration parameter of the vehicle-mounted camera coordinate system relative to the Lidar coordinate system can be calibrated in advance based on the position relationship of the vehicle-mounted camera relative to the vehicle-mounted lidar; it can be the coordinate transformation parameter between the vehicle-mounted camera coordinate system and the Lidar coordinate system.
The fourth calibration parameter of the image coordinate system of the road image relative to the vehicle-mounted camera coordinate system can be obtained from the pre-calibrated intrinsic parameters of the camera; it can be the coordinate transformation parameter between the image coordinate system and the vehicle-mounted camera coordinate system.
The electronic device can obtain the first, second, third and fourth calibration parameters and successively map the first coordinate information to the IMU coordinate system, the Lidar coordinate system, the vehicle-mounted camera coordinate system and the image coordinate system, so as to obtain the coordinates of the mark element in the image coordinate system.
Referring to Fig. 6, a schematic flow of mapping the mark object onto the target image frame according to the present application is shown. First, the first coordinate information in the UTM coordinate system is mapped into the IMU coordinate system according to the POS parameters to obtain the second coordinate information. The second coordinate information in the IMU coordinate system is then mapped into the Lidar coordinate system according to the Lidar extrinsics (the external parameters of the vehicle-mounted lidar) to obtain the third coordinate information, and the third coordinate information is mapped into the camera coordinate system (the vehicle-mounted camera coordinate system) according to the camera extrinsics (the external parameters of the vehicle-mounted camera) to obtain the fourth coordinate information. Finally, the fourth coordinate information is mapped to the image coordinate system according to the camera intrinsics (the internal parameters of the vehicle-mounted camera) to obtain the fifth coordinate information. Optionally, when mapping the fourth coordinate information to the image coordinate system, distortion correction can first be applied to the fourth coordinate information according to the distortion correction parameters of the vehicle-mounted camera, and the corrected fourth coordinate information can then be converted using the camera intrinsics to obtain the fifth coordinate information.
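Before the formula-by-formula derivation below, the chain can be summarized as a composition of homogeneous transforms. The following is a minimal Python sketch under the assumption that each calibration parameter is available as a 4x4 matrix and the intrinsics as a 3x3 matrix; distortion handling is illustrated separately after the derivation.

    import numpy as np

    def world_to_pixel(X_u, T_imu_to_utm, T_lidar_to_imu, T_cam_to_lidar, K):
        """X_u: homogeneous [x, y, z, 1] in the UTM (world) coordinate system.
        Each T_* maps the first-named frame into the second, so its inverse is
        applied at every step, mirroring formulas (1), (3), (4) and (10).
        """
        X_i = np.linalg.inv(T_imu_to_utm) @ X_u      # second coordinate information (IMU)
        X_L = np.linalg.inv(T_lidar_to_imu) @ X_i    # third coordinate information (lidar)
        X_C = np.linalg.inv(T_cam_to_lidar) @ X_L    # fourth coordinate information (camera)
        X_P = K @ X_C[:3]                            # project with the intrinsic matrix K
        return X_P[:2] / X_P[2]                      # fifth coordinate information (pixels)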
Specifically, the first coordinate information can be mapped to the image coordinate system as follows.
Assume X_u is the homogeneous coordinate, in the UTM coordinate system, of a point P on the mark element. The first calibration parameter includes the rotation matrix R determined by the attitude angles (yaw, pitch and roll) of the IMU system and the coordinate G, in the UTM coordinate system, of the point's GPS information. The coordinate X_i of point P in the IMU coordinate system is then:
X_i = T_i^{-1} X_u (1)
where
T_i = [ R  G ; 0  1 ] (2)
The second calibration parameter includes the extrinsic matrix T_L of the vehicle-mounted lidar system relative to the IMU, which can be determined by the lever-arm values and mounting angles of the lidar relative to the IMU. The coordinate X_L of point P in the Lidar coordinate system is:
X_L = T_L^{-1} X_i (3)
The third calibration parameter includes the transformation matrix T_C determined by the external parameters of the vehicle-mounted camera relative to the vehicle-mounted lidar, and the coordinate X_C of point P in the vehicle-mounted camera coordinate system is:
X_C = T_C^{-1} X_L (4)
Optionally, before the coordinates of point P are mapped to the image coordinate system, the coordinate X_C of P in the vehicle-mounted camera coordinate system can be corrected according to the distortion parameters of the camera. Assume X_C = (x_c, y_c, z_c); its normalized projection coordinate X_n = (x, y) is:
X_n = (x_c / z_c, y_c / z_c) (5)
Let r^2 = x^2 + y^2. The corrected normalized projection coordinate X_n_correct is:
X_n_correct = X_n + Dr + Dt (6)
where Dr is the radial correction of the projection coordinate and Dt is the tangential correction, which can be computed according to formulas (7) and (8) respectively:
Dr = (k1 r^2 + k2 r^4 + k3 r^6) X_n (7)
Dt = [ 2 p1 x y + p2 (r^2 + 2 x^2) ; p1 (r^2 + 2 y^2) + 2 p2 x y ] (8)
where D = [k1 k2 p1 p2 k3] are the distortion parameters of the vehicle-mounted camera. The corrected coordinate X_C' of point P in the vehicle-mounted camera coordinate system, in homogeneous form, is:
X_C' = [ X_n_correct ; 1 ] (9)
The fourth calibration parameter may include the intrinsic parameters f, u0, v0, dx, dy, θ of the vehicle-mounted camera, where f is the focal length of the camera, (u0, v0) is the coordinate of the principal point in the image coordinate system, i.e. the horizontal and vertical pixel offsets between the image center pixel and the image origin pixel, (dx, dy) is the physical size of a pixel, i.e. the length of one pixel along the two coordinate axes x and y, and θ is the angle between the two axes u and v of the image coordinate system. The coordinate X_P of point P in the image coordinate system is:
X_P = K X_C' (10)
where
K = [ f/dx  -f/(dx·tanθ)  u0 ; 0  f/(dy·sinθ)  v0 ; 0  0  1 ] (11)
The coordinate X_P above is homogeneous; it is converted into the two-dimensional coordinate X_P' of point P in the image coordinate system according to formula (12):
X_P' = ( X_P(1)/X_P(3), X_P(2)/X_P(3) ) (12)
The two-dimensional coordinate X_P' is the fifth coordinate information of point P in the image coordinate system, which can then be used to render the mark object in the target image frame. Specifically, the mark element can be rendered at the corresponding position in the target image frame using a set color or shape.
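As a worked illustration of formulas (5) to (12), the following minimal Python sketch applies the distortion correction and the intrinsic projection to a point already expressed in the camera coordinate system; the argument layout follows the conventional pinhole-with-distortion model and is an assumption made for the example.

    import math

    def camera_point_to_pixel(xc, yc, zc, dist, intr):
        """dist = (k1, k2, p1, p2, k3); intr = (f, u0, v0, dx, dy, theta)."""
        k1, k2, p1, p2, k3 = dist
        f, u0, v0, dx, dy, theta = intr
        x, y = xc / zc, yc / zc                               # formula (5)
        r2 = x * x + y * y
        radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3        # formula (7)
        dt_x = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)         # formula (8)
        dt_y = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        xn = x + radial * x + dt_x                            # formula (6)
        yn = y + radial * y + dt_y
        u = (f / dx) * xn - (f / (dx * math.tan(theta))) * yn + u0   # formulas (10)-(12)
        v = (f / (dy * math.sin(theta))) * yn + v0
        return u, v                                           # fifth coordinate information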
The user can judge the accuracy of the labeling result from the target image frame in which the mark object has been rendered. Mapping the mark element onto the target image frame therefore enables verification of the labeling result, helping the user remove inaccurate labeling results and keep accurate ones, and thus improves the accuracy of map labeling.
With further reference to Fig. 7, as an implementation of the methods shown in the above figures, the present application provides one embodiment of an apparatus for marking a map. The apparatus embodiment corresponds to the method embodiments shown in Fig. 2 and Fig. 5, and the apparatus can be applied to various electronic devices.
As shown in Fig. 7, the apparatus for marking a map of this embodiment includes a searching unit 701, an acquiring unit 702, an extraction unit 703 and a generation unit 704. The searching unit 701 is configured to, in response to detecting a user operation selecting a region to be marked on a map to be marked, look up the geographical location information corresponding to the region to be marked in a stored geographical location information list corresponding to the map to be marked, where the map to be marked includes a reflectance-value map formed by mapping laser point cloud data collected by a vehicle-mounted lidar onto a two-dimensional grid in a plane. The acquiring unit 702 is configured to obtain a road image sequence collected by a vehicle-mounted camera and a track point set, where each track point in the track point set was recorded when the vehicle-mounted camera captured a corresponding frame of the road image sequence. The extraction unit 703 is configured to extract, from the road image sequence, the target image frame corresponding to the region to be marked according to the geographical location information of the region to be marked and the track point set corresponding to the road image sequence. The generation unit 704 is configured to, in response to detecting a labeling operation performed by the user on a mark object in the region to be marked with reference to the target image frame, generate a mark file for storing the location parameters of the mark object.
In the present embodiment, map to be marked can specifically generate in the following way:Utilize laser point cloud data institute Three-dimensional system of coordinate in two reference axis parallel with ground build plane, and the plane of structure is divided into multiple two dimensions The size of grid, each two-dimensional grid is identical, later by the data spot projection to constructed plane in laser point cloud, i.e. shape At above-mentioned reflected value map.Mobile lidar can record corresponding geographical location information when acquiring laser point cloud, be formed Geographical location information list.After geographical location information list can be mapped to above-mentioned reflected value map by searching unit 701, search Go out the corresponding geographical location information in region to be marked that user selectes.
The acquiring unit 702 may obtain the road image sequence and the track point set through a communication link established with the vehicle control unit, or obtain them directly from the storage unit of the vehicle-mounted camera. Alternatively, the road image sequence and track point set may be stored in advance on a server, in which case the acquiring unit 702 sends a request to the server and receives the road image sequence and track point set returned by the server.
The extraction unit 703 can determine target track points according to the track point set and the geographic location information of the region to be marked, and extract the road images corresponding to the target track points as target image frames. When determining target track points, it can search for track points near the region to be marked and take the track points whose distance from the region to be marked falls within a preset distance range as the target track points.
Optionally, each track point includes the timestamp at which the corresponding road image was acquired and the geographic location information at which the corresponding road image was acquired. The extraction unit 703 can further be configured to extract the target image frame corresponding to the region to be marked as follows: match the geographic location information of the region to be marked against the geographic location information of each track point in the track point set to determine a target track point; then extract the road image corresponding to the timestamp of the target track point from the road image sequence as the target image frame.
Further, the extraction unit 703 can be configured to determine the target track points as follows: obtain the minimum shooting distance parameter of the vehicle-mounted camera; set the center point of the region to be marked as the search center and build a circular search region whose radius is the sum of the minimum shooting distance parameter and a preset distance offset parameter; filter out the track points inside the circular search region according to the geographic location information of the track points; group the track points inside the circular search region according to a preset time interval threshold to obtain multiple track point groups; sort the track points in each group by timestamp and determine the track point with the smallest timestamp in each group as a target track point.
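The selection procedure described above can be sketched as follows; the data layout (track points as dictionaries with planar x/y coordinates and a timestamp in seconds) is an assumption for illustration only:

```python
# Illustrative sketch of the target track point selection described above.
import math

def select_target_track_points(track_points, region_center, min_shoot_dist,
                               dist_offset, time_gap_threshold):
    """Return one target track point per visit of the circular search region."""
    radius = min_shoot_dist + dist_offset
    # 1. keep only the track points that fall inside the circular search region
    inside = [p for p in track_points
              if math.hypot(p['x'] - region_center[0],
                            p['y'] - region_center[1]) <= radius]
    inside.sort(key=lambda p: p['timestamp'])
    # 2. split into groups wherever the time gap exceeds the preset threshold
    groups, current = [], []
    for p in inside:
        if current and p['timestamp'] - current[-1]['timestamp'] > time_gap_threshold:
            groups.append(current)
            current = []
        current.append(p)
    if current:
        groups.append(current)
    # 3. the track point with the smallest timestamp in each group is a target track point
    return [group[0] for group in groups]
```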
The generation unit 704 can generate a mark file containing the location parameters of the marked element according to the user's operation of marking the element on the map to be marked. In this way, mark objects such as lane lines and traffic signs can be marked on the reflected value map to generate a high-precision map.
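A minimal sketch of writing such a mark file, assuming a JSON layout and field names that the patent does not prescribe:

```python
# Illustrative sketch (file format and field names are assumptions):
# persist the location parameters of each marked object, e.g. lane lines
# or traffic signs, to a mark file.
import json

def generate_mark_file(marked_objects, path):
    """marked_objects: e.g. [{'type': 'lane_line', 'world_coords': [[x, y, z], ...]}]"""
    with open(path, 'w', encoding='utf-8') as f:
        json.dump({'marks': marked_objects}, f, ensure_ascii=False, indent=2)
```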
In some embodiments, the above device 700 for labeling a map can also include a mapping unit (not shown), configured to map the mark object to the target image frame according to the first coordinate information. Further, the mapping unit is configured to map the mark object to the target image frame as follows: obtain the first calibration parameters of the world coordinate system relative to the vehicle-mounted inertial measurement unit coordinate system, the second calibration parameters of the vehicle-mounted inertial measurement unit coordinate system relative to the vehicle-mounted lidar coordinate system, the third calibration parameters of the vehicle-mounted lidar coordinate system relative to the vehicle-mounted camera coordinate system, and the fourth calibration parameters of the vehicle-mounted camera relative to the image coordinate system of the road image; map the first coordinate information to the vehicle-mounted inertial measurement unit coordinate system according to the first calibration parameters to obtain the second coordinate information; map the second coordinate information to the vehicle-mounted lidar coordinate system according to the second calibration parameters to obtain the third coordinate information; map the third coordinate information to the vehicle-mounted camera coordinate system according to the third calibration parameters to obtain the fourth coordinate information; map the fourth coordinate information to the image coordinate system according to the fourth calibration parameters to obtain the fifth coordinate information; and render the mark object in the target image frame using the fifth coordinate information.
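The transformation chain can be sketched as follows, under the assumption that the first three calibration parameters are 4x4 homogeneous transforms and the fourth is a 3x3 camera intrinsic matrix (the patent does not fix these representations):

```python
# Illustrative sketch of the world -> IMU -> lidar -> camera -> image chain.
# Assumptions: T_* are 4x4 homogeneous transforms, K_cam is a 3x3 intrinsic matrix.
import numpy as np

def project_mark_to_image(p_world, T_world_to_imu, T_imu_to_lidar,
                          T_lidar_to_cam, K_cam):
    """Chain the four calibrations to obtain the fifth (image) coordinate."""
    p1 = np.append(np.asarray(p_world, dtype=float), 1.0)  # first coordinate, homogeneous
    p2 = T_world_to_imu @ p1       # second coordinate: IMU frame
    p3 = T_imu_to_lidar @ p2       # third coordinate: lidar frame
    p4 = T_lidar_to_cam @ p3       # fourth coordinate: camera frame
    p5 = K_cam @ p4[:3]            # fifth coordinate: homogeneous image coordinate
    return p5[:2] / p5[2]          # normalize to a 2D pixel coordinate

# Purely illustrative usage with identity extrinsics and an assumed intrinsic matrix:
# uv = project_mark_to_image([1.0, 2.0, 10.0], np.eye(4), np.eye(4), np.eye(4),
#                            np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]]))
```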
It should be appreciated that the units described in the device 700 correspond to the steps of the methods described with reference to Fig. 2 and Fig. 5. Therefore, the operations and features described above for the methods are equally applicable to the device 700 and the units contained therein, and are not repeated here.
The device 700 for labeling a map provided by the embodiments of the present application uses the geographic location information of the region to be marked and the track point set recorded when the vehicle acquires the road images and laser point cloud data, so that the road images of the region to be marked can be extracted quickly and accurately to assist the user in labeling the mark elements. This saves the time the user would otherwise spend searching for images of the region to be marked, improves annotation efficiency, and can also improve annotation accuracy.
Referring now to Fig. 8, it shows a schematic structural diagram of a computer system 800 suitable for implementing the equipment of the embodiments of the present application. The equipment shown in Fig. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, the ROM 802 and the RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse and the like; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD) and the like, as well as a speaker and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A driver 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 810 as needed, so that a computer program read therefrom can be installed into the storage section 808 as needed.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the above functions defined in the method of the present application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in connection with an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware. The described units may also be arranged in a processor, which may for example be described as: a processor including a searching unit, an acquiring unit, an extraction unit and a generation unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the searching unit may also be described as "a unit that, in response to detecting a user's operation of selecting a region to be marked on the map to be marked, finds the geographic location information corresponding to the region to be marked in the stored geographic location information list corresponding to the map to be marked".
As another aspect, the present application also provides a computer-readable medium, which may be included in the device described in the above embodiments, or may exist separately without being incorporated into the device. The above computer-readable medium carries one or more programs which, when executed by the device, cause the device to: in response to detecting a user's operation of selecting a region to be marked on a map to be marked, find the geographic location information corresponding to the region to be marked in the stored geographic location information list corresponding to the map to be marked, where the map to be marked includes a reflected value map formed by mapping the laser point cloud data acquired by a vehicle-mounted lidar onto a two-dimensional grid in a plane; obtain a road image sequence and a track point set acquired by a vehicle-mounted camera, where each track point in the track point set is recorded when the vehicle-mounted camera acquires each frame of road image in the road image sequence; extract the target image frame corresponding to the region to be marked from the road image sequence according to the geographic location information of the region to be marked and the track point set corresponding to the road image sequence; and, in response to detecting a labeling operation performed by the user on a mark object in the region to be marked with reference to the target image frame, generate a mark file for storing the location parameters of the mark object.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (12)

1. A method for labeling a map, characterized in that the method comprises:
in response to detecting a user's operation of selecting a region to be marked on a map to be marked, finding the geographic location information corresponding to the region to be marked in a stored geographic location information list corresponding to the map to be marked, wherein the map to be marked comprises a reflected value map formed by mapping laser point cloud data acquired by a vehicle-mounted lidar onto a two-dimensional grid in a plane;
obtaining a road image sequence and a track point set acquired by a vehicle-mounted camera, wherein each track point in the track point set is recorded when the vehicle-mounted camera acquires each frame of road image in the road image sequence;
extracting, from the road image sequence, a target image frame corresponding to the region to be marked according to the geographic location information of the region to be marked and the track point set corresponding to the road image sequence;
in response to detecting a labeling operation performed by the user on a mark object in the region to be marked with reference to the target image frame, generating a mark file for storing location parameters of the mark object.
2. The method according to claim 1, characterized in that each track point comprises a timestamp at which the corresponding road image is acquired and geographic location information at which the corresponding road image is acquired;
the extracting, from the road image sequence, a target image frame corresponding to the region to be marked according to the geographic location information of the region to be marked and the track point set corresponding to the road image sequence comprises:
matching the geographic location information of the region to be marked against the geographic location information of each track point in the track point set to determine a target track point;
extracting, from the road image sequence, the road image corresponding to the timestamp of the target track point as the target image frame.
3. The method according to claim 2, characterized in that the matching the geographic location information of the region to be marked against the geographic location information of each track point in the track point set to determine a target track point comprises:
obtaining a minimum shooting distance parameter of the vehicle-mounted camera;
setting the center point of the region to be marked as a search center, and constructing a circular search region whose radius is the sum of the minimum shooting distance parameter and a preset distance offset parameter;
filtering out the track points located in the circular search region according to the geographic location information of the track points;
grouping the track points in the circular search region according to a preset time interval threshold to obtain multiple track point groups;
sorting the track points in each track point group by timestamp, and determining the track point with the smallest timestamp in each track point group as the target track point.
4. The method according to any one of claims 1-3, characterized in that the location parameters of the mark object comprise first coordinate information of the mark object in a world coordinate system;
the method further comprises:
mapping the mark object to the target image frame according to the first coordinate information.
5. The method according to claim 4, characterized in that the mapping the mark object to the target image frame according to the first coordinate information comprises:
obtaining first calibration parameters of the world coordinate system relative to a vehicle-mounted inertial measurement unit coordinate system, second calibration parameters of the vehicle-mounted inertial measurement unit coordinate system relative to a vehicle-mounted lidar coordinate system, third calibration parameters of the vehicle-mounted lidar coordinate system relative to a vehicle-mounted camera coordinate system, and fourth calibration parameters of the vehicle-mounted camera relative to an image coordinate system of the road image;
mapping the first coordinate information to the vehicle-mounted inertial measurement unit coordinate system according to the first calibration parameters to obtain second coordinate information;
mapping the second coordinate information to the vehicle-mounted lidar coordinate system according to the second calibration parameters to obtain third coordinate information;
mapping the third coordinate information to the vehicle-mounted camera coordinate system according to the third calibration parameters to obtain fourth coordinate information;
mapping the fourth coordinate information to the image coordinate system according to the fourth calibration parameters to obtain fifth coordinate information;
rendering the mark object in the target image frame using the fifth coordinate information.
6. A device for labeling a map, characterized in that the device comprises:
a searching unit, configured to, in response to detecting a user's operation of selecting a region to be marked on a map to be marked, find the geographic location information corresponding to the region to be marked in a stored geographic location information list corresponding to the map to be marked, wherein the map to be marked comprises a reflected value map formed by mapping laser point cloud data acquired by a vehicle-mounted lidar onto a two-dimensional grid in a plane;
an acquiring unit, configured to obtain a road image sequence and a track point set acquired by a vehicle-mounted camera, wherein each track point in the track point set is recorded when the vehicle-mounted camera acquires each frame of road image in the road image sequence;
an extraction unit, configured to extract, from the road image sequence, a target image frame corresponding to the region to be marked according to the geographic location information of the region to be marked and the track point set corresponding to the road image sequence;
a generation unit, configured to, in response to detecting a labeling operation performed by the user on a mark object in the region to be marked with reference to the target image frame, generate a mark file for storing location parameters of the mark object.
7. The device according to claim 6, characterized in that each track point comprises a timestamp at which the corresponding road image is acquired and geographic location information at which the corresponding road image is acquired;
the extraction unit is further configured to extract the target image frame corresponding to the region to be marked as follows:
matching the geographic location information of the region to be marked against the geographic location information of each track point in the track point set to determine a target track point;
extracting, from the road image sequence, the road image corresponding to the timestamp of the target track point as the target image frame.
8. The device according to claim 7, characterized in that the extraction unit is further configured to determine the target track point as follows:
obtaining a minimum shooting distance parameter of the vehicle-mounted camera;
setting the center point of the region to be marked as a search center, and constructing a circular search region whose radius is the sum of the minimum shooting distance parameter and a preset distance offset parameter;
filtering out the track points located in the circular search region according to the geographic location information of the track points;
grouping the track points in the circular search region according to a preset time interval threshold to obtain multiple track point groups;
sorting the track points in each track point group by timestamp, and determining the track point with the smallest timestamp in each track point group as the target track point.
9. The device according to any one of claims 6-8, characterized in that the location parameters of the mark object comprise first coordinate information of the mark object in a world coordinate system;
the device further comprises:
a mapping unit, configured to map the mark object to the target image frame according to the first coordinate information.
10. The device according to claim 9, characterized in that the mapping unit is further configured to map the mark object to the target image frame as follows:
obtaining first calibration parameters of the world coordinate system relative to a vehicle-mounted inertial measurement unit coordinate system, second calibration parameters of the vehicle-mounted inertial measurement unit coordinate system relative to a vehicle-mounted lidar coordinate system, third calibration parameters of the vehicle-mounted lidar coordinate system relative to a vehicle-mounted camera coordinate system, and fourth calibration parameters of the vehicle-mounted camera relative to an image coordinate system of the road image;
mapping the first coordinate information to the vehicle-mounted inertial measurement unit coordinate system according to the first calibration parameters to obtain second coordinate information;
mapping the second coordinate information to the vehicle-mounted lidar coordinate system according to the second calibration parameters to obtain third coordinate information;
mapping the third coordinate information to the vehicle-mounted camera coordinate system according to the third calibration parameters to obtain fourth coordinate information;
mapping the fourth coordinate information to the image coordinate system according to the fourth calibration parameters to obtain fifth coordinate information;
rendering the mark object in the target image frame using the fifth coordinate information.
11. Equipment, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-5.
12. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the method according to any one of claims 1-5 is implemented.
CN201710231536.8A 2017-04-11 2017-04-11 Method, device and equipment for labeling map Active CN108694882B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710231536.8A CN108694882B (en) 2017-04-11 2017-04-11 Method, device and equipment for labeling map

Publications (2)

Publication Number Publication Date
CN108694882A true CN108694882A (en) 2018-10-23
CN108694882B CN108694882B (en) 2020-09-22

Family

ID=63843379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710231536.8A Active CN108694882B (en) 2017-04-11 2017-04-11 Method, device and equipment for labeling map

Country Status (1)

Country Link
CN (1) CN108694882B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103148861A (en) * 2011-12-07 2013-06-12 现代自动车株式会社 Road guidance display method and system using geotagging image
CN102967315A (en) * 2012-11-09 2013-03-13 广东欧珀移动通信有限公司 Navigation map improvement method and device
CN103645480A (en) * 2013-12-04 2014-03-19 北京理工大学 Geographic and geomorphic characteristic construction method based on laser radar and image data fusion
CN106052697A (en) * 2016-05-24 2016-10-26 百度在线网络技术(北京)有限公司 Driverless vehicle, driverless vehicle positioning method, driverless vehicle positioning device and driverless vehicle positioning system
CN106097444A (en) * 2016-05-30 2016-11-09 百度在线网络技术(北京)有限公司 High-precision map generates method and apparatus

Cited By (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109470254B (en) * 2018-10-31 2020-09-08 百度在线网络技术(北京)有限公司 Map lane line generation method, device, system and storage medium
CN109658504A (en) * 2018-10-31 2019-04-19 百度在线网络技术(北京)有限公司 Map datum mask method, device, equipment and storage medium
CN109470254A (en) * 2018-10-31 2019-03-15 百度在线网络技术(北京)有限公司 Generation method, device, system and the storage medium of map lane line
CN109658504B (en) * 2018-10-31 2021-04-20 百度在线网络技术(北京)有限公司 Map data annotation method, device, equipment and storage medium
CN111260549A (en) * 2018-11-30 2020-06-09 北京嘀嘀无限科技发展有限公司 Road map construction method and device and electronic equipment
CN111275766A (en) * 2018-12-05 2020-06-12 杭州海康威视数字技术股份有限公司 Calibration method and device for image coordinate system and GPS coordinate system and camera
CN111275766B (en) * 2018-12-05 2023-09-05 杭州海康威视数字技术股份有限公司 Calibration method and device for image coordinate system and GPS coordinate system and camera
CN109579857B (en) * 2018-12-06 2022-09-20 驭势(上海)汽车科技有限公司 Method and equipment for updating map
CN109579857A (en) * 2018-12-06 2019-04-05 驭势(上海)汽车科技有限公司 It is a kind of for updating the method and apparatus of map
CN109766793A (en) * 2018-12-25 2019-05-17 百度在线网络技术(北京)有限公司 Data processing method and device
CN109740487A (en) * 2018-12-27 2019-05-10 广州文远知行科技有限公司 Point cloud mask method, device, computer equipment and storage medium
CN111380543A (en) * 2018-12-29 2020-07-07 沈阳美行科技有限公司 Map data generation method and device
CN109919007B (en) * 2019-01-23 2023-04-18 绵阳慧视光电技术有限责任公司 Method for generating infrared image annotation information
CN109919007A (en) * 2019-01-23 2019-06-21 绵阳慧视光电技术有限责任公司 A method of generating infrared image markup information
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
CN109978955B (en) * 2019-03-11 2021-03-19 武汉环宇智行科技有限公司 Efficient marking method combining laser point cloud and image
CN109934873A (en) * 2019-03-15 2019-06-25 百度在线网络技术(北京)有限公司 Mark image acquiring method, device and equipment
CN110136227A (en) * 2019-04-26 2019-08-16 杭州飞步科技有限公司 Mask method, device, equipment and the storage medium of high-precision map
CN113657224B (en) * 2019-04-29 2023-08-18 北京百度网讯科技有限公司 Method, device and equipment for determining object state in vehicle-road coordination
CN113657224A (en) * 2019-04-29 2021-11-16 北京百度网讯科技有限公司 Method, device and equipment for determining object state in vehicle-road cooperation
CN110211228A (en) * 2019-04-30 2019-09-06 北京云迹科技有限公司 For building the data processing method and device of figure
CN110110678A (en) * 2019-05-13 2019-08-09 腾讯科技(深圳)有限公司 Determination method and apparatus, storage medium and the electronic device of road boundary
CN110210328A (en) * 2019-05-13 2019-09-06 北京三快在线科技有限公司 The method, apparatus and electronic equipment of object are marked in image sequence
WO2020228296A1 (en) * 2019-05-13 2020-11-19 北京三快在线科技有限公司 Annotate object in image sequence
CN110148185A (en) * 2019-05-22 2019-08-20 北京百度网讯科技有限公司 Determine method, apparatus, electronic equipment and the storage medium of coordinate system conversion parameter
CN110148185B (en) * 2019-05-22 2022-04-15 北京百度网讯科技有限公司 Method and device for determining coordinate system conversion parameters of imaging equipment and electronic equipment
CN112564930B (en) * 2019-09-10 2022-10-18 中国移动通信集团浙江有限公司 Access side optical cable route planning method and system based on map
CN112564930A (en) * 2019-09-10 2021-03-26 中国移动通信集团浙江有限公司 Access side optical cable route planning method and system based on map
US11403766B2 (en) 2019-09-19 2022-08-02 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and device for labeling point of interest
CN110647886A (en) * 2019-09-19 2020-01-03 北京百度网讯科技有限公司 Interest point marking method and device, computer equipment and storage medium
CN110781796B (en) * 2019-10-22 2022-03-25 杭州宇泛智能科技有限公司 Labeling method and device and electronic equipment
CN110781796A (en) * 2019-10-22 2020-02-11 杭州宇泛智能科技有限公司 Labeling method and device and electronic equipment
CN111045767A (en) * 2019-11-26 2020-04-21 北京金和网络股份有限公司 Method and system for displaying map label on map based on flexible data configuration
CN111045767B (en) * 2019-11-26 2024-01-26 北京金和网络股份有限公司 Method and system for displaying map labels on map based on flexible data configuration
CN111159459A (en) * 2019-12-04 2020-05-15 恒大新能源汽车科技(广东)有限公司 Landmark positioning method, device, computer equipment and storage medium
CN111159459B (en) * 2019-12-04 2023-08-11 恒大恒驰新能源汽车科技(广东)有限公司 Landmark positioning method, landmark positioning device, computer equipment and storage medium
CN113010475A (en) * 2019-12-20 2021-06-22 百度在线网络技术(北京)有限公司 Method and apparatus for storing trajectory data
CN113127584A (en) * 2019-12-31 2021-07-16 深圳云天励飞技术有限公司 Map annotation method, device, electronic equipment and storage medium
CN111259021A (en) * 2020-01-13 2020-06-09 速度时空信息科技股份有限公司 Quick collection and updating method and system for geographical map point information
CN111274296B (en) * 2020-01-17 2024-03-01 北京有竹居网络技术有限公司 Image data acquisition method and device, terminal and storage medium
CN111274296A (en) * 2020-01-17 2020-06-12 北京无限光场科技有限公司 Method and device for acquiring image data, terminal and storage medium
CN111340890A (en) * 2020-02-20 2020-06-26 北京百度网讯科技有限公司 Camera external reference calibration method, device, equipment and readable storage medium
CN113283269A (en) * 2020-02-20 2021-08-20 上海博泰悦臻电子设备制造有限公司 Method, electronic device, and computer storage medium for identifying a map
CN111340890B (en) * 2020-02-20 2023-08-04 阿波罗智联(北京)科技有限公司 Camera external parameter calibration method, device, equipment and readable storage medium
CN113378606A (en) * 2020-03-10 2021-09-10 杭州海康威视数字技术股份有限公司 Method, device and system for determining labeling information
CN111626206A (en) * 2020-05-27 2020-09-04 北京百度网讯科技有限公司 High-precision map construction method and device, electronic equipment and computer storage medium
CN111784798A (en) * 2020-06-30 2020-10-16 滴图(北京)科技有限公司 Map generation method and device, electronic equipment and storage medium
CN111964686A (en) * 2020-07-20 2020-11-20 汉海信息技术(上海)有限公司 Road data acquisition method, device, server, automobile data recorder and medium
CN111964686B (en) * 2020-07-20 2022-07-12 汉海信息技术(上海)有限公司 Road data acquisition method, device, server, automobile data recorder and medium
CN112037120A (en) * 2020-07-31 2020-12-04 上海图森未来人工智能科技有限公司 Method and device for labeling road plane elements in 3D point cloud data and storage medium
CN111951330A (en) * 2020-08-27 2020-11-17 北京小马慧行科技有限公司 Label updating method and device, storage medium, processor and vehicle
CN112432650A (en) * 2020-11-19 2021-03-02 北京四维图新科技股份有限公司 Acquisition method of high-precision map data, vehicle control method and device
CN114531580A (en) * 2020-11-23 2022-05-24 北京四维图新科技股份有限公司 Image processing method and device
CN114531580B (en) * 2020-11-23 2023-11-21 北京四维图新科技股份有限公司 Image processing method and device
CN112948517A (en) * 2021-02-26 2021-06-11 北京百度网讯科技有限公司 Area position calibration method and device and electronic equipment
CN112948517B (en) * 2021-02-26 2023-06-23 北京百度网讯科技有限公司 Regional position calibration method and device and electronic equipment
CN113011323B (en) * 2021-03-18 2022-11-29 阿波罗智联(北京)科技有限公司 Method for acquiring traffic state, related device, road side equipment and cloud control platform
CN113011323A (en) * 2021-03-18 2021-06-22 北京百度网讯科技有限公司 Method for acquiring traffic state, related device, road side equipment and cloud control platform
CN112950726A (en) * 2021-03-25 2021-06-11 深圳市商汤科技有限公司 Camera orientation calibration method and related product
CN112950726B (en) * 2021-03-25 2022-11-11 深圳市商汤科技有限公司 Camera orientation calibration method and related product
CN113093128A (en) * 2021-04-09 2021-07-09 阿波罗智联(北京)科技有限公司 Method and device for calibrating millimeter wave radar, electronic equipment and road side equipment
CN113591580B (en) * 2021-06-30 2022-10-14 北京百度网讯科技有限公司 Image annotation method and device, electronic equipment and storage medium
CN113591580A (en) * 2021-06-30 2021-11-02 北京百度网讯科技有限公司 Image annotation method and device, electronic equipment and storage medium
CN113742439A (en) * 2021-08-27 2021-12-03 深圳Tcl新技术有限公司 Space labeling method and device, electronic equipment and storage medium
CN113806461A (en) * 2021-09-03 2021-12-17 深圳特为科创信息技术有限公司 Processing method and processing system of labeled data
CN114526723A (en) * 2022-02-14 2022-05-24 广州小鹏自动驾驶科技有限公司 Road map construction method and device, electronic equipment and storage medium
CN116048698B (en) * 2022-12-30 2023-06-30 以萨技术股份有限公司 Method for displaying popup window in map, electronic equipment and storage medium
CN116385528A (en) * 2023-03-28 2023-07-04 小米汽车科技有限公司 Method and device for generating annotation information, electronic equipment, vehicle and storage medium
CN116385528B (en) * 2023-03-28 2024-04-30 小米汽车科技有限公司 Method and device for generating annotation information, electronic equipment, vehicle and storage medium
CN117253232A (en) * 2023-11-17 2023-12-19 北京理工大学前沿技术研究院 Automatic annotation generation method, memory and storage medium for high-precision map
CN117253232B (en) * 2023-11-17 2024-02-09 北京理工大学前沿技术研究院 Automatic annotation generation method, memory and storage medium for high-precision map

Also Published As

Publication number Publication date
CN108694882B (en) 2020-09-22

Similar Documents

Publication Publication Date Title
CN108694882A (en) Method, apparatus and equipment for marking map
EP3505869B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
CN110174093B (en) Positioning method, device, equipment and computer readable storage medium
CN109324337B (en) Unmanned aerial vehicle route generation and positioning method and device and unmanned aerial vehicle
CN105448184B (en) The method for drafting and device of map road
JP2018163654A (en) System and method for telecom inventory management
US11501104B2 (en) Method, apparatus, and system for providing image labeling for cross view alignment
CN110426051A (en) A kind of lane line method for drafting, device and storage medium
US20220277478A1 (en) Positioning Method and Apparatus
CN110427917A (en) Method and apparatus for detecting key point
CN109446973B (en) Vehicle positioning method based on deep neural network image recognition
CN111540048A (en) Refined real scene three-dimensional modeling method based on air-ground fusion
CN109285188A (en) Method and apparatus for generating the location information of target object
CN105512646A (en) Data processing method, data processing device and terminal
KR102308456B1 (en) Tree species detection system based on LiDAR and RGB camera and Detection method of the same
CN103703758A (en) Mobile augmented reality system
CN111339876B (en) Method and device for identifying types of areas in scene
US20210003683A1 (en) Interactive sensor calibration for autonomous vehicles
CN111830953A (en) Vehicle self-positioning method, device and system
CN109931950B (en) Live-action navigation method, system and terminal equipment
CN111353453B (en) Obstacle detection method and device for vehicle
CN109872392A (en) Man-machine interaction method and device based on high-precision map
CN110110029A (en) Method and apparatus for matching lane
CN109508579A (en) For obtaining the method and device of virtual point cloud data
CN112446915B (en) Picture construction method and device based on image group

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant