CN111192341A - Method and device for generating high-precision map, automatic driving equipment and storage medium


Info

Publication number
CN111192341A
CN111192341A (application CN201911421660.6A)
Authority
CN
China
Prior art keywords
driving
determining
area
precision map
automatic driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911421660.6A
Other languages
Chinese (zh)
Inventor
程保山 (Cheng Baoshan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN201911421660.6A
Publication of CN111192341A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 Drawing of charts or graphs
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D 1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Radar, Positioning & Navigation
  • Theoretical Computer Science
  • Aviation & Aerospace Engineering
  • Remote Sensing
  • Automation & Control Theory
  • Traffic Control Systems

Abstract

The application discloses a method and a device for generating a high-precision map, an automatic driving device, and a storage medium, belonging to the technical field of artificial intelligence. The method comprises the following steps: acquiring an electronic navigation map of a target area and determining a driving path of the automatic driving device according to the electronic navigation map; acquiring an environment image at the current position of the automatic driving device on the driving path and determining a driving area around the current position according to the environment image; determining driving control information for the current position based on the driving area and controlling the automatic driving device to collect high-precision map data according to the driving control information; and generating a high-precision map of the target area according to the high-precision map data. Because the driving control information that controls the automatic driving device is determined from the electronic navigation map and the environment image, high-precision map data can be collected by the automatic driving device without a human driver. This saves labor and hardware costs and improves the efficiency of collecting high-precision map data, so that the high-precision map is generated more efficiently.

Description

Method and device for generating high-precision map, automatic driving equipment and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a method and a device for generating a high-precision map, automatic driving equipment and a storage medium.
Background
With the development of artificial intelligence technology, more and more artificial intelligence devices are applied in people's lives, and the automatic driving device is one of them. In practice, automatic driving devices often rely on high-precision maps to realize the driving process. Compared with the electronic navigation map used by a driver in manual driving, a high-precision map describes roads with more accurate and richer information, so that the automatic driving device can understand road conditions and realize driving by planning actions based on those conditions and executing the planned actions. For example, an electronic navigation map treats a road as a whole, whereas a high-precision map divides the road into a plurality of different lanes, each lane further corresponding to attribute information such as width and curvature. Therefore, how to generate a high-precision map is the key to ensuring that the automatic driving device can realize the driving process.
In the related art, map data is collected for an area where a high-precision map is to be generated by manually driving a collection vehicle, and the high-precision map is then generated from the collected map data. However, manual collection of map data not only incurs a large labor cost but also has low collection efficiency, which reduces the efficiency of generating high-precision maps.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for generating a high-precision map, and aims to solve the problems of high labor cost and low efficiency of generating the high-precision map in the related technology. The technical scheme is as follows:
in one aspect, a method for generating a high-precision map is provided, the method comprising:
acquiring an electronic navigation map of a target area, and determining a driving path of automatic driving equipment according to the electronic navigation map;
acquiring an environment image of the current position of the automatic driving equipment on the driving path, and determining a driving area around the current position according to the environment image;
determining running control information of the current position based on the running area, and controlling the automatic driving equipment to acquire high-precision map data according to the running control information;
and generating a high-precision map of the target area according to the high-precision map data.
In an exemplary embodiment, the determining the driving area around the current position according to the environment image includes:
determining a sub-image corresponding to a main lane and a sub-image corresponding to an auxiliary lane from the environment image;
and determining the main lane area according to the sub-image corresponding to the main lane, and determining the auxiliary lane area according to the sub-image corresponding to the auxiliary lane.
In an exemplary embodiment, the determining of the sub-image corresponding to the main lane and the sub-image corresponding to the auxiliary lane from the environment image includes: inputting the environment image into a segmentation model to obtain the sub-image corresponding to the main lane and the sub-image corresponding to the auxiliary lane output by the segmentation model, wherein the segmentation model is trained on the basis of a reference high-precision map, a reference image, and reference laser radar data.
In an exemplary embodiment, before the inputting the environment image into a segmentation model, the method further comprises:
acquiring a reference high-precision map, a reference image and reference laser radar data of a training area;
determining a reference driving area in the reference image according to the corresponding relation between the three-dimensional point data in the reference high-precision map and the reference laser radar data and the two-dimensional point data in the reference image;
marking the reference driving area in the reference image to obtain a marked reference image;
and training according to the labeled reference image to obtain the segmentation model.
In an exemplary embodiment, the determining a travel path of an autonomous device from the electronic navigation map includes:
acquiring an initial position of the automatic driving equipment;
determining a road in the target area according to the electronic navigation map;
and determining the driving sequence of each road based on the initial position, and arranging the roads in the target area according to the driving sequence to obtain the driving path.
In an exemplary embodiment, the determining of the travel control information of the current location based on the travel area includes:
acquiring future movement tracks of obstacles around the automatic driving equipment and traffic restriction information around the current position;
acquiring a target running track of the current position according to the future motion track, the traffic restriction information and the running area;
and determining the driving control information of the current position according to the target driving track.
In an exemplary embodiment, the obtaining the target driving track of the current position according to the future movement track, the traffic restriction information, and the driving area includes:
sampling the automatic driving equipment to obtain a plurality of speed values and a plurality of orientation values of the automatic driving equipment;
determining a plurality of reference driving tracks from the driving area according to the plurality of speed values and the plurality of orientation values;
and determining a cost value of each reference driving track according to the future motion track and the traffic limitation information, and taking the reference driving track with the minimum cost value in the multiple reference driving tracks as the target driving track.
In one aspect, an apparatus for generating a high-precision map is provided, the apparatus comprising:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring an electronic navigation map of a target area and determining a driving path of the automatic driving equipment according to the electronic navigation map;
the determining module is used for acquiring an environment image of the current position of the automatic driving equipment on the driving path and determining a driving area around the current position according to the environment image;
the control module is used for determining the driving control information of the current position based on the driving area and controlling the automatic driving equipment to acquire high-precision map data according to the driving control information;
and the generating module is used for generating a high-precision map of the target area according to the high-precision map data.
In an exemplary embodiment, the driving area includes a main lane area and an auxiliary lane area, and the determining module is used for determining a sub-image corresponding to the main lane and a sub-image corresponding to the auxiliary lane from the environment image; and determining the main lane area according to the sub-image corresponding to the main lane, and determining the auxiliary lane area according to the sub-image corresponding to the auxiliary lane.
In an exemplary embodiment, the determining module is configured to input the environment image into a segmentation model, so as to obtain a sub-image corresponding to the primary lane and a sub-image corresponding to the secondary lane, which are output by the segmentation model, where the segmentation model is trained based on a reference high-precision map, a reference image, and reference lidar data.
In an exemplary embodiment, the apparatus further comprises: the training module is used for acquiring a reference high-precision map, a reference image and reference laser radar data of a training area; determining a reference driving area in the reference image according to the corresponding relation between the three-dimensional point data in the reference high-precision map and the reference laser radar data and the two-dimensional point data in the reference image; marking the reference driving area in the reference image to obtain a marked reference image; and training according to the labeled reference image to obtain the segmentation model.
In an exemplary embodiment, the obtaining module is configured to obtain an initial position of the autonomous device; determining a road in the target area according to the electronic navigation map; and determining the driving sequence of each road based on the initial position, and arranging the roads in the target area according to the driving sequence to obtain the driving path.
In an exemplary embodiment, the control module is configured to obtain a future movement trajectory of an obstacle around the autonomous driving apparatus and traffic restriction information around the current location; acquiring a target running track of the current position according to the future motion track, the traffic restriction information and the running area; and determining the driving control information of the current position according to the target driving track.
In an exemplary embodiment, the control module is configured to sample the autonomous driving apparatus to obtain a plurality of speed values and a plurality of heading values of the autonomous driving apparatus; determining a plurality of reference driving tracks from the driving area according to the plurality of speed values and the plurality of orientation values; and determining a cost value of each reference driving track according to the future motion track and the traffic limitation information, and taking the reference driving track with the minimum cost value in the multiple reference driving tracks as the target driving track.
In one aspect, an autopilot device is provided that includes a memory and a processor; the memory has stored therein at least one instruction, which is loaded and executed by the processor, to implement a method of generating a high-precision map provided by any of the exemplary embodiments of the present application.
In another aspect, a readable storage medium is provided, in which at least one instruction is stored; the instruction is loaded and executed by a processor to implement the method for generating a high-precision map provided in any exemplary embodiment of the present application.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
under the condition that no high-precision map exists, the driving path of the automatic driving device is determined through the electronic navigation map, the driving area available for driving is determined through the environment image around the automatic driving device, and the driving control information for controlling the automatic driving device is determined according to the driving area, so that high-precision map data can be acquired by the automatic driving device. Compared with collecting high-precision map data by a manually driven collection vehicle, the approach provided by this embodiment not only saves labor cost but also improves the efficiency of collecting high-precision map data, so that the high-precision map is generated more efficiently. Moreover, the automatic driving device used to collect high-precision map data can subsequently be reused as a logistics delivery vehicle, which greatly saves hardware cost.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flow chart of a method for generating a high-precision map provided by an embodiment of the present application;
FIG. 3 is a system architecture diagram for generating high-precision maps, provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an apparatus for generating a high-precision map according to an embodiment of the present application;
fig. 5 is a terminal structure diagram provided in the embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The embodiment of the application provides a method for generating a high-precision map, which can be applied to an implementation environment as shown in fig. 1. In fig. 1, at least one terminal 11 and a server 12 are included, and the terminal 11 may be in communication connection with the server 12 to download the electronic navigation map from the server 12.
The terminal 11 is installed in the automatic driving device, and the terminal 11 may be a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a Pocket PC, a tablet computer, a smart car system, a smart television, or another electronic product.
The server 12 may be a server, a server cluster composed of a plurality of servers, or a cloud computing service center.
It should be understood by those skilled in the art that the above-mentioned terminal 11 and server 12 are only examples; other existing or future terminals or servers, where applicable to the present application, are also included within the scope of protection of the present application.
Based on the implementation environment shown in fig. 1, referring to fig. 2, an embodiment of the present application provides a method for generating a high-precision map, where the method is applicable to the terminal shown in fig. 1. As shown in fig. 2, the method includes:
step 201, obtaining an electronic navigation map of a target area, and determining a driving path of the automatic driving device according to the electronic navigation map.
The terminal can send an acquisition request for the electronic navigation map to the server when the driving path of the automatic driving device needs to be determined, so that the electronic navigation map of the target area is downloaded from the server. Alternatively, the terminal can download the electronic navigation map of the target area from the server in advance; when the driving path needs to be determined according to the electronic navigation map, the map can then be read directly from the local memory of the terminal without downloading it from the server. The present embodiment does not limit the timing of downloading the electronic navigation map of the target area in advance. For example, the map may be downloaded when the distance between the automatic driving device and the target area is detected to be less than a reference distance, when it is determined that the automatic driving device is to be dispatched to the target area, or directly during production of the automatic driving device.
In an exemplary embodiment, determining a travel path of an autonomous device from an electronic navigation map includes: and acquiring the initial position of the automatic driving equipment, and determining the road in the target area according to the electronic navigation map. And determining the driving sequence of each road based on the initial position, and arranging the roads in the target area according to the driving sequence to obtain a driving path.
The initial position of the automatic driving device is its position when it is ready to start driving within the target area. If the automatic driving device is controlled to drive into the target area from outside it, the position where it first reaches the target area may be taken as the initial position. Alternatively, if the automatic driving device is placed into the target area manually, the drop-off point may be used as the initial position. In addition, the initial position may be obtained from information measured by one or more of the GPS (Global Positioning System), IMU (Inertial Measurement Unit), laser radar, and odometer of the automatic driving device.
From the electronic navigation map, roads within the target area can be determined, which are often all roads within the target area, so that the subsequently determined travel path can cover all roads within the target area. Of course, the road may also be a partial road in the target area, which is not limited in this embodiment. In addition, the electronic navigation map also includes routing information for each road, such as the required driving direction of the road, the road position, the road length, and the like. Therefore, the driving sequence of each road can be determined according to the initial position and the routing information of each road, and the roads are arranged according to the sequence from small to large of the driving sequence, so that the driving path is obtained.
In implementation, the connection relationships between roads are often complex, and if the driving path is to cover all roads in the target area determined according to the electronic navigation map, one or more of the roads often need to appear in the driving path repeatedly. The fewer the repeated roads, and the fewer times any single road is repeated, the shorter the driving path that still covers all roads in the target area, which saves resources and improves efficiency. Therefore, an algorithm for finding the shortest route in a road network may be invoked, and the shortest driving path determined by combining the invoked algorithm with the initial position and the routing information of each road. Algorithms for solving the shortest path in a road network include, but are not limited to, the A* algorithm, Dijkstra's algorithm, greedy algorithms, and the like; the invoked algorithm is not limited in this embodiment.
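As a hedged illustration (not the patented method), one way to order the roads into a path that covers every road with little repetition is to Eulerize the road graph and walk an Eulerian circuit. The minimal sketch below uses networkx and assumes an undirected road network without one-way or turn restrictions:

```python
# Minimal sketch: order the roads of a target area into one driving route
# that covers every road with as little repetition as practical. Assumes
# an undirected road network; real electronic-navigation data would also
# carry one-way and turn restrictions.
import networkx as nx

def plan_coverage_route(roads, start):
    """roads: iterable of (intersection_a, intersection_b, length_m)."""
    graph = nx.Graph()
    for a, b, length in roads:
        graph.add_edge(a, b, weight=length)

    # Duplicate edges as needed so every node has even degree, which
    # guarantees an Eulerian circuit (each road traversed at least once).
    eulerian = nx.eulerize(graph)

    # Walk the circuit from the initial position; the sequence of edges
    # is the driving order of the roads.
    return list(nx.eulerian_circuit(eulerian, source=start))

route = plan_coverage_route(
    [("A", "B", 120.0), ("B", "C", 80.0), ("A", "C", 150.0), ("C", "D", 60.0)],
    start="A",
)
print(route)  # e.g. [('A','B'), ('B','C'), ('C','D'), ('D','C'), ('C','A')]; order may vary
```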
It should be noted that, because the electronic navigation map treats a road as a whole, the road is not divided into a plurality of different lanes, and no lane corresponds to attribute information such as width and curvature. Therefore, for a driving path determined from the roads and routing information in the electronic navigation map, the terminal cannot understand the road condition at any position of the automatic driving device on that path, so the driving path cannot be used directly for driving; its main function is to indicate the overall driving trend of the automatic driving device in the target area. The driving control information that can actually be used by the automatic driving device to travel on the driving path is acquired according to the following steps 202 and 203.
Step 202, obtaining an environment image of the current position of the automatic driving device on the driving path, and determining a driving area around the current position according to the environment image.
The environment image can be obtained by the camera of the terminal, and it usually contains a plurality of environment elements such as the driving area, the sky, surrounding obstacles, traffic signs, and traffic lights. Determining the driving area around the current position according to the environment image means determining, from the environment image, the partial image corresponding to the driving-area element. The terminal can understand this partial image, and can therefore understand which areas around the current position of the automatic driving device on the driving path can be used for driving, so that driving control information for controlling the automatic driving device can be obtained based on the driving area. In implementation, the driving area may include a main lane area and an auxiliary lane area on the driving path.
It should be noted that the position of the automatic driving device on the driving path differs at different times during driving. Therefore, the environment image of the current position needs to be acquired in real time while the automatic driving device drives from its initial position, and the acquisition process is repeated until the automatic driving device has passed through all roads included in the driving path. The real-time acquisition may be performed at intervals of a reference time; the reference time may be 1 second, and it is not limited in this embodiment.
In an exemplary embodiment, since the travel area includes a main lane area and a sub lane area, determining the travel area around the current location from the environment image includes: and determining the sub-image corresponding to the main lane and the sub-image corresponding to the auxiliary lane from the environment image. And determining a main lane area according to the sub-image corresponding to the main lane, and determining an auxiliary lane area according to the sub-image corresponding to the auxiliary lane.
In implementation, various environment elements included in the environment image may be segmented to obtain a sub-image corresponding to each environment element. And then, identifying the sub-image corresponding to the main lane and the sub-image corresponding to the auxiliary lane from the plurality of sub-images. Therefore, the terminal can understand the sub-image corresponding to the main lane, so as to understand which areas around the current position of the automatic driving device on the driving path are the main lane areas. Correspondingly, the terminal obtains which areas around the current position are auxiliary lane areas according to the sub-image corresponding to the auxiliary lane.
In consideration of the timeliness problem, the process of determining the sub-image corresponding to the main lane and the sub-image corresponding to the auxiliary lane can be executed through a trained model. That is, in an exemplary embodiment, determining the sub-image corresponding to the primary lane and the sub-image corresponding to the secondary lane from the environment image includes: and inputting the environment image into a segmentation model to obtain a sub-image corresponding to the main lane and a sub-image corresponding to the auxiliary lane, which are output by the segmentation model, wherein the segmentation model is trained on the basis of a reference high-precision map, a reference image and reference laser radar data.
In implementation, the segmentation model outputs the sub-image corresponding to the main lane and the sub-image corresponding to the auxiliary lane by segmenting the environment elements and recognizing the sub-images. The segmentation model is thus a model capable of extracting the sub-image corresponding to the main lane and the sub-image corresponding to the auxiliary lane from any input image. Therefore, training the segmentation model requires a plurality of images as training data, each annotated with the partial image corresponding to the main lane and the partial image corresponding to the auxiliary lane, so that by learning from these images the segmentation model acquires that extraction capability. The segmentation model may adopt Mask R-CNN (mask region-based convolutional neural network) or a convolutional network such as FCN (fully convolutional network); the segmentation model is not limited in this embodiment.
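For illustration, a minimal inference sketch is given below. The class indices, the model file name, and the assumption that the network returns per-pixel logits are all hypothetical, since the patent does not fix a specific network:

```python
# Illustrative sketch (not the patented implementation): run a trained
# semantic-segmentation network over the environment image and split it
# into a main-lane sub-image and an auxiliary-lane sub-image.
import torch
import torchvision.transforms.functional as TF
from PIL import Image

MAIN_LANE, AUX_LANE = 1, 2  # hypothetical class indices

def extract_lane_subimages(model, image_path, device="cpu"):
    image = Image.open(image_path).convert("RGB")
    tensor = TF.to_tensor(image).unsqueeze(0).to(device)

    with torch.no_grad():
        logits = model(tensor)            # assumed shape (1, num_classes, H, W)
        labels = logits.argmax(dim=1)[0]  # per-pixel class map, (H, W)

    pixels = TF.to_tensor(image)          # (3, H, W), values in [0, 1]
    # Zero out everything except the requested lane pixels.
    main_sub = pixels * (labels == MAIN_LANE).unsqueeze(0)
    aux_sub = pixels * (labels == AUX_LANE).unsqueeze(0)
    return main_sub, aux_sub

# model = torch.load("segmentation_model.pt")  # a trained FCN-style net (assumed)
# main_sub, aux_sub = extract_lane_subimages(model, "environment.jpg")
```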
As for how to obtain the multiple images used as training data, a plurality of original images can be collected in a training area, the partial images corresponding to the main lane and to the auxiliary lane labeled manually in each original image, and the labeled images used as training data. The training area may be any area other than the target area that contains main lane and auxiliary lane areas, selected according to experience or actual needs; the training area is not limited in this embodiment.
However, considering that manual labeling is inefficient, images can instead be labeled automatically by combining an existing high-precision map with lidar data, and the segmentation model then trained on the automatically labeled images. That is, in an exemplary embodiment, before inputting the environment image into the segmentation model, the method further comprises: acquiring a reference high-precision map, a reference image, and reference laser radar data of the training area; determining a reference driving area in the reference image according to the correspondence between the three-dimensional point data in the reference high-precision map and the reference laser radar data and the two-dimensional point data in the reference image; labeling the reference driving area in the reference image to obtain a labeled reference image; and training on the labeled reference image to obtain the segmentation model.
The high-precision map data of the training area can be acquired by manually driving a collection vehicle, so that the reference high-precision map of the training area is generated from the collected data; in the reference high-precision map, the driving area is represented by 3D (three-dimensional) point data. Since the reference high-precision map describes the driving area with more accurate and richer information, the driving area in it is already divided into a main lane area and an auxiliary lane area. Therefore, the reference high-precision map makes it possible to determine which 3D point data correspond to the main lane area and which correspond to the auxiliary lane area.
In addition, the reference lidar data also represents the travel area by 3D point data, the reference image is the original image collected in the training area in the above description, and the reference image represents the environmental elements such as the travel area by 2D (two-dimensional) point data. It can be seen that the 2D point data and the 3D point data are actually different representations for the same object, and thus there is a correspondence between the 2D point data in the reference image used to represent the same object and the 3D point data in the reference high-precision map and in the reference lidar data.
In an implementation, the 3D point data in the reference high-precision map and the reference lidar data may be coordinate-converted, thereby converting the coordinate system of the 3D point data from the original coordinate system to the camera coordinate system of the terminal camera that captured the reference image. And then, projecting the 3D point data into the 2D point data in the reference image based on the projective transformation principle according to the corresponding relation between the 3D point data and the 2D point data. Then, the 2D point data projected from the 3D point data corresponding to the main lane area is determined as the 2D point data indicating the main lane area, and the 2D point data projected from the 3D point data corresponding to the sub lane area is determined as the 2D point data indicating the sub lane area, so that the partial image corresponding to the main lane area and the partial image corresponding to the sub lane area in the reference image can be automatically determined, thereby determining the reference travel area.
Then, the partial image corresponding to the main lane area and the partial image corresponding to the auxiliary lane area can be automatically marked in the reference image, so that the marked reference image is obtained. The marked reference image is used as training data, so that a segmentation model with the capability of extracting a sub-image corresponding to the main lane and a sub-image corresponding to the auxiliary lane from any input image can be obtained through training, and the obtained segmentation model is applied to determine the main lane area and the auxiliary lane area around the current position according to the environment image.
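As a hedged illustration of this projection-based auto-labeling (a minimal sketch under assumed pinhole intrinsics, with 3D points already transformed into the camera frame; not the patented implementation), the following projects lane points into a 2D label mask:

```python
# Hedged sketch of the auto-labeling projection: 3D points from the
# reference high-precision map / lidar, already in the camera coordinate
# frame, are projected to 2D pixels with a pinhole model and stamped into
# a label mask. The intrinsics values are placeholders.
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],   # fx, 0, cx  (placeholder intrinsics)
              [0.0, 1000.0, 360.0],   # 0, fy, cy
              [0.0, 0.0, 1.0]])

def project_points(points_cam):
    """points_cam: (N, 3) points in the camera frame, Z pointing forward."""
    points_cam = points_cam[points_cam[:, 2] > 0]   # keep points in front of camera
    uv = (K @ points_cam.T).T                       # homogeneous pixel coordinates
    return (uv[:, :2] / uv[:, 2:3]).astype(int)     # (N, 2) integer pixels

def label_mask(shape, main_pts_cam, aux_pts_cam):
    mask = np.zeros(shape, dtype=np.uint8)          # 0 = background
    for pts, cls in ((main_pts_cam, 1), (aux_pts_cam, 2)):  # 1=main, 2=auxiliary lane
        uv = project_points(pts)
        in_img = (uv[:, 0] >= 0) & (uv[:, 0] < shape[1]) & \
                 (uv[:, 1] >= 0) & (uv[:, 1] < shape[0])
        uv = uv[in_img]
        mask[uv[:, 1], uv[:, 0]] = cls
    return mask
```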
And step 203, determining the driving control information of the current position based on the driving area, and controlling the automatic driving equipment to collect high-precision map data according to the driving control information.
The driving control information includes, but is not limited to, one or more of the accelerator amount, brake amount, and steering wheel angle of the automatic driving device. It should be noted that the first position of the automatic driving device in the target area is the initial position of step 201; at the initial position the device has not yet traveled in the target area, that is, it is stationary. Therefore, an environment image can be acquired at the initial position, the driving area around the initial position obtained from that image, and the driving control information of the initial position determined from that driving area. Controlling the automatic driving device according to the driving control information of the initial position converts it from a stationary state to a driving state, and the controlled driving begins.
In the driving state, the position of the automatic driving device differs at different times. The driving control information of the current position can therefore be acquired at each moment, so that the device moves from the current position to the next position, where the driving control information is acquired again according to the driving area there, and the automatic driving device is continuously controlled to move. In this way the automatic driving device drives within the driving area following the overall driving trend indicated by the driving path, and collects high-precision map data while driving.
In implementation, the high-precision map data collected by the automatic driving device includes, but is not limited to, the number of lanes in a driving area, lane widths at different positions, lane lengths, lane slopes, lane curvatures, the shapes of lane lines between different lanes (dashed lines, solid lines, or double yellow lines), symbols or characters marked on the lanes, lane speed limits, and pedestrian crossings near the lanes; these data can be used to generate the high-precision map of the target area.
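For illustration only, the per-lane attributes listed above might be organized in a structure like the following; the field names and units are assumptions, not the patent's data schema:

```python
# Illustrative data structure (an assumption, not the patent's schema) for
# the per-lane attributes accumulated while collecting high-precision map data.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LaneRecord:
    lane_id: str
    widths_m: List[float] = field(default_factory=list)  # width at sampled positions
    length_m: float = 0.0
    slope_pct: Optional[float] = None
    curvature_1pm: Optional[float] = None        # curvature in 1/metres
    left_line: str = "solid"                     # solid / dashed / double_yellow
    right_line: str = "solid"
    speed_limit_kmh: Optional[int] = None
    road_markings: List[str] = field(default_factory=list)  # arrows, text, crossings

lane = LaneRecord(lane_id="road12_lane0", length_m=420.0, speed_limit_kmh=60)
lane.widths_m.append(3.5)
```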
It can be seen that, through steps 201-203 of the present embodiment, the automatic driving device can drive in the target area without a high-precision map. In other words, compared with unmanned driving in the related art, which must depend on a high-precision map, the unmanned driving in this embodiment does not depend on one, making it more flexible and more widely applicable.
In an exemplary embodiment, determining the travel control information of the current location based on the travel area includes steps 2031 to 2033 of:
step 2031, a future movement trajectory of an obstacle around the automatic driving apparatus and traffic restriction information around the current position are acquired.
Surrounding obstacles include, but are not limited to, other vehicles in the driving area and static obstacles in the driving area, such as isolation barriers and isolation green belts. To acquire the future movement tracks of surrounding obstacles, an image containing them may be captured and input into a reference model, which outputs the position information, orientation information, and speed information of each surrounding obstacle at the current time. A reinforcement learning model then predicts the position, orientation, and speed information at future times from the information at the current time, and the future movement track of each surrounding obstacle is determined from the predictions. In implementation, the reference model may be a trained Faster R-CNN (faster region-based convolutional neural network), YOLO (You Only Look Once), HRNet (Deep High-Resolution Representation Learning for Human Pose Estimation), or LSTM (Long Short-Term Memory) network; the reference model used is not limited in this embodiment.
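The embodiment relies on trained detection and prediction models for this step; as a much-simplified, hypothetical stand-in, the sketch below extrapolates an obstacle's future track under a constant-velocity assumption:

```python
# Simplified stand-in for the learned prediction models: extrapolate each
# obstacle's future track from its current position, heading, and speed
# under a constant-velocity assumption.
import math

def predict_track(x, y, heading_rad, speed_mps, horizon_s=3.0, dt=0.5):
    """Returns a list of (t, x, y) waypoints over the prediction horizon."""
    steps = int(horizon_s / dt)
    return [(i * dt,
             x + speed_mps * i * dt * math.cos(heading_rad),
             y + speed_mps * i * dt * math.sin(heading_rad))
            for i in range(1, steps + 1)]

print(predict_track(0.0, 0.0, math.pi / 2, 5.0))  # obstacle heading "north" at 5 m/s
```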
To acquire the traffic restriction information around the current position, images of traffic-related facilities around the current position, such as traffic signs and traffic lights, can be captured. Features of the traffic-related facilities are then extracted from the images through a neural network and classified by a classifier, so that the terminal can understand the traffic-related facilities around the current position and derive the traffic restriction information from them. For example, the traffic restriction information may be a speed limit of 60 km/h (kilometers per hour), no passing on a red light, and the like.
It should be noted that, in the implementation, since the environment image acquired in step 202 often includes surrounding obstacles, traffic signs and traffic lights, the environment image can be directly used as the image including the surrounding obstacles and the image including the traffic-related facilities. Of course, one or more images may be captured by the terminal camera specifically for the surrounding obstacles or the traffic-related facilities, and the embodiment does not limit the manner of acquiring the images including the surrounding obstacles and the images including the traffic-related facilities.
Step 2032, obtaining a target driving track of the current position according to the future movement track, the traffic restriction information and the driving area.
The future movement tracks of surrounding obstacles and the traffic restriction information can be considered together to obtain the target driving track of the current position from the driving area. In an exemplary embodiment, step 2032 can be implemented as follows: sampling the automatic driving device to obtain a plurality of speed values and a plurality of orientation values; determining a plurality of reference driving tracks from the driving area according to the speed values and orientation values; and determining a cost value for each reference driving track according to the future movement tracks of surrounding obstacles and the traffic restriction information, and using the reference driving track with the smallest cost value as the target driving track.
From the information measured by the GPS, IMU, lidar, and odometer of the automatic driving device involved in step 201, the vehicle body orientation and vehicle body speed can be acquired in addition to the position of the automatic driving device. Executing this acquisition process multiple times samples a plurality of speed values and a plurality of orientation values. The sampled speed values and orientation values are then combined; the number of combinations obtained is the product of the number of speed values and the number of orientation values. For example, if 5 speed values and 5 orientation values are sampled, 25 combinations of speed value and orientation value can be obtained.
Then, a reference driving track is determined from the driving area for each combination of speed value and orientation value, so as to obtain a plurality of reference driving tracks, each of which is a track available for the automatic driving device to travel at a future time. These reference driving tracks may be tracks that follow the vehicle ahead, overtake the vehicle ahead, cruise autonomously at low speed, brake, and so on. Finally, for any reference driving track, a first cost value can be determined according to the future movement tracks of surrounding obstacles, and a second cost value determined according to the traffic restriction information; the first and second cost values are combined, for example by multiplication or weighted summation, and the result is used as the cost value of that reference driving track. After the cost value of each reference driving track is determined, the reference driving track with the smallest cost value among them is used as the target driving track for the automatic driving device to execute at the future time. In other words, the automatic driving device travels according to the determined target driving track at the future time.
The first cost value can be determined according to the relative distance between the future movement track of a surrounding obstacle and the reference driving track. For example, if the relative distance between the reference driving track and some point in the future movement track of a surrounding obstacle is smaller than a reference distance, the automatic driving device would collide with that obstacle if it traveled along the reference driving track, which is highly dangerous, so the first cost value of the reference driving track can be set to positive infinity (+∞). The reference distance may be the sum of half the radius of the automatic driving device and half the radius of the surrounding obstacle, or that sum multiplied by a factor greater than 1 (e.g., 1.1, 1.6, 2, etc.). Here the radius of the automatic driving device refers to the radius of the smallest circumscribed circle of its maximum horizontal cross-section; correspondingly, the radius of a surrounding obstacle refers to the radius of the smallest circumscribed circle of the obstacle's maximum horizontal cross-section. If no point of the reference driving track is within the reference distance of the future movement track of a surrounding obstacle, the first cost value may be inversely proportional to the relative distance: the larger the relative distance, the farther the automatic driving device is from the surrounding obstacle and the safer it is, and thus the smaller the first cost value.
The second cost value may be determined according to whether the reference driving track violates the traffic restriction information. If it does, the automatic driving device would speed, run a red light, or the like if it traveled along that track, which is highly dangerous, so the second cost value of the reference driving track can be set to positive infinity (+∞). If the reference driving track does not violate the traffic restriction information, the second cost value may be set to 0.
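Putting steps 2031-2033 together, the following minimal sketch (thresholds, horizons, and values are illustrative assumptions; the patent does not specify them) samples speed/orientation combinations, rolls each out into a reference track, scores the tracks with the first and second cost values, and selects the cheapest one:

```python
# Hedged sketch of the cost-based selection: sample (speed, heading)
# combinations, roll each out into a short reference track, score it
# against predicted obstacle tracks and a traffic restriction (here a
# speed limit), and keep the cheapest track.
import math

REF_DIST = 2.0  # collision threshold in metres (placeholder)

def rollout(speed, heading, horizon_s=3.0, dt=0.5):
    steps = int(horizon_s / dt)
    return [(speed * dt * i * math.cos(heading), speed * dt * i * math.sin(heading))
            for i in range(1, steps + 1)]

def cost(track, obstacle_tracks, speed, speed_limit):
    # First cost: distance to predicted obstacle positions.
    min_d = min(math.dist(p, o) for p in track
                for obs in obstacle_tracks for o in obs)
    first = math.inf if min_d < REF_DIST else 1.0 / min_d
    # Second cost: violating the traffic restriction information.
    second = math.inf if speed > speed_limit else 0.0
    # Combined here by simple summation (one weighted-sum option).
    return first + second

def select_target_track(speeds, headings, obstacle_tracks, speed_limit):
    best = None
    for v in speeds:
        for h in headings:
            track = rollout(v, h)
            c = cost(track, obstacle_tracks, v, speed_limit)
            if best is None or c < best[0]:
                best = (c, track)
    return best[1]

track = select_target_track(
    speeds=[2.0, 4.0, 6.0], headings=[-0.2, 0.0, 0.2],
    obstacle_tracks=[[(10.0, 0.5), (9.0, 0.5)]], speed_limit=5.0)
```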
Step 2033, determining the driving control information of the current position according to the target driving track.
After the target driving track is determined, information that enables the automatic driving device to travel according to it can be determined as the driving control information of the current position. For example, if the target driving track is an autonomous low-speed cruising track, the automatic driving device needs to keep its current lane and keep its speed below a reference speed. Therefore, the current vehicle speed can be obtained, the amount by which the speed needs to be adjusted determined from the difference between the current speed and the reference speed, and accelerator amount information capable of making that adjustment obtained. In addition, the current steering wheel angle is obtained, and the angle adjustment needed is determined from the difference between the current angle and 0 degrees, yielding steering wheel angle information. Finally, the accelerator amount information and steering wheel angle information can be used as the driving control information of the current position; controlling the accelerator and steering wheel of the automatic driving device according to this information makes it travel according to the target driving track.
In implementation, a PID (Proportional-Integral-Derivative) control algorithm may be invoked to calculate the above driving control information. The driving control information may be information that the automatic driving device can finish executing within a single time unit (for example, 1 second), or information that it needs to execute continuously over a plurality of consecutive time units.
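A minimal PID sketch of this control step is shown below; the gains and the two-controller split (speed and steering) are illustrative assumptions, not values from the patent:

```python
# Minimal PID sketch for turning the target driving track into control
# commands: one controller regulates vehicle speed (accelerator amount),
# a second regulates steering-wheel angle. Gains are illustrative only.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

speed_pid = PID(kp=0.8, ki=0.1, kd=0.05)
steer_pid = PID(kp=1.2, ki=0.0, kd=0.2)

# e.g. cruise at 3 m/s while holding the lane (0 rad steering error)
throttle = speed_pid.step(error=3.0 - 2.4, dt=0.1)   # target minus current speed
steering = steer_pid.step(error=0.0 - 0.05, dt=0.1)  # target minus current angle
```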
And step 204, generating a high-precision map of the target area according to the high-precision map data.
The high-precision map data collected in step 203 can be calibrated and integrated, the integrated data matched against each lane in the target area, and the attribute information corresponding to each lane obtained, so that the lanes are described with accurate and rich information and the high-precision map is generated. This generation method can be applied both to generating a high-precision map for the first time and to updating an existing high-precision map.
Further, the steps 201-204 can be implemented according to the architecture diagram shown in fig. 3. Next, the operation of each block in fig. 3 will be explained:
Positioning module: determines the position information, orientation information, and speed information of the automatic driving device from information measured by one or more of its GPS, IMU, lidar, and odometer.
Driving path acquisition module: determines, according to the electronic navigation map and the position information from the positioning module, a driving path indicating the overall driving trend of the automatic driving device in the target area; the driving path can cover all roads in the target area.
Perception module: determines the position information, orientation information, and speed information of obstacles around the automatic driving device, determines the state of surrounding traffic-related facilities, and derives the traffic restriction information from that state.
Prediction module: predicts the future movement tracks of surrounding obstacles according to the position, orientation, and speed information determined by the perception module.
Driving area determination module: determines, from the environment image around the automatic driving device, the driving area at the device's current position on the driving path; this area can be used for the automatic driving device to drive.
Target driving track acquisition module: determines the target driving track of the automatic driving device by integrating the orientation and speed information from the positioning module, the driving path from the driving path acquisition module, the traffic restriction information from the perception module, the future movement tracks of surrounding obstacles from the prediction module, and the driving area from the driving area determination module.
Driving control information acquisition module: on the basis of the target driving track determined by the target driving track acquisition module, uses information that enables the automatic driving device to travel according to the target driving track as the driving control information of the automatic driving device.
Automatic driving device control module: controls one or more of the accelerator, brake, and steering wheel of the automatic driving device according to the driving control information determined by the driving control information acquisition module, so that the automatic driving device drives.
In summary, in the present embodiment, without a high-precision map, the driving path of the automatic driving device is determined through the electronic navigation map, the driving area available for driving is determined through the environment image around the automatic driving device, and the driving control information for controlling the automatic driving device is then determined according to the driving area, so that high-precision map data can be collected by the automatic driving device. Compared with collecting high-precision map data by a manually driven collection vehicle, the approach provided by this embodiment not only saves labor cost but also improves the efficiency of collecting high-precision map data, so that the high-precision map is generated more efficiently. Moreover, the automatic driving device used to collect high-precision map data can subsequently be reused as a logistics delivery vehicle, which greatly saves hardware cost.
Based on the same conception, the embodiment of the application provides a device for generating a high-precision map, and referring to fig. 4, the device comprises:
the acquisition module 401 is configured to acquire an electronic navigation map of a target area, and determine a driving path of the autonomous device according to the electronic navigation map;
a determining module 402, configured to obtain an environment image of a current position of the automatic driving device on a driving path, and determine a driving area around the current position according to the environment image;
the control module 403 is configured to determine driving control information of a current location based on a driving area, and control the automatic driving device to acquire high-precision map data according to the driving control information;
and a generating module 404, configured to generate a high-precision map of the target area according to the high-precision map data.
In an exemplary embodiment, the driving area includes a main lane area and a secondary lane area, and the determining module 402 is configured to determine a sub-image corresponding to the main lane and a sub-image corresponding to the secondary lane from the environment image. And determining a main lane area according to the sub-image corresponding to the main lane, and determining an auxiliary lane area according to the sub-image corresponding to the auxiliary lane.
In an exemplary embodiment, the determining module 402 is configured to input the environment image into a segmentation model, and obtain a sub-image corresponding to the primary lane and a sub-image corresponding to the secondary lane, which are output by the segmentation model, where the segmentation model is a model trained based on a reference high-precision map, a reference image, and reference lidar data.
In an exemplary embodiment, the apparatus further comprises: the training module is used for acquiring a reference high-precision map, a reference image and reference laser radar data of a training area; determining a reference driving area in a reference image according to a corresponding relation between three-dimensional point data in a reference high-precision map and reference laser radar data and two-dimensional point data in the reference image; marking a reference driving area in the reference image to obtain a marked reference image; and training according to the marked reference image to obtain a segmentation model.
In an exemplary embodiment, an obtaining module 401 is configured to obtain an initial position of an autonomous device; determining roads in the target area according to the electronic navigation map; and determining the driving sequence of each road based on the initial position, and arranging the roads in the target area according to the driving sequence to obtain a driving path.
In an exemplary embodiment, the control module 403 is configured to obtain future movement trajectories of obstacles around the automatic driving device and traffic restriction information around a current location; acquiring a target driving track of the current position according to the future movement track, the traffic restriction information and the driving area; and determining the driving control information of the current position according to the target driving track.
In an exemplary embodiment, the control module 403 is configured to sample a plurality of speed values and a plurality of orientation values for the automatic driving device, determine a plurality of reference driving tracks within the driving area according to the speed values and orientation values, determine a cost value of each reference driving track according to the future movement tracks and the traffic restriction information, and take the reference driving track with the minimum cost value among the plurality of reference driving tracks as the target driving track.
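To make the sampling-and-cost step concrete, the sketch below rolls out constant-speed, constant-orientation reference tracks, keeps those that stay inside the driving area, and scores the rest against predicted obstacle positions and traffic restrictions. The interfaces and cost terms are illustrative assumptions, not the patent's formula.

```python
import math

def select_target_track(state, speeds, orientations, driving_area,
                        obstacle_tracks, allowed, horizon=3.0, dt=0.5):
    """Pick the reference driving track with the minimum cost value.

    state: dict with current "x", "y"; driving_area(p) -> bool tests whether
    point p lies in the drivable region; obstacle_tracks: predicted future
    positions per obstacle; allowed(p) -> bool encodes traffic restrictions.
    All interfaces and cost terms here are assumptions."""

    def rollout(v, yaw):
        # Constant speed and orientation over the planning horizon.
        steps = int(horizon / dt)
        return [(state["x"] + v * (k + 1) * dt * math.cos(yaw),
                 state["y"] + v * (k + 1) * dt * math.sin(yaw))
                for k in range(steps)]

    def cost(track):
        c = 0.0
        for p in track:
            for obs in obstacle_tracks:  # penalize proximity to obstacles
                c += sum(1.0 / (1e-3 + (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
                         for q in obs)
            if not allowed(p):           # penalize restricted regions
                c += 1e3
        return c

    candidates = [rollout(v, yaw) for v in speeds for yaw in orientations]
    candidates = [t for t in candidates if all(driving_area(p) for p in t)]
    return min(candidates, key=cost) if candidates else None
```

A denser speed/orientation sampling grid would yield smoother tracks at the price of more rollouts per planning cycle.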
In summary, in this embodiment, without a pre-existing high-precision map, the driving path of the automatic driving device is determined from the electronic navigation map, the drivable area around the device is determined from the environment image, and the driving control information for controlling the automatic driving device is then determined according to the driving area, so that high-precision map data can be collected by the automatic driving device without a human driver. Compared with collecting high-precision map data with a manually driven collection vehicle, the approach provided by this embodiment not only saves labor cost but also improves the efficiency of collecting high-precision map data, so that the high-precision map is generated more efficiently. Moreover, the automatic driving device used to collect the high-precision map data can subsequently be reused as a logistics delivery vehicle, which greatly saves hardware cost.
It should be noted that when the apparatus provided in the foregoing embodiment implements its functions, the division into the above functional modules is merely illustrative; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiment and the method embodiments provided above belong to the same concept; for details of the specific implementation process, refer to the method embodiments, which are not repeated here.
Referring to fig. 5, a schematic structural diagram of a terminal 500 provided in an embodiment of the present application is shown. The terminal 500 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, a notebook computer, or a desktop computer. The terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 500 includes: a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, such as a 4-core processor, a 5-core processor, and so on. The processor 501 may be implemented in at least one hardware form selected from the group consisting of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen 505. In some embodiments, processor 501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the method of generating high precision maps provided by the method embodiments herein.
In some embodiments, the terminal 500 may further optionally include a peripheral interface 503 and at least one peripheral. The processor 501, the memory 502, and the peripheral interface 503 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 503 by a bus, a signal line, or a circuit board. Specifically, the peripherals include at least one of: a radio frequency circuit 504, a display screen 505, a camera assembly 506, an audio circuit 507, a positioning component 508, and a power supply 509.
The peripheral interface 503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 504 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 504 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to, metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 504 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, the display screen 505 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 501 as a control signal for processing. At this time, the display screen 505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 505, disposed on the front panel of the terminal 500; in other embodiments, there may be at least two display screens 505, respectively disposed on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display screen 505 may be a flexible display screen disposed on a curved surface or a folded surface of the terminal 500. The display screen 505 may even be arranged as a non-rectangular irregular figure, that is, an irregularly shaped screen. The display screen 505 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 506 is used to capture images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 507 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 501 for processing, or input them to the radio frequency circuit 504 to realize voice communication. For stereo collection or noise reduction, a plurality of microphones may be disposed at different portions of the terminal 500. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 507 may also include a headphone jack.
The positioning component 508 is used to locate the current geographic position of the terminal 500 for navigation or LBS (Location Based Services). The positioning component 508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 509 is used to supply power to the various components in the terminal 500. The power supply 509 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, terminal 500 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: acceleration sensor 511, gyro sensor 512, pressure sensor 513, fingerprint sensor 514, optical sensor 515, and proximity sensor 516.
The acceleration sensor 511 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal 500. For example, the acceleration sensor 511 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 501 may control the display screen 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511. The acceleration sensor 511 may also be used to collect motion data of a game or a user.
The gyro sensor 512 may detect a body direction and a rotation angle of the terminal 500, and the gyro sensor 512 may cooperate with the acceleration sensor 511 to acquire a 3D motion of the user on the terminal 500. The processor 501 may implement the following functions according to the data collected by the gyro sensor 512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 513 may be disposed on a side frame of the terminal 500 and/or at a lower layer of the display screen 505. When the pressure sensor 513 is disposed on the side frame of the terminal 500, it may detect a holding signal of the user on the terminal 500, and the processor 501 performs left/right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed at the lower layer of the display screen 505, the processor 501 controls an operability control on the UI according to the user's pressure operation on the display screen 505. The operability control comprises at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 514 is used to collect the user's fingerprint, and the processor 501 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 itself identifies the user's identity according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 514 may be disposed on the front, back, or side of the terminal 500. When a physical button or a vendor logo is provided on the terminal 500, the fingerprint sensor 514 may be integrated with the physical button or the vendor logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the display screen 505 based on the ambient light intensity collected by the optical sensor 515. Specifically, when the ambient light intensity is high, the display brightness of the display screen 505 is turned up; when the ambient light intensity is low, the display brightness of the display screen 505 is turned down. In another embodiment, the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 based on the ambient light intensity collected by the optical sensor 515.
The proximity sensor 516, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 500. The proximity sensor 516 is used to collect the distance between the user and the front surface of the terminal 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 gradually decreases, the processor 501 controls the display screen 505 to switch from the bright screen state to the dark screen state; when the proximity sensor 516 detects that the distance gradually increases, the processor 501 controls the display screen 505 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 5 is not intended to be limiting of terminal 500 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Based on the same concept, an embodiment of the present application provides an automatic driving device comprising a memory and a processor; the memory stores at least one instruction, which is loaded and executed by the processor to implement the method of generating a high-precision map provided by any of the exemplary embodiments of the present application.
Based on the same concept, an embodiment of the present application provides a readable storage medium storing at least one instruction, which is loaded and executed by a processor to implement the method of generating a high-precision map provided by any of the exemplary embodiments of the present application.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method of generating a high-precision map, the method comprising:
acquiring an electronic navigation map of a target area, and determining a driving path of automatic driving equipment according to the electronic navigation map;
acquiring an environment image of the current position of the automatic driving equipment on the driving path, and determining a driving area around the current position according to the environment image;
determining driving control information of the current position based on the driving area, and controlling the automatic driving equipment to acquire high-precision map data according to the driving control information;
and generating a high-precision map of the target area according to the high-precision map data.
2. The method of claim 1, wherein the driving area comprises a main lane area and an auxiliary lane area, and the determining a driving area around the current position according to the environment image comprises:
determining a sub-image corresponding to a main lane and a sub-image corresponding to an auxiliary lane from the environment image;
and determining the main lane area according to the sub-image corresponding to the main lane, and determining the auxiliary lane area according to the sub-image corresponding to the auxiliary lane.
3. The method of claim 2, wherein determining the sub-image corresponding to the primary lane and the sub-image corresponding to the secondary lane from the environment image comprises:
and inputting the environment image into a segmentation model to obtain a sub-image corresponding to the main lane and a sub-image corresponding to the auxiliary lane, which are output by the segmentation model, wherein the segmentation model is trained on the basis of a reference high-precision map, a reference image and reference laser radar data.
4. The method of claim 3, wherein prior to inputting the environmental image into a segmentation model, the method further comprises:
acquiring a reference high-precision map, a reference image and reference laser radar data of a training area;
determining a reference driving area in the reference image according to a correspondence between three-dimensional point data, taken from the reference high-precision map and the reference laser radar data, and two-dimensional point data in the reference image;
marking the reference driving area in the reference image to obtain a marked reference image;
and training according to the labeled reference image to obtain the segmentation model.
5. The method of claim 1, wherein the determining a driving path of the automatic driving equipment according to the electronic navigation map comprises:
acquiring an initial position of the automatic driving equipment;
determining a road in the target area according to the electronic navigation map;
and determining the driving sequence of each road based on the initial position, and arranging the roads in the target area according to the driving sequence to obtain the driving path.
6. The method according to any one of claims 1 to 5, wherein the determining driving control information of the current position based on the driving area comprises:
acquiring future movement tracks of obstacles around the automatic driving equipment and traffic restriction information around the current position;
acquiring a target driving track of the current position according to the future movement tracks, the traffic restriction information and the driving area;
and determining the driving control information of the current position according to the target driving track.
7. The method according to claim 6, wherein the acquiring a target driving track of the current position according to the future movement tracks, the traffic restriction information and the driving area comprises:
sampling the automatic driving equipment to obtain a plurality of speed values and a plurality of orientation values of the automatic driving equipment;
determining a plurality of reference driving tracks from the driving area according to the plurality of speed values and the plurality of orientation values;
and determining a cost value of each reference driving track according to the future movement tracks and the traffic restriction information, and taking the reference driving track with the minimum cost value among the plurality of reference driving tracks as the target driving track.
8. An apparatus for generating a high-precision map, the apparatus comprising:
the acquisition module is used for acquiring an electronic navigation map of a target area and determining a driving path of automatic driving equipment according to the electronic navigation map;
the determining module is used for acquiring an environment image of the current position of the automatic driving equipment on the driving path and determining a driving area around the current position according to the environment image;
the control module is used for determining the driving control information of the current position based on the driving area and controlling the automatic driving equipment to acquire high-precision map data according to the driving control information;
and the generating module is used for generating a high-precision map of the target area according to the high-precision map data.
9. An automatic driving device, comprising a memory and a processor; the memory has stored therein at least one instruction, which is loaded and executed by the processor to implement the method of generating a high-precision map of any one of claims 1 to 7.
10. A readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor to implement the method of generating a high-precision map of any one of claims 1 to 7.
CN201911421660.6A 2019-12-31 2019-12-31 Method and device for generating high-precision map, automatic driving equipment and storage medium Withdrawn CN111192341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911421660.6A CN111192341A (en) 2019-12-31 2019-12-31 Method and device for generating high-precision map, automatic driving equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111192341A (en) 2020-05-22

Family

ID=70707981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911421660.6A Withdrawn CN111192341A (en) 2019-12-31 2019-12-31 Method and device for generating high-precision map, automatic driving equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111192341A (en)


Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8527199B1 (en) * 2012-05-17 2013-09-03 Google Inc. Automatic collection of quality control statistics for maps used in autonomous driving
CN105702152A (en) * 2016-04-28 2016-06-22 百度在线网络技术(北京)有限公司 Map generation method and device
CN105783936A (en) * 2016-03-08 2016-07-20 武汉光庭信息技术股份有限公司 Road sign drawing and vehicle positioning method and system for automatic drive
CN106097444A (en) * 2016-05-30 2016-11-09 百度在线网络技术(北京)有限公司 High-precision map generates method and apparatus
CN106441319A (en) * 2016-09-23 2017-02-22 中国科学院合肥物质科学研究院 System and method for generating lane-level navigation map of unmanned vehicle
CN106525057A (en) * 2016-10-26 2017-03-22 陈曦 Generation system for high-precision road map
CN106796434A (en) * 2015-08-28 2017-05-31 松下电器(美国)知识产权公司 Ground drawing generating method, self-position presumption method, robot system and robot
WO2017161475A1 (en) * 2016-03-21 2017-09-28 华为技术有限公司 Method and device for generating electronic map, and method and device for planning route
CN108801276A (en) * 2018-07-23 2018-11-13 奇瑞汽车股份有限公司 Accurately drawing generating method and device
CN108871353A (en) * 2018-07-02 2018-11-23 上海西井信息科技有限公司 Road network map generation method, system, equipment and storage medium
CN108955702A (en) * 2018-05-07 2018-12-07 西安交通大学 Based on the lane of three-dimensional laser and GPS inertial navigation system grade map creation system
CN109064506A (en) * 2018-07-04 2018-12-21 百度在线网络技术(北京)有限公司 Accurately drawing generating method, device and storage medium
CN109059946A (en) * 2018-06-26 2018-12-21 上汽通用汽车有限公司 Vehicle route acquisition methods, storage medium and electronic equipment
CN109374008A (en) * 2018-11-21 2019-02-22 深动科技(北京)有限公司 A kind of image capturing system and method based on three mesh cameras
CN109556615A (en) * 2018-10-10 2019-04-02 吉林大学 The driving map generation method of Multi-sensor Fusion cognition based on automatic Pilot
CN109579869A (en) * 2017-09-29 2019-04-05 维布络有限公司 For correcting the method and system of the pre-generated guidance path of pilotless automobile
CN109631873A (en) * 2018-11-01 2019-04-16 百度在线网络技术(北京)有限公司 Road generation method, device and the readable storage medium storing program for executing of high-precision map
CN109862084A (en) * 2019-01-16 2019-06-07 北京百度网讯科技有限公司 Map data updating method, device, system and storage medium
CN109935077A (en) * 2017-12-15 2019-06-25 百度(美国)有限责任公司 System for constructing vehicle and cloud real-time traffic map for automatic driving vehicle
CN110057373A (en) * 2019-04-22 2019-07-26 上海蔚来汽车有限公司 For generating the method, apparatus and computer storage medium of fine semanteme map
CN110244742A (en) * 2019-07-01 2019-09-17 百度在线网络技术(北京)有限公司 Method, equipment and the storage medium that automatic driving vehicle is cruised
US20190317505A1 (en) * 2018-04-12 2019-10-17 Baidu Usa Llc Determining driving paths for autonomous driving vehicles based on map data
CN110345951A (en) * 2019-07-08 2019-10-18 武汉光庭信息技术股份有限公司 A kind of ADAS accurately map generalization method and device
DE102019002780A1 (en) * 2019-04-16 2019-11-14 Daimler Ag Method for generating a dynamic digital map
CN110597275A (en) * 2018-06-13 2019-12-20 宝马股份公司 Method and system for generating map by using unmanned aerial vehicle


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111928863A (en) * 2020-08-20 2020-11-13 新石器慧义知行智驰(北京)科技有限公司 High-precision map data acquisition method, device and system
CN112230656A (en) * 2020-10-10 2021-01-15 广州汽车集团股份有限公司 Automatic driving method for park vehicle, system, client and storage medium thereof
CN112837414A (en) * 2021-04-22 2021-05-25 速度时空信息科技股份有限公司 Method for constructing three-dimensional high-precision map based on vehicle-mounted point cloud data
CN113978484A (en) * 2021-09-30 2022-01-28 东风汽车集团股份有限公司 Vehicle control method, device, electronic device and storage medium
CN113701743A (en) * 2021-10-29 2021-11-26 腾讯科技(深圳)有限公司 Map data processing method and device, computer equipment and storage medium
CN113701743B (en) * 2021-10-29 2022-02-22 腾讯科技(深圳)有限公司 Map data processing method and device, computer equipment and storage medium
CN114187412A (en) * 2021-11-11 2022-03-15 北京百度网讯科技有限公司 High-precision map generation method and device, electronic equipment and storage medium
CN114323039A (en) * 2021-11-11 2022-04-12 阿波罗智能技术(北京)有限公司 Data acquisition method and device for high-precision map, vehicle, equipment and storage medium
CN114323039B (en) * 2021-11-11 2023-10-27 阿波罗智能技术(北京)有限公司 Data acquisition method and device for high-precision map, vehicle, equipment and storage medium
CN114187412B (en) * 2021-11-11 2024-03-22 北京百度网讯科技有限公司 High-precision map generation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111192341A (en) Method and device for generating high-precision map, automatic driving equipment and storage medium
CN110148294B (en) Road condition state determining method and device
CN111126182B (en) Lane line detection method, lane line detection device, electronic device, and storage medium
CN112307642B (en) Data processing method, device, system, computer equipment and storage medium
CN110967024A (en) Method, device, equipment and storage medium for detecting travelable area
CN110095128B (en) Method, device, equipment and storage medium for acquiring missing road information
CN111125442B (en) Data labeling method and device
CN107909840B (en) Information publishing method, device and computer readable storage medium
CN113205515B (en) Target detection method, device and computer storage medium
CN111104893B (en) Target detection method, target detection device, computer equipment and storage medium
CN112802369B (en) Method and device for acquiring flight route, computer equipment and readable storage medium
CN113701743B (en) Map data processing method and device, computer equipment and storage medium
CN114332821A (en) Decision information acquisition method, device, terminal and storage medium
CN113160427A (en) Virtual scene creating method, device, equipment and storage medium
CN112269939B (en) Automatic driving scene searching method, device, terminal, server and medium
CN113343457A (en) Automatic driving simulation test method, device, equipment and storage medium
CN111444749B (en) Method and device for identifying road surface guide mark and storage medium
CN110399688B (en) Method and device for determining environment working condition of automatic driving and storage medium
CN111275300B (en) Road network data processing method, device, equipment and storage medium
CN114283395A (en) Method, device and equipment for detecting lane line and computer readable storage medium
CN113920222A (en) Method, device and equipment for acquiring map building data and readable storage medium
CN114623836A (en) Vehicle pose determining method and device and vehicle
CN113239901B (en) Scene recognition method, device, equipment and storage medium
CN111597285B (en) Road network splicing method and device, electronic equipment and storage medium
CN112241662B (en) Method and device for detecting drivable area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200522