CN118377290A - Automatic driving method and system, electronic device, storage medium and mobile device - Google Patents


Info

Publication number
CN118377290A
CN118377290A
Authority
CN
China
Prior art keywords
information
lane
road
target
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310100969.5A
Other languages
Chinese (zh)
Inventor
徐成
张放
王肖
徐宁
肖滔
魏宇腾
任乾
杨浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Idriverplus Technologies Co Ltd
Original Assignee
Beijing Idriverplus Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Idriverplus Technologies Co Ltd filed Critical Beijing Idriverplus Technologies Co Ltd
Priority to CN202310100969.5A
Publication of CN118377290A
Legal status: Pending

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application provides an automatic driving method and system, an electronic device, and a storage medium. The method includes: obtaining real-time perception information of a target road based on detection information obtained by detecting the target road, the target road being the road on which a target mobile device travels; determining road 3D information of the road environment surrounding the target mobile device according to the real-time perception information; obtaining lane topology information according to the road 3D information and an SD map; determining, according to the lane topology information, a recommended lane in which the target mobile device is recommended to travel; and generating a mobile device control signal for controlling the target mobile device to travel in the recommended lane. The application effectively solves the technical problem in the related art that the automatic driving capability of a system is severely limited in areas where no high-precision map exists.

Description

Automatic driving method and system, electronic device, storage medium and mobile device
Technical Field
The present application relates to the field of automatic driving technology, and in particular, to an automatic driving method and system, an electronic device, and a storage medium.
Background
Automatic driving is one of the core frontier technologies in the current automotive field, but many problems remain in its real-world deployment; one of the key questions is whether or not to use a high-precision electronic map.
Currently, one implementation of automatic driving follows the Waymo-style high-precision map technical route: a high-precision map collected and produced in advance assists online perception, compensating for shortcomings of the perception system while also helping the decision and planning system make decisions in advance. Most advanced automatic driving systems currently adopt this high-precision map technical route.
However, schemes that rely on a high-precision electronic map are clearly constrained by it. The main problems include: the high-precision map is updated slowly (currently about once per quarter), so the information it provides is not sufficiently real-time; moreover, in areas where no high-precision map exists, the automatic driving capability of the system is severely limited.
Therefore, the related art suffers from the technical problem that the automatic driving capability of the system is severely limited when no high-precision map exists for an area.
Disclosure of Invention
The application provides an automatic driving method and system, an electronic device, a storage medium, and a mobile device, which at least solve the problem in the related art that the automatic driving capability of a system is severely limited when no high-precision map exists for an area.
According to an aspect of an embodiment of the present application, there is provided an automatic driving method, including:
obtaining real-time perception information of a target road based on detection information obtained by detecting the target road, wherein the target road is the road on which a target mobile device travels;
determining road 3D information of the road environment surrounding the target mobile device according to the real-time perception information;
obtaining lane topology information according to the road 3D information and an SD map;
determining, according to the lane topology information, a recommended lane in which the target mobile device is recommended to travel;
generating a mobile device control signal for controlling the target mobile device to travel in the recommended lane.
Optionally, in the foregoing method, determining the road 3D information of the road environment surrounding the target mobile device according to the real-time perception information includes:
determining a ground marking semantic segmentation result in the real-time perception information, and projecting each 3D lane line point image coordinate indicated by the ground marking semantic segmentation result onto a preset ground plane according to camera extrinsic parameters, to obtain an initial 3D lane line point coordinate value corresponding to each 3D lane line point image coordinate, wherein the camera extrinsic parameters are the extrinsic parameters of the camera that acquired the image on which the ground marking semantic segmentation result is based;
modeling the road surface according to the driving state of the target mobile device, to obtain initial road surface model parameters and an initial road surface model built from those parameters;
optimizing, under the equal-width constraint of ground markings, the initial 3D lane line point coordinate values and the initial road surface model parameters according to the residuals of the 3D lane line point coordinates in historical frame images projected into the current frame image, to obtain optimized 3D lane line point coordinate target values and road surface model target parameters;
determining the lamp posts and signboards in the real-time perception information, and triangulating them from two adjacent frames of the real-time perception information to obtain a triangulation result;
tracking the lamp posts and signboards using optical flow to obtain optical flow tracking results;
filtering erroneous optical flow tracking results out of all optical flow tracking results using the triangulation result, to obtain filtered optical flow tracking results;
performing BA optimization on the filtered optical flow tracking results to obtain the marker 3D coordinates of the lamp posts and signboards;
and obtaining the road 3D information based on the 3D lane line point coordinate target values, the road surface model target parameters, and the marker 3D coordinates.
Optionally, in the foregoing method, after determining the road 3D information of the road environment surrounding the target mobile device according to the real-time perception information, the method further includes:
fitting the 3D lane line point coordinate target values into a polynomial curve;
matching the polynomial curve against existing historical lane lines, and adjusting the historical lane lines accordingly.
Optionally, in the foregoing method, obtaining the lane topology information according to the road 3D information and the SD map includes:
constructing topology constraint information from the SD map, wherein the topology constraint information indicates constraint conditions on lanes;
determining the lane topology information based on lane standard information and the real-time perception information; and/or deriving the lane topology information based on the topology constraint information, the real-time perception information, and historical lane information.
Optionally, in the foregoing method, determining, according to the lane topology information, the recommended lane in which the target mobile device is recommended to travel includes:
acquiring recommended lane position information in navigation information, wherein the recommended lane position information indicates the position, within the road, of the lane recommended for travel;
determining, from the lane topology information, the lane corresponding to the recommended lane position information as the recommended lane.
Optionally, in the foregoing method, after determining the road 3D information of the road environment surrounding the target mobile device according to the real-time perception information, the method further includes:
acquiring map information, wherein the map information includes an SD map and/or a high-precision map;
obtaining a reference line model of the surrounding road environment according to the map information and the road 3D information, wherein the reference line model indicates the lanes of a road;
projecting obstacles into the reference line model according to the relative positional relationship between the obstacles and the lane lines in the real-time perception information.
Optionally, the foregoing method further includes:
when a high-precision map is available, outputting the high-precision lane lines, lane topology information, and lane-level navigation information indicated by the high-precision map;
when an SD map is available, outputting the road-level path shape point coordinate information, lane number information, and navigation recommended lane information indicated by the SD map, wherein the path shape point coordinate information indicates the shape and direction of the road.
According to another aspect of an embodiment of the present application, there is also provided an automatic driving system, including:
a perception fusion subsystem, configured to obtain real-time perception information of a target road based on detection information obtained by detecting the target road, wherein the target road is the road on which the target mobile device travels;
an environment cognition subsystem, configured to determine road 3D information of the road environment surrounding the target mobile device according to the real-time perception information;
the environment cognition subsystem is further configured to obtain lane topology information according to the road 3D information and an SD map;
the environment cognition subsystem is further configured to determine, according to the lane topology information, a recommended lane in which the target mobile device is recommended to travel;
and a planning control subsystem, configured to generate a mobile device control signal for controlling the target mobile device to travel in the recommended lane.
According to still another aspect of the embodiments of the present application, there is provided an electronic device including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus; the memory is configured to store a computer program; and the processor is configured to perform the method steps of any of the above embodiments by running the computer program stored in the memory.
According to a further aspect of the embodiments of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method steps of any of the above embodiments when run.
According to a further aspect of the present embodiments, there is provided a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method of any of the above embodiments.
According to yet another aspect of the embodiments of the present application, there is provided a mobile device, including an electronic device as described above.
The embodiments of the application provide an automatic driving method and system, an electronic device, and a storage medium. The method includes: obtaining real-time perception information of a target road based on detection information obtained by detecting the target road, wherein the target road is the road on which a target mobile device travels; determining road 3D information of the road environment surrounding the target mobile device according to the real-time perception information; obtaining lane topology information according to the road 3D information and an SD map; determining, according to the lane topology information, a recommended lane in which the target mobile device is recommended to travel; and generating a mobile device control signal for controlling the target mobile device to travel in the recommended lane. In this way, when no high-precision map exists, the SD map can be supplemented with real-time perception information to obtain lane topology information; based on the lane topology information, more accurate driving recommendations can be provided for the mobile device and corresponding mobile device control signals can be generated. This effectively solves the technical problem in the related art that the automatic driving capability of the system is severely limited in areas where no high-precision map exists.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. It will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow diagram of an alternative automatic driving method according to an embodiment of the present application;
FIG. 2 is a flow diagram of an alternative automatic driving method according to another embodiment of the present application;
FIG. 3 is a flow diagram of an alternative automatic driving method according to another embodiment of the present application;
FIG. 4 is a flow diagram of an alternative automatic driving method according to another embodiment of the present application;
FIG. 5 is a flow diagram of an alternative automatic driving method according to another embodiment of the present application;
FIG. 6 is a flow diagram of an alternative automatic driving method according to another embodiment of the present application;
FIG. 7 is a flow diagram of an alternative automatic driving method according to another embodiment of the present application;
FIG. 8 is a block diagram of an alternative automatic driving system according to an embodiment of the present application;
FIG. 9 is a block diagram of an alternative perception fusion subsystem according to an embodiment of the present application;
FIG. 10 is a block diagram of an alternative environment cognition subsystem according to an embodiment of the present application;
FIG. 11 is a block diagram of an alternative planning control subsystem according to an embodiment of the present application;
FIG. 12 is a block diagram of an alternative map engine subsystem according to an embodiment of the present application;
FIG. 13 is a block diagram of an alternative automatic driving system according to another embodiment of the present application;
FIG. 14 is a block diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one aspect of an embodiment of the present application, an autopilot method is provided. Alternatively, in the present embodiment, the above-described automatic driving method may be applied to a hardware environment constituted by a terminal and a server. The server is connected with the terminal through a network, and can be used for providing services (such as data analysis service, data storage service and the like) for the terminal or a client installed on the terminal, and a database can be arranged on the server or independent of the server and used for providing data storage service for the server.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network. The wireless network may include, but is not limited to, at least one of: Wi-Fi (Wireless Fidelity), Bluetooth. The terminal may be, but is not limited to, a PC, a mobile phone, a tablet computer, or the like.
The automatic driving method of the embodiments of the application may be executed by a server, by a terminal, or jointly by a terminal and a server. The terminal may execute the automatic driving method of the embodiments of the present application through a client installed on it.
Taking the automatic driving method of this embodiment as an example, FIG. 1 is a flow diagram of an automatic driving method according to an embodiment of the present application, which includes the following steps:
Step S101: obtaining real-time perception information of a target road based on detection information obtained by detecting the target road, wherein the target road is the road on which a target mobile device travels.
The automatic driving method in this embodiment may be applied to scenarios in which a mobile device (e.g., a vehicle or a robot) needs to be controlled for automatic driving, for example: automatic driving when a high-precision map is lacking, automatic driving when the high-precision map is wrong, and other automatic driving scenarios. In the embodiments of the application, the method is described by taking automatic driving without a high-precision map as an example; where no contradiction arises, it also applies to other types of scenarios.
When the target mobile device travels on the target road, a detection device on the target mobile device, such as an image acquisition device (e.g., a camera) or a laser device, monitors the target road at a preset period to obtain detection information of the target road. The detection information may be road condition information around the target mobile device, and its range is the detection range of the detection device on the target mobile device.
After the detection information is obtained, the real-time perception information of the target road can be obtained by performing the following operations:
P1: inputting an image or a laser point cloud (i.e., the detection information is an image or a laser point cloud), and using deep learning or traditional rule-based methods to obtain first real-time perception information such as the position and type (for example, pedestrian, motor vehicle, warning triangle) of each target in the image or point cloud;
P2: inputting an image, and using a deep learning method to obtain the type of each pixel in the image (such as lane lines and markings on a lane), yielding second real-time perception information;
P3: extracting the lane-line pixels obtained in P2, fitting them into a curve in the image coordinate system, and finally projecting the curve onto the ground plane according to the camera extrinsic parameters, the lane-line parallelism constraint (i.e., lane lines on the same road are parallel to one another), and the like, to obtain a lane-line fitted curve in the bird's eye view (BEV) perspective as third real-time perception information;
P4: according to the relative relationship in the semantic segmentation between the lane-line pixels and the target grounding-point pixels (pixels where the mobile device contacts the ground, such as the tires), giving the relative positions of obstacles and lane lines in the image coordinate system or the ground-plane coordinate system, yielding fourth real-time perception information;
P5: matching and tracking obstacles detected by cameras with different viewing angles or by different types of sensors, and, after fusion filtering, outputting information such as each obstacle's position, type, speed, and heading as fifth real-time perception information.
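The curve fitting and BEV projection described in step P3 can be sketched as follows. This is an illustrative toy in Python, not the patent's implementation: the homography `H` is a placeholder (identity), where a real system would derive it from the camera extrinsics.

```python
import numpy as np

# Illustrative sketch of step P3: fit lane-line pixels to a polynomial in
# image coordinates, then project sample points through a homography.
# H below is a placeholder, not a real camera-derived homography.

def fit_lane_curve(pixels, order=2):
    """Fit u = f(v) through lane-line pixels given as an (N, 2) array of (u, v)."""
    u, v = pixels[:, 0], pixels[:, 1]
    return np.polyfit(v, u, order)

def project_to_bev(coeffs, v_samples, H):
    """Sample the fitted curve and map the image points through homography H."""
    u = np.polyval(coeffs, v_samples)
    pts = np.stack([u, v_samples, np.ones_like(u)])  # homogeneous pixel coords
    ground = H @ pts
    return (ground[:2] / ground[2]).T                # (x, y) after projection

# Synthetic lane pixels lying exactly on u = 0.001*v^2 + 0.2*v + 100
v = np.linspace(200.0, 400.0, 50)
pixels = np.stack([0.001 * v**2 + 0.2 * v + 100.0, v], axis=1)
coeffs = fit_lane_curve(pixels)
H = np.eye(3)  # identity placeholder: output coords equal pixel coords here
bev = project_to_bev(coeffs, np.array([250.0, 350.0]), H)
```

With the identity placeholder, the fit recovers the synthetic coefficients and the projected points are simply the sampled image points; a real homography would map them onto the ground plane.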
Step S102: determining road 3D information of the road environment surrounding the target mobile device according to the real-time perception information.
After the real-time perception information is determined, it can be projected into 3D to obtain the road 3D information of the road environment surrounding the target mobile device.
As an alternative embodiment, determining the road 3D information of the road environment surrounding the target mobile device according to the real-time perception information in step S102 includes the following steps S201 to S208, as shown in FIG. 2:
Step S201: determining a ground marking semantic segmentation result in the real-time perception information, and projecting each 3D lane line point image coordinate indicated by the ground marking semantic segmentation result onto a preset ground plane according to the camera extrinsic parameters, to obtain an initial 3D lane line point coordinate value corresponding to each 3D lane line point image coordinate, wherein the camera extrinsic parameters are the extrinsic parameters of the camera that acquired the image on which the ground marking semantic segmentation result is based.
After the real-time perception information is determined, its semantic segmentation result can be obtained, from which the ground marking semantic segmentation result corresponding to the ground markings can be extracted.
Semantic segmentation is an important direction in computer vision. Unlike object detection and recognition, semantic segmentation classifies at the pixel level: it divides a picture or video (a video is, in effect, a sequence of pictures once extracted into frames) into multiple regions according to category.
For the ground markings, each 3D lane line point image coordinate indicated in the ground marking semantic segmentation result is first projected onto a preset ground plane according to the camera extrinsic parameters to obtain an initial 3D lane line point coordinate value; that is, the 3D lane line point image coordinates in the ground marking semantic segmentation result are projected from the image coordinate system into the mobile device coordinate system.
The preset ground plane may be a plane determined in the mobile device coordinate system that preliminarily indicates the road on which the mobile device is located. Since only initial coordinate values of the 3D lane line points are obtained at this stage, a high degree of consistency between the preset ground plane and the actual road is not required. The mobile device coordinate system may take the midpoint between the two rear wheels of the target mobile device as the origin, the heading direction as the x-axis, and the left side as the y-axis. The preset ground plane may then be a plane parallel to the xy-plane, offset from it by the wheel radius (i.e., at a negative z value equal to the wheel radius).
The camera extrinsic parameters may describe the camera pose, for example its mounting height and pitch angle. Accordingly, the coordinates of the same lane line point differ between images acquired under different camera extrinsics. The camera extrinsics can be known when the camera is installed on the target mobile device.
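The projection of a pixel onto the preset ground plane via the camera extrinsics amounts to a ray-plane intersection, which can be sketched as follows. The intrinsics `K`, rotation `R`, translation `t`, and wheel-radius offset are hypothetical values chosen for illustration, not taken from the patent.

```python
import numpy as np

# Hypothetical sketch: back-project a pixel onto the preset ground plane
# z = -wheel_radius in the mobile-device frame, given camera intrinsics K
# and extrinsics (R, t). All numeric values are made up for illustration.

def pixel_to_ground(uv, K, R, t, ground_z=-0.3):
    """Intersect the viewing ray through pixel uv with the plane z = ground_z."""
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    ray_dev = R @ ray_cam            # ray direction in the device frame
    origin = t                       # camera center in the device frame
    s = (ground_z - origin[2]) / ray_dev[2]
    return origin + s * ray_dev      # 3D point on the preset ground plane

K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
# Camera looks along the device +x axis, mounted 1.2 m up and 2 m forward.
R = np.array([[0.0, 0.0, 1.0], [-1.0, 0.0, 0.0], [0.0, -1.0, 0.0]])
t = np.array([2.0, 0.0, 1.2])
point = pixel_to_ground((640.0, 500.0), K, R, t)  # a pixel below the horizon
```

Pixels near or above the horizon give rays that never reach the ground plane (`s` becomes negative or unbounded), which is one reason the patent treats these projections only as initial values to be refined.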
Step S202: modeling the road surface according to the driving state of the target mobile device, to obtain initial road surface model parameters and an initial road surface model built from those parameters.
Since the tires of the target mobile device are in contact with the road surface, the driving state of the mobile device, which is monitored in real time, reflects the road. The driving state may include, but is not limited to, motion quantities of the mobile device such as speed, direction, and pitch fluctuation.
Therefore, after the driving state of the mobile device is obtained, it can be used to characterize the state of the road: road surface modeling is performed to obtain the initial road surface model parameters, and the initial road surface model is built from them.
Step S203: optimizing, under the equal-width constraint of ground markings, the initial 3D lane line point coordinate values and the initial road surface model parameters according to the residuals of the 3D lane line point coordinates in historical frame images projected into the current frame image, to obtain optimized 3D lane line point coordinate target values and road surface model target parameters.
The image coordinates of the same 3D lane line point differ between frames, and so do the positions at which they project onto the initial road surface model; in practice, however, a correct projection should map the image coordinates of the same 3D lane line point to the same position, and the lane lines obtained by projecting the 3D lane line point image coordinates onto the initial road surface model should satisfy the equal-width constraint (i.e., different lane lines are parallel to one another and the distance between two adjacent lane lines is fixed). Therefore, the initial 3D lane line point coordinate values and the initial road surface model parameters can be optimized according to the residuals of the 3D lane line point coordinates in historical frame images projected into the current frame image, yielding the optimized 3D lane line point coordinate target values and road surface model target parameters. That is, the residuals between the same 3D lane line points in different frames are determined, and the 3D lane line point coordinate target values and road surface model target parameters are optimized accordingly. A historical frame image may be one or more frames within a preset period (e.g., 0.1 s or 0.5 s) before the current frame image is acquired.
As a result, the 3D lane line point coordinate target values lie on the target road surface model built from the road surface model target parameters, and the lane lines formed by these target values satisfy the equal-width constraint.
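The effect of the parallel (equal-width) constraint can be illustrated with a toy joint fit: two lane lines are fitted with a shared slope, so the solution is parallel by construction rather than each line being fitted independently. This is a deliberately simplified stand-in for the patent's residual-based joint optimization of lane points and road surface parameters.

```python
import numpy as np

# Toy illustration of the equal-width (parallel lane line) constraint:
# fit two lane lines jointly with a shared slope a, so the fitted lines
# are forced to be parallel. Not the patent's actual optimizer.

def fit_parallel_lines(xl, yl, xr, yr):
    """Jointly solve y = a*x + b_l (left line) and y = a*x + b_r (right line)."""
    A = np.zeros((len(xl) + len(xr), 3))
    A[:len(xl), 0], A[:len(xl), 1] = xl, 1.0   # left points use intercept b_l
    A[len(xl):, 0], A[len(xl):, 2] = xr, 1.0   # right points use intercept b_r
    y = np.concatenate([yl, yr])
    (a, b_l, b_r), *_ = np.linalg.lstsq(A, y, rcond=None)
    width = abs(b_r - b_l) / np.hypot(1.0, a)  # perpendicular lane width
    return a, b_l, b_r, width

# Two synthetic lane lines 3.5 m apart with the same slope 0.1
x = np.linspace(0.0, 20.0, 30)
a, b_l, b_r, width = fit_parallel_lines(x, 0.1 * x, x, 0.1 * x + 3.5)
```

In the patent's formulation the constraint additionally fixes the perpendicular width between adjacent lines; here the width simply falls out of the joint fit.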
Step S204: determining the lamp posts and signboards in the real-time perception information, and triangulating them from two adjacent frames of the real-time perception information to obtain a triangulation result;
Step S205: tracking the lamp posts and signboards using optical flow to obtain optical flow tracking results;
Step S206: filtering erroneous optical flow tracking results out of all optical flow tracking results using the triangulation result, to obtain filtered optical flow tracking results;
Step S207: performing BA optimization on the filtered optical flow tracking results to obtain the marker 3D coordinates of the lamp posts and signboards.
For the lamp posts and signboards, they are first triangulated from two adjacent frames of the real-time perception information; they are then tracked with optical flow, incorrect optical flow tracking results are filtered out using the triangulation result, and finally BA optimization yields the 3D coordinates of the lamp posts and signboards.
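The two-view triangulation of step S204 can be sketched with the standard linear (DLT) method. The camera projection matrices and landmark position below are hypothetical; the patent does not specify numbers or a particular triangulation algorithm.

```python
import numpy as np

# Illustrative two-view linear (DLT) triangulation of a static landmark,
# such as a lamp post, from its pixel observations in two adjacent frames.
# The projection matrices are hypothetical examples.

def triangulate(P1, P2, uv1, uv2):
    """DLT triangulation of one 3D point from two views."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # frame k
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # frame k+1, 1 m baseline
X_true = np.array([2.0, 1.0, 10.0])                                # landmark position

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noisy observations the DLT result is only an initial estimate, which is consistent with the patent's use of BA optimization as the final refinement step.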
BA (Bundle Adjustment) optimization refers to the extraction of optimal 3D models and camera parameters (internal and external parameters) from the visual reconstruction. The several rays (bundles of LIGHT RAYS) reflected from each feature point are finally converged into the camera optical center after making the optimal adjustments (adjustments) to the camera pose and the feature point spatial position. Unlike re-projection, BA optimization is the optimization of the pose of a multi-segment camera and the spatial coordinates of the landmark points under the pose. And step S208, obtaining the road 3D information based on the 3D lane line point coordinate target value, the road curved surface model target parameter and the identifier 3D coordinates.
Finally, the road 3D information can be obtained based on the 3D lane line point coordinate target value, the road curved surface model target parameter and the identifier 3D coordinate obtained in the previous step.
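The triangulation and track-filtering logic of steps S204 to S206 can be sketched with a toy two-view model. This is illustrative Python only: the simplified pinhole geometry (two cameras separated along the x-axis, unit focal length, depth from disparity) and all function names are assumptions for clarity, not the application's implementation.

```python
import numpy as np

def triangulate(obs1, obs2, baseline, f=1.0):
    """Triangulate a landmark seen by two cameras separated along x.

    obs1/obs2 are (u, v) normalized image coordinates; both cameras look
    along +z. In this simplified model depth follows from disparity:
    Z = f * baseline / (u1 - u2).
    """
    u1, v1 = obs1
    u2, v2 = obs2
    disparity = u1 - u2
    Z = f * baseline / disparity
    return np.array([u1 * Z, v1 * Z, Z])

def reprojection_error(point, obs, cam_x):
    """Pixel-plane distance between the observation and the projection
    of `point` into the camera located at x = cam_x."""
    X, Y, Z = point
    u, v = (X - cam_x) / Z, Y / Z
    return np.hypot(u - obs[0], v - obs[1])

# A track (an optical-flow correspondence) is kept only when the
# triangulated point re-projects close to both observations, which is
# the filtering idea of step S206.
def track_is_consistent(obs1, obs2, baseline, tol=0.01):
    p = triangulate(obs1, obs2, baseline)
    return (reprojection_error(p, obs1, 0.0) < tol and
            reprojection_error(p, obs2, baseline) < tol)
```

In a real pipeline the surviving tracks would then be refined jointly with the camera poses by BA, minimizing the sum of such reprojection errors.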
And step S103, obtaining lane topology information according to the road 3D information and the SD map.
After the road 3D information is obtained, since it reflects the target road as detected in real time, while the SD map can only represent road-level information, the SD map can be supplemented with this road 3D information to obtain lane topology information.
The lane topology information may be topology information indicating the relative positional relationship between lanes, including but not limited to the number of lanes, lane direction, and the like.
And step S104, determining a recommended lane for the recommended target mobile device to travel according to the lane topology information.
After the lane topology information is determined, a recommended lane in which the target mobile device is recommended to travel can be determined in combination with the target place to which the target mobile device is to travel, or with navigation information given based on the SD map.
For example, when the target mobile device needs to reach a target place, or navigation information given based on the SD map indicates that the target mobile device should keep right, and the target road is determined from the lane topology information to have 3 lanes, the recommended lane is the first lane on the right side.
Step S105, generating a mobile device control signal for controlling the target mobile device to travel in the recommended lane.
The lane in which the target mobile device currently travels can be determined by detection and recognition. After the recommended lane is determined, the relative positional relationship between the current lane and the recommended lane can be established, and a mobile device control signal for controlling the target mobile device to travel from its current lane into the recommended lane can then be generated.
The actuation mechanism of the mobile device can then operate according to this mobile device control signal, so that the device travels in the recommended lane.
By the method in this embodiment, the SD map can be supplemented with real-time perception information in the absence of a high-precision map, yielding lane topology information; based on this lane topology information, more accurate travel advice can be provided for the mobile device and the corresponding mobile device control signals generated. This effectively solves the technical problem in the related art that the automatic driving capability of the system is severely limited in areas without a high-precision map.
As an alternative embodiment, as in the foregoing method, after the step S102 determines the road 3D information of the surrounding road environment of the target mobile device according to the real-time perception information, the method further includes the following steps S301 to S302, as shown in fig. 3:
Step S301, fitting the coordinate target value of the 3D lane line point to a polynomial curve.
In general, there will be a plurality of 3D lane line point coordinate target values; these may therefore be fitted jointly to obtain a corresponding polynomial curve.
Optionally, the fitting may use least squares, gradient descent, or the conjugate gradient method; polynomial curves may also be fitted in other ways, which are not listed here.
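The least-squares variant of step S301 can be sketched as follows (illustrative Python; the sample points and polynomial order are assumptions, not values from the application):

```python
import numpy as np

# Hypothetical 3D lane line point coordinate target values projected to
# the ground plane: x along the direction of travel, y lateral offset.
xs = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
ys = 0.01 * xs**2 + 0.1 * xs + 1.5      # a gently curving lane line

# Least-squares fit of a 2nd-order polynomial (step S301).
coeffs = np.polyfit(xs, ys, deg=2)
fitted = np.polyval(coeffs, xs)          # evaluate the fitted curve
```

Since the sample points lie exactly on a quadratic, the recovered coefficients match the generating polynomial; with noisy perception data the same call returns the least-squares optimum.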
In step S302, the history lane line is adjusted by matching the polynomial curve with the existing history lane line.
After the polynomial curve is obtained, it can be matched against the existing history lane line on the basis of the 3D lane line point coordinate target values of the curve and the coordinate values of the history lane line, and associated with that history lane line after matching. When the polynomial curve is longer than the history lane line, the history lane line can be extended; when the two have the same extent, the history lane line is kept; when the polynomial curve is shorter, the history lane line can be partially deleted. In addition, when a polynomial curve has no corresponding history lane line and other information confirms that a lane has been added to the road, a new lane line can be created from the polynomial curve; conversely, if a history lane line has no corresponding polynomial curve, the history lane line can be deleted.
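The lifecycle management of step S302 — extend, keep, trim, create, delete — can be sketched for a single already-matched lane line as follows (illustrative Python; the point representation and function name are assumptions, and the association/matching step is assumed to have been done upstream):

```python
def update_history(history, fitted):
    """Adjust one history lane line against its matched fitted curve.

    history, fitted: lists of (x, y) points sorted by x along the road.
    Returns the adjusted history lane line.
    """
    if not fitted:
        return []                      # no matching curve: delete history
    if not history:
        return list(fitted)            # no history: create a new lane line
    last_hist_x = history[-1][0]
    last_fit_x = fitted[-1][0]
    if last_fit_x > last_hist_x:       # fit is longer: extend history
        extra = [p for p in fitted if p[0] > last_hist_x]
        return history + extra
    if last_fit_x < last_hist_x:       # fit is shorter: trim history
        return [p for p in history if p[0] <= last_fit_x]
    return history                     # same extent: keep unchanged
```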
As an alternative embodiment, the step S103 obtains the lane topology information according to the road 3D information and the SD map, and includes the following steps S401 to S403, as shown in fig. 4:
Step S401, constructing and obtaining topology constraint information according to the SD map, wherein the topology constraint information is used for indicating constraint conditions of lanes.
After the SD map is acquired, corresponding topology constraint information may be derived from the information in the SD map, including but not limited to: the number of lanes on the current road increases, the number of lanes on the current road decreases, and so on.
Further, the topology constraint information may be used as a priori information.
Step S402, determining lane topology information based on lane standard information and real-time perception information;
The lane standard information may be information indicating the standard for each lane, for example: a new lane is generated between two lane lines whose spacing and parallelism meet the standards.
When two lane lines are identified in the real-time perception information and meet the requirements of the lane standard information, the corresponding lane is derived from those two lane lines and taken as part of the lane topology information.
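The lane-standard check of step S402 can be sketched as follows (illustrative Python; the width bounds and angle threshold are assumed illustrative values, not standards stated in the application):

```python
def forms_lane(line_a, line_b, width=(2.5, 4.5), max_angle_deg=5.0):
    """Decide whether two fitted lane lines define a valid lane.

    Each line is (lateral_offset_m, heading_deg) of the fitted curve.
    A lane is formed when the lines are nearly parallel and their
    spacing falls inside the assumed lane-width bounds.
    """
    off_a, hdg_a = line_a
    off_b, hdg_b = line_b
    gap = abs(off_a - off_b)
    parallel = abs(hdg_a - hdg_b) <= max_angle_deg
    return parallel and width[0] <= gap <= width[1]
```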
Step S403, obtaining the lane topology information based on the topology constraint information, the real-time perception information and the historical lane information.
For example: the historical lane information describes multiple parallel lanes, the real-time perception information indicates that a lane line at a larger angle to the historical lane lines has been detected, and the topology constraint information indicates that the number of lanes on the current road increases; a corresponding added lane can then be generated from that single lane line. Next, the adjacent-lane topology between lanes is established according to the parallel relationship between lanes and the shared lane line information, and lanes are broken longitudinally according to changes such as solid/dashed lane line transitions and changes in lane count, so as to construct the predecessor-successor topology between lanes. Finally, the virtual lane topology corresponding to imperceptible lane lines is completed according to the positional relationship between the detected passable space and the reconstructed lane lines, thereby obtaining relatively complete lane topology information around the target mobile device.
As an alternative embodiment, the step S104 of determining the recommended lane to be driven by the target mobile device according to the lane topology information, as described above, includes the following steps S501 to S502, as shown in fig. 5:
Step S501, recommended lane position information in the navigation information is acquired, where the recommended lane position information is used to indicate a position of a lane in the road where traveling is recommended.
The target mobile device may currently be navigated by navigation software, which may be based on the SD map.
When the navigation software is used for navigation, since the SD map can only provide road-level map information, the navigation information can only navigate at road level; that is, the recommended lane position information can at most recommend a travel position within the road, not the specific lane to drive in.
Step S502, determining a lane corresponding to the recommended lane position information from the lane topology information as a recommended lane.
Since the lane topology information has been acquired in the foregoing embodiment and the lane has been identified on the target road, the recommended lane can be determined in combination with the recommended lane position information and the lane topology information.
For example: if the recommended lane in the recommended lane position information is the rightmost lane, the currently perceived lane adjacent to the right road boundary is taken as the recommended lane for navigation.
By the method in this embodiment, when the navigation information can only suggest recommended lane position information, it can be extended through the lane topology information, so that navigation is refined from road level to lane level and navigation accuracy is effectively improved.
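Steps S501 and S502 can be sketched as a mapping from a road-level hint onto the perceived lane topology (illustrative Python; the hint labels and lane identifiers are assumptions):

```python
def pick_recommended_lane(lanes, hint):
    """Map a road-level position hint from SD-map navigation onto a
    concrete lane from the real-time lane topology.

    lanes: perceived lanes ordered left to right; hint: 'leftmost' or
    'rightmost'. Returns None when no lane has been perceived.
    """
    if not lanes:
        return None
    return lanes[0] if hint == 'leftmost' else lanes[-1]
```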
As an optional implementation manner, in any one of the foregoing methods, after the step S102 determines the road 3D information of the surrounding road environment of the target mobile device according to the real-time perception information, the method further includes the following steps S601 to S603, as shown in fig. 6:
Step S601, map information is acquired, wherein the map information comprises an SD map and/or a high-precision map;
The map information may be the map information that the EHR module receives as the output of an EHP module (a map module provided by a high-precision map supplier), and may be an SD map and/or a high-precision map.
Step S602, obtaining a reference line model of the surrounding road environment according to the map information and the road 3D information, wherein the reference line model is used for indicating lanes of the road.
After the map information and the road 3D information are obtained, the lane lines in the road 3D information determine the lanes, so that a reference line model of the road environment around the target mobile device can be built within the map information to identify each lane. Optionally, the reference line model includes the lane lines and the lane centrelines, with the lane centrelines strung together according to the navigation requirements (i.e., spanning a plurality of longitudinally separated lanes).
Step S603, projecting the obstacle into the reference line model according to the relative position relation between the obstacle and the lane line in the real-time sensing information.
The real-time perception information may also include identification information of obstacles, from which the relative positional relationship between an obstacle and the lane lines can be recognized; since the lanes are determined by the lane lines, the obstacle can be projected into the reference line model based on this relationship. For example, when an obstacle is located between lane line A and lane line B, it can be projected into lane C between lane line A and lane line B. Mobile devices in lane C can then be alerted to the obstacle, so that collisions are avoided.
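The projection of step S603 can be sketched as assigning an obstacle to a lane from its lateral position relative to the fitted lane lines (illustrative Python; the offset representation is an assumption):

```python
def project_obstacle(lane_lines, obstacle_y):
    """Assign an obstacle to a lane from its lateral offset.

    lane_lines: lateral offsets of the lane lines sorted left to right;
    obstacle_y: lateral offset of the obstacle in the same frame.
    Returns the index of the lane containing the obstacle, or None when
    the obstacle lies outside all perceived lanes.
    """
    for i in range(len(lane_lines) - 1):
        if lane_lines[i] <= obstacle_y < lane_lines[i + 1]:
            return i
    return None
```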
As an alternative embodiment, in any one of the foregoing methods, the method further includes the following steps S701 to S702, as shown in fig. 7:
In step S701, when the high-precision map is acquired, the high-precision lane lines, the lane topology information, and the lane-level navigation information indicated by the high-precision map are output.
In step S702, in the case of acquiring the SD map, the route shape point coordinate information indicating the shape and direction of the road, the number of lanes information, and the navigation recommended lane information are output.
The EHR module in the system implementing the method of the present embodiment may be used to implement the relevant steps of step S701 and step S702 described above:
EHR module: the method comprises the steps of receiving a map and navigation information output by an EHP module, reorganizing the map and navigation information into a map and navigation information required by the inside of a system, enabling the EHR module to be compatible with two input modes of a high-precision map and an SD map (standard map), outputting high-precision lane lines, lane topology information and lane-level navigation information by the EHR module when the high-precision map is input, and outputting road-level path shape point coordinate information, lane quantity information and navigation recommended lane information by the EHR module when the SD map is input.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM (Read-Only Memory)/RAM (Random Access Memory), magnetic disk, optical disk) and including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
According to another aspect of the embodiment of the present application, there is also provided an automatic driving system for implementing the above-described automatic driving method. Fig. 8 is a block diagram of an alternative autopilot system in accordance with an embodiment of the present application, as shown in fig. 8, the system may include:
The perception fusion subsystem 1 is used for obtaining real-time perception information of a target road based on detection information obtained by detecting the target road, wherein the target road is a road on which the target mobile equipment is driven;
the environment cognition subsystem 2 is used for determining road 3D information of the surrounding road environment of the target mobile equipment according to the real-time perception information;
Specifically, determining the road 3D information of the surrounding road environment according to the real-time perception information may be implemented by the vector environment reconstruction module in the environment cognition subsystem 2.
The environment cognition subsystem 2 is also used for obtaining lane topology information according to the road 3D information and the SD map;
Specifically, obtaining lane topology information according to the road 3D information and the SD map may be implemented by the lane topology relationship construction module 24 in the environment awareness subsystem 2.
The environment cognition subsystem 2 is also used for determining a recommended lane to be driven by the target mobile equipment according to the lane topology information;
Specifically, determining the recommended lane to be driven by the target mobile device according to the lane topology information may be implemented by a navigation recommended lane analysis module in the environmental awareness subsystem 2.
A planning control subsystem 3 for generating a mobile device control signal for controlling the target mobile device to travel in the recommended lane.
It should be noted that, the perception fusion subsystem 1 in this embodiment may be used to perform the above-mentioned step S101, the environmental awareness subsystem 2 in this embodiment may be used to perform the above-mentioned steps S102 to S104, and the planning control subsystem 3 in this embodiment may be used to perform the above-mentioned step S105.
The apparatus in this embodiment may include, in addition to the above-described modules, modules that perform any of the methods in the embodiments of any of the foregoing automatic driving methods.
It should be noted that the above modules are the same as examples and application scenarios implemented by the corresponding steps, but are not limited to what is disclosed in the above embodiments. It should be noted that the above modules may be implemented as part of an apparatus in a hardware environment implementing the method shown in fig. 1, and may be implemented by software, or may be implemented by hardware, where the hardware environment includes a network environment.
The main functions of the perception fusion subsystem 1 shown in fig. 8 include: target detection (mobile equipment, pedestrians, non-motor vehicles and the like), semantic segmentation (obstacles, ground identification marks, lamp posts, signboards, passable spaces and the like), lane line fitting, perception of obstacle and lane line relations, target fusion and the like;
The environmental awareness subsystem 2 includes the main functions: visual/laser inertial odometer, vector environment reconstruction, smooth optimization of reconstruction results, construction of lane topological relation, analysis of navigation recommended lanes, construction of obstacle environment models and the like;
the planning control subsystem 3 mainly comprises the following functions: target track prediction, decision planning and mobile equipment control;
The main functions of the map engine subsystem 4 include: high-precision vector map matching positioning and EHR (providing map and navigation information) and off-line map building optimization.
In an alternative embodiment, in the autopilot system shown in fig. 8, the perception fusion subsystem 1 may comprise the following sub-modules, as shown in fig. 9:
the target detection module 11 takes as input an image or a laser point cloud (i.e., the detection information is the image or the laser point cloud), and obtains first real-time perception information such as the position and type (for example, pedestrian, vehicle, warning triangle) of each target in the image or point cloud by deep learning or traditional rule-based methods;
The semantic segmentation module 12 inputs the image, obtains the type of each pixel point (for example, lane lines, indication information on a lane) in the image by using a deep learning method, and obtains second real-time perception information;
The lane line fitting module 13 extracts the lane-line pixels obtained by the semantic segmentation module 12, fits them into a curve in the image coordinate system, and finally projects that curve onto the ground plane according to the camera extrinsic parameters, the lane line parallel constraint (i.e., lane lines in the same road are parallel to each other), and the like, to obtain a lane line fitting curve from a bird's eye view angle, which serves as third real-time perception information;
A perceived obstacle and lane line relationship module 14, which gives the relative positions of obstacles and lane lines in the image coordinate system or the ground plane coordinate system according to the relative relationship between the lane line pixels and the target ground-point pixels (pixels where the target meets the ground, for example tires) in the semantic segmentation, and obtains fourth real-time perception information;
the target fusion module 15 performs matching and tracking on the obstacles detected by cameras with different viewing angles or by different types of sensors, and after fusion filtering outputs information such as the target position, type, speed and heading of each obstacle, which serves as fifth real-time perception information.
The real-time perception information may include, but is not limited to, the aforementioned image, laser point cloud, first real-time perception information, second real-time perception information, third real-time perception information, fourth real-time perception information, and fifth real-time perception information.
In an alternative embodiment, in the autopilot system shown in fig. 8, the environmental awareness subsystem 2 may include the following sub-modules, as shown in fig. 10:
Vision/laser inertial odometer 21: estimates the DR (dead-reckoning) coordinates of the vehicle (i.e., the target mobile device) in real time using a visual/laser SLAM method. Specifically, the visual odometer extracts features from the camera image and tracks them by optical flow, then performs coordinate conversion in combination with IMU pre-integration to obtain a feature point queue, and estimates the vehicle state by sliding-window nonlinear optimization. The laser odometer first obtains a rough pose from the IMU, registers the new point cloud with the history point cloud, and finally performs back-end optimization with an iterated ESKF to obtain the vehicle state. Finally, the vehicle states obtained by the visual odometer and the laser odometer are fed into a Kalman filter by a loosely-coupled method to obtain the fused vehicle DR coordinates;
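The loosely-coupled fusion at the end of this pipeline can be sketched as a one-dimensional Kalman update (illustrative Python; a real system fuses full pose states, so the scalar form here is an assumption made for clarity):

```python
def fuse(x_vis, var_vis, x_laser, var_laser):
    """Fuse two odometry estimates of the same scalar state.

    x_vis/x_laser: state estimates from the visual and laser odometers;
    var_vis/var_laser: their variances. Returns the fused state and its
    (smaller) fused variance.
    """
    k = var_vis / (var_vis + var_laser)     # Kalman gain
    x = x_vis + k * (x_laser - x_vis)       # fused state estimate
    var = (1.0 - k) * var_vis               # fused variance
    return x, var
```

With equal variances the result is the midpoint; when one source is much more certain, the fused state stays close to it.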
Vector environment reconstruction module 22: implements the aforementioned steps S201 to S208. Step S201, determining the ground marking semantic segmentation result in the real-time perception information, and projecting each 3D lane line point image coordinate indicated by that result onto a preset ground plane according to the camera extrinsic parameters, to obtain the initial 3D lane line point coordinate corresponding to each image coordinate (the extrinsic parameters being those of the camera that acquired the segmented image); step S202, modeling the road surface according to the running state of the target mobile device, obtaining initial road curved surface model parameters and the initial road curved surface model they establish; step S203, optimizing the initial 3D lane line point coordinates and the initial road curved surface model parameters under the equal width constraint of the ground markings, according to the residual of the 3D lane line point coordinates in the history frame image projected into the current frame image, to obtain the optimized 3D lane line point coordinate target values and road curved surface model target parameters; step S204, determining the lamp post signboard in the real-time perception information and triangulating it from two adjacent frames of information to obtain a triangulation result; step S205, tracking the lamp post signboard by optical flow to obtain optical flow tracking results; step S206, filtering out erroneous optical flow tracking results using the triangulation result, to obtain filtered optical flow tracking results; step S207, performing BA optimization on the filtered optical flow tracking results to obtain the marker 3D coordinates of the lamp post signboard; step S208, obtaining the road 3D information based on the 3D lane line point coordinate target values, the road curved surface model target parameters and the marker 3D coordinates;
Reconstruction result smoothing optimization module 23: implements steps S301 and S302. It first fits the discrete ground marking points obtained by vector reconstruction (i.e., the 3D lane line point coordinate target values) into a polynomial curve, then associates and matches the newly fitted lane line result with the historical lane line reconstruction result, and finally performs life cycle management operations such as adding, keeping and deleting lane lines (i.e., adjusting the history lane lines);
Lane topology construction module 24: implements the aforementioned steps S401 to S403. First, topology constraint information for the local lane topology or the navigation recommended lane is constructed from the navigation information of the SD map and used as prior information. Then a new lane is generated from two lane lines (i.e., lane line information) whose spacing and parallelism meet the standards, or by integrating historical lane information, perceived lane line information (i.e., real-time perception information) and the prior information; for example, when the historical lane information describes multiple parallel lanes, a lane line at a larger angle to the historical lane lines is detected, and the map prior information indicates that the lane count increases, a corresponding added lane can be generated from that single lane line. Next, the adjacent-lane topology between lanes is established according to the parallel relationship between lanes and the shared lane line information, and lanes are broken longitudinally according to solid/dashed lane line transitions, lane count changes and the like, so as to construct the predecessor-successor topology between lanes. Finally, the virtual lane topology corresponding to imperceptible lane lines is completed according to the positional relationship between the detected passable space and the reconstructed lane lines, thereby obtaining more complete surrounding lane topology information;
The navigation recommended lane analysis module 25: implements steps S501 and S502 by matching the lane topology constructed in real time from the perception result against the topology information constructed from the navigation information (i.e., the recommended lane position information), and assigning the navigation information to the former; for example, if the recommended lane in the navigation topology is the rightmost lane, the currently perceived lane adjacent to the right road boundary is taken as the recommended lane for navigation;
Obstacle environment model construction module 26: implements the aforementioned steps S601 to S603. It establishes a reference line model of the surrounding roads from the map information and the information reconstructed in real time from the perception result (i.e., the road 3D information). Specifically, if a high-precision map is used, the reference line model is built from the geometric information of the high-precision map topology and the vector reconstruction result; if an SD map is used, it is built from the geometric and topological information of the SD map road line points (i.e., the coordinate points used to generate the road) and the vector reconstruction result. Finally, obstacles are projected to the appropriate positions of the reference line model according to the relative positional relationship between obstacles and lane lines in the real-time perception information;
in an alternative embodiment, in the autopilot system shown in fig. 8, the planning control subsystem 3 may include the following sub-modules, as shown in fig. 11:
Target trajectory prediction module 31: uses the obstacle environment model output by the environment cognition subsystem, combined with information such as obstacle type and speed, to predict the trajectory of each obstacle over a period of time in the future;
Decision-making planning module 32: calculates the optimal expected trajectory of the mobile device and the corresponding action (lane keeping, car following, left lane change, right lane change, yielding, overtaking, etc.) according to the navigation recommended lane, the surrounding obstacle environment and the predicted obstacle trajectories;
Mobile device control module 33: calculates the mobile device control signals that track the trajectory, based on the expected trajectory output by the decision planning module and the vehicle state, including but not limited to: drive, brake, gear and turn signals, etc.
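The application does not specify a control law, so purely as an illustration, the lateral part of such a tracking controller might resemble a classic pure-pursuit steering computation toward a point on the expected trajectory (all names and the wheelbase value are assumptions):

```python
import math

def pure_pursuit_steer(target, wheelbase=2.7):
    """Steering angle toward a look-ahead point on the trajectory.

    target: (x, y) in the vehicle frame, x forward, y to the left.
    Uses the pure-pursuit curvature 2*y / L^2 and a bicycle model to
    convert curvature into a steering angle.
    """
    x, y = target
    ld2 = x * x + y * y                  # squared look-ahead distance
    curvature = 2.0 * y / ld2
    return math.atan(wheelbase * curvature)
```

A target straight ahead yields zero steering; targets to the left or right yield positive or negative steering angles respectively.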
In an alternative embodiment, in the autopilot system shown in fig. 7, the map engine subsystem 4 may include the following sub-modules, as shown in fig. 12:
High-precision vector map matching and positioning module 41: takes as input the surrounding lane line and lamp post signboard information provided by the EHR module 42, together with the perceived and recognized lane lines and lamp post signboards; it projects the map information into the image coordinate system to obtain the deviation between the map projection result and the perception result, and calculates the vehicle positioning correction that minimizes that deviation to obtain the matched vehicle positioning. This is combined with RTK (Real-Time Kinematic, a carrier-phase differential technique that processes the carrier phase observations of two stations in real time: the carrier phases acquired by a reference station are sent to the user receiver to compute differences and solve coordinates), and the positioning results are filtered using information such as the IMU (Inertial Measurement Unit, which measures the three-axis attitude angles, or angular rates, and the acceleration of an object) to output a fused positioning result;
EHR module 42: receives the map and navigation information output by the EHP module and reorganizes it into the map and navigation information required inside the system. The EHR module is compatible with two input modes, high-precision map and SD map (standard map): when a high-precision map is input it outputs high-precision lane lines, lane topology information and lane-level navigation information; when an SD map is input it outputs road-level path shape point coordinate information, number-of-lanes information and navigation recommended lane information;
Offline map optimization module 43: stores the vector reconstruction results (which may be obtained by passing through the same area multiple times) as the initialization state of a vector map, and jointly optimizes these vector results using idle system resources to obtain a more accurate and complete offline high-precision vector map for later automatic driving through the same location.
As shown in fig. 13, a preferred detailed architecture of the autopilot system in this embodiment is shown.
Based on the above functional modules of the automatic driving system, three typical operation modes are provided in this application example: 1. standard high-precision map mode; 2. real-time reconstruction mode; 3. offline map optimization mode. The operation flow in each of the three modes is described below.
1. Standard high precision map mode:
In an area covered by a high-precision map, the scheme can use the high-precision map for automatic driving as usual, and can correct drawing errors that may exist in the high-precision map through vector reconstruction. The operation flow is as follows:
(1) The perception fusion subsystem 1 detects surrounding obstacle and road marking line information, and meanwhile obtains the relative position relationship between obstacles and lane lines through the semantic segmentation results of obstacles and lane lines (between which two lane lines the target is located; whether it is in the middle of the lane, to the left or to the right, expressed as the ratio of the distance from the left lane line to the whole lane width);
(2) The map engine subsystem 4 performs matching and positioning according to the perceived and identified road identification markings and the corresponding vector information in the high-precision map, extracts the high-precision map information around the mobile device in combination with the navigation information sent by the HMI, and sends it to the environment cognition subsystem 2;
(3) The environment cognition subsystem 2 receives the real-time perception detection results and builds geometric information of the surrounding road identification markings through vector reconstruction. It then builds a correspondence between the reconstructed geometric information and the map information. Finally, when building the obstacle environment model, it uses the reconstructed geometric information to correct the geometric information provided by the high-precision map within a certain range (for example, within 20 cm), projects obstacles to the corresponding lane positions according to the relative position relationship between the perceived obstacles and the lane lines, and combines the lane topology and navigation information provided by the high-precision map to obtain the environment model;
(4) The planning control subsystem 3 performs decision planning and control according to the surrounding environment model to obtain a control signal, so as to control the motion of the automatic driving mobile device.
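The obstacle-to-lane-line relationship described in step (1), the ratio of the obstacle's distance from the left lane line to the whole lane width, can be sketched as follows. The 1D lateral-coordinate representation and the left/middle/right thresholds are illustrative assumptions.

```python
# Sketch of the lane-relative position from step (1): a ratio of 0.0 means
# the obstacle sits on the left lane line and 1.0 on the right lane line.
# Coordinates are lateral offsets in metres; thresholds are assumptions.

def lane_position_ratio(left_line_y, right_line_y, obstacle_y):
    """Distance from the left lane line divided by the whole lane width."""
    return (obstacle_y - left_line_y) / (right_line_y - left_line_y)

def lane_side(ratio):
    """Coarse left / middle / right classification of the in-lane position."""
    if ratio < 1.0 / 3.0:
        return "left"
    if ratio > 2.0 / 3.0:
        return "right"
    return "middle"

ratio = lane_position_ratio(0.0, 3.5, 1.75)  # -> 0.5, centred in the lane
side = lane_side(ratio)                      # -> "middle"
```

Because the ratio is relative to the lane geometry rather than to absolute coordinates, the same value can later be used to re-project the obstacle onto the lane in the environment model, as step (3) describes.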
The modules of each subsystem in the standard high-precision map mode are as follows:
Perception fusion subsystem 1: the target detection module 11, the semantic segmentation module 12, the lane line fitting module 13, the perceived obstacle and lane line relation module 14, and the target fusion module 15;
Environment cognition subsystem 2: the vision/laser inertial odometer 21, the vector environment reconstruction module 22, the reconstruction result smooth optimization module 23, and the obstacle environment model construction module 26;
Planning control subsystem 3: the target track prediction module 31, the decision planning module 32, and the mobile device control module 33;
Map engine subsystem 4: the high-precision vector map matching and positioning module 41, and HD Map EHR (i.e., the EHR module 42 providing a high-precision map).
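The bounded correction in step (3) of the flow above, where reconstructed geometry corrects the high-precision map only within a certain range (for example, 20 cm), can be sketched as a clamped update. The per-axis clamping and all names are illustrative assumptions; only the 20 cm bound comes from the example in the text.

```python
# Sketch of step (3)'s bounded map correction: the reconstructed geometry may
# pull a map point at most MAX_CORRECTION_M along each axis, so a small
# drawing error is fixed while a wrong reconstruction cannot drag the map far.

MAX_CORRECTION_M = 0.20  # example bound from the text (20 cm)

def correct_map_point(map_xy, recon_xy, max_corr=MAX_CORRECTION_M):
    corrected = []
    for m, r in zip(map_xy, recon_xy):
        delta = max(-max_corr, min(max_corr, r - m))  # clamp the correction
        corrected.append(m + delta)
    return tuple(corrected)

# A 10 cm error is corrected fully; a 50 cm deviation is limited to 20 cm.
corrected = correct_map_point((0.0, 0.0), (0.1, 0.5))  # -> (0.1, 0.2)
```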
2. Real-time reconstruction mode (SD map mode):
In a simple scene without a high-precision map, the scheme can operate on the basis of an SD map and, combined with real-time reconstruction, guarantee a certain automatic driving capability. The operation flow is as follows:
(1) The perception fusion subsystem 1 detects surrounding obstacle and road marking information, and obtains the relative position relationship between obstacles and lane lines through the semantic segmentation results of obstacles and lane lines;
(2) The map engine subsystem 4 provides SD map information and navigation information according to the conventional integrated navigation positioning result;
(3) The environment cognition subsystem 2 first performs vector reconstruction and smoothing according to the perception result, then performs lane topology analysis of the vector result in combination with the map prior information provided by the SD map. After the lane topology is obtained, the lane recommended in the navigation information is attached to the corresponding lane topology, and finally obstacles are projected to the corresponding lane positions according to the relative position relationship between the perceived obstacles and the lane lines, so as to obtain the environment model;
(4) The planning control subsystem 3 performs decision planning and control according to the surrounding environment model to obtain a control signal, so as to control the motion of the automatic driving mobile device.
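Step (3) of this flow, grouping reconstructed lane lines into lanes under the SD map's lane-count prior and attaching the navigation-recommended lane, can be sketched as follows. The data layout and all names are assumptions made for illustration.

```python
# Sketch of step (3): reconstructed lane-line lateral offsets are paired into
# lanes, the SD-map lane count bounds how many lanes are kept, and the lane
# recommended by the navigation information is flagged.

def build_lane_topology(lane_line_offsets, sd_lane_count, recommended_index):
    """lane_line_offsets: sorted lateral offsets of detected lane lines (m)."""
    n_lanes = min(len(lane_line_offsets) - 1, sd_lane_count)
    lanes = []
    for i in range(n_lanes):
        lanes.append({
            "left_line": lane_line_offsets[i],
            "right_line": lane_line_offsets[i + 1],
            "recommended": i == recommended_index,
        })
    return lanes

# Three detected lines bound two lanes; navigation recommends the right one.
topology = build_lane_topology([0.0, 3.5, 7.0], sd_lane_count=2, recommended_index=1)
```

Taking the minimum of the detected and SD-map lane counts is one simple way to let the map prior suppress spurious extra lane lines; the patent does not prescribe this particular rule.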
The modules of each subsystem operation in the real-time reconstruction mode are as follows:
Perception fusion subsystem 1: the target detection module 11, the semantic segmentation module 12, the lane line fitting module 13, the perceived obstacle and lane line relation module 14, and the target fusion module 15;
Environment cognition subsystem 2: the vision/laser inertial odometer 21, the vector environment reconstruction module 22, the reconstruction result smooth optimization module 23, the lane topological relation construction module 24, the navigation recommended lane analysis module 25, and the obstacle environment model construction module 26;
Planning control subsystem 3: the target track prediction module 31, the decision planning module 32, and the mobile device control module 33;
Map engine subsystem 4: SD Map EHR (i.e., the EHR module 42 providing an SD map).
3. Offline map optimization mode:
In a relatively complex scene without a high-precision map, the scheme can acquire road information by manually driving through the same area multiple times and then construct a prior map of the area offline, thereby supporting subsequent automatic driving in the area. The operation flow is as follows:
(1) The vehicle is manually driven through a certain area, and the perception subsystem detects the surrounding road identification marking information in real time;
(2) The environment cognition subsystem 2 performs vector reconstruction and smoothing according to the perception result to obtain a real-time reconstruction result, and the real-time reconstruction result and positioning information are stored in the vehicle-mounted controller;
(3) The map engine subsystem 4 performs off-line map optimization based on the real-time reconstruction result to obtain an optimized high-precision map, and the optimized high-precision map is stored in the vehicle-mounted controller for subsequent use;
(4) When the mobile device subsequently travels in the same area, the map engine subsystem 4 transmits the map data to the environment cognition subsystem 2 in the high-precision map mode to assist in constructing the surrounding environment model, so as to control the automatic driving of the mobile device.
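The offline optimization of steps (2) and (3), which combines reconstructions from several manual drives through the same area, can be illustrated with a simple per-station average. The actual module performs a joint optimization, so this averaging is only a stand-in, and all names and data shapes are assumptions.

```python
# Stand-in for the offline map optimization of steps (2)-(3): several
# real-time reconstruction passes over the same road, sampled at the same
# stations, are fused by averaging so noise in any single pass is reduced.

def merge_reconstruction_passes(passes):
    """passes: list of passes, each a list of (x, y) lane-line points."""
    merged = []
    for station_pts in zip(*passes):  # points from each pass at one station
        n = len(station_pts)
        merged.append((
            sum(p[0] for p in station_pts) / n,
            sum(p[1] for p in station_pts) / n,
        ))
    return merged

pass_a = [(0.0, 1.5), (5.0, 2.0)]
pass_b = [(0.0, 2.0), (5.0, 1.5)]
offline_map = merge_reconstruction_passes([pass_a, pass_b])  # -> [(0.0, 1.75), (5.0, 1.75)]
```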
The modules of each subsystem in the offline map optimization mode are as follows:
Perception fusion subsystem 1: the semantic segmentation module 12 and the lane line fitting module 13;
Environment cognition subsystem 2: the vision/laser inertial odometer 21, the vector environment reconstruction module 22, the reconstruction result smooth optimization module 23, and the lane topological relation construction module 24;
Map engine subsystem 4: the offline map optimization module 43.
According to yet another aspect of an embodiment of the present application, there is also provided an electronic device for implementing the above-described automatic driving method, which may be a server, a terminal, or a combination thereof.
According to another embodiment of the present application, there is also provided an electronic device. As shown in fig. 14, the electronic device may include: a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, wherein the processor 1501, the communication interface 1502 and the memory 1503 communicate with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
the processor 1501, when executing the program stored in the memory 1503, performs the following steps:
Step S101, obtaining real-time perception information of a target road based on detection information obtained by detecting the target road, wherein the target road is a road on which a target mobile device travels;
Step S102, determining road 3D information of the surrounding road environment of the target mobile device according to the real-time perception information;
Step S103, obtaining lane topology information according to the road 3D information and the SD map;
Step S104, determining a recommended lane for recommending the target mobile device to travel according to the lane topology information;
Step S105, generating a mobile device control signal for controlling the target mobile device to travel in the recommended lane.
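Steps S101 to S105 executed by the processor can be read as a five-stage pipeline. The sketch below wires deliberately simplified stand-ins for each subsystem together; all data shapes and function names are assumptions rather than the patent's actual interfaces.

```python
# Hypothetical pipeline view of steps S101-S105; each stage is a minimal
# placeholder for the corresponding subsystem.

def perceive(detection_info):           # S101: real-time perception information
    return {"lane_lines": detection_info["lane_lines"]}

def build_road_3d(perception):          # S102: road 3D information
    return {"lane_lines_3d": perception["lane_lines"]}

def lane_topology(road_3d, sd_map):     # S103: topology from 3D info + SD map
    return list(range(sd_map["lane_count"]))

def recommend_lane(topology, sd_map):   # S104: pick the navigation-recommended lane
    return topology[sd_map["recommended"]]

def control_signal(lane):               # S105: control signal for that lane
    return {"target_lane": lane}

def autopilot_step(detection_info, sd_map):
    perception = perceive(detection_info)
    road_3d = build_road_3d(perception)
    topology = lane_topology(road_3d, sd_map)
    lane = recommend_lane(topology, sd_map)
    return control_signal(lane)
```

The point of the sketch is the data flow: each step consumes only the output of the previous step plus the SD map, which is why the method works without a pre-built high-precision map.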
Alternatively, in this embodiment, the above communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figure, but this does not mean there is only one bus or one type of bus. The communication interface is used for communication between the electronic device and other devices.
The memory may include random access memory (RAM) or non-volatile memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including but not limited to a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the application also provides a computer readable storage medium, wherein the storage medium comprises a stored program, and the program executes the method steps of the method embodiment.
Alternatively, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a ROM, a RAM, a removable hard disk, a magnetic disk or an optical disk.
According to a further aspect of the present embodiments, there is provided a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium; the computer program comprises program instructions which, when executed by a computer, cause the computer to perform the methods of the above method embodiments.
The embodiment of the invention also provides mobile equipment, which comprises the electronic equipment.
The mobile device in the embodiments of the present application may be any device capable of automatic driving, unmanned driving or assisted driving, such as a vehicle (e.g., a floor scrubber, a dust collector, a sweeper, a logistics vehicle, a passenger car, a bus, a van, a truck, a trailer, a dump truck, a crane, an excavator, a shovel loader, a road train, a sprinkler, a garbage truck, an engineering vehicle, a rescue vehicle, an AGV (Automated Guided Vehicle), etc.), a motorcycle, a bicycle, a tricycle, a trolley, a robot, a balance car, etc. The present application does not strictly limit the type of mobile tool, and the list here is not exhaustive.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely exemplary. For example, the division of units is merely a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be implemented through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (11)

1. An automatic driving method, comprising:
based on detection information obtained by detecting a target road, obtaining real-time perception information of the target road, wherein the target road is a road on which a target mobile device is driven;
Determining road 3D information of the surrounding road environment of the target mobile equipment according to the real-time perception information;
Obtaining lane topology information according to the road 3D information and the SD map;
Determining a recommended lane for recommending the target mobile equipment to travel according to the lane topology information;
A mobile device control signal is generated for controlling the target mobile device to travel in the recommended lane.
2. The method according to claim 1, wherein determining road 3D information of a surrounding road environment of the target mobile device according to the real-time perception information comprises:
determining a ground marking semantic segmentation result in the real-time perception information, and projecting each 3D lane line point image coordinate indicated by the ground marking semantic segmentation result to a preset ground plane according to a camera external parameter to obtain a 3D lane line point coordinate initial value corresponding to each 3D lane line point image coordinate, wherein the camera external parameter is an external parameter of a camera for acquiring an image based on the ground marking semantic segmentation result;
modeling a road surface according to the running state of the target mobile equipment to obtain initial parameters of a road surface model and a road surface initial model established by the initial parameters of the road surface model;
According to an equal-width constraint condition of the ground marking and residual errors of the 3D lane line point coordinates of a historical frame image projected into the current frame image, optimizing the 3D lane line point coordinate initial values and the road curved surface model initial parameters to obtain optimized 3D lane line point coordinate target values and road curved surface model target parameters;
Determining a lamp post signboard in the real-time perception information, and triangulating the lamp post signboard according to two adjacent frames of information in the real-time perception information to obtain a triangulating result;
tracking the lamp post signboard by using optical flow to obtain an optical flow tracking result;
Filtering error optical flow tracking results in all optical flow tracking results through the triangularization results to obtain filtered optical flow tracking results;
Performing BA optimization on the filtered optical flow tracking result to obtain the 3D coordinates of the marker of the lamp post signboard;
And obtaining the road 3D information based on the 3D lane line point coordinate target value, the road curved surface model target parameter and the identifier 3D coordinates.
3. The method of claim 2, wherein after the determining road 3D information of the surrounding road environment of the target mobile device according to the real-time perception information, the method further comprises:
Fitting the 3D lane line point coordinate target value into a polynomial curve;
And matching the polynomial curve with the existing historical lane line, and adjusting the historical lane line.
4. The method according to claim 1, wherein the obtaining lane topology information from the road 3D information and SD map comprises:
obtaining topology constraint information constructed according to the SD map, wherein the topology constraint information is used for indicating constraint conditions of lanes;
Determining the lane topology information based on lane standard information and the real-time perception information; and/or deriving the lane topology information based on the topology constraint information, the real-time awareness information, and historical lane information.
5. The method of claim 1, wherein determining a recommended lane in which to recommend travel of the target mobile device according to the lane topology information comprises:
acquiring recommended lane position information in navigation information, wherein the recommended lane position information is used for indicating the position of a lane recommended to travel in a road;
And determining a lane corresponding to the recommended lane position information from the lane topology information as the recommended lane.
6. The method of claim 1, wherein after the determining road 3D information of the surrounding road environment of the target mobile device according to the real-time perception information, the method further comprises:
acquiring map information, wherein the map information comprises an SD map and/or a high-precision map;
obtaining a reference line model of the surrounding road environment according to the map information and the road 3D information, wherein the reference line model is used for indicating lanes of a road;
And projecting the obstacle into the reference line model according to the relative position relation between the obstacle and the lane line in the real-time perception information.
7. The method according to any one of claims 1 to 6, further comprising:
under the condition of acquiring a high-precision map, outputting high-precision lane lines, lane topology information and lane-level navigation information indicated by the high-precision map;
And outputting path shape point coordinate information, lane number information and navigation recommended lane information of a road level indicated by the SD map when the SD map is acquired, wherein the path shape point coordinate information is used for indicating the shape and direction of the road.
8. An autopilot system comprising:
the perception fusion subsystem is used for obtaining real-time perception information of a target road based on detection information obtained by detecting the target road, wherein the target road is a road on which the target mobile equipment is driven;
The environment cognition subsystem is used for determining road 3D information of the surrounding road environment of the target mobile equipment according to the real-time perception information;
The environment cognition subsystem is also used for obtaining lane topology information according to the road 3D information and the SD map;
the environment cognition subsystem is also used for determining a recommended lane for recommending the target mobile device to run according to the lane topology information;
And the planning control subsystem is used for generating a mobile device control signal for controlling the target mobile device to run on the recommended lane.
9. An electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus, characterized in that,
The memory is used for storing a computer program;
The processor is configured to perform the method steps of any of claims 1 to 7 by running the computer program stored on the memory.
10. A computer-readable storage medium, characterized in that the storage medium has stored therein a computer program, wherein the computer program is arranged to perform the method steps of any of claims 1 to 7 when run.
11. A mobile device, characterized in that it comprises the electronic device of claim 9.
CN202310100969.5A 2023-01-18 2023-01-18 Automatic driving method and system, electronic device, storage medium and mobile device Pending CN118377290A (en)

Publications (1)

Publication Number Publication Date
CN118377290A true CN118377290A (en) 2024-07-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination