CN111477028B - Method and device for generating information in automatic driving - Google Patents
- Publication number
- CN111477028B (application CN202010348532.XA)
- Authority
- CN
- China
- Prior art keywords
- information
- road network
- road
- traffic
- traffic participant
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096708—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
- G08G1/096725—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096766—Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission
- G08G1/096791—Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission where the origin of the information is another vehicle
Abstract
The application discloses a method and a device for generating information in automatic driving, and relates to the field of automatic driving. The specific implementation scheme is as follows: acquiring an electronic map of a target area and acquiring road information generated when at least one vehicle travels in a road network within the target area, wherein the road information comprises road obstacle information and traffic participant information; performing statistical analysis on the information contained in the electronic map to obtain static road network information of the road network; analyzing the acquired road obstacle information to obtain dynamic road network information; analyzing the acquired traffic participant information to obtain traffic participant characteristics; and generating a road network representation of the road network based on the static road network information, the dynamic road network information and the traffic participant characteristics. This embodiment enables the generation of a road network representation.
Description
Technical Field
The embodiments of the disclosure relate to the field of computer technology, and in particular to road network representation technology in the field of automatic driving.
Background
Road testing of an unmanned automobile (also known as an autonomous vehicle) is a testing method for verifying the trafficability and stability of the unmanned automobile in open road scenes through real interaction with traffic participants (such as vehicles and pedestrians) on open roads. Road testing is an important component of the unmanned automobile testing system and the front line of the commercialization of unmanned automobiles.
The three elements of an unmanned automobile road test are the test vehicle, the version under test and the test environment. The test environment is what most clearly distinguishes road testing from the other testing stages of an unmanned automobile (such as integration testing and module testing). Real road environments vary widely, and many factors can influence the driving of the unmanned automobile. For example, the behavior of traffic participants has an important influence on the result of an unmanned automobile road test and is one of its important variables. It is therefore very important for road testing to profile the road network under test and to characterize it along multiple dimensions.
Disclosure of Invention
A method, apparatus, device, and storage medium for generating information in automatic driving are provided.
According to a first aspect, there is provided a method for generating information in autonomous driving, comprising: acquiring an electronic map of a target area and acquiring road information generated when at least one vehicle travels in a road network within the target area, wherein the road information comprises road obstacle information and traffic participant information; performing statistical analysis on information contained in the electronic map to obtain static road network information of the road network; analyzing the acquired road obstacle information to obtain dynamic road network information; analyzing the acquired traffic participant information to obtain traffic participant characteristics; and generating a road network representation of said road network based on said static road network information, said dynamic road network information and said traffic participant characteristics.
According to a second aspect, there is provided an apparatus for generating information in autonomous driving, comprising: an acquisition unit configured to acquire an electronic map of a target area and acquire road information generated when at least one vehicle travels in a road network within the target area, wherein the road information comprises road obstacle information and traffic participant information; a first analysis unit configured to perform statistical analysis on information contained in the electronic map to obtain static road network information of the road network; a second analysis unit configured to analyze the acquired road obstacle information to obtain dynamic road network information; a third analysis unit configured to analyze the acquired traffic participant information to obtain traffic participant characteristics; and a generating unit configured to generate a road network representation of said road network based on said static road network information, said dynamic road network information and said traffic participant characteristics.
According to a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method according to any one of the first aspect.
According to a fourth aspect, the disclosed embodiments provide a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause the computer to perform the method as described in any one of the first aspect.
According to the technology of the embodiments, a road network representation that describes a road network along multiple dimensions is generated in combination with the characteristics of autonomous vehicles, the road environment in which an autonomous vehicle runs is described quantitatively, and a road network representation suitable for road testing of autonomous vehicles is produced.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow diagram of one embodiment of a method for generating information in autonomous driving according to the present disclosure;
FIG. 2 is a schematic illustration of a traffic-participating vehicle changing lanes according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for generating information in autonomous driving according to the present disclosure;
FIG. 4 is a flow chart of yet another embodiment of a method for generating information in autonomous driving according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for generating information in autonomous driving according to the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing a method for generating information in automatic driving according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application to assist in understanding, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As shown in fig. 1, a flow 100 of one embodiment of a method for generating information in autonomous driving according to the present disclosure is shown. The method for generating information in automatic driving comprises the following steps:
s101, acquiring an electronic map of a target area and acquiring road information generated when at least one vehicle runs in a road network in the target area.
In the present embodiment, the executing entity of the method for generating information in automatic driving may acquire an electronic map of a target area. Here, the target area may be an area manually designated according to actual needs, for example an area in which road testing of autonomous vehicles is to be carried out, such as the area of a city. In addition, the executing entity may further acquire road information generated when at least one vehicle travels in the road network within the target area. Here, the road information may include road obstacle information and traffic participant information.
Each of the at least one vehicle may have an obstacle sensing function. As an example, the at least one vehicle may be a vehicle having an automatic driving function. When a vehicle travels through the road network in the target area (whether driven manually or automatically), it acquires road environment information through sensors such as lidar, ultrasonic radar and cameras, and acquires vehicle positioning information through GPS (Global Positioning System), IMU (Inertial Measurement Unit) and the like. The vehicle may then perform various analyses on the information acquired by the sensors and generate road information, driving decisions and so on according to certain rules. For example, it may derive road obstacle information such as obstacle type, obstacle position, obstacle size and obstacle moving speed.
Here, the electronic map may be an automatic driving map (or automatic driving high-precision map) used to guide the driving of an autonomous vehicle. The automatic driving map constructs a virtual road environment model that mirrors reality for automatic driving. For an autonomous vehicle, the automatic driving map can reduce the difficulty of environment perception, provide a more complete picture of the surroundings with higher positioning accuracy and reliability, and serve as a basis for behavior decisions. As an example, the automatic driving map may include various information such as road-related information (e.g., number of lanes, construction state, drivable area, non-drivable area, intersection position), lane information (lane line position, curvature/gradient, center line, lane attribute changes, etc.) and traffic facility information (traffic lights, zebra crossings, stop lines, traffic signs, road isolation guardrails, etc.).
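To make the statistical steps that follow concrete, the sketch below (not part of the patent text; all class and field names are hypothetical) illustrates one possible structure for the road information records reported by the vehicles and for the elements of the automatic driving map, written in Python purely for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List, Tuple

class ObstacleType(Enum):
    VEHICLE = "vehicle"
    PEDESTRIAN = "pedestrian"
    BICYCLE = "bicycle"
    MOVABLE = "movable"
    IMMOVABLE = "immovable"
    UNKNOWN = "unknown"

@dataclass
class ObstacleRecord:
    """One obstacle observation reported by a vehicle while driving in the road network."""
    timestamp: float                      # time of the observation
    location: Tuple[float, float]         # position of the obstacle
    obstacle_type: ObstacleType
    size: Tuple[float, float, float]      # length, width, height in metres
    speed: float                          # moving speed in m/s

@dataclass
class MapElement:
    """One element of the automatic driving map of the target area."""
    element_type: str                     # e.g. "intersection", "traffic_light", "lane"
    geometry: List[Tuple[float, float]]   # polyline/polygon describing the element
    attributes: Dict[str, float]          # e.g. {"width": 3.5} for a lane
```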
In the present embodiment, the execution subject may be various electronic devices having information processing and generating functions, including but not limited to a smart phone, a tablet computer, a laptop portable computer, a desktop computer, and the like.
S102, performing statistical analysis on the information contained in the electronic map to obtain static road network information of the road network.
In this embodiment, the executing entity may perform statistical analysis on the information included in the electronic map to obtain the static road network information of the road network. Here, the static road network information may describe the kinds and numbers of elements contained in the road network of the target area (e.g., intersections, traffic lights, roundabouts, etc.). Take the scene of an unmanned vehicle entering an intersection as an example: after entering the intersection, the vehicle may perform a series of actions including stopping and waiting at the intersection, changing lanes in the intersection, recognizing traffic lights, and interacting with motor vehicles and pedestrians in the intersection. A large number of intersections in the road network therefore significantly increases the driving difficulty for the unmanned vehicle. Similarly, if the road network contains many areas where the number of lanes increases or decreases, the probability of interaction between the unmanned vehicle and other vehicles and the difficulty of lane selection also increase.
In some optional implementations of the present embodiment, the step S102 may specifically be performed as follows:
First, statistical analysis is performed on at least one of the following kinds of information contained in the electronic map: intersections, traffic lights, lane-increase areas, lane-decrease areas, lane width, number of lanes, number of main roads, number of auxiliary roads and road isolation guardrail length.

In this implementation, the executing entity may perform statistical analysis on at least one of the following kinds of information contained in the electronic map obtained in S101: intersections, traffic lights, lane-increase areas, lane-decrease areas, lane width, lanes, main roads, auxiliary roads, road isolation guardrail length and the like, thereby obtaining statistical results such as the number of intersections, the number of traffic lights, the number of lane-increase areas, the number of lane-decrease areas, lane width, the number of lanes, the number of main roads, the number of auxiliary roads and the length of road isolation guardrails.
Then, the static road network information is generated from the statistical analysis results, for example by directly using the statistical results as the static road network information.

In this implementation, the static road network information may comprise a variety of statistical results. Through this implementation, static road network information containing several kinds of information can be obtained, so that the resulting static road network information is richer.
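As an illustration of this implementation, the following is a minimal sketch, assuming the hypothetical MapElement structure introduced earlier, of how the listed map elements could be counted to form static road network information; the element type strings and dictionary keys are assumptions, not terms from the patent.

```python
from collections import Counter
from typing import Dict, List

def build_static_road_network_info(map_elements: List["MapElement"]) -> Dict[str, float]:
    """Count map elements to obtain static road network information (S102)."""
    counts = Counter(e.element_type for e in map_elements)
    lane_widths = [e.attributes["width"] for e in map_elements
                   if e.element_type == "lane" and "width" in e.attributes]
    guardrail_length = sum(e.attributes.get("length", 0.0) for e in map_elements
                           if e.element_type == "road_isolation_guardrail")
    return {
        "num_intersections": counts["intersection"],
        "num_traffic_lights": counts["traffic_light"],
        "num_lane_increase_areas": counts["lane_increase_area"],
        "num_lane_decrease_areas": counts["lane_decrease_area"],
        "num_lanes": counts["lane"],
        "num_main_roads": counts["main_road"],
        "num_auxiliary_roads": counts["auxiliary_road"],
        "avg_lane_width": sum(lane_widths) / len(lane_widths) if lane_widths else 0.0,
        "guardrail_length": guardrail_length,
    }
```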
And S103, analyzing the acquired road obstacle information to obtain dynamic road network information.
In this embodiment, the executing entity may analyze the road obstacle information acquired in S101 to obtain dynamic road network information. Here, the dynamic road network information may include the distribution of obstacles around the vehicle, the number of obstacles around the vehicle, the traffic volume at intersections (e.g., vehicle volume, pedestrian volume), and the like.
In some optional implementations of this embodiment, step S103 may specifically include the following: 1) counting the number of obstacles of different types at different places of the road network at different times; and/or 2) counting the traffic volume of target intersections in the road network at different times; and/or 3) counting the number of obstacles of different types within a range centered on the position of the vehicle with a preset distance as the radius; and/or 4) determining the traffic distribution of the road network within a preset statistical period.
In this implementation, the executing entity may analyze the acquired road obstacle information to obtain the dynamic road network information by at least one of the following manners:
1) Counting the number of obstacles of different types at different places of the road network at different times.

In this implementation, the vehicle may generate road obstacle information in real time while traveling through the road network, and record the time and place at which the road obstacle information is generated. The road obstacle information may include the obstacle type. As an example, the obstacle types a vehicle may recognize include vehicles, pedestrians, bicycles, movable obstacles, immovable obstacles, unknown obstacles and so on. In this way the executing entity can count the number of obstacles of each type at different places of the road network at different times. For example, the number of obstacles of each type at a place X of the road network may be counted over different time periods of the day, forming a time series of obstacle counts for place X. The executing entity may generate one item of the dynamic road network information from these counts, for example by directly using the counts of different obstacle types at different places and times as one item of the dynamic road network information.
2) Counting the traffic volume of target intersections in the road network at different times.

In this implementation, the executing entity may count the traffic volume of target intersections in the road network at different times. Here, a target intersection may be an intersection manually specified according to actual needs. The executing entity may generate one item of the dynamic road network information from the traffic volumes of the target intersections at different times, for example by directly using these traffic volumes as one item of the dynamic road network information.
3) Counting the number of obstacles of different types within a range centered on the position of the vehicle with a preset distance as the radius.

In this implementation, the executing entity may count the number of obstacles of each type within a range centered on the position of the vehicle with a preset distance (e.g., 30 meters, 50 meters, 100 meters) as the radius, i.e., within a predetermined range around the vehicle. The executing entity may generate one item of the dynamic road network information from the obstacle data within this range, for example by directly using these data as one item of the dynamic road network information.
4) Determining the traffic distribution of the road network within a preset statistical period.

In this implementation, the executing entity may determine the traffic distribution of the road network over a preset statistical period (e.g., one day, one week). The executing entity may generate one item of the dynamic road network information from the traffic distribution over the statistical period, for example by directly using it as one item of the dynamic road network information. Through this implementation, dynamic road network information comprising several kinds of information can be obtained, so that the resulting dynamic road network information is richer.
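A hedged sketch of implementations 1) and 3) above follows, again assuming the hypothetical ObstacleRecord structure; the helper callables place_of and period_of (mapping a location to a road network place and a timestamp to a time period) are assumptions introduced only for illustration.

```python
from collections import defaultdict
from math import hypot
from typing import Callable, Dict, List, Tuple

def count_obstacles_by_place_time_type(records: List["ObstacleRecord"],
                                       place_of: Callable, period_of: Callable) -> Dict:
    """Implementation 1): number of obstacles of each type at each place and time period."""
    counts: Dict[Tuple, int] = defaultdict(int)
    for r in records:
        counts[(place_of(r.location), period_of(r.timestamp), r.obstacle_type)] += 1
    return counts

def count_obstacles_around_vehicle(records: List["ObstacleRecord"],
                                   vehicle_position: Tuple[float, float],
                                   radius_m: float = 50.0) -> Dict:
    """Implementation 3): obstacles of each type within a circle around the vehicle."""
    counts: Dict["ObstacleType", int] = defaultdict(int)
    for r in records:
        dx = r.location[0] - vehicle_position[0]
        dy = r.location[1] - vehicle_position[1]
        if hypot(dx, dy) <= radius_m:   # assumes positions in a local metric frame
            counts[r.obstacle_type] += 1
    return counts
```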
And S104, analyzing the acquired traffic participant information to obtain the characteristics of the traffic participants.
In this embodiment, the executing entity may analyze the traffic participant information acquired in S101 to obtain the traffic participant characteristics. Traffic participants may include vehicles, bicycles, pedestrians and the like. By way of example, the traffic participant characteristics may include the degree to which traffic participants comply with traffic regulations.
In general, an autonomous vehicle drives according to traffic regulations, and the scenes it can handle are scenes in which the other traffic participants also comply with the traffic regulations. When surrounding vehicles drive aggressively (e.g., cut in quickly at close range), the autonomous vehicle has only a short time to respond and may exhibit unreasonable driving behavior such as sudden braking or collision. The degree to which traffic participants comply with traffic regulations therefore also affects the driving of the autonomous vehicle.
In some optional implementations of this embodiment, the traffic participant information may include traffic participant violation information and lane change information. And the executing body can analyze the acquired traffic participant information in at least one of the following ways, so as to obtain the traffic participant characteristics:
1) Performing statistical analysis on the traffic participant violation information sent by the at least one vehicle to obtain traffic participant violation characteristics.

In this implementation, the executing entity may perform statistical analysis on the traffic participant violation information sent by the at least one vehicle to obtain the traffic participant violation characteristics. For example, the number of traffic participant violations occurring within a predetermined time (e.g., one day) may be counted, or the number of violations per predetermined number of kilometers within a predetermined time (e.g., per 100 kilometers in a day) may be counted. Here, violations may include various behaviors that violate road traffic regulations, such as driving against traffic, speeding, illegal parking, a vehicle entering a non-driving area, and a pedestrian entering a motor-vehicle-only lane. The executing entity may generate one of the traffic participant characteristics from the violation characteristics, for example by directly using the violation characteristics as one of the traffic participant characteristics.
2) Counting the lane-change distance and the vehicle speed of traffic-participating vehicles when changing lanes.

In this implementation, the executing entity may count the lane-change distance and the vehicle speed of traffic-participating vehicles in the road network when they change lanes. Here, the lane-change distance may refer to the distance, before the vehicle changes lane, from the tail of the lane-changing vehicle to the head of the vehicle travelling in the target lane that will be behind it after the lane change. The vehicle speed during the change may include a longitudinal speed and a lateral speed. Taking fig. 2 as an example, fig. 2 includes a lane 1 and a lane 2; a vehicle A traveling in lane 1 intends to change to lane 2, and the vehicle alongside vehicle A is vehicle B. The lane-change distance is the distance d from the tail of vehicle A to the head of vehicle B, and the vehicle speed during the change may include a longitudinal speed V1 and a lateral speed V2. The executing entity may generate one of the traffic participant characteristics from the lane-change distance and the speed during the change, for example by directly using them as one of the traffic participant characteristics.
3) Counting the vehicle speed of traffic-participating vehicles.

In this implementation, the executing entity may count the vehicle speeds of the traffic-participating vehicles in the road network and generate one of the traffic participant characteristics from these speeds, for example by directly using the vehicle speed of the traffic-participating vehicles as one of the traffic participant characteristics. Through this implementation, traffic participant characteristics comprising several kinds of information can be obtained, so that the resulting characteristics are richer.
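The sketch below illustrates, under assumed record layouts, how the violation rate and the lane-change quantities d, V1 and V2 from Fig. 2 might be aggregated into traffic participant characteristics; the dictionary keys distance_d, speed_v1 and speed_v2 are hypothetical.

```python
from typing import Dict, List, Optional

def violation_rate_per_100km(violation_events: List[dict], total_km_driven: float) -> float:
    """Traffic participant violations observed per 100 km driven by the reporting vehicles."""
    return 100.0 * len(violation_events) / total_km_driven if total_km_driven else 0.0

def lane_change_statistics(lane_change_events: List[Dict[str, float]]) -> Optional[Dict[str, float]]:
    """Average cut-in distance d and lane-change speeds V1 (longitudinal) and V2 (lateral), cf. Fig. 2."""
    n = len(lane_change_events)
    if n == 0:
        return None
    return {
        "avg_change_distance": sum(e["distance_d"] for e in lane_change_events) / n,
        "avg_longitudinal_speed": sum(e["speed_v1"] for e in lane_change_events) / n,
        "avg_lateral_speed": sum(e["speed_v2"] for e in lane_change_events) / n,
    }
```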
S105, generating the road network representation of the road network according to the static road network information, the dynamic road network information and the traffic participant characteristics.
In this embodiment, the executing entity may combine the static road network information obtained in S102, the dynamic road network information obtained in S103 and the traffic participant characteristics obtained in S104 to obtain the road network representation of the road network. In practice, road network representations may be applied to many aspects of road testing of autonomous vehicles. As an example, a new road network can be described quantitatively by its road network representation along the dimensions of static road network information, dynamic road network information and traffic participant information. Comparing the representation of the new road network side by side with the representation of a known, mature road network gives an intuitive and comprehensive understanding of the new road network, so the road network representation can be used for road network extension and road network selection for autonomous vehicles. As another example, the road network representation may guide the selection of road network regions for different test purposes, enabling dedicated, focused tests. For example, to test the ability of an autonomous vehicle to pass safely through intersections, a region whose road network representation shows a large number of intersections may be selected as the test area.
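As a sketch of how the three kinds of information might be assembled and then used for test-area selection as just described, assuming the dictionary-shaped outputs of the earlier sketches:

```python
from typing import Dict, List, Tuple

def build_road_network_representation(static_info: dict, dynamic_info: dict,
                                      participant_features: dict) -> dict:
    """Combine the outputs of S102, S103 and S104 into one road network representation."""
    return {
        "static": static_info,
        "dynamic": dynamic_info,
        "traffic_participants": participant_features,
    }

def select_test_regions(representations: Dict[str, dict], section: str = "static",
                        field: str = "num_intersections", top_k: int = 3) -> List[Tuple[str, dict]]:
    """Rank candidate regions by one field of their representation, e.g. pick the
    regions with the most intersections to test intersection-passing ability."""
    return sorted(representations.items(),
                  key=lambda item: item[1][section].get(field, 0),
                  reverse=True)[:top_k]
```

The same dictionaries can also be compared side by side between a new road network and a known, mature one.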
In some optional implementations of this embodiment, the method for generating information in automatic driving described above may further include the following steps not shown in fig. 1:
first, a road network representation is displayed for viewing by a user.
In this implementation, the executing entity may display the road network representation generated in S105 for the user to view. Here, the user may be a person who verifies the road network representation. The user can check the displayed road network representation against the real conditions of the road network in the target area, and if any information in the representation is found to be wrong, the user can input modification information for that information.
Modification information sent by the user is then received, and the road network representation is modified in accordance with the modification information.
In this implementation, the executing entity may receive the modification information sent by the user and modify the information in the road network representation accordingly. Through this implementation, the user can correct the generated road network representation, making it more accurate.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for generating information in autonomous driving according to the present embodiment. In the application scenario of fig. 3, the terminal device 301 may first acquire an electronic map of a target area and road information generated when a large number of vehicles travel through a road network in the target area, where the road information includes road obstacle information and traffic participant information. Next, the terminal device 301 may perform statistical analysis on the information included in the electronic map to obtain static road network information of the road network. Then, the terminal device 301 may further analyze the acquired road obstacle information to obtain dynamic road network information. Then, the terminal device 301 may further analyze the obtained traffic participant information to obtain the traffic participant characteristics. Finally, the terminal device 301 may generate a road network representation of the road network based on the static road network information, the dynamic road network information and the traffic participant characteristics.
The method provided by the above embodiment of the present disclosure combines the characteristics of autonomous vehicles to generate a road network representation that describes a road network along multiple dimensions, quantitatively describes the road environment in which an autonomous vehicle runs, and produces a road network representation suitable for road testing of autonomous vehicles.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for generating information in autonomous driving is shown. The flow 400 of the method for generating information in autonomous driving includes the steps of:
s401, acquiring an electronic map of a target area, and acquiring road information generated when at least one vehicle runs in a road network in the target area.
In this embodiment, S401 is similar to S101 of the embodiment shown in fig. 1, and is not described here again.
S402, carrying out statistical analysis on the information contained in the electronic map to obtain static road network information of the road network.
In this embodiment, S402 is similar to S102 of the embodiment shown in fig. 1, and is not described here again.
And S403, analyzing the acquired road obstacle information to obtain dynamic road network information.
In this embodiment, S403 is similar to S103 of the embodiment shown in fig. 1, and is not described here again.
S404, analyzing the acquired traffic participant information to obtain the characteristics of the traffic participants.
In this embodiment, S404 is similar to S104 of the embodiment shown in fig. 1, and is not described here again.
S405, generating a road network representation of the road network according to the static road network information, the dynamic road network information and the traffic participant characteristics.
In this embodiment, S405 is similar to S105 of the embodiment shown in fig. 1, and is not described here again.
S406, evaluating the automatic driving difficulty of the road network based on the road network representation of the road network.
In this embodiment, the executing entity may evaluate the automatic driving difficulty of the road network based on the road network representation obtained in S405.
As an example, an evaluation rule may be stored in advance in the executing entity; for example, the evaluation rule may be a calculation formula that computes, from one or more items of information in the road network representation, a result representing the degree of automatic driving difficulty. For instance, several items of information in the road network representation that strongly influence the driving of an autonomous vehicle may be selected manually, and the selected items may be combined by a predetermined calculation formula to obtain the result.
As another example, an evaluation model may be stored in advance in the executing entity, where the evaluation model represents the correspondence between road network representations and automatic driving difficulty. The executing entity can then input the road network representation obtained in S405 into the evaluation model and obtain the automatic driving difficulty output by the model. The evaluation model can be obtained by training with a preset sample set using a machine learning method. For example, an executing entity used to train the evaluation model (which may be the same as or different from the executing entity described above) may train it as follows: first, a sample set is obtained, where each sample comprises a sample road network representation and the corresponding sample automatic driving difficulty. Then, using the sample road network representations as input and the corresponding sample automatic driving difficulties as the expected output, the evaluation model is trained.
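The patent does not prescribe a specific model; as one hedged possibility, the sketch below fits a simple ridge regression from a few hand-picked representation fields (all field names are illustrative assumptions) to a numeric difficulty label, following the described procedure of pairing sample representations with sample difficulties.

```python
import numpy as np
from sklearn.linear_model import Ridge

def representation_to_features(rep: dict) -> np.ndarray:
    """Flatten a road network representation into a numeric feature vector (illustrative fields)."""
    return np.array([
        rep["static"]["num_intersections"],
        rep["static"]["num_traffic_lights"],
        rep["static"]["num_lane_decrease_areas"],
        rep["dynamic"]["avg_obstacles_near_vehicle"],
        rep["traffic_participants"]["violations_per_100km"],
    ], dtype=float)

def train_evaluation_model(sample_reps: list, sample_difficulties: list) -> Ridge:
    """Fit a model mapping sample road network representations to sample driving difficulties."""
    X = np.stack([representation_to_features(r) for r in sample_reps])
    y = np.asarray(sample_difficulties, dtype=float)
    model = Ridge(alpha=1.0)
    model.fit(X, y)
    return model

def evaluate_difficulty(model: Ridge, rep: dict) -> float:
    """Predict the automatic driving difficulty of a road network from its representation."""
    return float(model.predict(representation_to_features(rep).reshape(1, -1))[0])
```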
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 1, the method for generating information in this embodiment highlights the step of evaluating the automatic driving difficulty of the road network. The method described in this embodiment can therefore evaluate the automatic driving difficulty of the road network in the target area based on the road network representation, thereby providing a quantitative description of how difficult the road network in the target area is for an autonomous vehicle to drive.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating information, which corresponds to the method embodiment shown in fig. 1, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for generating information in autonomous driving of the present embodiment includes: an acquisition unit 501, a first analysis unit 502, a second analysis unit 503, a third analysis unit 504, and a generation unit 505. The acquiring unit 501 is configured to acquire an electronic map of a target area and acquire road information generated when at least one vehicle travels through a road network in the target area, wherein the road information includes road obstacle information and traffic participant information; the first analysis unit 502 is configured to perform statistical analysis on information included in the electronic map to obtain static road network information of the road network; the second analysis unit 503 is configured to analyze the acquired road obstacle information to obtain dynamic road network information; the third analyzing unit 504 is configured to analyze the acquired traffic participant information to obtain traffic participant characteristics; the generating unit 505 is configured to generate a road network representation of said road network based on said static road network information, said dynamic road network information and said traffic participant characteristics.
In this embodiment, specific processes of the obtaining unit 501, the first analyzing unit 502, the second analyzing unit 503, the third analyzing unit 504 and the generating unit 505 of the apparatus 500 for generating information in automatic driving and technical effects brought by the specific processes may refer to the related descriptions of S101, S102, S103, S104 and S105 in the corresponding embodiment of fig. 1, and are not described herein again.
In some optional implementations of this embodiment, the apparatus 500 further includes: an evaluation unit (not shown) configured to evaluate the automatic driving difficulty of the road network based on the road network representation of the road network.
In some optional implementations of this embodiment, the apparatus 500 further includes: a display unit (not shown) configured to display the road network representation for viewing by a user; a modification unit (not shown) configured to receive modification information sent by said user and to modify said road network representation in accordance with said modification information.
In some optional implementations of the present embodiment, the first analysis unit 502 is further configured to: perform statistical analysis on at least one of the following kinds of information contained in the electronic map: intersections, traffic lights, lane-increase areas, lane-decrease areas, lane width, number of lanes, number of main roads, number of auxiliary roads and road isolation guardrail length; and generate static road network information according to the statistical analysis results.
In some optional implementations of this embodiment, the second analysis unit 503 is further configured to: count the number of obstacles of different types at different places of the road network at different times; and/or count the traffic volume of target intersections in the road network at different times; and/or count the number of obstacles of different types within a range centered on the position of the vehicle with a preset distance as the radius; and/or determine the traffic distribution of said road network within a preset statistical period.
In some optional implementations of this embodiment, the traffic participant information includes traffic participant violation information and lane change information; and the third analysis unit 504 is further configured to: perform statistical analysis on the traffic participant violation information sent by the at least one vehicle to obtain traffic participant violation characteristics; and/or count the lane-change distance and the vehicle speed of traffic-participating vehicles when changing lanes; and/or count the vehicle speeds of the traffic-participating vehicles.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 6 shows a block diagram of an electronic device for the method for generating information in automatic driving according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the electronic apparatus includes: one or more processors 601, memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 601 is taken as an example.
The memory 602 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the method for generating information in autonomous driving provided herein. A non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method for generating information in autonomous driving provided by the present application.
The memory 602 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for generating information in automatic driving in the embodiment of the present application (for example, the acquiring unit 501, the first analyzing unit 502, the second analyzing unit 503, the third analyzing unit 504, and the generating unit 505 shown in fig. 5). The processor 601 executes various functional applications of the server and data processing, i.e., implementing the method for generating information in automatic driving in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of an electronic device for generating information in automatic driving, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 602 optionally includes memory located remotely from processor 601, and these remote memories may be connected over a network to electronics used in generating information in autonomous driving. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method for generating information in automatic driving may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of an electronic apparatus for generating information in autopilot, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, or like input device. The output devices 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the application, a road network representation that describes the road network along multiple dimensions is generated in combination with the characteristics of autonomous vehicles, the road environment of the autonomous vehicle is quantitatively described, and a road network representation suitable for road testing of autonomous vehicles is produced.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited in this respect as long as the desired results of the technical solutions disclosed herein can be achieved.
The above-described embodiments are not intended to limit the scope of the present disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (12)
1. A method for generating information in autonomous driving, comprising:
the method comprises the steps of obtaining an electronic map of a target area, and obtaining road information generated when at least one vehicle runs in a road network in the target area, wherein the road information comprises road obstacle information and traffic participant information;
performing statistical analysis on information contained in the electronic map to obtain static road network information of the road network;
analyzing the acquired road obstacle information to obtain dynamic road network information;
analyzing the acquired traffic participant information to obtain traffic participant characteristics, wherein the traffic participant characteristics comprise the degree to which other traffic participants comply with traffic regulations;
generating a road network representation of said road network from said static road network information, said dynamic road network information and said traffic participant characteristics;
the method further comprises the following steps:
and inputting the road network representation of the road network into a pre-stored evaluation model to obtain an automatic driving difficulty evaluation of the road network, wherein the evaluation model is used for representing the correspondence between road network representations and automatic driving difficulty.
2. The method of claim 1, further comprising:
displaying the road network representation for a user to view;
receiving modification information sent by the user, and modifying the road network representation according to the modification information.
3. The method according to claim 1, wherein said statistically analyzing information contained in said electronic map to obtain static road network information of said road network comprises:
performing statistical analysis on at least one of the following kinds of information contained in the electronic map: intersections, traffic lights, lane-increase areas, lane-decrease areas, lane width, number of lanes, number of main roads, number of auxiliary roads and road isolation guardrail length;
and generating static road network information according to the statistical analysis result.
4. The method according to claim 1, wherein said analyzing said acquired road obstacle information to obtain dynamic road network information comprises:
counting the number of obstacles of different types at different places of the road network at different times; and/or

counting the traffic volume of target intersections in the road network at different times; and/or

counting the number of obstacles of different types within a range centered on the position of the vehicle with a preset distance as the radius; and/or

determining the traffic distribution of the road network within a preset statistical period.
5. The method of claim 1 wherein the traffic participant information includes traffic participant violation information, lane change information; and
analyzing the acquired traffic participant information to obtain traffic participant characteristics, wherein the analyzing comprises the following steps:
carrying out statistical analysis on the traffic participant violation information sent by the at least one vehicle to obtain traffic participant violation characteristics; and/or

counting the lane-change distance and the vehicle speed of traffic-participating vehicles when changing lanes; and/or

counting the vehicle speed of the traffic-participating vehicles.
6. An apparatus for generating information in automatic driving, characterized by comprising:
an acquisition unit configured to acquire an electronic map of a target area and acquire road information generated when at least one vehicle travels in a road network within the target area, wherein the road information comprises road obstacle information and traffic participant information;
the first analysis unit is configured to perform statistical analysis on information contained in the electronic map to obtain static road network information of the road network;
the second analysis unit is configured to analyze the acquired road obstacle information to obtain dynamic road network information;
a third analysis unit configured to analyze the acquired traffic participant information to obtain traffic participant characteristics, wherein the traffic participant characteristics include degrees of other traffic participants complying with traffic regulations;
A generating unit configured to generate a road network representation of said road network from said static road network information, said dynamic road network information and said traffic participant characteristics;
the device further comprises:
and the evaluation unit is configured to input the road network images of the road network into a pre-stored evaluation model to obtain the road network for automatic driving difficulty evaluation, wherein the evaluation model is used for representing the corresponding relation between the road network images and the automatic driving difficulty.
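The apparatus of claim 6 can be read as a pipeline of units: acquire, analyze (static, dynamic, participants), generate, evaluate. The sketch below composes such units as plain callables; the RoadNetworkProfiler class and its interfaces are hypothetical and are not the patented apparatus.

```python
# Purely illustrative composition of the units named in claim 6 as plain
# callables; the class and its interfaces are assumptions.
class RoadNetworkProfiler:
    def __init__(self, acquire, analyze_static, analyze_dynamic,
                 analyze_participants, generate, evaluate):
        self.acquire = acquire                            # acquisition unit
        self.analyze_static = analyze_static              # first analysis unit
        self.analyze_dynamic = analyze_dynamic            # second analysis unit
        self.analyze_participants = analyze_participants  # third analysis unit
        self.generate = generate                          # generating unit
        self.evaluate = evaluate                          # evaluation unit

    def run(self, target_area):
        electronic_map, road_info = self.acquire(target_area)
        static_info = self.analyze_static(electronic_map)
        dynamic_info = self.analyze_dynamic(road_info["obstacles"])
        participant_features = self.analyze_participants(road_info["participants"])
        representation = self.generate(static_info, dynamic_info, participant_features)
        return representation, self.evaluate(representation)

# Trivial wiring with placeholder callables, just to show the data flow.
profiler = RoadNetworkProfiler(
    acquire=lambda area: ({"elements": []}, {"obstacles": [], "participants": []}),
    analyze_static=lambda emap: {"intersections": 0},
    analyze_dynamic=lambda obstacles: {"obstacle_count": len(obstacles)},
    analyze_participants=lambda participants: {"violation_rate": 0.0},
    generate=lambda s, d, p: {**s, **d, **p},
    evaluate=lambda rep: sum(rep.values()),
)
print(profiler.run("demo_target_area"))
```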
7. The apparatus of claim 6, further comprising:
a display unit configured to display the road network representation for viewing by a user;
a modification unit configured to receive modification information sent by said user and to modify said road network representation in accordance with said modification information.
8. The apparatus of claim 6, wherein the first analysis unit is further configured to:
performing statistical analysis on at least one of the following items of information contained in the electronic map: intersections, traffic lights, lane-number-increase areas, lane-number-decrease areas, lane width, number of lanes, number of main roads, number of auxiliary roads, and road isolation guardrail length;
and generating static road network information according to the statistical analysis result.
9. The apparatus of claim 6, wherein the second analysis unit is further configured to:
counting the number of obstacles of different types at different locations of the road network at different times; and/or
counting the traffic volumes of target intersections in the road network at different times; and/or
counting the number of obstacles of different types within a circular range centered on the position of the vehicle with a preset distance as the radius; and/or
determining the traffic distribution of the road network in a preset statistical period.
10. The apparatus of claim 6, wherein the traffic participant information includes traffic participant violation information and lane change information; and
the third analysis unit is further configured to:
perform statistical analysis on the traffic participant violation information sent by the at least one vehicle to obtain traffic participant violation characteristics; and/or
count the lane change distance and the vehicle speed of the traffic-participating vehicles when changing lanes; and/or
count the vehicle speed of the traffic-participating vehicles.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010348532.XA CN111477028B (en) | 2020-04-28 | 2020-04-28 | Method and device for generating information in automatic driving |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111477028A CN111477028A (en) | 2020-07-31 |
CN111477028B true CN111477028B (en) | 2022-05-24 |
Family
ID=71762952
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010348532.XA Active CN111477028B (en) | 2020-04-28 | 2020-04-28 | Method and device for generating information in automatic driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111477028B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112598897B (en) * | 2020-09-21 | 2021-10-15 | 禾多科技(北京)有限公司 | Traffic participant behavior detection method, device, electronic equipment and medium |
CN113554871B (en) * | 2021-07-19 | 2023-03-21 | 联想(北京)有限公司 | Internet of vehicles data processing method and electronic equipment |
CN114863701B (en) * | 2022-04-26 | 2024-01-16 | 北京百度网讯科技有限公司 | Traffic signal lamp control method, device, electronic equipment and medium |
CN115116041A (en) * | 2022-06-16 | 2022-09-27 | 阿里巴巴达摩院(杭州)科技有限公司 | Method and device for training state recognition model and method and device for acquiring driving state of obstacle vehicle |
CN117058900A (en) * | 2023-04-24 | 2023-11-14 | 深圳市赛诺杰科技有限公司 | Traffic signal lamp system based on buried signal lamp |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102012017569A1 (en) * | 2012-09-06 | 2013-03-14 | Daimler Ag | Method for operating vehicle i.e. car, involves determining height profile of road lying ahead of vehicle, and automatically reducing momentary driving speed for passing over unevenness in road to preset driving speed |
CN104477167A (en) * | 2014-11-26 | 2015-04-01 | 浙江大学 | Intelligent driving system and control method thereof |
CN105954040A (en) * | 2016-04-22 | 2016-09-21 | 百度在线网络技术(北京)有限公司 | Testing method and device for driverless automobiles |
US9718466B2 (en) * | 2014-11-12 | 2017-08-01 | Hyundai Motor Company | Driving path planning apparatus and method for autonomous vehicle |
CN107727411A (en) * | 2017-10-30 | 2018-02-23 | 青岛慧拓智能机器有限公司 | A kind of automatic driving vehicle test and appraisal scene generation system and method |
CN108108448A (en) * | 2017-12-27 | 2018-06-01 | 北京中交兴路车联网科技有限公司 | A kind of method and system for generating national road portrait |
CN108216249A (en) * | 2016-12-21 | 2018-06-29 | 罗伯特·博世有限公司 | The system and method detected for the ambient enviroment of vehicle |
CN109902899A (en) * | 2017-12-11 | 2019-06-18 | 百度在线网络技术(北京)有限公司 | Information generating method and device |
CN110020504A (en) * | 2019-04-23 | 2019-07-16 | 吉林大学 | Unmanned vehicle running environment complexity quantization method based on rear intrusion and collision time |
CN110126822A (en) * | 2018-02-08 | 2019-08-16 | 本田技研工业株式会社 | Vehicle control system, control method for vehicle and storage medium |
CN110379165A (en) * | 2019-07-26 | 2019-10-25 | 中国第一汽车股份有限公司 | A kind of road type prediction technique, device, equipment and storage medium |
CN110517177A (en) * | 2018-05-21 | 2019-11-29 | 上海申通地铁集团有限公司 | Generation method, the portrait method and system of rail traffic station of model |
CN110597711A (en) * | 2019-08-26 | 2019-12-20 | 湖南大学 | Automatic driving test case generation method based on scene and task |
CN110996053A (en) * | 2019-11-26 | 2020-04-10 | 浙江吉城云创科技有限公司 | Environment safety detection method and device, terminal and storage medium |
CN111027430A (en) * | 2019-11-29 | 2020-04-17 | 西安交通大学 | Traffic scene complexity calculation method for intelligent evaluation of unmanned vehicles |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103207090B (en) * | 2013-04-09 | 2016-02-24 | 北京理工大学 | A kind of automatic driving vehicle environmental simulation test macro and method of testing |
CN105488534B (en) * | 2015-12-04 | 2018-12-07 | 中国科学院深圳先进技术研究院 | Traffic scene deep analysis method, apparatus and system |
CN109189760A (en) * | 2018-08-16 | 2019-01-11 | 北京易华录信息技术股份有限公司 | A kind of building of traffic element portrait and analysis method based on big data technology |
CN109520744B (en) * | 2018-11-12 | 2020-04-21 | 百度在线网络技术(北京)有限公司 | Driving performance testing method and device for automatic driving vehicle |
- 2020-04-28: CN CN202010348532.XA, patent CN111477028B (en), status: Active
Also Published As
Publication number | Publication date |
---|---|
CN111477028A (en) | 2020-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111477028B (en) | Method and device for generating information in automatic driving | |
WO2020224434A1 (en) | Driving simulation method and apparatus, electronic device and computer storage medium | |
CN109141464B (en) | Navigation lane change prompting method and device | |
CN113032285B (en) | High-precision map testing method and device, electronic equipment and storage medium | |
CN108921200B (en) | Method, apparatus, device and medium for classifying driving scene data | |
CN111739344B (en) | Early warning method and device and electronic equipment | |
CN109919347B (en) | Road condition generation method, related device and equipment | |
CN111680362B (en) | Automatic driving simulation scene acquisition method, device, equipment and storage medium | |
CN112581763A (en) | Method, device, equipment and storage medium for detecting road event | |
CN111694973A (en) | Model training method and device for automatic driving scene and electronic equipment | |
JP2023055697A (en) | Automatic driving test method and apparatus, electronic apparatus and storage medium | |
CN111767360B (en) | Method and device for marking virtual lane at intersection | |
CN111121815A (en) | Path display method and system based on AR-HUD navigation and computer storage medium | |
CN110823237B (en) | Starting point binding and prediction model obtaining method, device and storage medium | |
CN111338232B (en) | Automatic driving simulation method and device | |
CN113091757A (en) | Map generation method and device | |
US20230159052A1 (en) | Method for processing behavior data, method for controlling autonomous vehicle, and autonomous vehicle | |
CN114475656B (en) | Travel track prediction method, apparatus, electronic device and storage medium | |
US11940287B2 (en) | Device and method for route planning | |
US12039757B2 (en) | Associating labels between multiple sensors | |
CN115657494A (en) | Virtual object simulation method, device, equipment and storage medium | |
CN115062240A (en) | Parking lot sorting method and device, electronic equipment and storage medium | |
CN111767651B (en) | Index prediction model construction method, index prediction method and device | |
CN113470343B (en) | Road blocking opening detection method, device, equipment and storage medium | |
CN111563046B (en) | Method and device for generating information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||