CN115342826A - Scene generation method and device for automatic driving of vehicle and control method thereof

Scene generation method and device for automatic driving of vehicle and control method thereof

Info

Publication number
CN115342826A
CN115342826A (application CN202211003873.9A)
Authority
CN
China
Prior art keywords
scene
driving
vehicle
lane
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211003873.9A
Other languages
Chinese (zh)
Inventor
舒伟
董汉
陈超
尤超
徐磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Tsing Standard Automobile Technology Co ltd
Original Assignee
Suzhou Tsing Standard Automobile Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Tsing Standard Automobile Technology Co ltd filed Critical Suzhou Tsing Standard Automobile Technology Co ltd
Priority to CN202211003873.9A priority Critical patent/CN115342826A/en
Publication of CN115342826A publication Critical patent/CN115342826A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 - Navigation specially adapted for navigation in a road network
    • G01C 21/34 - Route searching; Route guidance
    • G01C 21/3407 - Route searching; Route guidance specially adapted for specific applications
    • G01C 21/3415 - Dynamic re-routing, e.g. recalculating the route when the user deviates from the calculated route or after detecting real-time traffic data or accidents
    • G01C 21/36 - Input/output arrangements for on-board computers
    • G01C 21/3626 - Details of the output of route guidance instructions
    • G01C 21/3658 - Lane guidance
    • G01C 21/3667 - Display of a road map
    • G01C 21/3691 - Retrieval, searching and output of information related to real-time traffic, weather, or environmental conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)
  • Environmental & Geological Engineering (AREA)
  • Environmental Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a scene generation method and device for automatic driving of a vehicle and a control method thereof. The method comprises the following steps: planning a driving route on an electronic map according to preset starting point and end point positions of the vehicle; sequentially acquiring the road type information of the whole route corresponding to the planned driving route according to the driving order of the vehicle, and connecting the whole route in segments according to road type; extracting, in combination with real-time road condition information, the corresponding scene from a scene library according to the road type of each road segment, and fitting the extracted scenes to form a first driving scene; and updating the first driving scene according to the real-time road condition information to form a second driving scene while the vehicle drives according to the first driving scene. The method takes the actual operating-environment information into account during automatic driving, which increases the safety of automatic driving; and because the fitted second driving scene is composed of scenes from the scene library, the data-driven selection is more scientific and reasonable than manual control, further ensuring driving safety.

Description

Scene generation method and device for automatic driving of vehicle and control method thereof
Technical Field
The present application relates to the field of scene generation technology for automatic driving of vehicles, and in particular to a scene generation method and device for automatic driving of a vehicle and a control method thereof.
Background
An automatic driving vehicle relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices and a global positioning system, so that a computer can operate the motor vehicle automatically and safely without any active human operation.
A scene library for automatic driving is generally constructed in advance through steps such as large-scale data acquisition and data analysis. However, during the actual operation of an automatically driven vehicle, a pre-constructed scene library alone can hardly meet the actual requirements of automatic driving, and relying on it exclusively may reduce the safety of automatic driving.
Disclosure of Invention
Based on this, it is necessary to address the above technical problem. Embodiments of the present invention provide a scene generation method and device for automatic driving of a vehicle, and a control method thereof, in which the currently applicable driving scene is generated by adjusting it in time with external real-time road condition information, so that the influence of environmental changes in practical application is taken into account during automatic driving, thereby solving the technical problem that the safety of automatic driving is currently difficult to guarantee.
In one aspect, a scene generation method for vehicle automatic driving is provided, the method including:
planning a driving route on an electronic map according to a preset starting point and a preset end point of the vehicle;
sequentially acquiring the road type information of the whole route corresponding to the planned driving route according to the driving order of the vehicle, and connecting the whole route in segments according to road type;
extracting, in combination with real-time road condition information, the corresponding scene from a scene library according to the road type of each road segment, and fitting the extracted scenes to form a first driving scene; and
updating the first driving scene according to the real-time road condition information to form a second driving scene while the vehicle drives according to the first driving scene.
In one embodiment, before extracting the corresponding scene from the scene library according to the road type of each road segment, the method further includes: constructing a scene library, wherein the scene library comprises a plurality of scenes arranged according to road type; the road types comprise an expressway, a main road, a secondary road and a branch road; each road is provided with attribute, structure, geometry, road network connection and lane information; the attributes comprise length, height, speed limit and height limit; the structures comprise a horizontal road, an uphill road, a downhill road and an arch bridge road; the geometries comprise a straight line, a curve, an arc and a spiral; the road network connection comprises a road head and a road tail; the lane information comprises lane type, lane attributes, lane lines, lane starting points and pavement; the types of scenes comprise a basic package, a primary package, an intermediate package and an advanced package; the basic package comprises only standard regulation scenes; the primary package comprises standard regulation scenes and natural driving data reconstruction cases; the intermediate package comprises standard regulation scenes, natural driving data reconstruction cases and accident scenes; and the advanced package comprises standard regulation scenes, natural driving data reconstruction cases, accident scenes and automatic driving test failure scenes.
In one embodiment, the lane lines comprise a road center line; the lane types comprise a left-turn lane, a right-turn lane, a straight lane and a u-turn lane; the lane attributes comprise a speed limit condition and a vehicle limit condition; and the pavement comprises lane flatness.
In one embodiment, the fitting to form the first driving scene includes:
dividing each road segment into lanes according to the lane lines, numbering the divided lanes, determining the numbers of the candidate lanes in which the vehicle can travel according to the lane types, determining the available lanes corresponding to the vehicle type from the candidate lane numbers according to the lane attributes, and acquiring the highest traveling speed of the vehicle on each available lane based on the lane attributes and pavement of the available lanes;
acquiring the degree of congestion and the visible range of the vehicle;
calculating the actual running speed of the vehicle on each available lane according to the degree of congestion of the vehicle and the highest running speed on each available lane; simultaneously extracting the optimal scene corresponding to each section of road by combining the visible range; and splicing all the optimal scenes to form the first driving scene.
In one embodiment, extracting the optimal scene corresponding to each road segment in combination with the visible range includes:
determining the type of selectable scenes according to the size of the visible range;
wherein the maximum value of the visible range is compared with a first threshold, a second threshold and a third threshold that increase in sequence:
when the visible range is greater than or equal to the third threshold, selecting the basic package;
when the visible range is greater than or equal to the second threshold and less than the third threshold, selecting the primary package;
when the visible range is greater than or equal to the first threshold and less than the second threshold, selecting the intermediate package;
and when the visible range is less than the first threshold, selecting the advanced package.
In one embodiment, when constructing the scene library, the method includes:
a data acquisition step, which is to acquire lane information, nearby vehicle information, weather information, vehicle type information, vehicle speed information and lane occupation information during vehicle travel;
a data fusion step, wherein the collected data are fused to form a plurality of available scenes;
a scene extraction step of extracting each available scene;
a scene labeling step, labeling the available situation of each extracted scene one by one;
a scene analysis step, which is used for verifying the available situation and analyzing whether the corresponding scene is reasonable or not; and
and a scene construction step, classifying each reasonable scene into at least one of the basic package, the primary package, the intermediate package and the advanced package according to its complexity.
In one embodiment, updating the first driving scene according to the real-time road condition information to form the second driving scene during the driving process of the vehicle includes:
modularizing the vehicle and nearby vehicles, and acquiring the driving data of the nearby vehicles in real time by the vehicle;
judging whether the first driving scene of the vehicle conflicts with the driving data of the nearby vehicles; if not, keeping the speed and driving lane of the vehicle; and if so, adjusting the speed and/or driving lane of the vehicle and updating the first driving scene to form the second driving scene.
In one embodiment, the driving data of the nearby vehicle includes lane change trajectory data of the nearby vehicle; the lane change trajectory data of the nearby vehicle comprises the current lane of the vehicle, the lane after the lane change, the vehicle speed, the starting point and ending point of the lane change, the lateral speed of the vehicle, and the duration of the lane change.
In another aspect, there is provided a scene generating apparatus for automatic driving of a vehicle, the apparatus including:
a planned driving route module, used for planning a driving route on the electronic map according to the preset starting point and end point positions of the vehicle;
a road segmentation module, used for sequentially acquiring the road type information of the whole route corresponding to the planned driving route according to the driving order of the vehicle, and connecting the whole route in segments according to road type;
a first driving scene forming module, used for extracting, in combination with real-time road condition information, the corresponding scene from the scene library according to the road type of each road segment, and fitting the extracted scenes to form a first driving scene; and
a second driving scene forming module, used for updating the first driving scene according to the real-time road condition information to form a second driving scene while the vehicle drives according to the first driving scene.
In still another aspect, a control method for automatic driving of a vehicle is provided, which includes the steps of:
acquiring the first driving scene formed by the scene generation method for automatic driving of the vehicle;
according to the planned driving route of the vehicle, the vehicle travels according to the first driving scene by combining with real-time road condition information, and the speed and/or the driving lane of the vehicle are controlled;
and updating the first driving scene to form a second driving scene according to the real-time road condition information during the driving process of the vehicle, and controlling the speed and/or the driving lane of the vehicle according to the second driving scene.
According to the scene generation method and device for automatic driving of a vehicle and the control method thereof, the first driving scene is updated in time, based on external real-time road condition information, to form the second driving scene. The update to the second driving scene can be realized by finely adjusting the control of the vehicle on the basis of the first driving scene, so that the preset scene library is continuously corrected with real-time road condition information during automatic driving, which improves the realism of the automatic driving scene and the safety of automatic driving. Moreover, the fitted second driving scene is composed of scenes from the scene library, and this data-driven selection is more scientific and reasonable than manual control, further ensuring the safety of automatic driving.
Drawings
FIG. 1 is a diagram of an embodiment of an application environment of a method for generating a scene for automatic driving of a vehicle;
FIG. 2 is a schematic flow chart diagram illustrating a method for generating a scene for automatic vehicle driving according to one embodiment;
FIG. 3 is a schematic flow diagram of optimal scene extraction in one embodiment;
FIG. 4 is a schematic flow chart illustrating the construction of a scene library according to one embodiment;
FIG. 5 is a flow diagram illustrating the formation of a second driving scenario in one embodiment;
FIG. 6 is a block diagram showing the structure of a scene generator for automatic driving of a vehicle according to an embodiment;
FIG. 7 is a flowchart illustrating a control method for automatic driving of a vehicle according to an embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The scene generation method for automatic driving of a vehicle provided by the application can be implemented in the application environment shown in fig. 1, wherein the terminal 102 communicates with the server 104 via a network. The terminal 102 is a vehicle-mounted control system, or a control device connected with the vehicle-mounted control system, that can control automatic driving of a vehicle; the server 104 is a cloud service that issues control instructions to the terminal 102 to implement automatic driving of the vehicle. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices, and the server 104 may be implemented by an independent server or a server cluster formed by multiple servers.
In one embodiment, as shown in fig. 2, a scene generation method for automatic driving of a vehicle is provided, which is described by taking the method as an example applied to the server 104 in fig. 1, and includes the following steps S1 to S5.
And S1, planning a driving route on an electronic map according to the preset starting point and end point positions of the vehicle.
And S2, sequentially acquiring the road type information of the whole route corresponding to the planned driving route according to the driving order of the vehicle, and connecting the whole route in segments according to road type.
And S3, constructing a scene library, wherein the scene library comprises a plurality of scenes arranged according to road types.
And S4, extracting, in combination with the real-time road condition information, the corresponding scene from the scene library according to the road type of each road segment, and fitting the extracted scenes to form a first driving scene.
And S5, updating the first driving scene according to the real-time road condition information to form a second driving scene in the process that the vehicle travels according to the first driving scene.
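For illustration only, the following minimal Python sketch shows one way steps S1 to S5 could be organized. All function names, data structures and the toy planning and update rules here (plan_route, generate_first_scene, update_scene, the 200 m visibility cut-off, the 0.7 congestion cut-off) are assumptions for readability and are not part of the claimed method.

```python
# A minimal, illustrative sketch of steps S1-S5; all names and rules are assumptions.
from typing import Dict, List

def plan_route(start: str, end: str, road_graph: Dict[str, List[str]]) -> List[str]:
    """S1: plan a driving route (here: a naive breadth-first search on a road graph)."""
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == end:
            return path
        for nxt in road_graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return []

def generate_first_scene(route: List[str], road_types: Dict[str, str],
                         scene_library: Dict[str, str], conditions: Dict[str, float]) -> List[str]:
    """S2-S4: type every road along the route in driving order, extract a scene per
    segment from the library, and fit (here: concatenate) them into a first driving scene."""
    package = "basic" if conditions.get("visible_range", 0.0) >= 200.0 else "advanced"
    return [f"{scene_library[road_types[road]]}/{package}" for road in route]

def update_scene(scene: List[str], conditions: Dict[str, float]) -> List[str]:
    """S5: update the fitted scene with real-time road condition information."""
    if conditions.get("congestion", 0.0) > 0.7:          # illustrative rule only
        return [s + "+congested" for s in scene]
    return scene

if __name__ == "__main__":
    graph = {"A": ["B"], "B": ["C"], "C": []}
    types = {"A": "expressway", "B": "main road", "C": "branch road"}
    library = {"expressway": "scene_express", "main road": "scene_main", "branch road": "scene_branch"}
    first = generate_first_scene(plan_route("A", "C", graph), types, library, {"visible_range": 250.0})
    second = update_scene(first, {"congestion": 0.9})
    print(first, second, sep="\n")
```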
The road types comprise an expressway, a main road, a secondary road and a branch road; each road is provided with attribute, structure, geometry, road network connection and lane information; the attributes comprise length, height, speed limit and height limit; the structures comprise a horizontal road, an uphill road, a downhill road and an arch bridge road; the geometries comprise a straight line, a curve, an arc and a spiral; the road network connection comprises a road head and a road tail; the lane information comprises lane type, lane attributes, lane lines, lane starting points and pavement; the types of scenes comprise a basic package, a primary package, an intermediate package and an advanced package; the basic package contains only standard regulation scenes; the primary package comprises standard regulation scenes and natural driving data reconstruction cases; the intermediate package comprises standard regulation scenes, natural driving data reconstruction cases and accident scenes; and the advanced package comprises standard regulation scenes, natural driving data reconstruction cases, accident scenes and automatic driving test failure scenes.
In the scene generation method for automatic driving of a vehicle, the lane lines include a road center line; the lane types include a left-turn lane, a right-turn lane, a straight lane and a u-turn lane; the lane attributes include lane material, speed limit conditions and vehicle limitation types; and the pavement includes information such as lane flatness.
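The road, lane and scene-package information described above could be organized, for example, as in the following data-model sketch. The class and field names are assumptions chosen for readability and do not prescribe a schema.

```python
# Illustrative data model for the scene library; all names are assumptions.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class RoadType(Enum):
    EXPRESSWAY = "expressway"
    MAIN_ROAD = "main road"
    SECONDARY_ROAD = "secondary road"
    BRANCH_ROAD = "branch road"

class ScenePackage(Enum):
    BASIC = 1         # standard regulation scenes only
    PRIMARY = 2       # + natural driving data reconstruction cases
    INTERMEDIATE = 3  # + accident scenes
    ADVANCED = 4      # + automatic driving test failure scenes

@dataclass
class LaneInfo:
    lane_type: str        # left-turn / right-turn / straight / u-turn
    material: str         # lane attribute: lane material
    speed_limit: float    # lane attribute: speed limit condition
    vehicle_limit: str    # lane attribute: vehicle limitation type
    flatness: float       # pavement information

@dataclass
class Road:
    road_type: RoadType
    length: float         # attribute
    speed_limit: float    # attribute
    height_limit: float   # attribute
    structure: str        # horizontal / uphill / downhill / arch bridge
    geometry: str         # straight / curve / arc / spiral
    lanes: List[LaneInfo] = field(default_factory=list)

@dataclass
class Scene:
    road_type: RoadType
    package: ScenePackage
    description: str

@dataclass
class SceneLibrary:
    scenes: List[Scene] = field(default_factory=list)

    def by_road_type(self, road_type: RoadType, max_package: ScenePackage) -> List[Scene]:
        """Return the scenes for one road type up to the allowed package level."""
        return [s for s in self.scenes
                if s.road_type == road_type and s.package.value <= max_package.value]
```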
In this embodiment, the fitting to form the first driving scene includes:
And S41, dividing each road segment into lanes according to the lane lines, numbering the divided lanes, determining the numbers of the candidate lanes in which the vehicle can travel according to the lane types, determining the available lanes corresponding to the vehicle type from the candidate lane numbers according to the lane attributes, and acquiring the highest traveling speed of the vehicle on each available lane based on the lane attributes and pavement of the available lanes.
And S42, acquiring the degree of congestion and the visible range of the vehicle by combining the real-time road condition information.
The degree of congestion and the visible range can be obtained in real time through a sensing system arranged on the vehicle.
Step S43, calculating the actual running speed of the vehicle on each available lane according to the vehicle congestion degree and the highest running speed on each available lane; simultaneously extracting the optimal scene corresponding to each section of road by combining the visible range; and splicing all the optimal scenes to form the first driving scene.
In this embodiment, when the degree of congestion of the vehicle on the current driving road section detected by the sensing system is high, the driving speed of the vehicle can be appropriately reduced to ensure safety in the driving process.
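As an illustration of steps S41 to S43, the sketch below numbers the lanes, filters them by lane type and vehicle restriction, derives a maximum speed from the lane attributes and pavement, and reduces it with the degree of congestion. The pavement-scaling rule and the congestion rule are assumptions; the patent does not specify concrete formulas.

```python
# Illustrative sketch of steps S41-S43 for one road segment; rules are assumptions.
from typing import Dict, List

def available_lanes(lanes: List[Dict], vehicle_type: str) -> List[int]:
    """S41: number the lanes and keep those whose type allows travel and whose
    attributes permit this vehicle type."""
    usable = []
    for number, lane in enumerate(lanes, start=1):
        if lane["lane_type"] in ("straight", "left-turn", "right-turn") \
                and vehicle_type not in lane.get("restricted_vehicles", []):
            usable.append(number)
    return usable

def max_speed_on_lane(lane: Dict) -> float:
    """S41: highest running speed from lane attributes and pavement; here the
    speed limit is simply scaled down on rough pavement (illustrative rule)."""
    flatness = lane.get("flatness", 1.0)          # 1.0 = perfectly flat
    return lane["speed_limit"] * (0.8 + 0.2 * flatness)

def actual_speed(max_speed: float, congestion: float) -> float:
    """S43: actual running speed from the maximum speed and the degree of
    congestion (0 = free-flowing, 1 = fully congested)."""
    return max_speed * max(0.0, 1.0 - congestion)

if __name__ == "__main__":
    lanes = [
        {"lane_type": "straight", "speed_limit": 80.0, "flatness": 0.9},
        {"lane_type": "u-turn",   "speed_limit": 30.0, "flatness": 1.0},
    ]
    for n in available_lanes(lanes, "passenger car"):
        v_max = max_speed_on_lane(lanes[n - 1])
        print(f"lane {n}: max {v_max:.1f} km/h, actual {actual_speed(v_max, 0.4):.1f} km/h")
```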
As shown in fig. 3, in step S43, extracting the optimal scene corresponding to each road segment in combination with the visible range includes:
and step S431, determining the types of selectable scenes according to the size of the visible range.
Step S432, comparing the maximum value of the visible range with a first threshold, a second threshold and a third threshold that increase in sequence; when the visible range is greater than or equal to the third threshold, selecting the basic package; when the visible range is greater than or equal to the second threshold and less than the third threshold, selecting the primary package; when the visible range is greater than or equal to the first threshold and less than the second threshold, selecting the intermediate package; and when the visible range is less than the first threshold, selecting the advanced package.
In the embodiment, the visible range in the actual driving environment is considered when the scene complexity of the automatic driving is selected, and the method further guarantees the safety in the automatic driving process.
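Step S432 can be expressed as a simple comparison against the three ordered thresholds, as in the sketch below. The concrete threshold values (50 m, 100 m, 200 m) are assumptions; the patent only fixes that they increase in sequence.

```python
# Illustrative package selection from the visible range (step S432).
def select_package(visible_range: float,
                   first_threshold: float = 50.0,
                   second_threshold: float = 100.0,
                   third_threshold: float = 200.0) -> str:
    """Thresholds increase in sequence: first < second < third (e.g. metres of visibility)."""
    if visible_range >= third_threshold:
        return "basic package"        # best visibility: regulation scenes suffice
    if visible_range >= second_threshold:
        return "primary package"
    if visible_range >= first_threshold:
        return "intermediate package"
    return "advanced package"         # worst visibility: richest scene set

# Example: 80 m of visibility falls between the first and second thresholds.
assert select_package(80.0) == "intermediate package"
```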
As shown in fig. 4, in step S3 of this embodiment, when constructing a scene library, the method includes:
step S31, a data acquisition step, namely acquiring lane information, nearby vehicle information, weather information, vehicle type information, vehicle speed information and lane occupation information during vehicle travel; the lane occupation information comprises lane occupation conditions such as a fault vehicle, a maintenance area and traffic jam and parking.
And S32, a data fusion step, namely fusing the acquired data to form a plurality of available scenes.
Step S33, a scene extraction step, extracting each available scene.
And step S34, a scene labeling step, wherein the available situations of the extracted scenes are labeled one by one.
And step S35, a scene analysis step, namely verifying the available situation and analyzing whether the corresponding scene is reasonable or not.
And S36, a scene construction step: when a scene is reasonable, classifying it into at least one of the basic package, the primary package, the intermediate package and the advanced package according to its complexity.
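The construction steps S31 to S36 form a simple pipeline, as in the minimal sketch below. The fusion, labelling, plausibility and complexity rules shown are deliberately simplistic placeholders, not the checks actually used when building the library.

```python
# Minimal, illustrative pipeline for steps S31-S36; all rules are placeholders.
from typing import Dict, List

def fuse(records: List[Dict]) -> List[Dict]:
    """S32: fuse collected lane, nearby-vehicle, weather, vehicle-type, speed and
    lane-occupation data into candidate (available) scenes."""
    return [dict(r, scene_id=i) for i, r in enumerate(records)]

def label(scene: Dict) -> Dict:
    """S33/S34: extract a scene and label its available situation."""
    scene["situation"] = "cut-in" if scene.get("nearby_lane_change") else "free-flow"
    return scene

def is_reasonable(scene: Dict) -> bool:
    """S35: verify the labelled situation (placeholder plausibility check)."""
    return 0.0 < scene.get("speed", 0.0) <= 130.0

def classify(scene: Dict) -> str:
    """S36: sort a reasonable scene into a package by its complexity."""
    complexity = scene.get("complexity", 0)
    return ["basic", "primary", "intermediate", "advanced"][min(complexity, 3)]

def build_scene_library(records: List[Dict]) -> Dict[str, List[Dict]]:
    library: Dict[str, List[Dict]] = {"basic": [], "primary": [], "intermediate": [], "advanced": []}
    for scene in fuse(records):                        # S32: data fusion
        scene = label(scene)                           # S33/S34: extraction and labelling
        if is_reasonable(scene):                       # S35: scene analysis
            library[classify(scene)].append(scene)     # S36: scene construction
    return library

if __name__ == "__main__":
    data = [{"speed": 60.0, "nearby_lane_change": True, "complexity": 2},
            {"speed": 0.0}]                            # the second record is rejected in S35
    print({k: len(v) for k, v in build_scene_library(data).items()})
```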
As shown in fig. 5, in step S5 in this embodiment, the updating the first driving scenario to form a second driving scenario according to the real-time traffic status information during the driving process of the vehicle includes:
step S51, modularizing the vehicle and nearby vehicles, and acquiring the running data of the nearby vehicles in real time by the vehicle;
step S52, judging whether the first driving scene of the vehicle conflicts with the driving data of nearby vehicles, and if not, keeping the speed and driving lane of the vehicle; and if so, adjusting the speed and/or the driving lane of the vehicle, and updating the first driving scene to form a second driving scene.
In the present embodiment, the travel data of the nearby vehicle includes lane change trajectory data (also referred to as a cut-in scene) of the nearby vehicle; the lane change trajectory data of the nearby vehicle comprises the current lane of the vehicle, the lane after the lane change, the vehicle speed, the starting point and ending point of the lane change, the lateral speed of the vehicle, and the duration of the lane change.
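As an illustration of steps S51 and S52 using the lane-change (cut-in) data listed above, the sketch below checks whether the cut-in targets the ego lane within a time margin and, on conflict, reduces the speed. The conflict rule, the 2 s margin and the 20 km/h reduction are assumptions; a lane change would be the alternative adjustment mentioned above.

```python
# Illustrative conflict check and update (steps S51-S52); rules are assumptions.
from dataclasses import dataclass

@dataclass
class LaneChange:
    current_lane: int
    target_lane: int
    speed: float            # km/h
    start_s: float          # longitudinal start of the lane change (m)
    end_s: float            # longitudinal end of the lane change (m)
    lateral_speed: float    # m/s
    duration: float         # s

@dataclass
class EgoPlan:
    lane: int
    speed: float            # km/h
    position_s: float       # current longitudinal position (m)

def conflicts(ego: EgoPlan, cut_in: LaneChange, margin_s: float = 2.0) -> bool:
    """S52: the first driving scene conflicts if the nearby vehicle cuts into the
    ego lane and the ego vehicle reaches the cut-in zone within the margin."""
    if cut_in.target_lane != ego.lane:
        return False
    time_to_zone = (cut_in.start_s - ego.position_s) / max(ego.speed / 3.6, 0.1)
    return 0.0 <= time_to_zone <= cut_in.duration + margin_s

def update_to_second_scene(ego: EgoPlan, cut_in: LaneChange) -> EgoPlan:
    """S52: keep speed and lane if there is no conflict, otherwise slow down
    (a lane change would be the alternative adjustment)."""
    if not conflicts(ego, cut_in):
        return ego
    return EgoPlan(lane=ego.lane, speed=max(ego.speed - 20.0, 0.0), position_s=ego.position_s)

if __name__ == "__main__":
    ego = EgoPlan(lane=2, speed=72.0, position_s=0.0)
    cut_in = LaneChange(current_lane=1, target_lane=2, speed=60.0,
                        start_s=30.0, end_s=60.0, lateral_speed=1.0, duration=3.0)
    print(update_to_second_scene(ego, cut_in))
```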
It should be understood that although the various steps in the flow charts of fig. 2-5 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict order restriction on the performance of these steps, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-5 may include multiple sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times; the order of performance of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, a scene generating device 10 for vehicle automatic driving is provided, which includes a planned driving route module 1, a road segmentation module 2, a first driving scene forming module 3 and a second driving scene forming module 4.
The planned driving route module 1 is used for planning a driving route on an electronic map according to a preset starting point and a preset end point of the vehicle.
The road segmentation module 2 is used for sequentially acquiring the whole road type information corresponding to the planned driving route according to the driving sequence of the vehicles and connecting the whole roads in a segmentation mode according to the road type.
The first driving scene forming module 3 is used for extracting, in combination with real-time road condition information, the corresponding scene from the scene library according to the road type of each road segment, and fitting the extracted scenes to form a first driving scene.
the second driving scene forming module 4 is configured to update the first driving scene according to the real-time traffic information to form a second driving scene during the vehicle traveling according to the first driving scene.
For specific limitations of the scene generation device for automatic driving of a vehicle, reference may be made to the above limitations of the scene generation method for automatic driving of a vehicle, which are not repeated here. The modules in the scene generation device for automatic driving of a vehicle can be wholly or partially realized by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor of the computer device, or can be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
As shown in fig. 7, in one embodiment, there is provided a control method of automatic driving of a vehicle, characterized by comprising the steps of:
step S11, acquiring the first driving scene formed by the scene generation method for automatic driving of the vehicle;
step S12, according to the planned driving route of the vehicle, the vehicle advances according to the first driving scene by combining with real-time road condition information, and the speed and/or the driving lane of the vehicle are/is controlled; and
and S13, updating the first driving scene to form a second driving scene according to the real-time road condition information during the driving process of the vehicle, and controlling the speed and/or the driving lane of the vehicle according to the second driving scene.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing scene generation data of automatic driving of the vehicle. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a scene generation method for vehicle autonomous driving.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
planning a driving route on an electronic map according to a preset starting point and a preset end point of the vehicle;
sequentially acquiring the road type information of the whole route corresponding to the planned driving route according to the driving order of the vehicle, and connecting the whole route in segments according to road type;
extracting, in combination with real-time road condition information, the corresponding scene from a scene library according to the road type of each road segment, and fitting the extracted scenes to form a first driving scene;
and updating the first driving scene according to the real-time road condition information to form a second driving scene while the vehicle drives according to the first driving scene.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
planning a driving route on an electronic map according to a preset starting point and a preset end point of the vehicle;
sequentially acquiring the road type information of the whole route corresponding to the planned driving route according to the driving order of the vehicle, and connecting the whole route in segments according to road type;
extracting, in combination with real-time road condition information, the corresponding scene from a scene library according to the road type of each road segment, and fitting the extracted scenes to form a first driving scene;
and updating the first driving scene according to the real-time road condition information to form a second driving scene while the vehicle drives according to the first driving scene.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A scene generation method for automatic driving of a vehicle is characterized by comprising the following steps:
planning a driving route on an electronic map according to a preset starting point and a preset end point of a vehicle;
sequentially acquiring the road type information of the whole route corresponding to the planned driving route according to the driving order of the vehicle, and connecting the whole route in segments according to road type;
extracting, in combination with real-time road condition information, the corresponding scene from a scene library according to the road type of each road segment, and fitting the extracted scenes to form a first driving scene; and
updating the first driving scene according to real-time road condition information to form a second driving scene while the vehicle drives according to the first driving scene;
wherein the types of scenes in the scene library comprise a basic package, a primary package, an intermediate package and an advanced package; the basic package comprises only standard regulation scenes; the primary package comprises standard regulation scenes and natural driving data reconstruction cases; the intermediate package comprises standard regulation scenes, natural driving data reconstruction cases and accident scenes; and the advanced package comprises standard regulation scenes, natural driving data reconstruction cases, accident scenes and automatic driving test failure scenes.
2. The scene generation method for automatic driving of a vehicle according to claim 1, wherein before extracting the corresponding scene from the scene library according to the road type of each road segment, the method further comprises:
constructing a scene library, wherein the scene library comprises a plurality of scenes arranged according to road types;
the road types comprise an expressway, a main road, a secondary road and a branch road; each road is provided with attribute, structure, geometry, road network connection and lane information; the attributes comprise length, height, speed limit and height limit; the structures comprise a horizontal road, an uphill road, a downhill road and an arch bridge road; the geometries comprise a straight line, a curve, an arc and a spiral; the road network connection comprises a road head and a road tail; and the lane information comprises lane type, lane attributes, lane lines, lane starting points and pavement.
3. The scene generation method for automatic driving of a vehicle according to claim 2, wherein the lane lines comprise a road center line, the lane types comprise a left-turn lane, a right-turn lane, a straight lane and a u-turn lane, the lane attributes comprise a speed limit condition and a vehicle limit condition, and the pavement comprises lane flatness.
4. The scene generation method for automatic driving of a vehicle according to claim 3, wherein the fitting to form the first driving scene comprises:
dividing each road segment into lanes according to the lane lines, numbering the divided lanes, determining the numbers of the candidate lanes in which the vehicle can travel according to the lane types, determining the available lanes corresponding to the vehicle type from the candidate lane numbers according to the lane attributes, and acquiring the highest traveling speed of the vehicle on each available lane based on the lane attributes and pavement of the available lanes;
acquiring the degree of congestion and the visible range of the vehicle;
calculating the actual running speed of the vehicle on each available lane according to the degree of congestion of the vehicle and the highest running speed on each available lane; simultaneously extracting the optimal scene corresponding to each section of road by combining the visible range; and splicing all the optimal scenes to form the first driving scene.
5. The scene generation method for automatic driving of a vehicle according to claim 4, wherein extracting the optimal scene corresponding to each road segment in combination with the visible range comprises:
determining the types of selectable scenes according to the size of the visible range;
wherein the maximum value of the visible range is compared with a first threshold, a second threshold and a third threshold that increase in sequence:
when the visible range is greater than or equal to the third threshold, selecting the basic package;
when the visible range is greater than or equal to the second threshold and less than the third threshold, selecting the primary package;
when the visible range is greater than or equal to the first threshold and less than the second threshold, selecting the intermediate package;
and when the visible range is less than the first threshold, selecting the advanced package.
6. The scene generation method for automatic driving of a vehicle according to claim 2, wherein constructing the scene library comprises:
a data acquisition step, which is to acquire lane information, nearby vehicle information, weather information, vehicle type information, vehicle speed information and lane occupation information during vehicle travel;
a data fusion step, wherein the collected data are fused to form a plurality of available scenes;
a scene extraction step of extracting each available scene;
a scene labeling step, labeling available situations of the extracted scenes one by one;
a scene analysis step, which is used for verifying the available situation and analyzing whether the corresponding scene is reasonable or not; and
and a scene construction step, classifying each reasonable scene into at least one of the basic package, the primary package, the intermediate package and the advanced package according to its complexity.
7. The scene generation method for automatic driving of a vehicle according to claim 1, wherein updating the first driving scene according to the real-time road condition information to form the second driving scene while the vehicle drives comprises:
modularizing the vehicle and nearby vehicles, and acquiring the driving data of the nearby vehicles in real time by the vehicle;
judging whether the first driving scene of the vehicle conflicts with the driving data of the nearby vehicles; if not, keeping the speed and driving lane of the vehicle; and if so, adjusting the speed and/or driving lane of the vehicle and updating the first driving scene to form the second driving scene.
8. The scene generation method for automatic driving of a vehicle according to claim 7, wherein the driving data of the nearby vehicle comprises lane change trajectory data of the nearby vehicle;
the lane change trajectory data of the nearby vehicle comprises the current lane of the vehicle, the lane after the lane change, the vehicle speed, the starting point and ending point of the lane change, the lateral speed of the vehicle, and the duration of the lane change.
9. A scene generation apparatus for automatic driving of a vehicle, the apparatus comprising:
a planned driving route module, used for planning a driving route on an electronic map according to preset starting point and end point positions of the vehicle;
a road segmentation module, used for sequentially acquiring the road type information of the whole route corresponding to the planned driving route according to the driving order of the vehicle, and connecting the whole route in segments according to road type;
a first driving scene forming module, used for extracting, in combination with real-time road condition information, the corresponding scene from a scene library according to the road type of each road segment, and fitting the extracted scenes to form a first driving scene; and
a second driving scene forming module, used for updating the first driving scene according to the real-time road condition information to form a second driving scene while the vehicle drives according to the first driving scene;
wherein the types of scenes in the scene library comprise a basic package, a primary package, an intermediate package and an advanced package; the basic package comprises only standard regulation scenes; the primary package comprises standard regulation scenes and natural driving data reconstruction cases; the intermediate package comprises standard regulation scenes, natural driving data reconstruction cases and accident scenes; and the advanced package comprises standard regulation scenes, natural driving data reconstruction cases, accident scenes and automatic driving test failure scenes.
10. A control method of automatic driving of a vehicle, characterized by comprising the steps of:
acquiring the first driving scene formed by the scene generation method for vehicle automatic driving according to any one of claims 1 to 8;
according to the planned driving route of the vehicle, the vehicle travels according to the first driving scene by combining with real-time road condition information, and the speed and/or the driving lane of the vehicle are controlled; and
and updating the first driving scene to form a second driving scene according to the real-time road condition information during the driving process of the vehicle, and controlling the speed and/or the driving lane of the vehicle according to the second driving scene.
CN202211003873.9A 2022-08-22 2022-08-22 Scene generation method and device for automatic driving of vehicle and control method thereof Pending CN115342826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211003873.9A CN115342826A (en) 2022-08-22 2022-08-22 Scene generation method and device for automatic driving of vehicle and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211003873.9A CN115342826A (en) 2022-08-22 2022-08-22 Scene generation method and device for automatic driving of vehicle and control method thereof

Publications (1)

Publication Number Publication Date
CN115342826A true CN115342826A (en) 2022-11-15

Family

ID=83953811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211003873.9A Pending CN115342826A (en) 2022-08-22 2022-08-22 Scene generation method and device for automatic driving of vehicle and control method thereof

Country Status (1)

Country Link
CN (1) CN115342826A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115665699A (en) * 2022-12-27 2023-01-31 博信通信股份有限公司 Multi-scene signal coverage optimization method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination