WO2020052344A1 - Intelligent driving method and intelligent driving system - Google Patents

Intelligent driving method and intelligent driving system

Info

Publication number
WO2020052344A1
WO2020052344A1 (application PCT/CN2019/095943)
Authority
WO
WIPO (PCT)
Prior art keywords
scene
vehicle
driving
class
standard
Prior art date
Application number
PCT/CN2019/095943
Other languages
English (en)
French (fr)
Inventor
胡伟龙
周亚兵
刘华伟
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201910630930.8A (granted as CN110893860B)
Application filed by Huawei Technologies Co., Ltd.
Priority to EP19858830.3A (published as EP3754552A4)
Publication of WO2020052344A1
Priority to US17/029,561 (published as US11724700B2)
Priority to US18/347,051 (published as US20240001930A1)

Classifications

    • B60W40/04 Traffic conditions (estimation of non-directly measurable driving parameters related to ambient conditions, B60W40/02)
    • B60W40/06 Road conditions
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 Display means
    • B60W60/001 Planning or execution of driving tasks (drive control systems specially adapted for autonomous road vehicles)
    • B60W60/0061 Aborting handover process (handover processes, B60W60/005)
    • B60W2552/00 Input parameters relating to infrastructure
    • G06V10/764 Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/772 Determining representative reference patterns, e.g. averaging or distorting patterns; generating dictionaries
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V20/54 Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G06V20/56 Context or environment of the image exterior to a vehicle, obtained using sensors mounted on the vehicle

Definitions

  • the present application relates to the field of autonomous driving technology, and in particular, to an intelligent driving method and an intelligent driving system.
  • Intelligent driving vehicles add advanced sensors (radar, camera), controllers, actuators, and other devices to ordinary vehicles, and realize intelligent information exchange with people, other vehicles, and roads through on-board sensing systems and information terminals. Such a vehicle has intelligent environment perception capabilities, can automatically analyze whether the vehicle is in a safe or dangerous state, and can make the vehicle reach the destination according to the wishes of the person, finally achieving the purpose of replacing the human operator so as to reduce the burden of driving.
  • In the prior art, the overall control system of an intelligent driving vehicle collects the data of every sub-system and processes these data in a unified manner to control the vehicle. For example, acquired road-environment video images can be statistically analyzed to establish recognition databases of urban road scenes, rural road scenes, and highway scenes, and a deep convolutional neural network can perform feature extraction and convolution training on the sample pictures in these databases to obtain a convolutional neural network classifier. The real-time perception picture is then input into the convolutional neural network classifier for identification, and the driving scene in which the vehicle is currently located is classified.
  • However, with the above method of classifying scenes using a convolutional neural network classifier, the real-time perception image is likely to be unclear under rain, fog, or poor lighting conditions, which reduces the accuracy of recognition when the real-time perception image is input into the classifier, so the current driving scene cannot be identified accurately and the intelligent driving of the vehicle is affected.
  • the embodiments of the present application provide an intelligent driving method and an intelligent driving system to solve the existing problem that the current driving scenario cannot be accurately identified, which affects the intelligent driving of the vehicle.
  • An embodiment of the present application provides an intelligent driving method, which includes: acquiring characteristic parameters of a vehicle at the current moment and road attributes of the driving scene of the vehicle within a preset time period in the future, where the characteristic parameters include structured semantic information, road attributes, and a traffic situation spectrum; comparing the characteristic parameters of the current moment with the characteristic parameters of the standard scenes in a scene feature library, and comparing the road attributes of the driving scene of the vehicle in the future preset time period with the road attributes of the standard scenes in the scene feature library; determining, according to the comparison results, the total similarity between each scene class in the scene feature library and the driving scene of the vehicle at the current moment; determining the first scene class with the highest total similarity among the N scene classes as the driving scene at the current moment; and controlling the driving state of the vehicle according to the determination result.
  • In this way, the scene class to which the vehicle currently belongs can be identified based on three dimensions: structured semantic information, road attributes, and traffic situation spectrum. This makes the information referenced in scene class recognition more comprehensive and reliable, improves the accuracy of scene recognition, and improves the realizability of intelligent driving. In addition, because structured semantic information instead of pictures is used to identify scene classes, the computational complexity is reduced.
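The scene feature library described above (N scene classes, each corresponding to M standard scenes, and each standard scene carrying structured semantic information, road attributes, and a traffic situation spectrum) could be modeled as in the following minimal sketch. The Python names, field choices, and toy values are illustrative assumptions, not taken from the application:

```python
from dataclasses import dataclass, field

@dataclass
class StandardScene:
    """One reference scene: the three feature dimensions used for matching."""
    semantic_info: set          # structured semantic elements, e.g. {"traffic_light"}
    road_attributes: dict       # e.g. {"road_type": "urban"}
    traffic_spectrum: list      # normalized traffic situation spectrum vector

@dataclass
class SceneClass:
    """A scene class (e.g. urban road) with its M standard scenes."""
    name: str
    standard_scenes: list = field(default_factory=list)

# A toy library with N = 2 scene classes and M = 2 standard scenes each.
scene_feature_library = [
    SceneClass("urban", [
        StandardScene({"traffic_light", "crosswalk"}, {"road_type": "urban"}, [0.6, 0.4]),
        StandardScene({"traffic_light", "bus_lane"}, {"road_type": "urban"}, [0.5, 0.5]),
    ]),
    SceneClass("highway", [
        StandardScene({"guardrail", "exit_sign"}, {"road_type": "highway"}, [0.9, 0.1]),
        StandardScene({"guardrail", "toll_gate"}, {"road_type": "highway"}, [0.8, 0.2]),
    ]),
]
```

With the library in this shape, scene recognition reduces to comparing an observation against each standard scene and aggregating the similarities per scene class.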
  • In a possible design, comparing the feature parameters of the current moment with the feature parameters of the standard scenes in the scene feature library, comparing the road attributes of the driving scene of the vehicle in the future preset time period with the road attributes of the standard scenes in the scene feature library, and determining the total similarity of each scene class according to the comparison results includes: comparing the structured semantic information of the current moment with the structured semantic information of each standard scene in the scene feature library to obtain a first similarity of that standard scene, and combining the first similarities of all standard scenes belonging to a scene class to calculate a first probability of the scene class; comparing the road attributes of the current moment with the road attributes of each standard scene to obtain a second similarity, and combining the second similarities of all standard scenes belonging to the scene class to calculate a second probability of the scene class; comparing the road attributes of the driving scene of the vehicle in the future preset time period with the road attributes of each standard scene to obtain a third similarity, and combining the third similarities of all standard scenes belonging to the scene class to calculate a third probability of the scene class; comparing the traffic situation spectrum of the current moment with the traffic situation spectrum of each standard scene to obtain a fourth similarity, and combining the fourth similarities of all standard scenes belonging to the scene class to calculate a fourth probability of the scene class; and obtaining the total similarity of the scene class from the first probability, the second probability, the third probability, and the fourth probability of the scene class.
  • the scene class to which the vehicle currently belongs can be identified based on three dimensions: structured semantic information, road attributes, and traffic situation spectrum.
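The four per-dimension similarities and their aggregation into a total similarity can be sketched as below. The concrete measures (Jaccard overlap for semantic sets, key-value agreement for road attributes, L1 closeness for the spectrum), the mean as the combining rule, and the equal weights are all illustrative assumptions; the application does not fix these choices here:

```python
def jaccard(a, b):
    """First similarity: overlap of two sets of structured semantic elements."""
    return len(a & b) / len(a | b) if a | b else 1.0

def attr_similarity(a, b):
    """Second/third similarity: fraction of matching road-attribute values."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys) if keys else 1.0

def spectrum_similarity(a, b):
    """Fourth similarity: 1 minus half the L1 distance between normalized spectra."""
    return 1.0 - 0.5 * sum(abs(x - y) for x, y in zip(a, b))

def class_probability(sims):
    """Combine per-standard-scene similarities into one class probability (mean)."""
    return sum(sims) / len(sims)

def total_similarity(scene_class, semantic_now, attrs_now, attrs_future,
                     spectrum_now, weights=(0.25, 0.25, 0.25, 0.25)):
    scenes = scene_class["scenes"]
    p1 = class_probability([jaccard(semantic_now, s["semantic"]) for s in scenes])
    p2 = class_probability([attr_similarity(attrs_now, s["attrs"]) for s in scenes])
    p3 = class_probability([attr_similarity(attrs_future, s["attrs"]) for s in scenes])
    p4 = class_probability([spectrum_similarity(spectrum_now, s["spectrum"]) for s in scenes])
    return sum(w * p for w, p in zip(weights, (p1, p2, p3, p4)))

# Toy library (plain dicts for brevity), then pick the class with the highest total.
library = {
    "urban": {"scenes": [{"semantic": {"traffic_light", "crosswalk"},
                          "attrs": {"road_type": "urban"}, "spectrum": [0.6, 0.4]}]},
    "highway": {"scenes": [{"semantic": {"guardrail", "exit_sign"},
                            "attrs": {"road_type": "highway"}, "spectrum": [0.9, 0.1]}]},
}
best = max(library, key=lambda name: total_similarity(
    library[name], {"traffic_light"}, {"road_type": "urban"},
    {"road_type": "urban"}, [0.55, 0.45]))
```

Any monotone combination (for example, learned weights) could replace the equal-weight average without changing the overall flow.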
  • In a possible design, before comparing the feature parameters of the current moment with the feature parameters of the standard scenes in the scene feature library, the method further includes: setting the similarity of any standard scene in the scene feature library that does not contain the real-time structured semantic information to 0. In this way, standard scenes that do not contain the real-time structured semantic information can be filtered out, which reduces the complexity of the subsequent comparisons of structured semantic information in the scene feature library.
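One plausible reading of this pre-filtering step, sketched in Python, is that a standard scene sharing no element with the real-time structured semantic information is assigned similarity 0 and skipped in later comparisons; the function name and data shapes are assumptions:

```python
def filter_by_realtime_semantics(standard_scenes, realtime_semantics):
    """Assign similarity 0 to standard scenes that contain none of the
    real-time structured semantic elements, so later comparisons skip them."""
    zeroed = {}
    for name, scene in standard_scenes.items():
        if not (scene["semantic"] & realtime_semantics):
            zeroed[name] = 0.0   # filtered out: no shared semantic element
    return zeroed

toy_scenes = {
    "urban_1": {"semantic": {"traffic_light", "crosswalk"}},
    "highway_1": {"semantic": {"guardrail", "exit_sign"}},
}
zeroed = filter_by_realtime_semantics(toy_scenes, {"traffic_light"})
```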
  • In a possible design, controlling the vehicle to perform intelligent driving according to the determination result includes: determining whether the first scene class is the same as the scene class at the previous moment; if the first scene class is the same as the scene class at the previous moment, judging whether the current design operation range of the vehicle meets the design operation range corresponding to the first scene class; if the current design operation range of the vehicle meets the design operation range corresponding to the first scene class, maintaining the current driving state; and if the current design operation range of the vehicle does not meet the design operation range corresponding to the first scene class, sending fault alarm information. In this way, when the current driving scene is the same as the previous driving scene and the current driving situation of the vehicle can support the vehicle running in the current driving scene, the current driving state is kept unchanged.
  • In a possible design, the method further includes: if the first scene class is different from the scene class of the previous moment, judging whether the current design operation range of the vehicle meets the design operation range corresponding to the first scene class; if it does, switching the vehicle from the current driving state to the driving state corresponding to the first scene class; if it does not, judging whether the current design operation range of the vehicle meets the design operation range corresponding to the scene class of the previous moment; if it does, sending a scene class switching failure message and maintaining the current driving state; and if it does not, sending fault alarm information. In this way, when the current driving scene is different from the driving scene at the previous moment, the driving state of the vehicle can be switched intelligently to make it suitable for the current driving scene.
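The switching logic of the two possible designs above can be condensed into one decision function. The encoding of a design operation range as a set of required capabilities, and the subset test for "meets the design operation range", are illustrative assumptions:

```python
def decide(first_class, prev_class, current_odd, required_odd):
    """Decide the driving-state action from the recognized scene class.

    first_class / prev_class: recognized and previous scene classes.
    current_odd: the vehicle's current design operation range, encoded here
    as a set of available capabilities (an assumed encoding).
    required_odd: maps a scene class to the capabilities it requires.
    """
    if first_class == prev_class:
        if required_odd[first_class] <= current_odd:
            return "maintain"                 # same scene, range satisfied
        return "fault_alarm"                  # same scene, range not satisfied
    if required_odd[first_class] <= current_odd:
        return "switch_to_first"              # new scene, range satisfied
    if required_odd[prev_class] <= current_odd:
        return "switch_failure_keep_current"  # cannot switch; keep old state
    return "fault_alarm"                      # neither range satisfied

# Toy design operation ranges for two scene classes.
required_odd_example = {"urban": {"camera"}, "highway": {"camera", "radar"}}
```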
  • In a possible design, after the fault alarm information is sent, the method further includes: judging whether the driver has taken over the vehicle; if it is determined that the driver has taken over the vehicle, sending an operation instruction indicating release of the driving right to the vehicle active execution unit on the vehicle and sending a release notification to the driver; and if it is determined that the driver has not taken over the vehicle, sending an operation instruction indicating a safe stop to the vehicle active execution unit. In this way, intelligent driving is stopped only after the driver has taken over the vehicle, which improves driving safety and user experience.
  • In a possible design, before controlling the vehicle to perform intelligent driving according to the determination result, the method further includes: acquiring an intelligent driving instruction, where the intelligent driving instruction is used to indicate whether to stop the intelligent driving of the vehicle; if the intelligent driving instruction indicates intelligent driving of the vehicle, controlling the vehicle to perform intelligent driving according to the determination result; and if the intelligent driving instruction indicates stopping the intelligent driving of the vehicle, sending an operation instruction for releasing the driving right to the vehicle active execution unit on the vehicle and sending a release notification to the driver. In this way, intelligent driving is performed only under the instruction of the driver (or user), which improves the user experience.
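The takeover handling and the instruction gating described above might be sketched as follows, with a generic `send` callable standing in for the channel to the vehicle active execution unit and the driver notification; all names are assumptions:

```python
def handle_after_fault(driver_has_taken_over, send):
    """After a fault alarm: release driving rights if the driver took over,
    otherwise command a safe stop via the vehicle active execution unit."""
    if driver_has_taken_over:
        send("release_driving_rights")
        send("notify_driver_released")
    else:
        send("safe_stop")

def handle_instruction(stop_intelligent_driving, send, apply_result):
    """Gate intelligent driving on the driver's instruction: either release
    the driving right, or apply the scene-recognition result."""
    if stop_intelligent_driving:
        send("release_driving_rights")
        send("notify_driver_released")
    else:
        apply_result()

# Demos: record the emitted operation instructions.
log_takeover = []
handle_after_fault(True, log_takeover.append)
log_no_takeover = []
handle_after_fault(False, log_no_takeover.append)
```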
  • the present application further provides an intelligent driving system, which may be a vehicle or a combination of multiple modules in the vehicle.
  • the intelligent driving system can implement the intelligent driving method described in the above aspects or possible designs, and the functions can be implemented by hardware or by executing corresponding software by hardware.
  • the hardware or software includes one or more modules corresponding to the foregoing functions.
  • the intelligent driving system may include:
  • a perceptual fusion unit is used to obtain the characteristic parameters of the current moment of the vehicle and the road attributes of the driving scene of the vehicle in a preset time period in the future; wherein the characteristic parameters include structured semantic information, road attributes, and traffic situation spectrum;
  • the scene class recognition unit is used to compare the feature parameters of the current moment with the feature parameters of the standard scene in the scene feature database, and compare the road attributes of the driving scene of the vehicle in a preset time period in the future with the road attributes of the standard scene in the scene feature database. Determine the total similarity between each scene class in the scene feature database and the driving scene of the vehicle at the current time according to the comparison result, and determine the first scene class with the highest total similarity among the N scene classes as the driving scene at the current time;
  • the scene feature library includes N scene classes, each scene class corresponds to M standard scenes, and each standard scene corresponds to a feature parameter; N is an integer greater than or equal to 1, and M is an integer greater than or equal to 1;
  • the scene class switching unit is used to control the driving state of the vehicle according to the determination result.
  • the intelligent driving system can achieve the same beneficial effects as the first aspect or any possible implementation manner of the first aspect.
  • In a third aspect, the present application provides an intelligent driving method for an intelligent driving system located in a vehicle. The method includes: acquiring characteristic parameters of the vehicle at a first time and road attributes of the driving scene of the vehicle in a future preset time period of the first time, where the characteristic parameters include structured semantic information, road attributes, and a traffic situation spectrum; selecting a first driving scene class in a scene feature library according to the characteristic parameters of the vehicle at the first time and the road attributes of the driving scene in the future preset time period; displaying a first prompt, where the first prompt is used to prompt the driver to switch the driving scene of the vehicle at the first time to the first driving scene class; receiving a first instruction, where the first instruction corresponds to the first prompt and is used to instruct switching the driving scene of the vehicle at the first time to the first driving scene class; and controlling the driving state of the vehicle according to the first driving scene class.
  • In a possible implementation manner, selecting the first driving scene class in the scene feature library includes: comparing the characteristic parameters of the vehicle at the first time with the characteristic parameters of the standard scenes in the scene feature library, and comparing the road attributes of the driving scene of the vehicle in the future preset time period of the first time with the road attributes of the standard scenes in the scene feature library; determining, according to the comparison results, the total similarity between each scene class in the scene feature library and the driving scene of the vehicle at the current moment, where the scene feature library includes N scene classes, each scene class corresponds to M standard scenes, and N and M are positive integers; and determining the first scene class with the highest total similarity among the N scene classes as the driving scene at the first time.
  • In another possible implementation manner, after controlling the driving state of the vehicle according to the first driving scene class, the method further includes: selecting a second driving scene class in the scene feature library as the driving scene at a second time; displaying a second prompt, which is used to request that the driving scene of the vehicle at the second time be switched to the second driving scene class; and, when a second instruction is not received within a preset time, controlling the driving state of the vehicle according to the second driving scene class, where the second instruction corresponds to the second prompt and is used to instruct switching the current driving scene of the vehicle to the second driving scene class.
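The second-prompt behavior (proceed with the proposed scene class when no contrary instruction arrives within the preset time) could be sketched as below; `wait_for_instruction` is an assumed blocking call that returns the driver's instruction, or `None` on timeout:

```python
def confirm_switch(prompt, wait_for_instruction, timeout_s):
    """Display a switch prompt; auto-confirm the proposed scene class
    if no instruction arrives within the preset time."""
    prompt("switch to proposed scene class?")
    instruction = wait_for_instruction(timeout_s)
    if instruction is None:          # no response within the preset time
        return "switch"              # control per the second driving scene class
    return "switch" if instruction == "accept" else "keep"
```

For example, a timeout (the wait returning `None`) yields "switch", while an explicit rejection keeps the current driving scene.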
  • In another possible implementation manner, when the second instruction is not received within the preset time, the method includes: determining that the design operation range of the vehicle at the second time does not satisfy the design operation range corresponding to the first scene class; and sending fault alarm information.
  • In another possible implementation manner, after sending the fault alarm information, the method further includes: determining whether the driver has taken over the vehicle; if it is determined that the driver has taken over the vehicle, sending an operation instruction indicating release of the driving right and sending a release notification to the driver; and if it is determined that the driver has not taken over the vehicle, sending an operation instruction indicating a safe stop.
  • an intelligent driving system includes an acquisition module, a determination module, a display module, and a control module.
  • an intelligent driving system including: a processor and a memory; the memory is configured to store computer execution instructions; when the intelligent driving system is running, the processor executes the computer execution instructions stored in the memory to The intelligent driving system is caused to execute the intelligent driving method according to any one of the first aspect, the third aspect, and the possible design of the first aspect, or the intelligent driving method provided by any possible implementation manner of the third aspect.
  • the intelligent driving system may further include a vehicle active execution unit, a sensor unit, and a human-machine interaction interface (or communication interface).
  • a computer-readable storage medium stores instructions that, when run on a computer, enable the computer to execute the intelligent driving method described in any one of the first aspect, the third aspect, and the possible designs of the first aspect, or the intelligent driving method provided by any possible implementation manner of the third aspect.
  • a computer program product containing instructions, when run on a computer, enables the computer to execute the intelligent driving method described in any one of the first aspect, the third aspect, and the possible designs of the first aspect, or the intelligent driving method provided by any possible implementation manner of the third aspect.
  • a chip system includes a processor and a communication interface, and is used to support the intelligent driving system in implementing the functions involved in the above aspects, for example, enabling the processor to obtain the characteristic parameters of the vehicle at the current moment and the road attributes of the driving scene of the vehicle in a future preset time period.
  • the chip system further includes a memory, a vehicle active execution unit, a sensor unit, and a human-machine interaction interface.
  • the memory is used to store program instructions, data, and intelligent driving algorithms necessary for the intelligent driving system.
  • the chip system can be composed of chips, and can also include chips and other discrete devices.
  • In another aspect, the present invention provides an intelligent driving method, which includes: acquiring characteristic parameters of a vehicle at the current moment and road attributes of the driving scene of the vehicle in a preset time period in the future, where the characteristic parameters include structured semantic information, road attributes, and a traffic situation spectrum; comparing the characteristic parameters of the current moment of the vehicle with the characteristic parameters of the standard scenes in the scene feature library, and comparing the road attributes of the driving scene of the vehicle in the future preset time period with the road attributes of the standard scenes in the scene feature library; determining, according to the comparison results, the first similarity and the second similarity of the first standard scene and the second standard scene of each scene class in the scene feature library to the driving scene of the vehicle at the current moment, where the scene feature library includes N scene classes, each scene class includes M standard scenes, each standard scene corresponds to characteristic parameters, and M and N are integers greater than or equal to 2; and determining the scene class corresponding to the driving scene at the current moment according to the first similarity and the second similarity of each of the N scene classes.
  • In another aspect, the present invention provides an intelligent driving system, where the system includes: a perceptual fusion unit, configured to acquire characteristic parameters of a vehicle at the current moment and road attributes of the driving scene of the vehicle in a future preset time period, where the characteristic parameters include structured semantic information, road attributes, and a traffic situation spectrum; and a scene class recognition unit, configured to compare the characteristic parameters of the current moment of the vehicle with the characteristic parameters of the standard scenes in a scene feature library, compare the road attributes of the driving scene of the vehicle in the future preset time period with the road attributes of the standard scenes in the scene feature library, determine, according to the comparison results, the first similarity and the second similarity of the first standard scene and the second standard scene of each scene class in the scene feature library, where the scene feature library includes N scene classes, each scene class includes M standard scenes, each standard scene corresponds to characteristic parameters, and M and N are integers greater than or equal to 2, and determine the scene class corresponding to the driving scene at the current moment according to the first similarity and the second similarity of each of the N scene classes.
  • FIG. 1 is a principle block diagram provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an intelligent driving system according to an embodiment of the present application.
  • FIG. 3 is a flowchart of an intelligent driving method according to an embodiment of the present application.
  • FIG. 4a is a flowchart of a method for calculating a first probability according to an embodiment of the present application.
  • FIG. 4b is a flowchart of a method for calculating a second probability and a third probability according to an embodiment of the present application.
  • FIG. 4c is a flowchart of a method for calculating a fourth probability according to an embodiment of the present application.
  • FIG. 5a is a flowchart of an intelligent handover method according to an embodiment of the present application.
  • FIG. 5b is a flowchart of another intelligent handover method according to an embodiment of the present application.
  • FIG. 6a is a schematic diagram of a human-computer interaction interface provided by an embodiment of the present application.
  • FIG. 6b is a schematic diagram of another human-machine interaction interface provided by an embodiment of the present application.
  • FIG. 6c is a schematic diagram of another human-computer interaction interface provided by an embodiment of the present application.
  • FIG. 6d is a schematic diagram of another human-machine interaction interface provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another intelligent driving system according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of still another intelligent driving system according to an embodiment of the present application.
  • FIG. 1 is a principle block diagram of the intelligent driving method provided by an embodiment of the present application.
  • the idea of the embodiments of the present application is to set up a scene feature library in advance, where the scene feature library includes scene classes and the standard scenes corresponding to the scene classes, and each standard scene corresponds to structured semantic information, road attributes, and a traffic situation spectrum.
  • the structured semantic information, road attributes, and traffic situation spectrum of the current moment are compared with those of the standard scenes in the scene feature library, respectively, to find the scene class most similar to the driving scene at the current moment (for ease of description, the driving scene at the current moment may be described as the current driving scene), and it is determined that the current driving scene belongs to this scene class. That is, structured semantic information, map information, and traffic situation information are used comprehensively to identify the current driving scene, determine the scene class to which the current driving scene belongs, and automatically switch the intelligent driving algorithm based on the recognition result so that the switched driving state of the vehicle conforms to the current driving scene. For example, according to the recognition result of the current driving scene, the driving state of the vehicle can be switched to suit a new driving scene, or the current driving state can be maintained.
  • the method may be executed by the intelligent driving system 200 shown in FIG. 2.
  • the intelligent driving system 200 may be located on a vehicle, and the vehicle may be a car, a truck, or any other type of vehicle, without limitation.
  • the intelligent driving system 200 may include, but is not limited to, a processor 210, a memory 220, a vehicle active execution unit 230, a sensor unit 240, a human-computer interaction interface 250, and a bus 260 connecting the different system components (including the memory 220, the processor 210, the vehicle active execution unit 230, the sensor unit 240, and the human-machine interface 250).
  • the processor 210 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, for example: one or more digital signal processors (DSPs), or one or more field programmable gate arrays (FPGAs).
  • the memory 220 stores a program and an intelligent driving algorithm, and the program code can be executed by the processor 210, so that the processor 210 executes the intelligent driving method according to the embodiment of the present application.
  • the processor 210 may perform steps as shown in FIG. 3. After the processor 210 recognizes the current driving scenario, it may perform intelligent driving on the vehicle according to the intelligent driving algorithm corresponding to the current driving scenario.
  • the memory 220 may include a readable medium in the form of a volatile memory, such as a random access memory (RAM) and/or a cache memory, and may further include a read-only memory (ROM).
  • the memory 220 may further include a program/utility tool having a set of (at least one) program modules. Such program modules include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the vehicle execution unit 230 includes, but is not limited to, a braking system, a steering system, a driving system, and a lighting system, and each system is capable of receiving instructions from an upper layer and executing instructions.
  • the processor 210 may send an operation instruction to the vehicle execution unit 230 according to the provisions of the intelligent driving algorithm, so that the driving state of the vehicle conforms to the current driving scenario.
  • the sensor unit 240 includes, but is not limited to, a camera, a millimeter-wave radar, a lidar, a map, a positioning module, and other systems, and is mainly used to collect information about things around the vehicle.
  • the positioning module may be a Global Positioning System (Global Positioning System, GPS), a Glonass system, or a Beidou system.
  • the human-machine interaction module 250 may be a display screen or a touch screen provided on the vehicle, and may be referred to as a human-machine interface (HMI).
  • the driver may send an operation instruction to the vehicle through a touch operation.
  • the human-machine interaction module 250 may display the instructions or information generated by the processor 210 to the driver.
  • the human-machine interaction module 250 may also use voice and other methods to implement the interaction between the vehicle and the driver.
  • the application does not limit the form and operation mode of the human-machine interaction module 250.
  • the bus 260 may be one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor bus, or a local bus using any of a variety of bus architectures.
  • FIG. 3 is a smart driving method provided by an embodiment of the present application. As shown in FIG. 3, the method may include:
  • Step 301 Acquire the characteristic parameters of the current moment of the vehicle and the road attributes of the driving scene of the vehicle in a future preset time period.
  • the characteristic parameters may include structured semantic information, road attributes, and traffic situation spectrum, and may also include other characteristic parameters used to characterize driving scenarios, without limitation.
  • the structured semantic information may be converted from the object information in the driving scene and may be used to characterize the object attributes in the driving scene.
  • the structured semantic information may include parameters such as the coordinates, speed, and acceleration of the object. information.
  • the objects in the driving scene can be people, trees, flowers, buildings, mountains, rivers, and so on.
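  • As an illustration of how structured semantic information might be represented in software, the following is a minimal sketch; the field names and the example objects are assumptions for illustration, not the patented data format:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    """One object in the driving scene, e.g. a person or a tree."""
    label: str            # object category, e.g. "person", "tree", "building"
    x: float              # coordinates in the vehicle frame (metres)
    y: float
    speed: float          # metres per second
    acceleration: float   # metres per second squared

# Structured semantic information for a scene is then a collection of objects.
scene = [
    SceneObject("person", x=3.0, y=1.5, speed=1.2, acceleration=0.0),
    SceneObject("tree", x=10.0, y=-4.0, speed=0.0, acceleration=0.0),
]
print(len(scene))  # 2
```

  • Such a representation makes the later similarity comparisons concrete: two scenes can be compared object by object on label, position, and motion parameters.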
  • the object information at the current moment can be collected in real time through an external sensing module on the vehicle (such as a vision sensor (or camera), a lidar, a millimeter-wave radar, a global positioning system (GPS) positioning module, and a high-precision map, etc.).
  • the collected object information is processed by the perceptual fusion algorithm to obtain structured semantic information.
  • the perceptual fusion algorithm is a commonly used algorithm in image processing, and the perceptual fusion algorithm corresponding to different driving scenarios is different.
  • the process of obtaining structured semantic information through the perceptual fusion algorithm can refer to the existing technology, and is not repeated here.
  • Road attributes can be used to characterize the types of roads that vehicles travel on, such as highways, intercity highways, country roads, basement lanes, main roads, and so on.
  • the road attributes at the current moment can be determined by the GPS positioning module and the high-precision map on the vehicle, for example: positioning the current running position of the vehicle through the GPS positioning module, finding the positioned location in the high-precision map, and determining the road attributes according to the environment of that location.
  • the road attributes of the driving scene of the vehicle in a future preset time can be determined through a GPS positioning module, a high-precision map, and global planning information on the vehicle.
  • the global planning information is used to specify the current driving route of the user, which can be set in advance by the user and stored on the vehicle.
  • the future preset time period may refer to a time period after the current time, and the time period may be set as required without restriction.
  • for example, assume the user's current driving route is from point A through points B, C, and D to point E, and the future preset time period is set to 2 hours. If 2 hours of driving will take the vehicle to point C, and it is determined by viewing the high-precision map that a section of highway must be passed from B to C, it can be determined that the road attribute of the driving scene of the vehicle in the future preset time period is: highway.
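  • The A-to-E example above can be sketched as a simple route walk; the route, per-leg driving times, and the road map below are illustrative assumptions, not the patented implementation:

```python
def future_road_attribute(route, road_map, current_leg, hours_ahead, leg_hours):
    """Walk the planned route forward by `hours_ahead` driving hours and
    return the road attribute (per the high-precision map) of the leg reached."""
    remaining = hours_ahead
    leg = current_leg
    # Advance leg by leg while the horizon extends past the current leg.
    while leg < len(leg_hours) - 1 and remaining > leg_hours[leg]:
        remaining -= leg_hours[leg]
        leg += 1
    return road_map[(route[leg], route[leg + 1])]

route = ["A", "B", "C", "D", "E"]        # the user's planned driving route
leg_hours = [1.0, 1.5, 2.0, 1.0]         # assumed driving time per leg (hours)
road_map = {("A", "B"): "urban road", ("B", "C"): "highway",
            ("C", "D"): "country road", ("D", "E"): "urban road"}

# A 2-hour horizon from A reaches the B-to-C leg, whose attribute is "highway".
print(future_road_attribute(route, road_map, current_leg=0,
                            hours_ahead=2.0, leg_hours=leg_hours))
```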
  • the traffic situation spectrum can be used to characterize the running state of the own vehicle and the running states of the traffic participating vehicles.
  • the running status of the vehicle can include, but is not limited to, the vehicle speed, steering wheel angle, vehicle yaw angle, and so on.
  • the traffic participating vehicle may refer to a vehicle driving around the own vehicle.
  • the running state of the traffic participating vehicle may include, but is not limited to, the distance of the traffic participating vehicle from the own vehicle, the running speed of the traffic participating vehicle, and the like.
  • for example: the vehicle state parameter information reported over a period of time by vehicle execution modules such as the braking system, steering system, driving system, and lighting system may be Fourier transformed to generate a spectral characteristic FA of the vehicle state parameter information; the running status information of the traffic participating vehicles, collected over the same period through the sensor unit on the vehicle (such as vision sensors (or cameras), lidar, millimeter-wave radar, the GPS positioning module, the high-precision map, etc.), may be Fourier transformed to generate a spectral characteristic FOA of the running status information of the participating vehicles; the spectral characteristic FA and the spectral characteristic FOA are then combined to form the traffic situation spectrum at the current moment. Here, a period of time may refer to a period of time that includes the current moment. The Fourier transform can be an existing, commonly used Fourier transform algorithm, and is not repeated here.
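  • A minimal sketch of the FA/FOA construction described above; the sampling window, the choice of signal channels, and the use of magnitude spectra are all assumptions for illustration, not the patented implementation:

```python
import numpy as np

def situation_spectrum(own_state, participant_state):
    """Fourier-transform time series of own-vehicle state (FA) and
    participating-vehicle observations (FOA), then combine them into
    one traffic situation spectrum vector."""
    fa = np.abs(np.fft.rfft(own_state, axis=0))           # spectral characteristic FA
    foa = np.abs(np.fft.rfft(participant_state, axis=0))  # spectral characteristic FOA
    return np.concatenate([fa.ravel(), foa.ravel()])

t = np.linspace(0, 10, 256)                      # a time window including "now"
own = np.stack([20 + np.sin(t),                  # vehicle speed (m/s)
                0.1 * np.sin(2 * t)], axis=1)    # steering wheel angle (rad)
other = np.stack([30 + np.cos(t),                # gap to a participating vehicle (m)
                  18 + 0.5 * np.sin(t)], axis=1) # its running speed (m/s)

spectrum = situation_spectrum(own, other)
print(spectrum.shape)  # (516,) = two 129-bin spectra per 2-channel series
```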
  • Step 302 Compare the characteristic parameters of the vehicle at the current moment with the characteristic parameters of the standard scenes in the scene feature library, compare the road attributes of the driving scene of the vehicle in the future preset time period with the road attributes of the standard scenes in the scene feature library, and determine a total similarity between each scene class and the driving scene of the vehicle at the current moment according to the comparison results.
  • the scene feature library may include N scene classes, and each scene class may be characterized by M standard scenes.
  • Each standard scene corresponds to the following characteristic parameters: structured semantic information, road attributes, and traffic situation spectrum.
  • N is an integer greater than or equal to 1
  • M is an integer greater than or equal to 1.
  • the scene classes are some common driving scenes.
  • a large number of driving scenes at different times can be collected, and the collected driving scenes are analyzed and processed through statistical analysis algorithms to obtain N scene classes, standard scenes corresponding to each scene class, and features corresponding to each standard scene. parameter.
  • the statistical analysis algorithm is an existing algorithm and will not be described in detail.
  • the similarity between different road attributes can also be obtained through statistical analysis algorithms.
  • the similarity between different road attributes is stored in the scene feature database.
  • the scene feature library can be stored in advance on the intelligent driving system in the form of a list and dynamically maintained; for example, when a new scene class appears, the standard scenes corresponding to the scene class and the feature parameters corresponding to those standard scenes can be added to the scene feature library in real time. Alternatively, the scene feature library may be stored on another device, and obtained from that device when step 302 is performed.
  • the structured semantic information, road attributes, and traffic situation spectrum corresponding to each standard scene are not limited to being stored in the same scene feature library; they can also be stored separately in different feature libraries. For example: the structured semantic information of the N scene classes and the M standard scenes corresponding to each scene class can be stored in a scene semantic database in the form of a list; the road attributes of the N scene classes and the M standard scenes corresponding to each scene class can be stored in a scene road attribute database in the form of a list; and the traffic situation spectrums of the N scene classes and the M standard scenes corresponding to each scene class can be stored in a scene traffic situation feature database in the form of a list.
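  • One possible in-memory layout of such a scene feature library is sketched below; the class names, semantic tags, attribute strings, and spectrum values are invented placeholders for illustration:

```python
# N scene classes, each characterised by M standard scenes; each standard
# scene carries its three feature parameters.
scene_feature_library = {
    "intersection": [                                    # a scene class
        {"semantics": {"traffic_light", "crosswalk"},    # structured semantic info
         "road_attribute": "urban main road",
         "situation_spectrum": [0.9, 0.3, 0.1]},         # traffic situation spectrum
    ],
    "highway": [
        {"semantics": {"guardrail", "lane_marking"},
         "road_attribute": "highway",
         "situation_spectrum": [0.2, 0.8, 0.5]},
    ],
}

# Pairwise similarities between different road attributes, also kept in the
# library (statistically derived, per the text above).
road_attribute_pairs = {
    ("highway", "urban main road"): 0.3,
    ("highway", "intercity highway"): 0.8,
}

print(sorted(scene_feature_library))  # ['highway', 'intersection']
```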
  • the total similarity of the scene class can be used to characterize the similarity between the scene class and the current driving scene.
  • Step 303 Determine the first scene class with the highest total similarity among the N scene classes as the driving scene at the current moment.
  • for example, the total similarities of the N scene classes can be arranged in descending order, and the scene class ranked first can be determined as the driving scene at the current moment; or the total similarities of the N scene classes can be arranged in ascending order, and the scene class ranked last can be determined as the driving scene at the current moment.
  • Step 304 Control the vehicle for intelligent driving according to the determination result.
  • controlling a vehicle for intelligent driving may refer to controlling a vehicle execution unit on a vehicle such as a braking system, a steering system, a driving system, and a lighting system, so that the current driving state of the vehicle meets the current driving scenario.
  • in some other embodiments, instead of determining and comparing the total similarity of each scene class, the scene class corresponding to the driving scene at the current moment may be determined according to the similarities between a number of standard scenes in each scene class on the one hand, and the characteristic parameters of the vehicle at the current moment together with the road attributes of the driving scene of the vehicle in the future preset time period on the other. For example, the characteristic parameters of the vehicle at the current moment may be compared with the characteristic parameters of the standard scenes in the scene feature library, and the road attributes of the driving scene of the vehicle in the future preset time period may be compared with the road attributes of the standard scenes in the scene feature library, to determine a first similarity between a first standard scene and the driving scene of the vehicle at the current moment and a second similarity between a second standard scene and the driving scene of the vehicle at the current moment; the scene class corresponding to the driving scene at the current moment is then determined according to the first similarity and the second similarity of each of the N scene classes.
  • alternatively, the characteristic parameters of the vehicle at the current moment may be compared with the feature parameters of the standard scenes in the scene feature library, and the road attributes of the driving scene of the vehicle in the future preset time period may be compared with the road attributes of the standard scenes in the scene feature library; according to the comparison results, the similarity between each standard scene of each scene class in the scene feature library and the driving scene of the vehicle at the current moment is determined, the similarity of the standard scene with the highest similarity in each scene class is used as the similarity of that scene class, and the scene class corresponding to the driving scene at the current moment is then determined based on the similarity of each scene class.
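  • The best-matching-standard-scene alternative just described can be sketched in a few lines; the per-scene similarity values below are illustrative numbers, not real measurements:

```python
def classify(per_scene_similarity):
    """per_scene_similarity: {scene_class: [similarity of each standard scene]}.
    Each class takes the similarity of its best-matching standard scene; the
    class with the highest such similarity is the current driving scene."""
    class_similarity = {cls: max(sims) for cls, sims in per_scene_similarity.items()}
    best = max(class_similarity, key=class_similarity.get)
    return best, class_similarity

best, sims = classify({"intersection": [0.4, 0.9, 0.2],
                       "highway": [0.7, 0.6, 0.1]})
print(best)  # intersection
```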
  • the intelligent driving method shown in FIG. 3 can identify the scene class to which the vehicle's current driving scene belongs based on three dimensions: structured semantic information, road attributes, and traffic situation spectrum. This makes the information referenced during scene class identification more comprehensive and reliable, improves the accuracy of scene recognition, improves the realization of intelligent driving, and reduces the computational complexity.
  • in some embodiments, in step 302, comparing the feature parameters of the current moment with the feature parameters of the standard scenes in the scene feature library, comparing the road attributes of the driving scene of the vehicle in the future preset time period with the road attributes of the standard scenes in the scene feature library, and determining the total similarity of each scene class according to the comparison results may include the following (1) to (4):
  • the structured semantic information of the current moment is compared with the structured semantic information of the standard scenes in the scene feature library to obtain the first similarity of each standard scene, and the first similarities of all standard scenes belonging to a scene class are combined and calculated to obtain the first probability of the scene class.
  • this determination process can be referred to FIG. 4a and may include:
  • S401a statistically obtain N scene classes.
  • S402a Statistically analyze M standard scenes corresponding to each scene class.
  • S403a The image information corresponding to all standard scenes is processed by a perceptual fusion algorithm to obtain structured semantic information corresponding to all standard scenes, and the scene class, standard scene, and structured semantic information are correspondingly stored in a scene semantic database.
  • S404a Acquire structured semantic information at the current moment.
  • S405a Compare the structured semantic information at the current moment with the structured semantic information corresponding to the standard scene in the scene semantic database to obtain the similarity between the current driving scene and the standard scene.
  • S406a For any scene class, the similarities corresponding to the standard scenes belonging to the scene class are combined and calculated to obtain the first probability of the scene class, where the combination calculation may refer to a weighted summation or a summation followed by averaging.
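  • The combination calculation named in S406a can be sketched directly; the similarity values and weights below are illustrative assumptions:

```python
def combine(similarities, weights=None):
    """Combination calculation: weighted summation when weights are given,
    otherwise a summation followed by averaging."""
    if weights is None:
        return sum(similarities) / len(similarities)
    return sum(w * s for w, s in zip(weights, similarities))

print(combine([0.25, 0.5, 0.75]))                      # averaged -> 0.5
print(combine([0.25, 0.5, 0.75], [0.5, 0.25, 0.25]))   # weighted -> 0.4375
```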
  • in some embodiments, to reduce the amount of computation, the structured semantic information of the standard scenes corresponding to all the scene classes can be filtered first, and the similarity of standard scenes that do not contain the real-time structured semantic information can be set (or assigned) to 0. For example, the first similarity, second similarity, third similarity, and fourth similarity of standard scenes that do not contain the real-time structured semantic information can all be set to 0; that is, these standard scenes need not be compared with the structured semantic information corresponding to the real-time driving scene.
  • here, not containing the real-time structured semantic information may mean that the structured semantic information of the standard scene characterizes objects completely different from those characterized by the structured semantic information corresponding to the real-time driving scene. For example: the structured semantic information corresponding to standard scene 1 characterizes the attributes of objects such as mountains and trees, while the structured semantic information corresponding to the real-time driving scene characterizes the attributes of objects such as farmhouses and grasslands; it can then be determined that the structured semantic information corresponding to standard scene 1 does not contain the real-time structured semantic information.
  • this determination process may be referred to FIG. 4b, and may include:
  • S401b Statistically analyze the road attributes of the standard scene corresponding to each scene class and the similarity between different road attributes.
  • S402b The scene class, standard scene, road attributes, and similarity between different road attributes are correspondingly stored in the scene road attribute database.
  • S404b Compare the road attributes at the current moment with the road attributes corresponding to the standard scene in the scene road attribute database to obtain the second similarity between the current driving scene and the standard scene.
  • S406b Obtain the road attributes of the driving scene of the vehicle in a future preset time period.
  • S407b Compare the road attributes of the driving scene of the vehicle in the future preset time period with the road attributes corresponding to the standard scene in the scene road attribute database to obtain the third similarity between the current driving scene and the standard scene.
  • S408b Combine and calculate the third similarity corresponding to the standard scene belonging to the scene class to obtain a third probability of the scene class.
  • here, comparing the road attributes at the current moment with the road attributes corresponding to the standard scenes in the scene road attribute database to obtain the similarity between the current driving scene and a standard scene may include: if the road attribute at the current moment is the same as the road attribute corresponding to the standard scene in the scene road attribute database, determining the similarity to be 1; if they are different, determining the similarity between the road attribute at the current moment and the road attribute corresponding to the standard scene according to the similarities between different road attributes stored in the scene road attribute database.
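  • The identical-then-lookup rule just described can be sketched as follows; the similarity table values are invented placeholders (the patent derives them statistically):

```python
# Assumed pairwise similarity table, as would be stored in the scene road
# attribute database; frozenset keys make the lookup order-independent.
ROAD_SIMILARITY = {frozenset(["highway", "intercity highway"]): 0.8,
                   frozenset(["highway", "country road"]): 0.1}

def road_attribute_similarity(current, standard):
    """Similarity 1 for identical attributes, otherwise look up the stored
    pairwise similarity (defaulting to 0 for unknown pairs)."""
    if current == standard:
        return 1.0
    return ROAD_SIMILARITY.get(frozenset([current, standard]), 0.0)

print(road_attribute_similarity("highway", "highway"))            # 1.0
print(road_attribute_similarity("highway", "intercity highway"))  # 0.8
```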
  • the combination calculation may refer to weighted summation or equalization.
  • the traffic situation spectrum at the current moment is compared with the traffic situation spectrums of the standard scenes in the scene feature library to obtain the fourth similarity between each standard scene and the current driving scene, and the fourth similarities of all standard scenes belonging to a scene class are combined and calculated to obtain the fourth probability of the scene class.
  • this determination process may be referred to FIG. 4c, and may include:
  • S401c Statistically analyze the spectral characteristics of the vehicle state information in the standard scenario corresponding to each scene class and the spectral characteristics of the observation parameters of the participating vehicles in the standard scenario.
  • S402c The spectrum characteristics of the vehicle state information and the spectrum characteristics of the observation parameters of the participating vehicles are used to form the traffic situation spectrum of the standard scene, and the scenario class, the standard scene, and the traffic situation spectrum are correspondingly stored in the scene traffic situation database.
  • S404c The traffic situation spectrum at the current moment is compared with the traffic situation spectrum corresponding to the standard scene in the scene traffic situation spectrum database to obtain the similarity between the current driving scene and the standard scene.
  • the combination calculation may refer to a weighted sum or a summed average.
  • the total similarity of the scene class is obtained according to the first probability of the scene class, the second probability of the scene class, the third probability of the scene class, and the fourth probability of the scene class.
  • the first probability of the scene class, the second probability of the scene class, the third probability of the scene class, and the fourth probability of the scene class may be summed to obtain the total similarity of the scene class.
  • the sum operation may be a weighted summation, or a summation followed by averaging. For example, assuming the first probability of the scene class is P1, the second probability of the scene class is P2, the third probability of the scene class is P3, and the fourth probability of the scene class is P4, then the total similarity S of the scene class can be: S = w1·P1 + w2·P2 + w3·P3 + w4·P4, where w1, w2, w3, and w4 are weighting coefficients, which can be set as required without restriction.
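  • The weighted summation of the four probabilities can be sketched as below; the equal weights are an assumption (the patent leaves the coefficients unrestricted):

```python
def total_similarity(p1, p2, p3, p4, w=(0.25, 0.25, 0.25, 0.25)):
    """Total similarity of a scene class as a weighted sum of its four
    probabilities (semantic, current road, future road, traffic situation)."""
    return w[0] * p1 + w[1] * p2 + w[2] * p3 + w[3] * p4

# With weights summing to 1, four perfect probabilities give total 1.0.
print(total_similarity(1.0, 1.0, 1.0, 1.0))
```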
  • each similarity of the standard scene may be used to characterize the possibility that the current driving scene belongs to the standard scene.
  • the greater the similarity, the more likely it is that the current driving scene belongs to the standard scene; conversely, a smaller similarity means the current driving scene is less likely to belong to the standard scene.
  • step 304 controls the driving of the vehicle according to the determination result.
  • the method may include:
  • S501 Determine whether the first scene class is the same as the previous scene class
  • step S502 If the first scene class is the same as the scene class at the previous moment, determine whether the current design operation range of the vehicle satisfies the design operation range corresponding to the first scene class; if so, execute step S503; if not, execute step S504.
  • S504 Send fault alarm information (or manual takeover request information).
  • step S507 Determine whether the current design operation range of the vehicle satisfies the design operation range corresponding to the scene class at the previous moment. If it does, step S508 is performed; if it does not, step S509 is performed.
  • S508 Send scene class switching unsuccessful information, and maintain the current driving state unchanged.
  • S509 Send fault alarm information (or manual takeover request information).
  • the previous time may refer to the time before the current time.
  • the design operation range of the vehicle may be used to refer to the driving state of the vehicle when it is normally operating in a driving scenario, and may include multiple states such as driver state, vehicle fault state, controller hardware fault state, and structured semantic information.
  • the design operation range corresponding to the N scene classes may be statistically analyzed, and the design operation range corresponding to the N scene classes may be stored in the scene feature database in advance.
  • For the process of statistically analyzing the design operation range corresponding to the N scenario classes reference may be made to the existing technology, and details are not described again.
  • the fault alarm information may be used to indicate that the vehicle may currently be in a fault state and is not suitable for driving.
  • the manual takeover request information can be used to request the user to manually control the vehicle for intelligent driving.
  • switching the vehicle from the current driving state to the driving state corresponding to the first scene type may include: obtaining an intelligent driving algorithm corresponding to the first scene type, and switching the running state of the vehicle according to the intelligent driving algorithm corresponding to the first scene type.
  • the intelligent driving algorithms correspond to the scene classes; for example, the N scene classes may correspond one-to-one to N intelligent driving algorithms, and the N intelligent driving algorithms corresponding to the N scene classes may be stored in the intelligent driving system in advance.
  • the scene class switching unsuccessful information may be used to indicate that the current driving state of the vehicle is not successfully switched to the driving state corresponding to the first scene class.
  • sending the scene class switching unsuccessful information and maintaining the current driving state may include: sending the scene class switching unsuccessful information to the user through the human-computer interaction module, and continuing to execute the intelligent driving algorithm corresponding to the current scene class unchanged.
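  • The switching decisions of steps S501 to S509 can be sketched as a small decision function; the outcome labels and the `in_design_range` predicate are stand-ins for the real checks and instructions:

```python
def decide(first_cls, prev_cls, in_design_range):
    """Decide the switching outcome for the identified scene class
    `first_cls` given the previous scene class `prev_cls`.
    `in_design_range(cls)` stands in for the design-operation-range check."""
    if first_cls == prev_cls:
        # Same class: keep the algorithm if its range is still satisfied.
        return "maintain" if in_design_range(first_cls) else "fault_alarm"
    if in_design_range(first_cls):
        return "switch"                            # switch to the new class
    if in_design_range(prev_cls):
        return "switch_unsuccessful_keep_current"  # S508: notify, keep current
    return "fault_alarm"                           # S509: alarm / takeover request

ok = {"intersection"}  # classes whose design operation range is satisfied
print(decide("intersection", "highway", lambda c: c in ok))  # switch
```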
  • the method may further include:
  • S510 Obtain an intelligent driving instruction, and determine whether the intelligent driving instruction is used to indicate stopping the intelligent driving of the vehicle (that is, whether to continue driving the vehicle intelligently); the intelligent driving instruction may be issued through a driver operation or delivered by the cloud.
  • the operation instruction sent to the vehicle active execution unit for instructing to release the driving right may include, but is not limited to, releasing the control right of the steering wheel, the brake, the accelerator, and the like.
  • the release notification can be used to notify the driver (or user) that the vehicle's intelligent driving has been stopped, that is, the vehicle's intelligent driving rights have been released.
  • the release notification may include, but is not limited to, one or more operations such as a voice alarm, a light alarm, and tightening of a seat belt.
  • step S512 If the intelligent driving instruction is used to instruct the vehicle to continue intelligent driving (that is, not to stop the intelligent driving of the vehicle), determine whether a scene switching instruction indicating a switch to the driving state corresponding to a second scene class is received. If the scene switching instruction is received, step S513 is performed; if the scene switching instruction is not received, the vehicle is controlled to perform intelligent driving according to the execution result of FIG. 5a.
  • S513 Acquire the intelligent driving algorithm corresponding to the second scenario category, and switch the running state of the vehicle according to the intelligent driving algorithm corresponding to the second scenario category.
  • an operation instruction is issued to the vehicle's active execution unit (braking system, steering system, drive system, lighting system, etc.) to make it work according to the operation instruction.
  • the method may further include:
  • step S514 Determine whether the driver has taken over the vehicle. If the driver has taken over, step S515 is performed; otherwise, step S516 is performed.
  • whether the driver has taken over the vehicle can be determined by the vehicle state parameters input by the vehicle active execution unit;
  • S515 Send an operation instruction to the vehicle active execution unit to instruct the release of driving rights, and issue a release notification to the driver.
  • S516 Send an operation instruction to the vehicle active execution unit to indicate a safe stop.
  • the operation instruction for indicating a safe stop may include, but is not limited to, a slow centering of the steering wheel, a throttle release, a certain percentage of braking, and the like.
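  • Steps S514 to S516 can be sketched as below; the instruction names are illustrative placeholders, not the actual operation instructions of the system:

```python
def handle_stop_request(driver_took_over):
    """If the driver has taken over (judged from vehicle state parameters),
    release driving rights and notify; otherwise command a safe stop."""
    if driver_took_over:
        # S515: release control of steering wheel, brake, accelerator; notify.
        return ["release_steering", "release_brake", "release_throttle",
                "notify_driver_released"]
    # S516: safe stop - slow steering centering, throttle off, partial braking.
    return ["steering_to_center_slowly", "release_throttle",
            "apply_partial_braking"]

print(handle_stop_request(False))
```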
  • in the embodiments of the present application, the safety of using each scene class algorithm can be ensured by judging whether the design operation range of the scene class is satisfied and by supervising the switching interaction process; at the same time, the embodiments of the present application ensure continuous switching of the intelligent driving algorithms across different scenarios, so as to achieve intelligent driving in full mode.
  • the following describes the intelligent driving method provided by the embodiments of the present application, taking a smart car driving from point A to point B as an example.
  • the car has a vehicle active execution unit such as a braking system, a steering system, a driving system, and a lighting system. These systems have the ability to accept instructions from the intelligent driving system.
  • the car has sensor units such as cameras, radar, lasers, maps, and positioning modules.
  • each section may have traffic jams and unblocked roads.
  • Unstructured irregular road scenes ⁇ urban road repair scenes, rural road scenes, highway ramp scenes, highway repair road scenes, highway traffic accident road scenes ... ⁇ .
  • Freeway traffic jam scenarios ⁇ Highway 2 lane scene, Freeway 3 lane scene .. ⁇ .
  • Tunnel scene category Shadow intersection between urban high-rise buildings, tunnels ... ⁇ .
  • Parking scene category ⁇ Parking scene A, Parking lot B ... ⁇ .
  • Toll booth scene category ⁇ high-speed entrance scene, high-speed exit scene ... ⁇ .
  • Intersection scene class {standard scene 1, standard scene 2, ..., standard scene 20}.
  • Unstructured irregular road scene class {standard scene 1, standard scene 2, ..., standard scene 20}.
  • Freeway traffic jam scene class {standard scene 1, standard scene 2, ..., standard scene 20}.
  • Tunnel scene class {standard scene 1, standard scene 2, ..., standard scene 20}.
  • Toll booth scene class {standard scene 1, standard scene 2, ..., standard scene 20}.
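The scene feature library enumerated above (scene classes, each holding a set of standard scenes) can be pictured as a simple mapping. This is an illustrative sketch only; the dictionary layout and the class-name keys are assumptions, not the patent's actual data model:

```python
# Illustrative sketch of the scene feature library: each scene class maps to
# its M standard scenes (here M = 20, as in the examples above).
SCENE_FEATURE_LIBRARY = {
    scene_class: [f"standard scene {i}" for i in range(1, 21)]
    for scene_class in (
        "intersection",
        "unstructured irregular road",
        "freeway traffic jam",
        "tunnel",
        "toll booth",
    )
}
```

In a fuller sketch, each standard scene would carry its characteristic parameters (structured semantics, road attributes, traffic situation spectrum) rather than just a name.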
  • the probability that the current driving scene belongs to each scene class, calculated from the map dimension, is obtained;
  • the probability that the current driving scene belongs to each scene class, calculated from the traffic situation dimension, is obtained;
  • the total similarity between the real-time scene and each scene class is computed comprehensively, and the scene class with the highest total similarity is the recognized scene class.
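The recognition step just described (combine the probabilities from the perception, map, and traffic situation dimensions, then take the scene class with the highest total similarity) can be sketched as follows. The equal weighting is an assumption; the text does not fix the combination rule:

```python
def recognize_scene_class(dimension_probs):
    """Return the scene class with the highest total similarity.

    dimension_probs maps each scene class to the probabilities computed from
    the individual dimensions (perception, map, traffic situation). Averaging
    them is an illustrative choice; the text leaves the combination open.
    """
    totals = {cls: sum(p) / len(p) for cls, p in dimension_probs.items()}
    return max(totals, key=totals.get)

# Example: the intersection class dominates in every dimension.
probs = {
    "intersection": (0.9, 0.8, 0.7),
    "tunnel": (0.3, 0.2, 0.4),
}
```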
  • if the running intelligent driving algorithm is the urban structured traffic-jam-free scene class algorithm, determine, according to the design operating range judgment information, whether the design operating range of the intersection scene class is satisfied;
  • if the recognized scene class is the intersection scene class and the running intelligent driving algorithm is also the intersection scene class algorithm, determine, according to the design operating range judgment information, whether the design operating range of the intersection scene class is satisfied: if it is satisfied, send an instruction to maintain the intersection scene class; if not, send fault alarm information and manual takeover request information;
  • an instruction to switch to the intelligent driving algorithm corresponding to the intersection scene class is sent;
  • the information is fault alarm information and manual takeover request information
  • the feedback of the manual takeover vehicle status parameter information from the vehicle execution module
  • commands are sent to release control of the throttle, gear, steering wheel, brakes, and other systems, and the driver is informed through the HMI; when feedback of a manual takeover has not been received for a period of time, the vehicle is controlled to stop safely (control instructions are sent to the throttle, gear, steering wheel, brake, and other systems);
  • an intelligent driving algorithm corresponding to the maintenance of the intersection scene class is sent.
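The maintain/switch/alarm decisions in the bullets above can be condensed into a small supervisory function. This is a simplified sketch (the full flow also handles falling back to the previous scene class), and all names are illustrative:

```python
def decide_action(recognized, running, within_design_range):
    """Sketch of the scene class switching supervision described above.

    Returns 'maintain', 'switch', or 'fault_alarm'. A fault alarm is
    accompanied by a manual takeover request in the described flow.
    """
    if not within_design_range:
        return "fault_alarm"          # plus manual takeover request information
    if recognized == running:
        return "maintain"             # keep the current scene class algorithm
    return "switch"                   # switch to the recognized scene class
```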
  • the vehicle includes a hardware structure and / or a software module corresponding to each function.
  • this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is performed by hardware or computer software-driven hardware depends on the specific application of the technical solution and design constraints. Professional technicians can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.
  • the functional modules of a vehicle or an intelligent driving system that executes the foregoing intelligent driving method may be divided according to the foregoing method examples.
  • each function may be assigned its own functional module, or two or more functions may be integrated into one processing module.
  • the above integrated modules may be implemented in the form of hardware or software functional modules. It should be noted that the division of the modules in the embodiments of the present application is schematic, and is only a logical function division. In actual implementation, there may be another division manner.
  • the interaction between the intelligent driving system in the vehicle and the driver can be preset with multiple scene switching modes, and the driver can select one of the scenes in advance.
  • The possible scene switching modes provided by the vehicle are as follows:
  • a notification-only switching mode can be provided.
  • the vehicle determines that a scene change is required, it can perform the scene change by itself without the driver's confirmation.
  • the driver may be notified through the human-machine interaction interface 250 or voice that the vehicle has performed the scene switch. Further, the driver may also set information that the vehicle does not notify the scene change.
  • Level 4 (Level 4, L4) corresponds to a high degree of automation, which specifically means that the automatic driving system completes all driving operations and the driver does not need to respond to system requests; in practice, within mapped zones, the vehicle can accomplish most tasks that a human driver can. Level 5 (Level 5, L5) corresponds to complete automation: the automatic driving system can complete all driving operations without limits on road and environmental conditions, and the driver takes over the vehicle only when necessary. Generally speaking, in vehicles implementing L4 and L5, the vehicle can drive autonomously in most cases without driver intervention, so a notification-only switching mode can be provided.
  • the second is the default consent mode.
  • a default consent mode can be provided.
  • the vehicle determines that a scene change is required, it first sends a request to the driver through a human-machine interaction interface or voice.
  • the driver may respond to the request and instruct the vehicle to perform the scene change; alternatively, when the driver does not respond to the request within a preset time, the vehicle assumes by default that the driver agrees to the scene change request.
  • the vehicle switches the driving scene.
  • At Level 3 (Level 3, L3) of the driverless levels, the vehicle's automatic driving system completes all driving operations, while the driver provides an appropriate response according to the requests of the automatic driving system and is ready to take over control of the vehicle at any time.
  • therefore, a default consent (silent consent) mode can be provided.
  • the third is the default rejection mode.
  • a default rejection mode can be provided.
  • when the vehicle determines that a scene change is required, it first sends a request to the driver through the human-machine interaction interface or by voice. For example, based on information such as the characteristic parameters at the current moment, the vehicle proposes selecting the first driving scene class in the scene feature library as the vehicle's driving scene, and a request may be issued to prompt the driver to confirm. If the driver does not respond to the request within a preset time, the vehicle assumes by default that the driver rejects the scene switching request; in this case, the vehicle maintains the driving scene of the previous moment.
  • the vehicle issues a fault warning message and requests the driver to take over the vehicle.
  • after the fault alarm information is issued, it is determined whether the driver has taken over the vehicle after a preset time interval (for example, 3 seconds). If it is determined that the driver has taken over the vehicle, an operation instruction indicating the release of driving rights is sent, and a driving rights release notification is sent to the driver; if the driver does not take over the vehicle within the preset time, the intelligent driving system sends an operation instruction indicating a safe stop to ensure driving safety.
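The takeover supervision just described (release driving rights once the driver takes over, otherwise command a safe stop after the preset interval) might look like the following sketch; the 3-second timeout mirrors the example in the text, and the action names are assumptions:

```python
def supervise_takeover(driver_took_over, elapsed_s, timeout_s=3.0):
    """Decide the next action after fault alarm information has been issued.

    Returns 'release_driving_rights' once the driver has taken over,
    'safe_stop' if the preset interval elapses with no takeover,
    and 'wait' otherwise.
    """
    if driver_took_over:
        return "release_driving_rights"   # plus a release notification via HMI
    if elapsed_s >= timeout_s:
        return "safe_stop"                # throttle off, braking, wheel centered
    return "wait"
```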
  • the above three scene switching modes are merely examples to illustrate that the present application can provide a manner of interaction between a vehicle and a driver in the process of multiple scene switching, and does not limit the present application.
  • the correspondence between the driverless level of the vehicle and the switching mode that the vehicle may provide is only one possible implementation and does not limit the present application.
  • FIG. 6a is a schematic diagram of a human-computer interaction interface according to an embodiment of the present application. As shown in Figure 6a, the human-computer interaction interface may include the following parts:
  • the navigation section is used to indicate the map of the area where the vehicle is currently located, as well as the specific location and driving direction of the vehicle. As shown in the figure, the vehicle is driving north along Huizhou Avenue;
  • a manual control vehicle button is used to transfer driving rights from the vehicle to the driver when the driver touches the button.
  • when vehicles use autonomous driving technology, drivers are usually given the authority to take over the vehicle. Therefore, a manual vehicle control button can be set in the man-machine interface to control the transfer of driving rights.
  • the button for manually controlling the vehicle may also be set such that if the driver is currently manually controlling the vehicle, when the driver touches the button for manually controlling the vehicle, the driving right is transferred to the vehicle and automatic driving is started.
  • a status bar showing whether the vehicle is currently in an automatic driving state or manually controlled by the driver can also be added to the button for manually controlling the vehicle;
  • the speed display section is used to display the current speed of the vehicle;
  • the remaining fuel display section is used to display the current remaining fuel of the vehicle; similarly, when the vehicle is powered by electricity or a fuel-electric hybrid, the current remaining energy of the vehicle, or the distance the vehicle is expected to be able to travel, may be displayed accordingly;
  • the driving scene display section is used to display the current driving scene of the vehicle.
  • the vehicle can choose different driving scenes according to the characteristic parameters of the vehicle, so the current driving scene of the vehicle can be displayed in the human-computer interaction interface.
  • the driving scene part can be set so that the driver can manually change the current driving scene by clicking the part of the screen of the human-computer interaction interface;
  • the time display section is used to display the current time of the vehicle.
  • FIG. 6b is a schematic diagram of a human-computer interaction interface provided by an embodiment of the present application when a scene is switched.
  • the vehicle uses an urban structured traffic jam-free scenario.
  • the vehicle drives to the intersection of Huizhou Avenue and Changjiang Road as shown in FIG. 6B, the vehicle determines to switch the current scene from the urban structured traffic jam-free scene category to the cross-road scene category through the aforementioned method.
  • the human-computer interaction interface may also include a notification display section for displaying information the driver needs to know. For example, as shown in FIG. 6b, information may be displayed in the notification display section to inform the driver that the driving scene of the vehicle is about to be switched to the crossroad scene category (at this time, the driver may also be reminded by voice or a prompt sound).
  • the human-computer interaction interface shown in FIG. 6b is applicable to the foregoing switching of the notification-only mode.
  • FIG. 6c is a schematic diagram of another human-computer interaction interface provided by an embodiment of the present application when a scene is switched. Similar to FIG. 6b, when the vehicle drives to the intersection of Huizhou Avenue and Changjiang Road, the vehicle determines to switch the current scene from the urban structured traffic jam-free scene category to the cross-road scene category through the aforementioned method.
  • the notification display section in the human-computer interaction interface prompts: the driving scene is about to switch to the crossroad scene category, and two buttons, confirm and cancel, are provided for the driver to choose (at this time, the driver can also be reminded by voice or a prompt sound).
  • the human-computer interaction interface shown in FIG. 6c is applicable to the aforementioned default consent and default rejection modes.
  • in the default consent mode, when the notification display section of the human-computer interaction interface prompts a scene switch and the driver does not make a selection within a preset time, the driver is deemed by default to agree to the vehicle switching scenes; in the default rejection mode, when the notification display section prompts a scene switch and the driver does not make a selection within a preset time, the driver is deemed by default to refuse the scene switch.
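The three interaction modes differ only in how a missing driver response is resolved, which suggests a compact sketch like the one below; the mode names are paraphrased from the text, not taken from the patent:

```python
def resolve_scene_switch(mode, driver_response=None):
    """Decide whether a proposed scene switch goes ahead.

    driver_response is True (confirm), False (cancel), or None when the
    driver made no selection within the preset time.
    """
    if mode == "notification_only":
        return True                      # the vehicle switches by itself
    if driver_response is not None:
        return driver_response           # an explicit choice always wins
    return mode == "default_consent"     # silence: consent or rejection by default
```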
  • FIG. 6d is a schematic diagram of another human-machine interaction interface according to an embodiment of the present application.
  • the notification display section can remind the current voice instructions that the driver may adopt. For example, as shown in FIG. 6d, the notification display section may prompt: You can say: "Switch to XX scene category"; "Release vehicle driving right". The driver can perform operations such as manually switching the driving scene or transferring the driving right of the vehicle through voice interaction according to the prompt of the notification display part, thereby increasing the efficiency of human-computer interaction.
  • FIG. 7 is an intelligent driving system according to an embodiment of the present application.
  • the intelligent driving system may be a vehicle or included in a vehicle.
  • the intelligent driving system 60 may include a perception fusion unit 61, a scene recognition unit 62, a scene switching module 63, and may further include a vehicle execution unit 64.
  • the perceptual fusion unit 61 is configured to obtain the characteristic parameters of the current moment of the vehicle and the road attributes of the driving scene of the vehicle in a future preset time period. For example, the perceptual fusion unit 61 may perform step 301.
  • the scene class recognition unit 62 is configured to compare the characteristic parameters of the current moment of the vehicle with the characteristic parameters of the standard scene in the scene feature database, and compare the road attributes of the driving scene of the vehicle in the future preset time period with the roads of the standard scene in the scene feature database. Attributes, determine the total similarity between each scene class in the scene feature library and the driving scene at the current moment of the vehicle based on the comparison result, and determine the first scene class with the highest total similarity among the N scene classes as the driving scene at the current moment Among them, the scene feature library includes N scene classes, each scene class corresponds to M standard scenes, each standard scene corresponds to a characteristic parameter; N is an integer greater than or equal to 1, and M is an integer greater than or equal to 1. For example, the scene class recognition unit 62 may perform steps 302 and 303.
  • the scene-type switching module 63 is configured to control a driving state of the vehicle according to a determination result.
  • the scene class switching module 63 may perform step 304.
  • the scene class recognition unit 62 may include:
  • the scene class perception probability calculation module 620 is configured to compare the structured semantic information of the current moment with the structured semantic information of the standard scene in the scene feature library to obtain the first similarity of the standard scene, and to all the standard scenes belonging to the scene class The first similarity of is combined and calculated to obtain the first probability of the scene class.
  • the scene class perception probability calculation module 620 may perform the process shown in FIG. 4a.
  • the scene class map probability calculation module 621 is configured to compare the road attributes at the current moment with the road attributes of the standard scenes in the scene feature library to obtain the second similarity of each standard scene, and to combine the second similarities of all standard scenes belonging to the scene class to obtain the second probability of the scene class; and to compare the road attributes of the driving scene in a future preset time period with the road attributes of the standard scenes in the scene feature library to obtain the third similarity of each standard scene.
  • the scene map probability calculation module 621 may perform the process shown in FIG. 4b.
  • the scenario traffic situation probability calculation module 622 is configured to compare the traffic situation spectrum at the current moment with the traffic situation spectrum of the standard scene in the scene feature library to obtain the fourth similarity of the standard scene. For all standard scenes belonging to the scene category, The fourth similarity is combined and calculated to obtain a fourth probability of the scene class.
  • the scenario-type traffic situation probability calculation module 622 may perform the process shown in FIG. 4c.
  • the scene class recognition and judgment module 623 is configured to obtain the total similarity of the scene class according to the first probability of the scene class, the second probability of the scene class, the third probability of the scene class, and the fourth probability of the scene class.
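The module descriptions above repeat one pattern: standard scene similarities are combined into a per-dimension class probability, and the four class probabilities are then merged into the total similarity. A sketch under stated assumptions (maximum over standard scenes, weighted sum over dimensions; the text fixes neither operator):

```python
def class_probability(standard_scene_similarities):
    """Combine the similarities of all standard scenes in one scene class
    into a single class probability (max is one plausible combination)."""
    return max(standard_scene_similarities, default=0.0)

def total_similarity(p1, p2, p3, p4, weights=(0.25, 0.25, 0.25, 0.25)):
    """Merge the four per-dimension class probabilities into the total
    similarity; equal weights are an illustrative assumption."""
    return sum(p * w for p, w in zip((p1, p2, p3, p4), weights))
```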
  • the intelligent driving system 60 may include a processing module, a communication module, and a vehicle active execution unit.
  • the perceptual fusion unit 61, the scene class recognition unit 62, and the scene class switching module 63 may be integrated in the processing module.
  • the processing module is used to control and manage the actions of the intelligent driving system 60.
  • the processing module is used to support the intelligent driving system 60 in performing steps 301, 302, 303, 304, and the like, as well as other processes of the techniques described herein.
  • the communication module is used to support the intelligent driving system 60 to communicate with the driver, and can be a human-computer interaction interface.
  • the intelligent driving system 60 may further include a storage module for storing program codes and intelligent driving algorithms of the intelligent driving system 60.
  • the processing module may be a processor or a controller. It may implement or execute various exemplary logical blocks, modules, and circuits described in connection with the present disclosure.
  • a processor may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and so on.
  • the communication module may be a human-computer interaction interface.
  • the memory module may be a memory. When the processing module is a processor, the communication module is a human-computer interaction interface, and the storage module is a memory, the intelligent driving system 60 shown in FIG. 7 may be the intelligent driving system shown in FIG. 2.
  • FIG. 8 is a structural example diagram of another intelligent driving system according to an embodiment of the present application.
  • the intelligent driving system 70 includes an acquisition module 71, a determination module 72, a display module 73, and a control module 74.
  • the obtaining module 71 is configured to obtain characteristic parameters of a vehicle at a first time and road attributes of the driving scene in a future preset time period of the first time, where the characteristic parameters include structured semantic information, road attributes, and a traffic situation spectrum;
  • a determination module 72 configured to select a first driving scene class in a scene feature database according to the characteristic parameters of the vehicle at the first time and road attributes of the driving scene of the vehicle in a future preset time period;
  • a display module 73, configured to: display a first prompt, where the first prompt is used to prompt the driver to switch the driving scene of the vehicle at the first time to the first driving scene class; and receive a first instruction, where the first instruction corresponds to the first prompt and is used to instruct switching the driving scene of the vehicle at the first time to the first driving scene class;
  • the control module 74 controls the driving state of the vehicle according to the first driving scene class.
  • the determining module 72 is specifically configured to: compare the characteristic parameters of the vehicle at the first time with the characteristic parameters of the standard scenes in the scene feature library, and compare the road attributes of the driving scene in the future preset time period with the road attributes of the standard scenes; determine, based on the comparison results, the total similarity of each scene class in the scene feature library to the driving scene of the vehicle at the current moment, where the scene feature library includes N scene classes, each scene class corresponds to M standard scenes, and N and M are positive integers; and determine the first scene class with the highest total similarity among the N scene classes as the driving scene at the first time.
  • the determining module 72 is further configured to select a second driving scene class in the scene feature library as the driving scene at a second time; the display module 73 is further configured to display a second prompt, where the second prompt is used to request switching the driving scene of the vehicle at the second time to the second driving scene class; when the second instruction is not received within a preset time, the control module 74 maintains the driving state of the vehicle according to the first driving scene class, where the second instruction corresponds to the second prompt and is used to instruct switching the current driving scene of the vehicle to the second driving scene class.
  • the determining module 72 is further configured to determine that the design operating range of the vehicle at the second time does not satisfy the design operating range corresponding to the first scene class; the display module 73 is also used to send fault alarm information.
  • the determining module 72 is further configured to determine whether the driver has taken over the vehicle; the display module 73 is further configured to: if the determining module 72 determines that the driver has taken over the vehicle, send an operation instruction for instructing the release of driving rights and send a release notification to the driver; and if the determining module 72 determines that the driver has not taken over the vehicle, send an operation instruction for instructing a safe stop.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable systems.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, a computer, a server, or a data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device including one or more servers, data centers, and the like that can be integrated with the medium.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).


Abstract

Embodiments of this application provide an intelligent driving method and an intelligent driving system, to solve the problem that existing intelligent vehicles cannot accurately recognize the driving scene. The method may include: obtaining characteristic parameters of the vehicle at the current moment and road attributes of the vehicle's driving scene within a future preset time period, where the characteristic parameters may include structured semantic information, road attributes, and a traffic situation spectrum; comparing the characteristic parameters at the current moment with the characteristic parameters of the standard scenes in a scene feature library, and comparing the road attributes of the vehicle's driving scene within the future preset time period with the road attributes of the standard scenes in the scene feature library; determining, based on the comparison results, the total similarity between each scene class and the driving scene of the vehicle at the current moment; determining the first scene class with the highest total similarity among the N scene classes as the driving scene at the current moment; and controlling the vehicle to drive intelligently according to the determination result.

Description

An Intelligent Driving Method and Intelligent Driving System

Technical Field
This application relates to the field of autonomous driving technologies, and in particular, to an intelligent driving method and an intelligent driving system.
Background
On the basis of ordinary vehicles, intelligent driving vehicles add advanced sensors (radar, cameras), controllers, actuators, and other devices. Through on-board sensing systems and information terminals, they exchange intelligent information with people, vehicles, and roads, giving the vehicle intelligent environment-awareness capabilities. The vehicle can automatically analyze its safe and dangerous driving states, reach a destination according to the person's wishes, and ultimately replace human operation to reduce the burden of driving.
In the prior art, the overall control system of an intelligent driving vehicle uniformly collects data from each part of each subsystem, processes the data uniformly, and then controls the vehicle. For example, acquired road-environment video images can be statistically analyzed to build recognition databases for urban road, rural road, and freeway scenes; a deep convolutional neural network performs feature extraction and convolution training on the sample pictures in the databases to obtain a convolutional neural network classifier; finally, real-time perception pictures are input into the classifier for recognition to classify the driving scene the vehicle is currently in.
However, in the foregoing approach of classifying scenes with a convolutional neural network classifier, rain, fog, poor lighting, and similar conditions easily make the real-time perception images unclear, reducing the accuracy of recognizing them with the classifier. The current driving scene therefore cannot be accurately recognized, affecting the intelligent driving of the vehicle.
Summary
Embodiments of this application provide an intelligent driving method and an intelligent driving system, to solve the existing problem that the current driving scene cannot be accurately recognized, affecting intelligent driving of the vehicle.
To achieve the foregoing objective, the embodiments of this application provide the following technical solutions:
According to a first aspect, an embodiment of this application provides an intelligent driving method. The method includes: obtaining characteristic parameters of the vehicle at the current moment (structured semantic information, road attributes, and a traffic situation spectrum) and road attributes of the vehicle's driving scene within a future preset time period; comparing the characteristic parameters at the current moment with the characteristic parameters of the standard scenes in a scene feature library, and comparing the road attributes of the vehicle's driving scene within the future preset time period with the road attributes of the standard scenes in the scene feature library; determining, based on the comparison results, the total similarity between each scene class in the scene feature library and the driving scene of the vehicle at the current moment; determining the first scene class with the highest total similarity among the N scene classes as the driving scene at the current moment; and controlling the driving state of the vehicle according to the determination result. Based on the method provided in the first aspect, the scene class the vehicle currently belongs to can be recognized from three dimensions (structured semantic information, road attributes, and the traffic situation spectrum), making the information referenced during scene class recognition more comprehensive and reliable, improving the accuracy of scene recognition and the feasibility of intelligent driving. In addition, recognizing the scene class from structured semantic information rather than from pictures reduces computational complexity.
In a first possible implementation of the first aspect, with reference to the first aspect, for any scene class in the scene library, comparing the characteristic parameters at the current moment with the characteristic parameters of the standard scenes in the scene feature library, comparing the road attributes of the vehicle's driving scene within the future preset time period with the road attributes of the standard scenes in the scene feature library, and determining the total similarity of the scene class based on the comparison results includes: comparing the structured semantic information at the current moment with the structured semantic information of the standard scenes in the scene feature library to obtain a first similarity for each standard scene, and combining the first similarities of all standard scenes belonging to the scene class to obtain a first probability of the scene class; comparing the road attributes at the current moment with the road attributes of the standard scenes in the scene feature library to obtain a second similarity for each standard scene, and combining the second similarities of all standard scenes belonging to the scene class to obtain a second probability of the scene class; comparing the road attributes of the vehicle's driving scene within the future preset time period with the road attributes of the standard scenes in the scene feature library to obtain a third similarity for each standard scene, and combining the third similarities of all standard scenes belonging to the scene class to obtain a third probability of the scene class; comparing the traffic situation spectrum at the current moment with the traffic situation spectrum of the standard scenes in the scene feature library to obtain a fourth similarity for each standard scene, and combining the fourth similarities of all standard scenes belonging to the scene class to obtain a fourth probability of the scene class; and obtaining the total similarity of the scene class from the first, second, third, and fourth probabilities of the scene class. In this way, the scene class the vehicle currently belongs to can be recognized from the three dimensions of structured semantic information, road attributes, and the traffic situation spectrum.
In a second possible implementation of the first aspect, with reference to the first possible implementation, before comparing the characteristic parameters at the current moment with the characteristic parameters of the standard scenes in the scene feature library and comparing the road attributes of the vehicle's driving scene within the future preset time period with the road attributes of the standard scenes, the method further includes: setting the similarity of standard scenes in the scene feature library that do not contain the real-time structured semantic information to 0. In this way, standard scenes in the scene feature library that do not contain the real-time structured semantic information can be filtered out, reducing the complexity of the subsequent comparison of structured semantic information.
In a third possible implementation of the first aspect, with reference to the first aspect or any one of its possible implementations, controlling the vehicle to drive intelligently according to the determination result includes: judging whether the first scene class is the same as the scene class of the previous moment; if they are the same, judging whether the vehicle's current design operating range satisfies the design operating range corresponding to the first scene class; if it does, maintaining the current driving state unchanged; if it does not, sending fault alarm information. In this way, when the current driving scene is the same as that of the previous moment and the vehicle's current driving conditions can support operation in the current driving scene, the current driving state is kept unchanged.
In a fourth possible implementation of the first aspect, with reference to the third possible implementation, the method further includes: if the first scene class differs from the scene class of the previous moment, judging whether the vehicle's current design operating range satisfies the design operating range corresponding to the first scene class; if it does, switching the vehicle from the current driving state to the driving state corresponding to the first scene class; if it does not, judging whether the vehicle's current design operating range satisfies the design operating range corresponding to the scene class of the previous moment: if it does, sending scene class switching failure information and maintaining the current driving state unchanged; if it does not, sending fault alarm information. In this way, when the current driving scene differs from that of the previous moment, the vehicle's driving state can be switched intelligently to suit the current driving scene.
In a fifth possible implementation of the first aspect, with reference to the third or the fourth possible implementation, after sending the fault alarm information, the method further includes: judging whether the driver has taken over the vehicle; if it is determined that the driver has taken over the vehicle, sending an operation instruction for instructing the release of driving rights to the vehicle's active execution unit and sending a release notification to the driver; if it is determined that the driver has not taken over the vehicle, sending an operation instruction for instructing a safe stop to the vehicle's active execution unit. In this way, intelligent driving stops only after the driver has taken over the vehicle, improving driving safety and user experience.
In a sixth possible implementation of the first aspect, with reference to the first aspect or any one of its possible implementations, before controlling the vehicle to drive intelligently according to the determination result, the method further includes: obtaining an intelligent driving indication, where the intelligent driving indication indicates whether to stop intelligent driving of the vehicle; if the indication indicates intelligent driving of the vehicle, controlling the vehicle to drive intelligently according to the determination result; if the indication indicates stopping intelligent driving of the vehicle, sending an operation instruction for instructing the release of driving rights to the vehicle's active execution unit and sending a release notification to the driver. In this way, intelligent driving is performed only under the instruction of the driver (or user), improving user experience.
According to a second aspect, this application provides an intelligent driving system. The intelligent driving system may be a vehicle, or multiple modules combined together in a vehicle. The intelligent driving system can implement the intelligent driving method described in the foregoing aspects or possible designs; the functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing functions. For example, the intelligent driving system may include:
a perception fusion unit, configured to obtain characteristic parameters of the vehicle at the current moment and road attributes of the vehicle's driving scene within a future preset time period, where the characteristic parameters include structured semantic information, road attributes, and a traffic situation spectrum;
a scene class recognition unit, configured to compare the characteristic parameters at the current moment with the characteristic parameters of the standard scenes in a scene feature library, compare the road attributes of the vehicle's driving scene within the future preset time period with the road attributes of the standard scenes in the scene feature library, determine, based on the comparison results, the total similarity between each scene class in the scene feature library and the driving scene of the vehicle at the current moment, and determine the first scene class with the highest total similarity among the N scene classes as the driving scene at the current moment, where the scene feature library includes N scene classes, each scene class corresponds to M standard scenes, and each standard scene corresponds to characteristic parameters, N and M each being an integer greater than or equal to 1; and
a scene class switching module, configured to control the driving state of the vehicle according to the determination result.
For a specific implementation of the intelligent driving system, refer to the steps of the intelligent driving method provided in the first aspect or any one of its possible implementations; details are not repeated here. Therefore, the intelligent driving system can achieve the same beneficial effects as the first aspect or any one of its possible implementations.
According to a third aspect, this application provides an intelligent driving method, applied to an intelligent driving system located in a vehicle. The method includes: obtaining characteristic parameters of the vehicle at a first time and road attributes of the driving scene within a future preset time period of the first time, where the characteristic parameters include structured semantic information, road attributes, and a traffic situation spectrum; selecting a first driving scene class in a scene feature library according to the characteristic parameters of the vehicle at the first time and the road attributes of the driving scene of the vehicle within the future preset time period; displaying a first prompt, where the first prompt is used to prompt the driver to switch the driving scene of the vehicle at the first time to the first driving scene class; receiving a first instruction, where the first instruction corresponds to the first prompt and is used to instruct switching the driving scene of the vehicle at the first time to the first driving scene class; and controlling the driving state of the vehicle according to the first driving scene class.
In the third aspect, in one possible implementation, selecting the first driving scene class in the scene feature library includes: comparing the characteristic parameters of the vehicle at the first time with the characteristic parameters of the standard scenes in the scene feature library, and comparing the road attributes of the driving scene within the future preset time period of the first time with the road attributes of the standard scenes in the scene feature library; determining, based on the comparison results, the total similarity between each scene class in the scene feature library and the driving scene of the vehicle at the current moment, where the scene feature library includes N scene classes, each scene class corresponds to M standard scenes, and N and M are both positive integers; and determining the first scene class with the highest total similarity among the N scene classes as the driving scene at the first time.
In the third aspect, in another possible implementation, after the driving state of the vehicle is controlled according to the first driving scene class, the method further includes: selecting a second driving scene class in the scene feature library as the driving scene at a second time; displaying a second prompt, where the second prompt is used to request switching the driving scene of the vehicle at the second time to the second driving scene class; and when no second instruction is received within a preset time, maintaining control of the driving state of the vehicle according to the first driving scene class, where the second instruction corresponds to the second prompt and is used to instruct switching the current driving scene of the vehicle to the second driving scene class.
In the third aspect, in another possible implementation, after no second response is received within the preset time, the method includes: determining that the design operating range of the vehicle at the second time does not satisfy the design operating range corresponding to the first scene class; and sending fault alarm information.
In the third aspect, in another possible implementation, after the fault alarm information is sent, the method further includes: judging whether the driver has taken over the vehicle; if it is determined that the driver has taken over the vehicle, sending an operation instruction for instructing the release of driving rights and sending a release notification to the driver; and if it is determined that the driver has not taken over the vehicle, sending an operation instruction for instructing a safe stop.
According to a fourth aspect, an intelligent driving system is provided, including an obtaining module, a determining module, a display module, and a control module. For a specific implementation of the intelligent driving system, refer to the steps of the intelligent driving method provided in the third aspect or any one of its possible implementations; details are not repeated here.
According to a fifth aspect, an intelligent driving system is provided, including a processor and a memory. The memory is configured to store computer-executable instructions; when the intelligent driving system runs, the processor executes the computer-executable instructions stored in the memory, so that the intelligent driving system performs the intelligent driving method according to the first aspect, the third aspect, any possible design of the first aspect, or any possible implementation of the third aspect. In addition, the intelligent driving system may further include a vehicle active execution unit, a sensor unit, and a human-machine interaction interface (or communication interface).
According to a fourth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores instructions that, when run on a computer, enable the computer to perform the intelligent driving method according to the first aspect, the third aspect, any possible design of the first aspect, or any possible implementation of the third aspect.
According to a fifth aspect, a computer program product containing instructions is provided, which, when run on a computer, enables the computer to perform the intelligent driving method according to the first aspect, the third aspect, any possible design of the first aspect, or any possible implementation of the third aspect.
According to a sixth aspect, a chip system is provided. The chip system includes a processor and a communication interface, and is configured to support an intelligent driving system in implementing the functions involved in the foregoing aspects. For example, the processor obtains characteristic parameters of the vehicle at the current moment and road attributes of the vehicle's driving scene within a future preset time period; compares the characteristic parameters at the current moment with the characteristic parameters of the standard scenes in a scene feature library, and compares the road attributes of the vehicle's driving scene within the future preset time period with the road attributes of the standard scenes in the scene feature library; determines, based on the comparison results, the total similarity between each scene class in the scene feature library and the driving scene of the vehicle at the current moment; determines the first scene class with the highest total similarity among the N scene classes as the driving scene at the current moment; and controls the driving state of the vehicle according to the determination result. In a possible design, the chip system further includes a memory, a vehicle active execution unit, a sensor unit, and a human-machine interaction interface, where the memory is configured to store the program instructions, data, and intelligent driving algorithms necessary for the intelligent driving system. The chip system may consist of chips, or may include chips and other discrete components.
According to a seventh aspect, the present invention provides an intelligent driving method. The method includes: obtaining characteristic parameters of the vehicle at the current moment and road attributes of the vehicle's driving scene within a future preset time period, where the characteristic parameters include structured semantic information, road attributes, and a traffic situation spectrum; comparing the characteristic parameters of the vehicle at the current moment with the characteristic parameters of the standard scenes in a scene feature library, and comparing the road attributes of the vehicle's driving scene within the future preset time period with the road attributes of the standard scenes in the scene feature library; determining, based on the comparison results, the first similarity and the second similarity between, respectively, the first standard scene and the second standard scene of each scene class in the scene feature library and the driving scene of the vehicle at the current moment, where the scene feature library includes N scene classes, each scene class includes M standard scenes, and each standard scene corresponds to characteristic parameters, M and N both being integers greater than or equal to 2; and determining the scene class corresponding to the driving scene at the current moment according to the first similarity and the second similarity of each of the N scene classes.
According to an eighth aspect, the present invention provides an intelligent system. The system includes: a perception fusion unit, configured to obtain characteristic parameters of the vehicle at the current moment and road attributes of the vehicle's driving scene within a future preset time period, where the characteristic parameters include structured semantic information, road attributes, and a traffic situation spectrum; and a scene class recognition unit, configured to: compare the characteristic parameters of the vehicle at the current moment with the characteristic parameters of the standard scenes in a scene feature library, and compare the road attributes of the vehicle's driving scene within the future preset time period with the road attributes of the standard scenes in the scene feature library; determine, based on the comparison results, the first similarity and the second similarity between, respectively, the first standard scene and the second standard scene of each scene class in the scene feature library and the driving scene of the vehicle at the current moment, where the scene feature library includes N scene classes, each scene class includes M standard scenes, and each standard scene corresponds to characteristic parameters, M and N both being integers greater than or equal to 2; and determine the scene class corresponding to the driving scene at the current moment according to the first similarity and the second similarity of each of the N scene classes.
For the technical effects brought by any one of the designs in the third to eighth aspects, refer to the technical effects of the intelligent driving method according to the first aspect, the third aspect, any possible implementation of the first aspect, or any possible implementation of the third aspect; details are not repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic block diagram of the principle according to an embodiment of this application;
FIG. 2 is a schematic composition diagram of an intelligent driving system according to an embodiment of this application;
FIG. 3 is a flowchart of an intelligent driving method according to an embodiment of this application;
FIG. 4a is a flowchart of a method for calculating a first probability according to an embodiment of this application;
FIG. 4b is a flowchart of a method for calculating a second probability and a third probability according to an embodiment of this application;
FIG. 4c is a flowchart of a method for calculating a fourth probability according to an embodiment of this application;
FIG. 5a is a flowchart of an intelligent switching method according to an embodiment of this application;
FIG. 5b is a flowchart of still another intelligent switching method according to an embodiment of this application;
FIG. 6a is a schematic diagram of a human-machine interaction interface according to an embodiment of this application;
FIG. 6b is a schematic diagram of another human-machine interaction interface according to an embodiment of this application;
FIG. 6c is a schematic diagram of another human-machine interaction interface according to an embodiment of this application;
FIG. 6d is a schematic diagram of another human-machine interaction interface according to an embodiment of this application;
FIG. 7 is a schematic composition diagram of still another intelligent driving system according to an embodiment of this application;
FIG. 8 is a schematic composition diagram of still another intelligent driving system according to an embodiment of this application.
Detailed Description of Embodiments
The method provided in the embodiments of this application is described below with reference to the accompanying drawings.
FIG. 1 is a schematic block diagram of the principle of the intelligent driving method according to an embodiment of this application. As shown in FIG. 1, the idea of the embodiments of this application is as follows: a scene feature library is preset, including scene classes, the standard scenes corresponding to the scene classes, and the structured semantic information, road attributes, and traffic situation spectrum corresponding to each standard scene. The structured semantic information, road attributes, and traffic situation spectrum obtained at the current moment are compared with those included in the scene feature library to find the scene class most similar to the driving scene at the current moment (for ease of description in the embodiments of this application, the driving scene at the current moment may be described as the current driving scene), and it is determined that the current driving scene belongs to that scene class. That is, the current driving scene is recognized by comprehensively considering factors such as structured semantic information, map information, and traffic situation information; the scene class to which the current driving scene belongs is determined; and the intelligent driving algorithm is switched automatically according to the recognition result, so that the driving state of the vehicle after switching suits the current driving scene, realizing intelligent driving of the vehicle. For example, according to the recognition result of the current driving scene, the driving state of the vehicle may be switched to suit a new driving scene, or the current driving state may be maintained.
该方法可以由图2所示的智能驾驶系统200执行,该智能驾驶系统200可以位于车辆上,该车辆可以为轿车、货车、卡车或其他任何类型的车辆,不予限制。如图2所示,该智能驾驶系统200可以包括但不限于:处理器210、存储器220、车辆主动执行单元230,传感器单元240,人机交互界面250,连接不同系统组件(包括存储器220、处理器210、车辆主动执行单元230,传感器单元240、人机交互界面250)的总线260。
其中,处理器210可以是一个中央处理器(Central Processing Unit,CPU),也可以是特定集成电路(Application Specific Integrated Circuit,ASIC),或者是被配置成实施本申请实施例的一个或多个集成电路,例如:一个或多个数字信号处理器(Digital Signal Processor,DSP),或,一个或者多个现场可编程门阵列(Field Programmable Gate Array,FPGA)。
所述存储器220存储有程序以及智能驾驶算法,所述程序代码可以被所述处理器210执行,使得所述处理器210执行本申请实施例所述的智能驾驶方法。例如,所述处理器210可以执行如图3中所示的步骤。当处理器210识别出当前驾驶场景后,可以根据与当前驾驶场景对应的智能驾驶算法对车辆进行智能驾驶。
所述存储器220可以包括易失性存储器形式的可读介质,例如随机存取存储单元(random access memory,RAM)和/或高速缓存存储器,还可以进一步包括只读存储器(read-only memory,ROM)。所述存储器220还可以包括具有一组(至少一个)程序模块的程序/实用工具,这样的程序模块包括但不限于:操作系统、一个或者多个应用程序、其它程序模块以及程序数据,这些示例中的每一个或某种组合中可能包括网络环境的实现。
该车辆主动执行单元230包括但不限于制动系统、转向系统、驱动系统、照明系统，每一个系统都具备接受上层指令，并执行指令动作的能力。处理器210可以按照智能驾驶算法的规定向车辆主动执行单元230发送操作指令，使车辆驾驶状态符合当前驾驶场景。
传感器单元240包括但不限于摄像头、毫米波雷达、激光雷达、地图、定位模块等系统，主要用于收集车辆周围的事物的相关信息。其中，定位模块可以是全球定位系统(Global Positioning System，GPS)、格洛纳斯系统或者北斗系统。
人机交互模块250可以为设置在车辆上的显示屏或者触摸屏,可称为人机交互界面(human machine interface,HMI),驾驶员可以通过触摸操作向车辆发送操作指令,人机交互模块250可以将处理器210生成的指令或者信息显示给驾驶员。类似的,人机交互模块250也可以使用语音等方式实现车辆与驾驶员之间的交互,本申请不对人机交互模块250的形态和运行方式进行限定。
总线260可以为表示几类总线结构中的一种或多种,包括存储器总线或者存储器控制器、外围总线、图形加速端口、处理器或者使用多种总线结构中的任意总线结构的局域总线。
下面结合附图2对本申请实施例提供的智能驾驶技术进行阐述。需要说明的是,本申请下述实施例中各参数的名字只是一个示例,具体实现中也可以是其他的名字,本申请实施例对此不作具体限定。
图3为本申请实施例提供的一种智能驾驶方法,如图3所示,该方法可以包括:
步骤301:获取车辆当前时刻的特征参数以及车辆在未来预设时间段内驾驶场景的道路属性。
其中,所述特征参数可以包括结构化语义信息、道路属性以及交通态势频谱,还可以包括其他用于表征驾驶场景的特征参数,不予限制。
本申请实施例中，结构化语义信息可以由驾驶场景中的物体信息转换而成，可以用于表征驾驶场景中的物体属性，如：结构化语义信息可以包括物体的坐标、速度、加速度等参数信息。其中，驾驶场景中的物体可以为人、树木、花草、建筑物、山川河流等等。示例性的，可以通过车辆上的外部感知模块(如：视觉传感器(或摄像头)、激光雷达、毫米波雷达、全球定位系统(global positioning system，GPS)模块和高精度地图等)实时采集当前时刻车辆行驶时周边的物体信息，将采集到的物体信息经过感知融合算法处理后得到结构化语义信息。其中，感知融合算法为图像处理中的常用算法，不同驾驶场景对应的感知融合算法不同，本申请实施例中，经过感知融合算法处理后得到结构化语义信息的过程可参照现有技术，不予赘述。
道路属性可以用于表征车辆行驶的道路的类型,如:可以为高速公路、城际公路、乡村小路、地库行车道、大马路等等。示例性的,可以通过车辆上的GPS定位模块和高精度地图确定当前时刻的道路属性,如:通过GPS定位模块定位车辆当前运行位置,查找高精度地图中车辆定位的位置,根据该位置的环境确定道路属性。示例性的,可以通过车辆上的GPS定位模块、高精度地图和全局规划信息确定车辆在未来预设时间内驾驶场景的道路属性。其中,全局规划信息用于规定用户此次的行车路线,可以由用户预先设置并存储在车辆上。未来预设时间段可以指当前时刻之后的时间段,该时间段可以根据需要进行设置,不予限制。例如,用户此次行车路线为从A地出发经过B、C、D到达E地,若车辆当前定位到B地,未来预设时间段设置为2个小时,这2小时的行车时间可以使车辆行驶到C地,且通过查看高精度地图确定从B到C需要经过一段高速公路,则可以确定车辆在未来预设时间段内驾驶场景的道路属性为:高速公路。
交通态势频谱可以用于表征车辆的运行状态以及交通参与车辆的运行状态，车辆的运行状态可以包括但不限于：车辆的车速、方向盘转角、车辆横摆角等。交通参与车辆可以指在本车周围行驶的车辆，交通参与车辆的运行状态可以包括但不限于：交通参与车辆距离本车的距离、交通参与车辆的运行速度等。示例性的，可以收集车辆执行模块(如：制动系统、转向系统、驱动系统、照明系统等)在一段时间内上报的车辆状态参数信息，对车辆状态参数信息进行傅里叶变换，生成车辆状态参数信息的频谱特征FA，同时，通过车辆上的传感器单元(如：视觉传感器(或摄像头)、激光雷达、毫米波雷达、GPS定位模块和高精度地图等)采集一段时间内交通参与车辆的运行状态信息，对交通参与车辆的运行状态信息进行傅里叶变换，生成交通参与车辆的运行状态信息的频谱特征FOA，将频谱特征FA和频谱特征FOA组成当前时刻的交通态势频谱；其中，一段时间可以指包括当前时刻在内的一段时间。傅里叶变换可以为现有常用的傅里叶变换算法，不再赘述。
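为便于理解上述频谱特征的生成过程，下面给出一个对状态参数序列做离散傅里叶变换并取幅度谱的最小示意(其中函数名spectrum_feature、信号取值与n_bins取值均为本示例的假设，并非对本申请的限定)：

```python
import cmath

def spectrum_feature(signal, n_bins=4):
    """对一段时间内采集的状态参数序列做离散傅里叶变换，
    取前 n_bins 个频率分量的幅值作为频谱特征。"""
    n = len(signal)
    feats = []
    for k in range(n_bins):
        s = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(signal))
        feats.append(abs(s))
    return feats

# 车辆自身状态(FA，如车速序列)与交通参与车辆状态(FOA，如与前车距离序列)分别求频谱
fa = spectrum_feature([10.0, 10.2, 10.1, 9.9, 10.0, 10.3, 10.1, 10.0])
foa = spectrum_feature([30.0, 29.5, 29.8, 30.2, 30.1, 29.9, 30.0, 30.4])
traffic_spectrum = fa + foa   # 拼接后即为本示例中的交通态势频谱
```

实际实现中可以用快速傅里叶变换(FFT)等效率更高的算法替代上述直接求和。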
步骤302:比较车辆当前时刻的特征参数与场景特征库中标准场景的特征参数,以及比较车辆在未来预设时间段内驾驶场景的道路属性与场景特征库中标准场景的道路属性,根据比较结果确定每个场景类与所述车辆当前时刻的驾驶场景的总相似度。
其中,场景特征库中可以包括N个场景类,每个场景类可以由M个标准场景表征,每个标准场景对应有如下特征参数:结构化语义信息、道路属性以及交通态势频谱。N为大于或等于1的整数,M为大于或等于1的整数。场景类为目前常见的一些驾驶场景。可选的,可以收集大量车辆位于不同时刻的驾驶场景,通过统计分析算法对收集到的驾驶场景进行分析处理得到N个场景类、每个场景类对应的标准场景以及每个标准场景对应的特征参数。其中,统计分析算法为现有算法,不予赘述。除此之外,还可以通过统计分析算法分析得到不同道路属性间的相似度,将不同道路属性间的相似度保存在场景特征库中,不同道路属性间的相似度越高,表示道路情况比较接近,反之,则表示道路情况相差较大,如:高速公路与快速干道相差不大,二者间的相似度较高;高速公路和山间小路相差较大,二者之间的相似度较低。
其中，场景特征库可以以列表的形式预先存储在智能驾驶系统上，并动态维护，如：当有新的场景类出现时，可以将该场景类对应的标准场景以及标准场景对应的特征参数实时添加到该场景特征库中；或者，场景特征库存储在其他设备上，在执行步骤302时从其他设备获取场景特征库。
需要说明的是，每个标准场景对应的结构化语义信息、道路属性以及交通态势频谱不限于存储在同一场景特征库中，还可以将每个标准场景对应的结构化语义信息、道路属性以及交通态势频谱分开单独存储在不同的特征库中，如：可以将N个场景类，每个场景类对应的M个标准场景的结构化语义信息以列表的形式存储在场景语义库中；可以将N个场景类，每个场景类对应的M个标准场景的道路属性以列表的形式存储在场景道路属性库中；可以将N个场景类，每个场景类对应的M个标准场景的交通态势频谱以列表的形式存储在场景交通态势特征库中。
其中，场景类的总相似度可以用于表征场景类与当前驾驶场景间的相似程度，总相似度越高，表示当前驾驶场景越有可能属于该场景类；总相似度越低，则表示当前驾驶场景属于该场景类的可能性越小。
步骤303:将N个场景类中总相似度最高的第一场景类确定为当前时刻的驾驶场景。
可选的，可以将N个场景类的总相似度按照从大到小的顺序进行排列，将排到最前面的场景类确定为当前时刻的驾驶场景；或者，将N个场景类的总相似度按照从小到大的顺序进行排列，将排到最后的场景类确定为当前时刻的驾驶场景。
步骤304:根据确定结果控制车辆进行智能驾驶。
其中,控制车辆进行智能驾驶可以指控制车辆的制动系统、转向系统、驱动系统、照明系统等车辆上的车辆执行单元,使车辆当前驾驶状态满足当前驾驶场景。
在本申请的另一种可能的实现方式中，可以不确定并比较每个场景类的总相似度，而是根据所述车辆当前时刻的特征参数以及所述车辆在未来预设时间段内驾驶场景的道路属性，确定每个场景类中的若干标准场景与当前时刻的驾驶场景的相似度，进而确定当前时刻驾驶场景对应的场景类。
具体来说,可以当获取车辆当前时刻的特征参数以及车辆在未来预设时间段内驾驶场景的道路属性之后,比较所述车辆当前时刻的特征参数与场景特征库中标准场景的特征参数、以及比较所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性,根据比较结果分别确定所述场景特征库中每个场景类的第一标准场景和第二标准场景与所述车辆当前时刻的驾驶场景的第一相似度和第二相似度;再根据N个场景类中的每个场景类的第一相似度和第二相似度确定当前时刻的驾驶场景对应的场景类。
在本申请的另一种可能的实现方式中,还可以当获取车辆当前时刻的特征参数以及车辆在未来预设时间段内驾驶场景的道路属性之后,比较所述车辆当前时刻的特征参数与场景特征库中标准场景的特征参数、以及比较所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性,根据比较结果分别确定场景库中每个场景类的每个标准场景与所述车辆当前时刻的驾驶场景的相似度,并将每个场景类中相似度最高的标准场景的相似度作为该场景类的相似度,再根据每个场景类的相似度确定当前时刻的驾驶场景对应的场景类。
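上述"将每个场景类中相似度最高的标准场景的相似度作为该场景类的相似度"的做法可以示意如下(示例中以物体类别集合的交并比近似相似度，函数名与数据取值均为本示例的假设)：

```python
def class_similarity(cur_scene, std_scenes, sim):
    """取场景类内与当前驾驶场景相似度最高的标准场景的相似度作为该场景类的相似度。"""
    return max(sim(cur_scene, s) for s in std_scenes)

def recognize_by_max(cur_scene, library, sim):
    """在各场景类中选相似度最高者作为当前时刻驾驶场景对应的场景类。"""
    return max(library, key=lambda c: class_similarity(cur_scene, library[c], sim))

# 示例相似度：以物体类别集合的交并比近似(仅为本示例的简化假设)
def set_iou(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0
```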
图3所示的智能驾驶方法可以基于结构化语义信息、道路属性以及交通态势频谱三种维度识别车辆当前驾驶场景属于的场景类,使得场景类识别时参考的信息更加全面、可靠,提高了场景识别的准确性,提高了智能驾驶的可实现性。同时,基于结构化语义信息而不是图片识别场景类,降低了运算复杂度。
具体的,步骤302比较当前时刻的特征参数与场景特征库中标准场景的特征参数,以及比较车辆在未来预设时间段内驾驶场景的道路属性与场景特征库中标准场景的道路属性,根据比较结果确定每个场景类的总相似度可以包括下述(1)~(4):
(1)将当前时刻的结构化语义信息与场景特征库中标准场景的结构化语义信息进行比较,得到标准场景的第一相似度,对属于场景类的所有标准场景的第一相似度进行组合计算,得到场景类的第一概率。
具体的,该确定过程可参照图4a所示,可以包括:
S401a:统计分析得到N个场景类。
S402a:统计分析每个场景类对应的M个标准场景。
S403a:将所有标准场景对应的图像信息经过感知融合算法处理得到所有标准场景对应的结构化语义信息,将场景类、标准场景以及结构化语义信息对应存储在场景语义库中。
S404a:获取当前时刻的结构化语义信息。
S405a:将当前时刻的结构化语义信息与场景语义库中标准场景对应的结构化语义信息进行比较,得到当前驾驶场景和标准场景间的相似度。
S406a:对于任一场景类,将属于该场景类的标准场景对应的相似度进行组合计算,得到该场景类的第一概率。其中,组合计算可以指加权求和或者求和平均等。
可选的，为了降低计算复杂度，在图4a中，可以对所有场景类对应的标准场景的结构化语义信息进行筛选，将不含有实时结构化语义信息的标准场景的相似度设置(或赋值)为0，如：可以将不含有实时结构化语义信息的标准场景的第一相似度、第二相似度、第三相似度以及第四相似度设置为0，即这些标准场景无需与实时驾驶场景对应的结构化语义信息进行比较。其中，不含有实时结构化语义信息可以指与实时驾驶场景对应的结构化语义信息所表征的物体完全不同的结构化语义信息。例如，标准场景1对应的结构化语义信息可以表征高山、树木等物体的属性，而实时驾驶场景对应的结构化语义信息用于表征农舍、草原等物体的属性，二者毫不相关，则可以将标准场景1对应的结构化语义信息确定为不含有实时结构化语义信息。
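上述(1)中的第一相似度比较、组合计算以及置0筛选可以示意如下(以物体类别集合的交并比近似结构化语义相似度，仅为本示例的简化假设，并非对本申请的限定)：

```python
def semantic_similarity(cur_objs, std_objs):
    """以物体类别集合的交并比近似结构化语义相似度(简化假设)；
    标准场景不含有实时结构化语义信息(无任何共同物体)时，相似度直接置 0。"""
    cur, std = set(cur_objs), set(std_objs)
    if not cur & std:
        return 0.0
    return len(cur & std) / len(cur | std)

def class_first_prob(cur_objs, std_scene_objs, weights=None):
    """对属于同一场景类的所有标准场景的第一相似度做组合计算。"""
    sims = [semantic_similarity(cur_objs, s) for s in std_scene_objs]
    if weights is None:
        return sum(sims) / len(sims)                       # 求和平均
    return sum(w * s for w, s in zip(weights, sims))       # 加权求和
```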
(2)将当前时刻的道路属性与场景特征库中标准场景的道路属性进行比较,得到标准场景与当前驾驶场景间的第二相似度,对属于场景类的所有标准场景的第二相似度进行组合计算,得到场景类的第二概率;将车辆在未来预设时间段内驾驶场景的道路属性与场景特征库中标准场景的道路属性进行比较,得到标准场景与当前驾驶场景间的第三相似度,对属于场景类的所有标准场景的第三相似度进行组合计算,得到场景类的第三概率。
具体的,该确定过程可参照图4b所示,可以包括:
S401b:统计分析每个场景类对应的标准场景的道路属性以及不同道路属性间的相似度。
S402b:将场景类、标准场景、道路属性以及不同道路属性间的相似度对应存储在场景道路属性库中。
S403b:获取当前时刻的道路属性。
S404b:将当前时刻的道路属性与场景道路属性库中标准场景对应的道路属性进行比较,得到当前驾驶场景和标准场景间的第二相似度。
S405b:对于任一场景类,将属于该场景类的标准场景对应的第二相似度进行组合计算,得到该场景类的第二概率。
S406b:获取车辆在未来预设时间段内驾驶场景的道路属性。
S407b:将车辆在未来预设时间段内驾驶场景的道路属性与场景道路属性库中标准场景对应的道路属性进行比较,得到当前驾驶场景和标准场景间的第三相似度。
S408b:将属于该场景类的标准场景对应的第三相似度进行组合计算,得到该场景类的第三概率。
其中，将当前时刻的道路属性与场景道路属性库中标准场景对应的道路属性进行比较，得到当前驾驶场景和标准场景间的相似度可以包括：若当前时刻的道路属性与场景道路属性库中标准场景对应的道路属性相同，则确定相似度为1，若不同，则根据场景道路属性库中预先存储的不同道路属性间的相似度确定当前时刻的道路属性与场景道路属性库中标准场景对应的道路属性的相似度。其中，组合计算可以指加权求和或者求和平均等。
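上述道路属性相似度的确定方式(相同则为1，不同则查预先存储的相似度表)可以示意如下(表中的道路属性组合及相似度取值均为本示例的假设)：

```python
# 预先存储的不同道路属性间的相似度(取值仅为示例假设)
ROAD_SIM = {
    frozenset({"高速公路", "快速干道"}): 0.9,   # 道路情况接近，相似度较高
    frozenset({"高速公路", "山间小路"}): 0.1,   # 道路情况相差较大，相似度较低
}

def road_similarity(a, b):
    if a == b:
        return 1.0                                # 道路属性相同则相似度为 1
    return ROAD_SIM.get(frozenset({a, b}), 0.0)   # 未登记的组合默认 0(本示例假设)
```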
(3)将当前时刻的交通态势频谱与场景特征库中标准场景的交通态势频谱进行比较,得到标准场景与当前驾驶场景间的第四相似度,对属于场景类的所有标准场景的第四相似度进行组合计算,得到场景类的第四概率。
具体的,该确定过程可参照图4c所示,可以包括:
S401c:统计分析每个场景类对应的标准场景下的车辆状态信息的频谱特征以及标准场景下的交通参与车辆的观测参数的频谱特征。
S402c:将车辆状态信息的频谱特征以及交通参与车辆的观测参数的频谱特征组成标准场景的交通态势频谱,将场景类、标准场景、交通态势频谱对应存储在场景交通态势库中。
S403c:获取当前时刻的交通态势频谱。
S404c:将当前时刻的交通态势频谱与场景交通态势频谱库中标准场景对应的交通态势频谱进行比较,得到当前驾驶场景和标准场景间的相似度。
S405c:对于任一场景类,将属于该场景类的标准场景对应的相似度进行组合计算,得到该场景类的第四概率。
其中,组合计算可以指加权求和或者求和平均等。
(4)根据场景类的第一概率、场景类的第二概率、场景类的第三概率以及场景类的第四概率得到场景类的总相似度。
示例性的，可以将场景类的第一概率、场景类的第二概率、场景类的第三概率以及场景类的第四概率进行求和运算得到场景类的总相似度，该求和运算可以为加权求和，也可以为求和平均等。例如，假设场景类的第一概率为P1、场景类的第二概率为P2、场景类的第三概率为P3，场景类的第四概率为P4，则场景类的总相似度S可以为：S=a1×P1+a2×P2+a3×P3+a4×P4。
其中，a1、a2、a3和a4为加权系数，可以根据需要进行设置，不予限制。
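上述对四个概率加权求和得到总相似度、并取总相似度最高的场景类的过程可以示意如下(加权系数取等权仅为本示例的假设)：

```python
def total_similarity(p1, p2, p3, p4, a=(0.25, 0.25, 0.25, 0.25)):
    """对场景类的第一至第四概率做加权求和，得到该场景类的总相似度。"""
    return a[0] * p1 + a[1] * p2 + a[2] * p3 + a[3] * p4

def pick_scene_class(class_probs):
    """class_probs: {场景类名: (P1, P2, P3, P4)}；返回总相似度最高的场景类。"""
    return max(class_probs, key=lambda c: total_similarity(*class_probs[c]))
```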
需要说明的是，本申请实施例中，标准场景的各个相似度(第一相似度、第二相似度、第三相似度以及第四相似度)可以用于表征当前驾驶场景属于该标准场景的可能性，相似度越大，表示当前驾驶场景越可能属于该标准场景；相似度越小，则表示当前驾驶场景属于该标准场景的可能性越小。
具体的,步骤304根据确定结果控制车辆驾驶如图5a所示,可以包括:
S501:判断第一场景类是否与前一时刻的场景类相同;
S502:若第一场景类与前一时刻的场景类相同,则判断车辆当前的设计运行范围是否满足第一场景类对应的设计运行范围,若满足,则执行步骤S503。若不满足,则执行步骤S504。
S503:维持当前驾驶状态不变。
S504:发送故障告警信息(或者人工接管请求信息)。
S505:若第一场景类与前一时刻的场景类不同,则判断车辆当前的设计运行范围是否满足第一场景类对应的设计运行范围,若满足,则执行步骤S506。若不满足,则执行S507。
S506:将车辆从当前驾驶状态切换到第一场景类对应的驾驶状态;
S507:判断车辆当前的设计运行范围是否满足前一时刻的场景类对应的设计运行范围,若车辆当前的设计运行范围满足前一时刻的场景类对应的设计运行范围,则执行步骤S508;若车辆当前的设计运行范围不满足前一时刻的场景类对应的设计运行范围,则执行步骤S509。
S508:发送场景类切换不成功信息,并维持当前驾驶状态不变。
S509:发送故障告警信息(或者人工接管请求信息)。
其中,前一时刻可以指当前时刻之前的时刻。
其中，车辆的设计运行范围可以用于表征车辆在某个驾驶场景下正常运行时的驾驶状态，可以包括驾驶员状态、车辆故障状态、控制器硬件故障状态以及结构化语义信息等多个方面。可选的，可以统计分析N个场景类对应的设计运行范围，将N个场景类对应的设计运行范围预先存储在场景特征库中。其中，统计分析N个场景类对应的设计运行范围的过程可参照现有技术，不再赘述。
其中,故障告警信息可以用于指示车辆当前可能处于故障状态、不适合行驶。人工接管请求信息可以用于请求用户自己来手动控制车辆进行智能驾驶。
其中,将车辆从当前驾驶状态切换到第一场景类对应的驾驶状态可以包括:获取第一场景类对应的智能驾驶算法,根据第一场景类对应的智能驾驶算法切换车辆的运行状态。智能驾驶算法与场景类对应,N个场景类可能一一对应N个智能驾驶算法。可选的,可以将N个场景类对应的N个智能驾驶算法预先存储在智能驾驶系统中。
其中,场景类切换不成功信息可以用于指示未能成功将车辆当前驾驶状态切换到第一场景类对应的驾驶状态。发送场景类切换不成功信息,并维持当前驾驶状态不变可以包括:通过人机交互模块向用户发送场景类切换不成功信息,并执行当前场景类对应的智能驾驶算法不变。
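图5a所示的切换判断流程(S501~S509)可以示意如下(其中odd_ok为本示例假设的设计运行范围判断函数，由外部提供；返回的字符串仅用于标识各分支的处理结果)：

```python
def switch_decision(first_cls, prev_cls, odd_ok):
    """first_cls: 识别出的第一场景类；prev_cls: 前一时刻的场景类；
    odd_ok(cls): 判断车辆当前的设计运行范围是否满足某场景类(本示例假设)。"""
    if first_cls == prev_cls:
        if odd_ok(first_cls):
            return "维持当前驾驶状态"                 # S503
        return "发送故障告警/人工接管请求"            # S504
    if odd_ok(first_cls):
        return "切换到第一场景类对应的驾驶状态"        # S506
    if odd_ok(prev_cls):
        return "场景类切换不成功，维持当前驾驶状态"    # S508
    return "发送故障告警/人工接管请求"                # S509
```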
可选的,在车辆行驶过程中,为了提高用户体验,在根据确定结果控制车辆进行智能驾驶之前,如图5b所示,所述方法还可以包括:
S510:获取智能驾驶指示,确定该智能驾驶指示用于指示是否停止车辆的智能驾驶(即是否智能驾驶车辆);其中,智能驾驶指示可以由驾驶员或者云端操作发出。
S511：若智能驾驶指示用于指示停止车辆的智能驾驶，则向车辆主动执行单元发送用于指示释放驾驶权的操作指令，并向驾驶员发出释放通知。
其中,向车辆主动执行单元发送用于指示释放驾驶权的操作指令可以包括但不限于释放方向盘、刹车、油门等的控制权。释放通知可以用于通知驾驶员(或用户)已停止车辆的智能驾驶,即车辆的智能驾驶权利已释放。可选的,释放通知可以包括但不限于语音告警、灯光告警、安全带收紧等一种或多种操作。
S512:若智能驾驶指示用于指示智能驾驶车辆(即不停止车辆的智能驾驶),则判断是否接收到用于指示切换到第二场景类对应的驾驶状态的场景切换指示,若接收到场景切换指示,则执行步骤S513;若未接收到场景切换指示,则根据图5a的执行结果控制车辆进行智能驾驶。
S513:获取第二场景类对应的智能驾驶算法,根据第二场景类对应的智能驾驶算法切换车辆的运行状态。
如：根据第二场景类对应的智能驾驶算法向车辆主动执行单元(制动系统、转向系统、驱动系统、照明系统等)发出操作指令，使其按照操作指令工作。
具体的,如图5b所示,在发送故障告警信息(或者人工接管请求信息)后,所述方法还可以包括:
S514:判断驾驶员是否已接管车辆,若接管,执行步骤S515,否则,执行步骤S516。
例如,可以通过车辆主动执行单元输入的车辆状态参数判断驾驶员是否已接管车辆;
S515:向车辆主动执行单元发送用于指示释放驾驶权的操作指令,并向驾驶员发出释放通知。
S516:向车辆主动执行单元发送用于指示安全停车的操作指令。
其中,用于指示安全停车的操作指令可以包括但不限于方向盘缓慢归中、油门释放、一定的刹车百分比等。
如此,可以通过判断是否满足场景类的设计运行范围和对切换交互过程的监管,确保了使用每一个场景类算法的安全性;同时,本申请实施例确保智能驾驶算法在不同场景内的连贯切换,实现了全工况模式的智能驾驶。
下面以某型号轿车从A点智能驾驶到B点为例对本申请实施例提供的智能驾驶算法进行描述，其中，该轿车具有制动系统、转向系统、驱动系统、照明系统等车辆主动执行单元，这些系统都具备接受来自智能驾驶系统指令的能力，同时该轿车拥有摄像头、毫米波雷达、激光雷达、地图、定位模块等传感器单元。且从A点智能驾驶到B点的过程中会有城区主干道2车道、城区主干道4车道、城区3叉路口、城区4叉路口、城区高架桥路段、城区高楼间阴影路口、城区修路路段、城区垃圾堆放路段、乡村道路、高速公路收费站、高速公路匝道、高速公路修路路段、高速公路交通事故路段…等路段，每个路段可能会有堵车、道路通畅等多种情况出现。
1)对从A点智能驾驶到B点的场景进行分类:将A点到B点的场景统计并分析后,可以分成以下8类:
a、城区结构化无堵车场景类{城区主干道2车道通畅场景,城区主干道4车道通畅场景,城区高架桥路段通畅场景…}。
b、交叉路口场景类{城区3叉路口,城区4叉路口…}。
c、非结构化不规则道路场景类{城区修路路段场景,乡村道路场景,高速公路匝道场景,高速公路修路路段场景,高速公路交通事故路段场景…}。
d、高速公路无堵车场景类{高速公路2车道场景，高速公路3车道场景…}。
e、城区和高速结构化堵车场景类{城区主干道2车道堵车场景,城区主干道4车道堵车场景,城区高架桥路段堵车场景,高速公路2车道堵车场景…}。
f、隧道场景类{城区高楼间阴影路口,隧道…}。
g、泊车场景类{停车场A场景,停车场B场景…}。
h、收费站场景类{高速入口场景,高速出口场景…}。
2)统计分析每个场景类的标准场景:
a、城区结构化无堵车场景类{标准场景1,标准场景2,…,标准场景20}。
b、交叉路口场景类{标准场景1,标准场景2,…,标准场景20}。
c、非结构化不规则道路场景类{标准场景1,标准场景2,…,标准场景20}。
d、高速公路无堵车场景类{标准场景1,标准场景2,…,标准场景20}。
e、城区和高速结构化堵车场景类{标准场景1,标准场景2,…,标准场景20}。
f、隧道场景类{标准场景1,标准场景2,…,标准场景20}。
g、泊车场景类{标准场景1,标准场景2,…,标准场景20}。
h、收费站场景类{标准场景1,标准场景2,…,标准场景20}。
3)获取每一个场景类的结构化语义信息、道路属性以及交通态势频谱,建立场景特征库。
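按照上述1)~3)建立的场景特征库可以组织为如下的数据结构(其中的场景类、标准场景及各特征取值均为本示例的假设，并非对本申请的限定)：

```python
# 场景特征库：场景类 -> 标准场景 -> 三个维度的特征参数
scene_feature_library = {
    "城区结构化无堵车场景类": {
        "标准场景1": {
            "semantic": ["车道线", "机动车", "交通标志"],   # 结构化语义信息(示例)
            "road": "城区主干道",                          # 道路属性(示例)
            "spectrum": [8.0, 0.6, 0.2],                  # 交通态势频谱(示例)
        },
    },
    "交叉路口场景类": {
        "标准场景1": {
            "semantic": ["红绿灯", "斑马线", "行人", "机动车"],
            "road": "交叉路口",
            "spectrum": [4.5, 2.1, 1.3],
        },
    },
}
```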
4)根据实时场景的结构化语义信息和每一个标准场景的结构化语义信息集对比求相似度,得到从感知维度计算的当前驾驶场景属于每一个场景类的概率;
根据实时场景的道路属性信息、全局规划点的道路属性信息和每一个场景类的道路属性对比,得到从地图维度计算的当前驾驶场景属于每一个场景类的概率,预期场景属于每一个场景类的概率;
根据实时场景的车辆状态参数信息和每一个场景类的车辆状态参数特征频谱的相似性、交通参与车辆的观测参数信息和交通参与车辆的观测参数特征频谱的相似性,得到从交通态势维度计算的当前驾驶场景属于每一个场景类的概率;
根据以上3个维度的概率，综合计算实时场景和每一场景类的总相似度，总相似度最高的场景类为识别出的场景类。
5)若识别出场景类为交叉路口场景类,且正在运行的智能驾驶算法为城区结构化无堵车场景类算法,根据设计运行范围判断信息判断是否满足交叉路口场景类设计运行范围:
1)如果满足,发送切换到交叉路口场景类的指令;
2)如果不满足,再次判断是否满足城区结构化无堵车场景类的设计运行范围:a)如果满足,发送维持城区结构化无堵车场景指令和场景类切换到交叉路口场景类不成功信息;b)如果不满足,发送故障告警信息和人工接管请求信息;
6)若识别场景类为交叉路口场景类,且正在运行的智能驾驶算法也是交叉路口场景类算法,根据设计运行范围判断信息判断是否满足交叉路口场景类设计运行范围:如果满足,发送维持交叉路口场景类的指令,如果不满足,发送故障告警信息和人工接管请求信息;
7)根据驾驶员或云端操作意图判断是否有停止智能驾驶系统的指令信息:
如果有，向车辆主动执行模块发送释放油门、档位、方向盘、刹车等系统的控制权指令，并通过HMI告知驾驶员；如果没有，判断驾驶员是否有让驾驶系统切换到某个场景类的意图，如果有，向感知融合算子选择模块和决策控制算子模块发送切换到该场景类对应算法的指令，如果没有，接收来自场景类切换判断模块的信息；
当信息为切换到交叉路口场景类的指令时,发送切换到交叉路口场景类对应的智能驾驶算法;
当信息为发送维持城区结构化无堵车场景类指令和场景类切换到交叉路口场景类不成功信息时,发送维持城区结构化无堵车场景类指令对应的智能驾驶算法,并向交互模块发送场景类切换到交叉路口场景类不成功信息;
当信息为故障告警信息和人工接管请求信息时，发送故障告警信息和人工接管请求，当接收到人工接管的反馈(来自车辆执行模块的车辆状态参数信息)时，释放油门、档位、方向盘、刹车等系统的控制权指令，并通过HMI告知驾驶员，当一段时间内仍没有接收到人工接管的反馈时，控制车辆安全停车(向油门、档位、方向盘、刹车等系统发送控制指令)；
当信息为维持交叉路口场景类的指令时,发送维持交叉路口场景类对应的智能驾驶算法。
可以理解的是,上述车辆为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对执行上述智能驾驶方法的车辆或者智能驾驶系统进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在本申请所提供的一种实现方式中,在场景切换的过程中,车辆中的智能驾驶系统和驾驶员的交互可以预设有多种场景切换模式,驾驶员可以预先选择其中的一种场景切换模式。关于车辆可能提供的场景切换模式,具体如下:
第一种,切换仅通知模式。在自动化程度较高的智能驾驶车辆,可以提供切换仅通知模式,在这种模式下,当车辆判断需要进行场景切换时,可以自行进行场景切换,不需要经过驾驶员的确认。在场景切换的同时,可以通过人机交互界面250或者语音通知驾驶员车辆已进行了场景切换。进一步的,驾驶员也可以设置让车辆不通知场景切换的信息。
具体来说，在国际自动机工程师学会(SAE International)制定的标准中，将驾驶自动化划分为L0至L5多个级别。其中，第四级(Level 4，L4)对应高度的自动化，具体是指由自动驾驶系统完成所有驾驶操作，驾驶员并不必要根据系统请求提供应答，在实践中，车辆可以在地图绘制完善的区域完成人类司机能够完成的多数任务。第五级(Level 5，L5)对应完全的自动化，具体是指自动驾驶系统能够在不限定道路和环境条件的情况下完成所有的驾驶操作，驾驶员仅在必要的情况下接管车辆。通常来说，在实现L4和L5的车辆中，车辆可以在大多数情况下实现自动驾驶，而不需要驾驶员的介入，从而可以提供切换仅通知模式。
第二种,默认同意模式。对于自动化程度还不足够高的智能驾驶车辆,可以提供默认同意模式,在这种模式下,当车辆判断需要进行场景切换时,先通过人机交互界面或者语音向驾驶员发出请求。驾驶员可以响应该请求,指示车辆进行场景切换;或者当驾驶员没有在预设的时间内响应该请求时,车辆默认驾驶员同意该场景切换请求。在这种情况下,车辆进行驾驶场景的切换。例如,在无人驾驶级别的第三级(Level 3,L3),由车辆的自动驾驶系统完成所有的驾驶操作,驾驶员根据车辆的自动驾驶系统的请求提供适当的应答,并随时接管车辆的控制权。通常来说,在实现L3的车辆中,可以提供默 认同意模式。
第三种,默认拒绝模式。对于自动化程度较低的智能驾驶车辆,可以提供默认拒绝模式。在这种情况下,当车辆判断需要进行场景切换时,先通过人机交互界面或者语音向驾驶员发出请求。例如,车辆根据当前时刻的特征参数等信息,建议选择场景特征库中的第一驾驶场景类作为车辆的驾驶场景,可以发出请求,提示驾驶员进行确认。如果驾驶员没有在预设的时间内响应该请求,则车辆默认驾驶员拒绝该场景切换请求。在这种情况下,车辆将保持前一时刻的驾驶场景。如果车辆当前的设计运行范围不满足前一时刻的驾驶场景对应的设计运行范围,则车辆发出故障告警信息,并请求驾驶员接管车辆。当发送故障告警信息后,间隔预设的时间(例如3秒)后判断驾驶员是否已经接管了该车辆,如果确定驾驶员已经接管了该车辆,则发送指示释放驾驶权的操作指令,并向驾驶员发送驾驶权释放通知;如果驾驶员未在预设的时间内接管车辆,则智能驾驶系统发送指示安全停车的操作指令,以保证驾驶安全。
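上述三种场景切换模式下，对驾驶员响应(含预设时间内未响应的超时情形)的处理可以示意如下(模式名称与返回约定均为本示例的假设)：

```python
def resolve_switch_request(mode, driver_response):
    """mode 取 "notify_only"/"default_accept"/"default_reject"(名称为本示例假设)；
    driver_response 取 "accept"/"reject"/None，None 表示预设时间内未响应。
    返回 True 表示执行场景切换。"""
    if mode == "notify_only":
        return True                      # 切换仅通知模式：自行切换，无需驾驶员确认
    if driver_response is not None:
        return driver_response == "accept"
    # 超时：默认同意模式视为同意切换，默认拒绝模式视为拒绝切换
    return mode == "default_accept"
```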
需要指出的是,上述三种场景切换模式仅为举例说明本申请可以提供多种场景切换的过程中车辆和驾驶员的交互方式,并不对本申请进行限定。同样,上述说明中,将车辆的无人驾驶级别与车辆可能提供的切换模式进行对应,仅为一种可能的实现方式,并不对本申请进行限定。
图6a为本申请实施例提供的一种人机交互界面的示意图。如图6a所示,人机交互界面可能包括如下部分:
导航部分,用于表示车辆当前所在的区域的地图以及车辆的具体位置、行驶方向等信息,如图所示,车辆正在沿徽州大道向北行驶;
人工控制车辆按钮,用于当驾驶员触碰该按钮时,驾驶权从车辆转移给驾驶员。虽然车辆采用自动驾驶技术,但通常会给予驾驶员接管车辆的权限。因此,可以在人机界面中设置人工控制车辆按钮,用来控制驾驶权的转移。类似的,人工控制车辆按钮还可以设置成,如果当前是驾驶员人工控制车辆行驶时,当驾驶员触碰人工控制车辆按钮后,驾驶权转移给车辆,开始进行自动驾驶。人工控制车辆按钮处还可以增加显示车辆当前是处于自动驾驶状态还是由驾驶员人工控制的状态栏;
车速显示部分,用于显示车辆当前速度;
剩余油量显示部分,用于显示车辆当前剩余的油量;类似的,当车辆是以电作为动力或者是油电混合动力的时,也可以相应的显示车辆当前剩余的能源或者显示车辆预计还可以行驶的距离;
驾驶场景显示部分,用于显示车辆当前的驾驶场景;由于在本申请中,车辆可以根据车辆的特征参数,选择采用不同的驾驶场景,因此可以在人机交互界面中显示车辆当前的驾驶场景,以供驾驶员知悉。进一步的,可以将驾驶场景部分设置为,驾驶员可以通过点击人机交互界面的该部分屏幕可以手动更改当前的驾驶场景;
时间显示部分,用于显示车辆当前的时间。
图6b为本申请实施例提供的一种人机交互界面在场景切换时的示意图。当车辆如图6a所示沿徽州大道向北行驶时，车辆所采用的是城区结构化无堵车场景类。而当车辆如图6b所示行驶至徽州大道和长江路的交叉路口时，车辆经过前述的方法确定将当前的场景由城区结构化无堵车场景类切换为交叉路口场景类。人机交互界面还可以包括通知显示部分，用于显示需要驾驶员知悉的信息。例如，如图6b所示，可以在通知显示部分中显示信息，告知驾驶员车辆的驾驶场景即将切换为交叉路口场景类(此时也可以伴随语音或者提示音对驾驶员进行提醒)。图6b所示的人机交互界面适用于前述的切换仅通知模式。
图6c为本申请实施例提供的另一种人机交互界面在场景切换时的示意图。和图6b类似，当车辆行驶至徽州大道和长江路的交叉路口时，车辆经过前述的方法确定将当前的场景由城区结构化无堵车场景类切换为交叉路口场景类。人机交互界面中的通知显示部分中提示：驾驶场景即将切换为交叉路口场景类，并提供了确认和取消两个按钮供驾驶员选择(此时也可以伴随语音或者提示音对驾驶员进行提醒)。图6c所示的人机交互界面适用于前述的默认同意和默认拒绝模式。当采用默认同意模式时，当人机交互界面中的通知显示部分提示切换场景且驾驶员没有在预设的时间内进行选择时，默认驾驶员同意车辆切换场景；当采用默认拒绝模式时，当人机交互界面中的通知显示部分提示切换场景且驾驶员没有在预设的时间内进行选择时，默认驾驶员拒绝车辆切换场景。
图6d为本申请实施例提供的另一种人机交互界面的示意图。由于人机交互界面可以使用语音等方式实现车辆与驾驶员之间的交互,因此,通知显示部分中可以对当前驾驶员可能采用的语音指令进行提醒。例如,如图6d所示,通知显示部分可以提示:您可以这样说:“切换为XX场景类”;“释放车辆驾驶权”。驾驶员根据通知显示部分的提示,可以进行通过语音交互的方式手动切换驾驶场景或者转移车辆的驾驶权等操作,从而增加了人机交互的效率。
需要指出的是,上述所提供的人机交互界面的示意图仅为举例说明车辆场景切换中的人机交互,并不对本申请所保护的范围进行限定。
图7为本申请实施例提供的一种智能驾驶系统,该智能驾驶系统可以为车辆或包括在车辆中。如图7所示,该智能驾驶系统60可以包括:感知融合单元61、场景类识别单元62、场景类切换模块63,还可以包括车辆执行单元64。
感知融合单元61,用于获取车辆当前时刻的特征参数以及车辆在未来预设时间段内驾驶场景的道路属性。例如,感知融合单元61可以执行步骤301。
场景类识别单元62,用于比较车辆当前时刻的特征参数与场景特征库中标准场景的特征参数、以及比较车辆在未来预设时间段内驾驶场景的道路属性与场景特征库中标准场景的道路属性,根据比较结果确定场景特征库中每个场景类与所述车辆当前时刻的驾驶场景的总相似度,将N个场景类中总相似度最高的第一场景类确定为当前时刻的驾驶场景;其中,场景特征库中包括N个场景类,每个场景类对应M个标准场景,每个标准场景对应有特征参数;N为大于或等于1的整数,M为大于或等于1的整数。例如,场景类识别单元62可以执行步骤302和步骤303。
场景类切换模块63，用于根据确定结果控制车辆的驾驶状态。例如，场景类切换模块63可以执行步骤304。
如图7所示,场景类识别单元62,可以包括:
场景类感知概率计算模块620,用于将当前时刻的结构化语义信息与场景特征库中标准场景的结构化语义信息进行比较,得到标准场景的第一相似度,对属于场景类的所有标准场景的第一相似度进行组合计算,得到场景类的第一概率。例如,场景类感知概率计算模块620可以执行图4a所示过程。
场景类地图概率计算模块621,用于将当前时刻的道路属性与场景特征库中标准场景的道路属性进行比较,得到标准场景的第二相似度,对属于场景类的所有标准场景的第二相似度进行组合计算,得到场景类的第二概率;以及,将车辆在未来预设时间段内驾驶场景的道路属性与场景特征库中标准场景的道路属性进行比较,得到标准场景的第三相似度,对属于场景类的所有标准场景的第三相似度进行组合计算,得到场景类的第三概率。例如,场景类地图概率计算模块621可以执行图4b所示过程。
场景类交通态势概率计算模块622,用于将当前时刻的交通态势频谱与场景特征库中标准场景的交通态势频谱进行比较,得到标准场景的第四相似度,对属于场景类的所有标准场景的第四相似度进行组合计算,得到场景类的第四概率。例如,场景类交通态势概率计算模块622可以执行图4c所示过程。
场景类识别判断模块623,用于根据场景类的第一概率、场景类的第二概率、场景类的第三概率以及场景类的第四概率得到场景类的总相似度。
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。本申请实施例提供的智能驾驶系统,用于执行上述智能驾驶方法,因此可以达到与上述智能驾驶方法相同的效果。
作为又一种可实现方式，智能驾驶系统60可以包括：处理模块、通信模块以及车辆主动执行单元。感知融合单元61、场景类识别单元62、场景类切换模块63可以集成在处理模块中。处理模块用于对智能驾驶系统60的动作进行控制管理，例如，处理模块用于支持该智能驾驶系统60执行步骤301、步骤302、步骤303、步骤304等以及执行本文所描述的技术的其它过程。通信模块用于支持智能驾驶系统60与驾驶员进行通信，可以为人机交互界面。进一步的，该智能驾驶系统60还可以包括存储模块，用于存储智能驾驶系统60的程序代码和智能驾驶算法。
其中,处理模块可以是处理器或控制器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信模块可以是人机交互界面等。存储模块可以是存储器。当处理模块为处理器,通信模块为人机交互界面,存储模块为存储器时,图7所示智能驾驶系统60可以为图2所示智能驾驶系统。
图8为本申请实施例提供的又一智能驾驶系统的组成示例图。
如图8所示，智能驾驶系统70包括获取模块71、确定模块72、显示模块73和控制模块74。其中，获取模块71用于获取车辆在第一时间的特征参数以及车辆在第一时间的未来预设时间段内驾驶场景的道路属性，其中，所述特征参数包含结构化语义信息、道路属性以及交通态势频谱；确定模块72，用于根据所述车辆在第一时间的特征参数以及车辆在未来预设时间段内驾驶场景的道路属性，选择场景特征库中的第一驾驶场景类；显示模块73，用于：显示第一提示符，所述第一提示符用于提示驾驶员将所述车辆在所述第一时间内的驾驶场景切换为第一驾驶场景类；接收第一指示，所述第一指示与第一提示符对应，用于指示将车辆在第一时间的驾驶场景切换为第一驾驶场景类；控制模块74，用于根据第一驾驶场景类控制车辆的驾驶状态。
在一种可能的实现方式中，当选择场景特征库中的第一驾驶场景类时，确定模块72具体用于：比较车辆在第一时间的特征参数与场景特征库中标准场景的特征参数、以及比较车辆在第一时间的未来预设时间段内驾驶场景的道路属性与场景特征库中标准场景的道路属性，根据比较结果确定场景特征库中每个场景类与所述车辆在第一时间的驾驶场景的总相似度，其中，场景特征库中包括N个场景类，每个场景类对应M个标准场景，N和M均为正整数；将所述N个场景类中总相似度最高的第一场景类确定为第一时间的驾驶场景。
在一种可能的实现方式中，当根据第一驾驶场景类控制车辆的驾驶状态之后，确定模块72还用于，选择场景特征库中的第二驾驶场景类为第二时间的驾驶场景；显示模块73还用于：显示第二提示符，该第二提示符用于指示将车辆在第二时间的驾驶场景切换为第二驾驶场景类；当在预设的时间内未接收到第二指示时，指示控制模块74维持根据第一驾驶场景类控制车辆的驾驶状态，其中，第二指示与第二提示符对应，用于指示将车辆当前的驾驶场景切换为第二驾驶场景类。
在一种可能的实现方式中，当在预设的时间内未接收到第二指示之后，确定模块72还用于，确定车辆在第二时间的设计运行范围不满足第一场景类对应的设计运行范围；显示模块73还用于，发送故障告警信息。
在一种可能的实现方式中，当发送故障告警信息之后，确定模块72还用于，判断驾驶员是否已经接管所述车辆；显示模块73还用于，若确定模块72确定驾驶员已经接管车辆，则发送用于指示释放驾驶权的操作指令以及向所述驾驶员发送释放通知；若确定模块72确定驾驶员未接管所述车辆，则发送用于指示安全停车的操作指令。
在上述实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件程序实现时，可以全部或部分地以计算机程序产品的形式来实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时，全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或者数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line，DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如，软盘、硬盘、磁带)，光介质(例如，DVD)、或者半导体介质(例如固态硬盘(solid state disk，SSD))等。
尽管在此结合各实施例对本申请进行了描述,然而,在实施所要求保护的本申请过程中,本领域技术人员通过查看所述附图、公开内容、以及所附权利要求书,可理解并实现所述公开实施例的其他变化。在权利要求中,“包括”(comprising)一词不排除其他组成部分或步骤,“一”或“一个”不排除多个的情况。单个处理器或其他单元可以实现权利要求中列举的若干项功能。相互不同的从属权利要求中记载了某些措施,但这并不表示这些措施不能组合起来产生良好的效果。
尽管结合具体特征及其实施例对本申请进行了描述，显而易见的，在不脱离本申请的精神和范围的情况下，可对其进行各种修改和组合。相应地，本说明书和附图仅仅是所附权利要求所界定的本申请的示例性说明，且视为已覆盖本申请范围内的任意和所有修改、变化、组合或等同物。显然，本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样，倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内，则本申请也意图包含这些改动和变型在内。

Claims (26)

  1. 一种智能驾驶方法,其特征在于,所述方法包括:
    获取车辆当前时刻的特征参数以及车辆在未来预设时间段内驾驶场景的道路属性;其中,所述特征参数包括结构化语义信息、道路属性以及交通态势频谱;
    比较所述车辆当前时刻的特征参数与场景特征库中标准场景的特征参数、以及比较所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性,根据比较结果确定所述场景特征库中每个场景类与所述车辆当前时刻的驾驶场景的总相似度;其中,所述场景特征库中包括N个场景类,每个场景类对应M个标准场景,每个标准场景对应有特征参数;所述N为大于或等于1的整数,所述M为大于或等于1的整数;
    将所述N个场景类中总相似度最高的第一场景类确定为当前时刻的驾驶场景。
  2. 根据权利要求1所述的方法，其特征在于，对于所述场景特征库中的任一场景类，比较所述车辆当前时刻的特征参数与场景特征库中标准场景的特征参数、以及比较所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性，根据比较结果确定所述场景类的总相似度，包括：
    将所述当前时刻的结构化语义信息与所述场景特征库中标准场景的结构化语义信息进行比较,得到所述标准场景的第一相似度,对属于所述场景类的所有标准场景的第一相似度进行组合计算,得到所述场景类的第一概率;
    将所述当前时刻的道路属性与所述场景特征库中标准场景的道路属性进行比较,得到所述标准场景的第二相似度,对属于所述场景类的所有标准场景的第二相似度进行组合计算,得到所述场景类的第二概率;
    将所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性进行比较,得到所述标准场景的第三相似度,对属于所述场景类的所有标准场景的第三相似度进行组合计算,得到所述场景类的第三概率;
    将所述当前时刻的交通态势频谱与所述场景特征库中标准场景的交通态势频谱进行比较,得到所述标准场景的第四相似度,对属于所述场景类的所有标准场景的第四相似度进行组合计算,得到所述场景类的第四概率;
    根据所述场景类的第一概率、所述场景类的第二概率、所述场景类的第三概率以及所述场景类的第四概率得到所述场景类的总相似度。
  3. 根据权利要求2所述的方法,其特征在于,在比较所述车辆当前时刻的特征参数与场景特征库中标准场景的特征参数、以及比较所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性之前,所述方法还包括:
    将所述场景特征库中不含有实时结构化语义信息的标准场景的相似度设置为0。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,在将所述N个场景类中总相似度最高的第一场景类确定为当前时刻的驾驶场景之后,所述方法还包括:
    判断所述第一场景类是否与前一时刻的场景类相同;
    若所述第一场景类与前一时刻的场景类相同,则判断所述车辆当前的设计运行范围是否满足所述第一场景类对应的设计运行范围;
    若所述车辆当前的设计运行范围满足所述第一场景类对应的设计运行范围,则维持当前驾驶状态不变;若所述车辆当前的设计运行范围不满足所述第一场景类对应的设计运行范围,则发送故障告警信息。
  5. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    若所述第一场景类与前一时刻的场景类不相同,则判断所述车辆当前的设计运行范围是否满足所述第一场景类对应的设计运行范围;
    若所述车辆当前的设计运行范围满足所述第一场景类对应的设计运行范围,则将所述车辆从当前驾驶状态切换到所述第一场景类对应的驾驶状态;
    若所述车辆当前的设计运行范围不满足所述第一场景类对应的设计运行范围,则判断所述车辆当前的设计运行范围是否满足前一时刻的场景类对应的设计运行范围,若所述车辆当前的设计运行范围满足所述前一时刻的场景类对应的设计运行范围,则发送场景类切换不成功信息,并维持当前驾驶状态不变;若所述车辆当前的设计运行范围不满足所述前一时刻的场景类对应的设计运行范围,则发送故障告警信息。
  6. 根据权利要求4或5所述的方法,其特征在于,在所述发送故障告警信息之后,所述方法还包括:
    判断驾驶员是否已接管所述车辆;
    若确定所述驾驶员接管所述车辆,则向所述车辆上的车辆主动执行单元发送用于指示释放驾驶权的操作指令以及向所述驾驶员发送释放通知;若确定所述驾驶员未接管所述车辆,则向所述车辆主动执行单元发送用于指示安全停车的操作指令。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,在将所述N个场景类中总相似度最高的第一场景类确定为当前时刻的驾驶场景之后,所述方法还包括:
    获取智能驾驶指示;其中,所述智能驾驶指示用于指示是否停止所述车辆的智能驾驶;
    若所述智能驾驶指示用于指示对所述车辆进行智能驾驶，则根据确定结果控制所述车辆进行智能驾驶；
    若所述智能驾驶指示用于指示停止所述车辆的智能驾驶,则向所述车辆上的车辆主动执行单元发送用于指示释放驾驶权的操作指令以及向所述驾驶员发送释放通知。
  8. 一种智能驾驶系统,其特征在于,所述系统包括:
    感知融合单元,用于获取车辆当前时刻的特征参数以及车辆在未来预设时间段内驾驶场景的道路属性;其中,所述特征参数包括结构化语义信息、道路属性以及交通态势频谱;
    场景类识别单元,用于比较所述车辆当前时刻的特征参数与场景特征库中标准场景的特征参数、以及比较所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性,根据比较结果确定所述场景特征库中每个场景类与所述车辆当前时刻的驾驶场景的总相似度;其中,所述场景特征库中包括N个场景类,每个场景类对应M个标准场景,每个标准场景对应有特征参数;所述N为大于或等于1的整数,所述M为大于或等于1的整数;
    以及，将所述N个场景类中总相似度最高的第一场景类确定为所述当前时刻的驾驶场景。
  9. 根据权利要求8所述的系统,其特征在于,所述场景类识别单元,包括:
    场景类感知概率计算模块,用于将所述车辆当前时刻的结构化语义信息与所述场景特征库中标准场景的结构化语义信息进行比较,得到所述标准场景的第一相似度,对属于所述场景类的所有标准场景的第一相似度进行组合计算,得到所述场景类的第一概率;
    场景类地图概率计算模块,用于将所述车辆当前时刻的道路属性与所述场景特征库中标准场景的道路属性进行比较,得到所述标准场景的第二相似度,对属于所述场景类的所有标准场景的第二相似度进行组合计算,得到所述场景类的第二概率;
    以及,将所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性进行比较,得到所述标准场景的第三相似度,对属于所述场景类的所有标准场景的第三相似度进行组合计算,得到所述场景类的第三概率;
    场景类交通态势概率计算模块,用于将所述车辆当前时刻的交通态势频谱与所述场景特征库中标准场景的交通态势频谱进行比较,得到所述标准场景的第四相似度,对属于所述场景类的所有标准场景的第四相似度进行组合计算,得到所述场景类的第四概率;
    场景类识别判断模块,用于根据所述场景类的第一概率、所述场景类的第二概率、所述场景类的第三概率以及所述场景类的第四概率得到所述场景类的总相似度。
  10. 根据权利要求9所述的系统,其特征在于,
    所述场景类识别单元,还用于在比较所述车辆当前时刻的特征参数与场景特征库中标准场景的特征参数、以及比较所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性之前,将所述场景特征库中不含有实时结构化语义信息的标准场景的相似度设置为0。
  11. 根据权利要求8-10任一项所述的系统,其特征在于,所述系统还包括场景类切换模块,用于:
    判断所述第一场景类是否与前一时刻的场景类相同;
    若所述第一场景类与前一时刻的场景类相同,则判断所述车辆当前的设计运行范围是否满足所述第一场景类对应的设计运行范围;
    若所述车辆当前的设计运行范围满足所述第一场景类对应的设计运行范围,则维持当前驾驶状态不变;若所述车辆当前的设计运行范围不满足所述第一场景类对应的设计运行范围,则发送故障告警信息。
  12. 根据权利要求11所述的系统,其特征在于,所述场景类切换模块,还用于:
    若所述第一场景类与前一时刻的场景类不相同,则判断所述车辆当前的设计运行范围是否满足所述第一场景类对应的设计运行范围;
    若所述车辆当前的设计运行范围满足所述第一场景类对应的设计运行范围,则将所述车辆从当前驾驶状态切换到所述第一场景类对应的驾驶状态;
    若所述车辆当前的设计运行范围不满足所述第一场景类对应的设计运行范围,则判断所述车辆当前的设计运行范围是否满足前一时刻的场景类对应的设计运行范围,若所述车辆当前的设计运行范围满足所述前一时刻的场景类对应的设计运行范围,则发送场景类切换不成功信息,并维持当前驾驶状态不变;若所述车辆当前的设计运行范围不满足所述前一时刻的场景类对应的设计运行范围,则发送故障告警信息。
  13. 根据权利要求11或12所述的系统,其特征在于,所述系统还包括:车辆主动执行单元;所述场景类切换模块,还用于:
    在所述发送故障告警信息之后,判断驾驶员是否已接管所述车辆;
    若确定所述驾驶员接管所述车辆,则向所述车辆上的车辆主动执行单元发送用于指示释放驾驶权的操作指令以及向所述驾驶员发送释放通知;若确定所述驾驶员未接管所述车辆,则向所述车辆主动执行单元发送用于指示安全停车的操作指令。
  14. 根据权利要求8-13任一项所述的系统,其特征在于,所述场景类切换模块,还用于:
    在将所述N个场景类中总相似度最高的第一场景类确定为所述当前时刻的驾驶场景之后,获取智能驾驶指示;其中,所述智能驾驶指示用于指示是否停止所述车辆的智能驾驶;
    若所述智能驾驶指示用于指示对所述车辆进行智能驾驶，则根据确定结果控制所述车辆进行智能驾驶；若所述智能驾驶指示用于指示停止所述车辆的智能驾驶，则向所述车辆上的车辆主动执行单元发送用于指示释放驾驶权的操作指令以及向所述驾驶员发送释放通知。
  15. 一种智能驾驶方法,其特征在于,所述方法用于智能驾驶系统,所述智能驾驶系统位于车辆,所述方法包括:
    获取车辆在第一时间的特征参数以及车辆在所述第一时间的未来预设时间段内驾驶场景的道路属性,其中,所述特征参数包含结构化语义信息、道路属性以及交通态势频谱;
    根据所述车辆在第一时间的特征参数以及车辆在所述第一时间的未来预设时间段内驾驶场景的道路属性,选择场景特征库中的第一驾驶场景类;
    显示第一提示符,所述第一提示符用于提示驾驶员将所述车辆在所述第一时间的驾驶场景切换为所述第一驾驶场景类;
    接收第一指示,所述第一指示与所述第一提示符对应,用于指示将所述车辆在所述第一时间的驾驶场景切换为所述第一驾驶场景类;
    根据所述第一驾驶场景类控制所述车辆的驾驶状态。
  16. 根据权利要求15所述的方法,其特征在于,所述选择场景特征库中的第一驾驶场景类,包括:
    比较所述车辆在所述第一时间的特征参数与所述场景特征库中标准场景的特征参数，以及比较所述车辆在所述第一时间的未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性，根据比较结果确定所述场景特征库中每个场景类与所述车辆当前时刻的驾驶场景的总相似度；其中，所述场景特征库中包括N个场景类，每个场景类对应M个标准场景，N和M均为正整数；
    将所述N个场景类中总相似度最高的第一场景类确定为所述第一时间的驾驶场景。
  17. 根据权利要求15或16所述的方法,其特征在于,当根据所述第一驾驶场景类控制所述车辆的驾驶状态之后,所述方法还包括:
    选择所述场景特征库中的第二驾驶场景类为第二时间的驾驶场景;
    显示第二提示符，所述第二提示符用于请求将所述车辆在第二时间的驾驶场景切换为所述第二驾驶场景类；
    当在预设的时间内未接收到第二指示时，维持根据所述第一驾驶场景类控制所述车辆的驾驶状态，其中，所述第二指示与所述第二提示符对应，用于指示将车辆当前的驾驶场景切换为所述第二驾驶场景类。
  18. 根据权利要求17所述的方法,其特征在于,当在预设的时间内未接收到第二指示之后,所述方法还包括:
    确定所述车辆在所述第二时间的设计运行范围不满足所述第一场景类对应的设计运行范围;
    发送故障告警信息。
  19. 根据权利要求18所述的方法,其特征在于,在所述发送故障告警信息之后,所述方法还包括:
    判断驾驶员是否已接管所述车辆;
    若确定所述驾驶员接管所述车辆,则发送用于指示释放驾驶权的操作指令以及向所述驾驶员发送释放通知;若确定所述驾驶员未接管所述车辆,则发送用于指示安全停车的操作指令。
  20. 一种智能驾驶系统,其特征在于,所述系统包括:
    获取模块,用于获取车辆在第一时间的特征参数以及车辆在所述第一时间的未来预设时间段内驾驶场景的道路属性,其中,所述特征参数包含结构化语义信息、道路属性以及交通态势频谱;
    确定模块,用于根据所述车辆在所述第一时间的特征参数以及车辆在所述第一时间的未来预设时间段内驾驶场景的道路属性,选择场景特征库中的第一驾驶场景类;
    显示模块,用于:显示第一提示符,所述第一提示符用于提示驾驶员将所述车辆在所述第一时间内的驾驶场景切换为所述第一驾驶场景类;
    接收第一指示，所述第一指示与所述第一提示符对应，用于指示将所述车辆在所述第一时间的驾驶场景切换为所述第一驾驶场景类；
    控制模块,用于根据所述第一驾驶场景类控制所述车辆的驾驶状态。
  21. 根据权利要求20所述的系统,其特征在于,当选择场景特征库中的第一驾驶场景类时,所述确定模块具体用于:
    比较所述车辆在所述第一时间的特征参数与所述场景特征库中标准场景的特征参数、以及比较所述车辆在所述第一时间的未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性，根据比较结果确定所述场景特征库中每个场景类与所述车辆当前时刻的驾驶场景的总相似度；其中，所述场景特征库中包括N个场景类，每个场景类对应M个标准场景，N和M均为正整数；
    将所述N个场景类中总相似度最高的第一场景类确定为所述第一时间的驾驶场景。
  22. 根据权利要求20或21所述的系统,其特征在于,当根据所述第一驾驶场景类控制所述车辆的驾驶状态之后,
    所述确定模块还用于,选择所述场景特征库中的第二驾驶场景类为第二时间的驾驶场景;
    所述显示模块还用于:显示第二提示符,所述第二提示符用于请求将所述车辆在第二时间的驾驶场景切换为所述第二驾驶场景类;
    当在预设的时间内未接收到第二指示时,指示所述控制模块维持根据所述第一驾驶场景类控制所述车辆的驾驶状态,其中,所述第二指示与所述第二提示符对应,用于指示将车辆当前的驾驶场景切换为所述第二驾驶场景类。
  23. 根据权利要求22所述的系统,其特征在于,当在预设的时间内未接收到第二指示之后,
    所述确定模块还用于,确定所述车辆在所述第二时间的设计运行范围不满足所述第一场景类对应的设计运行范围;
    所述显示模块还用于,发送故障告警信息。
  24. 根据权利要求23所述的系统,其特征在于,当发送故障告警信息之后,
    所述确定模块还用于,判断驾驶员是否已接管所述车辆;
    所述显示模块还用于,若所述确定模块确定所述驾驶员接管所述车辆,则发送用于指示释放驾驶权的操作指令以及向所述驾驶员发送释放通知;若所述确定模块确定所述驾驶员未接管所述车辆,则发送用于指示安全停车的操作指令。
  25. 一种智能驾驶方法,其特征在于,所述方法包括:获取车辆当前时刻的特征参数以及车辆在未来预设时间段内驾驶场景的道路属性;其中,所述特征参数包括结构化语义信息、道路属性以及交通态势频谱;
    比较所述车辆当前时刻的特征参数与场景特征库中标准场景的特征参数、以及比较所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性,根据比较结果分别确定所述场景特征库中每个场景类的第一标准场景和第二标准场景与所述车辆当前时刻的驾驶场景的第一相似度和第二相似度;其中,所述场景特征库中包括N个场景类,每个场景类包括M个标准场景,每个标准场景对应有特征参数;所述M和N均为大于等于2的整数;
    根据所述N个场景类中的每个场景类的第一相似度和第二相似度确定当前时刻的驾驶场景对应的场景类。
  26. 一种智能驾驶系统,其特征在于,所述系统包括:
    感知融合单元,用于获取车辆当前时刻的特征参数以及车辆在未来预设时间段内驾驶场景的道路属性;其中,所述特征参数包括结构化语义信息、道路属性以及交通态势频谱;
    场景类识别单元,用于:比较所述车辆当前时刻的特征参数与场景特征库中标准场景的特征参数、以及比较所述车辆在未来预设时间段内驾驶场景的道路属性与所述场景特征库中标准场景的道路属性,根据比较结果分别确定所述场景特征库中每个场景类的第一标准场景和第二标准场景与所述车辆当前时刻的驾驶场景的第一相似度和第二相似度;其中,所述场景特征库中包括N个场景类,每个场景类包括M个标准场景,每个标准场景对应有特征参数;所述M和N均为大于等于2的整数;
    根据所述N个场景类中的每个场景类的第一相似度和第二相似度确定当前时刻的驾驶场景对应的场景类。
PCT/CN2019/095943 2018-09-12 2019-07-15 一种智能驾驶方法及智能驾驶系统 WO2020052344A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP19858830.3A EP3754552A4 (en) 2018-09-12 2019-07-15 INTELLIGENT DRIVING PROCESS AND INTELLIGENT DRIVING SYSTEM
US17/029,561 US11724700B2 (en) 2018-09-12 2020-09-23 Intelligent driving method and intelligent driving system
US18/347,051 US20240001930A1 (en) 2018-09-12 2023-07-05 Intelligent Driving Method and Intelligent Driving System

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201811062799.1 2018-09-12
CN201811062799 2018-09-12
CN201910630930.8A CN110893860B (zh) 2018-09-12 2019-07-12 一种智能驾驶方法及智能驾驶系统
CN201910630930.8 2019-07-12

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/029,561 Continuation US11724700B2 (en) 2018-09-12 2020-09-23 Intelligent driving method and intelligent driving system

Publications (1)

Publication Number Publication Date
WO2020052344A1 true WO2020052344A1 (zh) 2020-03-19

Family

ID=69778184

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/095943 WO2020052344A1 (zh) 2018-09-12 2019-07-15 一种智能驾驶方法及智能驾驶系统

Country Status (2)

Country Link
US (1) US20240001930A1 (zh)
WO (1) WO2020052344A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778443A (zh) * 2014-02-20 2014-05-07 公安部第三研究所 基于主题模型方法和领域规则库实现场景分析描述的方法
CN105719362A (zh) * 2016-01-05 2016-06-29 常州加美科技有限公司 一种无人驾驶车辆的备用电脑、车用黑匣子和车门锁
CN105954048A (zh) * 2016-07-07 2016-09-21 百度在线网络技术(北京)有限公司 测试无人车正常驾驶的方法及装置
US20170249504A1 (en) * 2016-02-29 2017-08-31 Toyota Jidosha Kabushiki Kaisha Autonomous Human-Centric Place Recognition
CN107609502A (zh) * 2017-09-05 2018-01-19 百度在线网络技术(北京)有限公司 用于控制无人驾驶车辆的方法和装置
WO2018200026A1 (en) * 2017-04-25 2018-11-01 Nec Laboratories America, Inc Detecting dangerous driving situations by parsing a scene graph of radar detections

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112420058A (zh) * 2020-12-16 2021-02-26 舒方硕 人车协同系统及其实施流程
CN112420058B (zh) * 2020-12-16 2024-01-19 舒方硕 人车协同系统及其实施流程
CN113762918A (zh) * 2021-08-03 2021-12-07 南京领行科技股份有限公司 一种业务逻辑执行方法、装置、设备及介质
EP4141816A1 (en) * 2021-08-31 2023-03-01 Beijing Tusen Weilai Technology Co., Ltd. Scene security level determination method, device and storage medium
CN113619610A (zh) * 2021-09-18 2021-11-09 一汽解放汽车有限公司 车辆驾驶模式切换方法、装置、计算机设备和存储介质
CN113619610B (zh) * 2021-09-18 2024-01-05 一汽解放汽车有限公司 车辆驾驶模式切换方法、装置、计算机设备和存储介质
CN114013450A (zh) * 2021-11-16 2022-02-08 交控科技股份有限公司 车辆运行控制方法、系统和计算机设备
CN114013450B (zh) * 2021-11-16 2023-10-31 交控科技股份有限公司 车辆运行控制方法、系统和计算机设备
CN114104000A (zh) * 2021-12-16 2022-03-01 智己汽车科技有限公司 一种危险场景的评估与处理系统、方法及存储介质
CN114104000B (zh) * 2021-12-16 2024-04-12 智己汽车科技有限公司 一种危险场景的评估与处理系统、方法及存储介质
CN115374498B (zh) * 2022-10-24 2023-03-10 北京理工大学 一种考虑道路属性特征参数的道路场景重构方法及系统
CN115374498A (zh) * 2022-10-24 2022-11-22 北京理工大学 一种考虑道路属性特征参数的道路场景重构方法及系统
CN115857176A (zh) * 2023-01-31 2023-03-28 泽景(西安)汽车电子有限责任公司 一种抬头显示器及其高度调节方法、装置、存储介质
CN116046014A (zh) * 2023-03-31 2023-05-02 小米汽车科技有限公司 道线规划方法、装置、电子设备及可读存储介质

Also Published As

Publication number Publication date
US20240001930A1 (en) 2024-01-04

Similar Documents

Publication Publication Date Title
CN110893860B (zh) 一种智能驾驶方法及智能驾驶系统
WO2020052344A1 (zh) 一种智能驾驶方法及智能驾驶系统
WO2021135371A1 (zh) 一种自动驾驶方法、相关设备及计算机可读存储介质
WO2020164238A1 (zh) 用于驾驶控制的方法、装置、设备、介质和系统
CN108292473A (zh) 自适应自主车辆计划器逻辑
CN114995451A (zh) 用于车路协同自动驾驶的控制方法、路侧设备和系统
CN114945493A (zh) 协作式交通工具前灯引导
CN114945492A (zh) 协作式交通工具前灯引导
CN111464972A (zh) 经优先级排序的车辆消息传递
CN114929517A (zh) 协作式交通工具前灯引导
US20230071836A1 (en) Scene security level determination method, device and storage medium
CN114945958A (zh) 协作式交通工具前灯引导
US11622228B2 (en) Information processing apparatus, vehicle, computer-readable storage medium, and information processing method
US20230256999A1 (en) Simulation of imminent crash to minimize damage involving an autonomous vehicle
WO2023155041A1 (zh) 一种智能驾驶方法、装置及包括该装置的车辆
US20230192077A1 (en) Adjustment of object trajectory uncertainty by an autonomous vehicle
JP2023021919A (ja) 自動運転車両の制御方法、装置、電子機器及び読み取り可能な記憶媒体
WO2021229671A1 (ja) 走行支援装置および走行支援方法
JP2020192824A (ja) 運転挙動制御方法及び運転挙動制御装置
US11881031B2 (en) Hierarchical processing of traffic signal face states
US20230221408A1 (en) Radar multipath filter with track priors
US20230391367A1 (en) Inferring autonomous driving rules from data
WO2024087712A1 (zh) 一种目标行为预测方法、智能设备及车辆
DE102023113448A1 (de) Systeme und Verrfahren zur Detektion von Fußgängern, die einen Fußgängerübergang zu überqueren beabsichtigen oder eine Straße unerlaubt zu überqueren beabsichtigen
CN117148828A (zh) 操作自主运载工具的方法、系统和介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19858830

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019858830

Country of ref document: EP

Effective date: 20200914

NENP Non-entry into the national phase

Ref country code: DE