WO2021135371A1 - Automatic driving method, related device, and computer-readable storage medium - Google Patents

Automatic driving method, related device, and computer-readable storage medium

Info

Publication number
WO2021135371A1
WO2021135371A1 (application PCT/CN2020/114265; CN2020114265W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
driving strategy
target vehicle
driving
road section
Prior art date
Application number
PCT/CN2020/114265
Other languages
English (en)
French (fr)
Inventor
刘继秋
伍勇
刘建琴
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP20910434.8A (published as EP4071661A4)
Priority to JP2022540325A (published as JP2023508114A)
Publication of WO2021135371A1
Priority to US17/855,253 (published as US20220332348A1)

Classifications

    • G06V 20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/584 — Recognition of vehicle lights or traffic lights
    • B60W 40/02 — Estimation of driving parameters related to ambient conditions
    • B60W 40/04 — Traffic conditions
    • B60W 50/00 — Details of control systems for road vehicle drive control, e.g. process diagnostics or vehicle driver interfaces
    • B60W 50/0225 — Failure correction strategy
    • B60W 50/029 — Adapting to failures or working around with other constraints, e.g. circumvention by avoiding use of failed parts
    • B60W 60/0011 — Planning or execution of driving tasks involving control alternatives for a single driving scenario
    • B60W 60/0015 — Planning or execution of driving tasks specially adapted for safety
    • G06N 3/049 — Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G07C 5/0808 — Diagnosing performance data
    • H04W 4/44 — Services for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C]
    • B60W 2050/0005 — Processor details or data handling, e.g. memory registers or chip architecture
    • B60W 2050/0031 — Mathematical model of the vehicle
    • B60W 2050/0215 — Sensor drifts or sensor failures
    • B60W 2050/0292 — Fail-safe or redundant systems, e.g. limp-home or backup systems
    • B60W 2556/45 — External transmission of data to or from the vehicle
    • B60W 2556/50 — External transmission of positioning data, e.g. GPS [Global Positioning System] data
    • B60W 2756/10 — Output parameters involving external transmission of data to or from the vehicle

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to an automatic driving method, related equipment, and computer-readable storage media.
  • Artificial intelligence (AI) encompasses the theories, methods, techniques, and application systems that use digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence: to perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
  • As a branch of computer science, artificial intelligence attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence.
  • Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision-making.
  • Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, and basic AI theory.
  • Autonomous driving is a mainstream application in the field of artificial intelligence.
  • Autonomous driving technology relies on the collaboration of computer vision, radar, monitoring devices, and global positioning systems to enable motor vehicles to drive autonomously without active human operation.
  • Self-driving vehicles use various computing systems to transport passengers from one location to another. Some autonomous vehicles may require initial or continuous input from an operator (such as a driver or a passenger), and permit the operator to switch from manual driving mode to autonomous driving mode or to a mode in between. Because autonomous driving technology does not require a human to drive the vehicle, it can in theory avoid human driving errors, reduce traffic accidents, and improve highway transportation efficiency. Autonomous driving technology has therefore received growing attention.
  • During automatic driving, a sensor can acquire the driving state data of surrounding vehicles (for example, their speed, acceleration, and heading angle), so that the autonomous vehicle can determine, based on the acquired sensor data, a driving strategy that meets safety requirements. Because the data must be obtained from sensors, and the data obtained by sensors is often limited, a sensor failure, low sensitivity, or insufficient accuracy can easily produce a driving strategy with poor safety, which undoubtedly increases the risk of automatic driving. How to improve the accuracy of determining a driving strategy that meets safety requirements, and thereby improve the safety of vehicle driving, is therefore a technical problem that urgently needs to be solved.
  • the present application provides an automatic driving method, related equipment, and computer-readable storage medium, which can improve the accuracy of determining a driving strategy that meets safety requirements and reduce the risk of automatic driving.
  • In a first aspect, an automatic driving method is provided. The method is applied to a cloud server and includes: receiving vehicle attribute information reported by a target vehicle and travel information of the target vehicle, where the vehicle attribute information of the target vehicle is used to generate an automatic driving strategy; acquiring, from an automatic driving strategy layer according to the travel information, the layer information of a first road section on which the target vehicle is traveling; acquiring a first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle; and sending the first driving strategy to the target vehicle.
  • By implementing this method, the cloud server can obtain the layer information of the first road section from the automatic driving strategy layer based on the travel information reported by the target vehicle, and then obtain a driving strategy that meets safety requirements according to that layer information and the vehicle attribute information, so that the target vehicle can drive automatically according to the driving strategy determined by the cloud server.
  • Because the information contained in the automatic driving strategy layer is richer than sensor data, it can overcome a sensor's perception defects (for example, the limited data a sensor can acquire, its limited detection range, and the ease with which its detection is affected by the environment), which improves the accuracy of determining driving strategies that meet safety requirements and reduces the risk of automatic driving.
  • In one implementation, the cloud server stores a correspondence between driving strategies and safe-passage probabilities. Obtaining the first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle includes: obtaining a first safe-passage probability according to the layer information of the first road section and the vehicle attribute information of the target vehicle; and obtaining, according to the correspondence between driving strategies and safe-passage probabilities, the first driving strategy corresponding to the first safe-passage probability.
  • By implementing this method, the cloud server can obtain the first safe-passage probability according to the layer information of the road section being driven and the vehicle attribute information of the target vehicle, and then obtain the first driving strategy corresponding to that probability from the preset correspondence between driving strategies and safe-passage probabilities. Because the layer contains rich information and can overcome a sensor's perception defects, this implementation improves the accuracy of determining a driving strategy that meets safety requirements and reduces the risk of automatic driving.
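  • The stored correspondence between safe-passage probability and driving strategy could be sketched as follows. The probability bands and strategy names here are illustrative assumptions; the application does not disclose concrete values:

```python
def select_driving_strategy(safe_passage_probability: float) -> str:
    """Map a safe-passage probability to a driving strategy via a stored table."""
    if not 0.0 <= safe_passage_probability <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    # Ordered (lower bound, strategy) pairs standing in for the stored
    # correspondence; bands and names are illustrative assumptions.
    correspondence = [
        (0.9, "full_autonomous"),
        (0.7, "autonomous_reduced_speed"),
        (0.4, "assisted_driving"),
        (0.0, "request_manual_takeover"),
    ]
    for lower_bound, strategy in correspondence:
        if safe_passage_probability >= lower_bound:
            return strategy
    return "request_manual_takeover"
```

  • Keeping the table ordered from the highest band downward means the first matching lower bound decides the strategy, so a higher probability always yields an equally or more autonomous strategy.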
  • In one implementation, obtaining the first safe-passage probability according to the layer information of the first road section and the vehicle attribute information of the target vehicle includes: calculating the first safe-passage probability through a first model, where the first model includes at least one information item and a weight parameter corresponding to each information item, the information items are extracted from the layer information of the first road section and the vehicle attribute information of the target vehicle, and each weight parameter indicates the importance of its information item when determining the first safe-passage probability.
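  • The first model described above — extracted information items, each paired with a weight parameter indicating its importance — might be sketched like this. The logistic squashing into [0, 1] and the example item names are assumptions, since the application specifies only weighted information items:

```python
import math

def first_model(information_items: dict, weights: dict) -> float:
    """Combine extracted information items into a safe-passage probability.

    Each weight parameter indicates how important its information item is
    when determining the probability; the weighted sum is squashed into
    [0, 1] with a logistic function (an assumption).
    """
    score = sum(weights[name] * value
                for name, value in information_items.items())
    return 1.0 / (1.0 + math.exp(-score))
```

  • Under this sketch, an item such as congestion level would carry a negative weight (more congestion lowers the safe-passage probability), while an item such as road-surface quality would carry a positive one.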
  • In one implementation, the first model is a model trained on at least one piece of sample data, where the sample data includes information items extracted from the layer information of a second road section and the vehicle attribute information of the target vehicle.
  • the second road section is a road section adjacent to the first road section, and the exit of the second road section is the entrance of the first road section.
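  • Training such a model from sample data gathered on the adjacent second road section could look like the following sketch. A plain logistic-regression fit via gradient steps is assumed here; the application does not specify a training algorithm:

```python
import math

def train_first_model(samples: list, labels: list, lr: float = 0.1,
                      epochs: int = 200) -> dict:
    """Fit one weight per information item from sample data.

    `samples` holds dicts of information items extracted from the layer
    information of the second (adjacent) road section plus the vehicle
    attribute information; `labels` holds 1 for a safe passage and 0
    otherwise.
    """
    names = sorted(samples[0])
    weights = {n: 0.0 for n in names}
    for _ in range(epochs):
        for sample, label in zip(samples, labels):
            score = sum(weights[n] * sample[n] for n in names)
            predicted = 1.0 / (1.0 + math.exp(-score))
            for n in names:  # gradient step toward the observed label
                weights[n] += lr * (label - predicted) * sample[n]
    return weights
```

  • Using the section the vehicle is just leaving as the training source fits the geometry stated above: the exit of the second road section is the entrance of the first, so its samples are the freshest evidence available for the section ahead.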
  • In one implementation, the layer information of the first road section includes at least one of static layer information of the first road section and dynamic layer information of the first road section, where the static layer information indicates the infrastructure information of the first road section and the dynamic layer information indicates the dynamic traffic information of the first road section.
  • By implementing this method, the cloud server can obtain the layer information of the first road section from the automatic driving strategy layer based on the travel information reported by the target vehicle, and then obtain a driving strategy that meets safety requirements according to that layer information and the vehicle attribute information, so that the target vehicle can drive automatically according to the driving strategy determined by the cloud server.
  • This implementation can overcome a sensor's perception defects (for example, the limited data a sensor can acquire, its limited detection range, and the ease with which its detection is affected by the environment), which improves the accuracy of determining driving strategies that meet safety requirements and reduces the risk of automatic driving.
  • In one implementation, the static layer information of the first road section includes at least one of lane attributes, digital device information, and green-belt information; the dynamic layer information of the first road section includes at least one of weather information, road surface information, the congestion situation of the first road section within a first time period, the probability of pedestrians and non-motorized vehicles passing through the first road section within the first time period, and the probability of a driving accident occurring on the first road section within the first time period.
  • The static layer information of the road ensures that automatic driving can efficiently determine the driving path and avoid obstacles, while the dynamic layer information of the road ensures that automatic driving can respond to sudden events in a timely manner. This implementation can overcome a sensor's perception defects (for example, the limited data a sensor can acquire, its limited detection range, and the ease with which its detection is affected by the environment), which improves the accuracy of determining a driving strategy that meets safety requirements and reduces the risk of automatic driving.
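  • The static/dynamic split of the layer information might be represented by a data structure along these lines. The field names are illustrative; only the information categories come from the application:

```python
from dataclasses import dataclass, field

@dataclass
class StaticLayerInfo:
    """Infrastructure information of a road section (changes rarely)."""
    lane_attributes: dict = field(default_factory=dict)
    digital_device_info: dict = field(default_factory=dict)
    green_belt_info: dict = field(default_factory=dict)

@dataclass
class DynamicLayerInfo:
    """Dynamic traffic information of a road section for a first time period."""
    weather: str = ""
    road_surface: str = ""
    congestion_level: float = 0.0          # congestion situation
    pedestrian_crossing_prob: float = 0.0  # pedestrians / non-motorized vehicles
    accident_prob: float = 0.0             # probability of a driving accident

@dataclass
class RoadSectionLayer:
    """Layer information for one road section in the strategy layer."""
    section_id: str
    static: StaticLayerInfo
    dynamic: DynamicLayerInfo
```

  • Separating the two sub-layers lets the cloud server cache the slowly changing static part while refreshing the dynamic part once per time period.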
  • In one implementation, the first road section is the road section on which the target vehicle is driving, and the method further includes: obtaining a second driving strategy for the target vehicle driving on the first road section, where the second driving strategy is determined by the target vehicle according to sensor data obtained in real time; and, when the similarity between the first driving strategy and the second driving strategy is less than a first threshold, sending to the target vehicle prompt information for switching from the second driving strategy to the first driving strategy.
  • Because the automatic driving strategy layer contains rich information and can overcome a sensor's perception defects, the safety of the first driving strategy that the cloud server determines based on the automatic driving strategy layer is higher than that of the second driving strategy that the target vehicle determines based on real-time sensor data.
  • Therefore, when the cloud server determines that the similarity between the first driving strategy and the second driving strategy is less than the set threshold, it sends the target vehicle prompt information for switching from the second driving strategy to the first driving strategy, so that the target vehicle adopts the safer first driving strategy for automatic driving, which reduces the risk of automatic driving.
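  • The cloud-side comparison that triggers the switching prompt could be sketched as follows. The similarity metric (the fraction of matching strategy fields) and the default threshold are assumptions, since the application leaves both open:

```python
def maybe_prompt_switch(first_strategy: dict, second_strategy: dict,
                        first_threshold: float = 0.8):
    """Return a switching prompt when the two strategies diverge.

    Similarity is taken here as the fraction of strategy fields on which the
    cloud-derived first strategy and the sensor-derived second strategy
    agree; below the first threshold, the vehicle is prompted to switch.
    """
    keys = set(first_strategy) | set(second_strategy)
    shared = sum(1 for k in keys
                 if first_strategy.get(k) == second_strategy.get(k))
    similarity = shared / len(keys) if keys else 1.0
    if similarity < first_threshold:
        return {"action": "switch", "to": first_strategy}
    return None  # strategies agree closely enough; no prompt needed
```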
  • the vehicle attribute information of the target vehicle includes at least one of the automatic driving capability of the target vehicle, sensor distribution information of the target vehicle, and the driving state of the driver in the target vehicle.
  • In a second aspect, an embodiment of the present application provides an automatic driving method applied to a vehicle-mounted terminal on a target vehicle. The method includes: receiving a first driving strategy sent by a cloud server, where the first driving strategy is obtained by the method described in the first aspect; and automatically driving the target vehicle according to the first driving strategy.
  • After the cloud server determines the first driving strategy according to the automatic driving strategy layer, it sends the first driving strategy to the target vehicle so that the target vehicle can drive automatically according to the strategy determined by the cloud server. Because the information contained in the automatic driving strategy layer is richer than sensor data, it can overcome a sensor's perception defects (for example, the limited data a sensor can acquire, its limited detection range, and the ease with which its detection is affected by the environment), which improves the accuracy of determining driving strategies that meet safety requirements and reduces the risk of automatic driving.
  • In one implementation, the method further includes: acquiring, through sensor data, a second driving strategy for the target vehicle driving on the first road section; and automatically driving the target vehicle according to the first driving strategy includes: automatically driving the target vehicle according to the first driving strategy and the second driving strategy.
  • In one implementation, automatically driving the target vehicle according to the first driving strategy and the second driving strategy includes: when it is determined that the similarity between the first driving strategy and the second driving strategy is greater than the first threshold, automatically driving the target vehicle according to the first driving strategy or according to the second driving strategy; and when the similarity between the first driving strategy and the second driving strategy is less than the first threshold, automatically driving the target vehicle according to the first driving strategy.
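  • The vehicle-side arbitration between the two strategies might be sketched as follows (function and parameter names are illustrative):

```python
def choose_strategy(first_strategy, second_strategy, similarity: float,
                    first_threshold: float = 0.8, prefer_local: bool = False):
    """Vehicle-side arbitration between cloud and sensor-based strategies.

    When the similarity exceeds the first threshold, either strategy may be
    executed (prefer_local picks the sensor-based second strategy);
    otherwise the cloud-derived first strategy is used, since it rests on
    the richer automatic driving strategy layer.
    """
    if similarity > first_threshold:
        return second_strategy if prefer_local else first_strategy
    return first_strategy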
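  • The vehicle-side arbitration between the two strategies might be sketched as follows (function and parameter names are illustrative):

```python
def choose_strategy(first_strategy, second_strategy, similarity: float,
                    first_threshold: float = 0.8, prefer_local: bool = False):
    """Vehicle-side arbitration between cloud and sensor-based strategies.

    When the similarity exceeds the first threshold, either strategy may be
    executed (prefer_local picks the sensor-based second strategy);
    otherwise the cloud-derived first strategy is used, since it rests on
    the richer automatic driving strategy layer.
    """
    if similarity > first_threshold:
        return second_strategy if prefer_local else first_strategy
    return first_strategy
```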
  • Because the automatic driving strategy layer contains rich information and can overcome a sensor's perception defects, the safety of the first driving strategy determined by the cloud server based on the automatic driving strategy layer is higher than that of the second driving strategy determined by the target vehicle based on real-time sensor data. When the target vehicle determines that the similarity between the first driving strategy and the second driving strategy is greater than the set threshold, it drives automatically according to either the first driving strategy or the second driving strategy; when the similarity is less than the set threshold, it drives automatically according to the first driving strategy.
  • an embodiment of the present application provides a cloud server, and the cloud server may include:
  • a receiving unit, configured to receive vehicle attribute information reported by a target vehicle and travel information of the target vehicle, where the vehicle attribute information of the target vehicle is used to generate an automatic driving strategy; a first acquiring unit, configured to acquire, from an automatic driving strategy layer according to the travel information, the layer information of a first road section on which the target vehicle is traveling; a second acquiring unit, configured to acquire a first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle; and a first sending unit, configured to send the first driving strategy to the target vehicle.
  • With this cloud server, the layer information of the first road section can be obtained from the automatic driving strategy layer based on the travel information reported by the target vehicle, and a driving strategy that meets safety requirements can then be obtained according to that layer information and the vehicle attribute information, so that the target vehicle can drive automatically according to the driving strategy determined by the cloud server.
  • Because the information contained in the automatic driving strategy layer is richer than sensor data, it can overcome a sensor's perception defects (for example, the limited data a sensor can acquire, its limited detection range, and the ease with which its detection is affected by the environment), which improves the accuracy of determining driving strategies that meet safety requirements and reduces the risk of automatic driving.
  • In one implementation, the cloud server stores a correspondence between driving strategies and safe-passage probabilities, and the second acquiring unit includes a safe-passage-probability acquiring unit and a driving-strategy acquiring unit. The safe-passage-probability acquiring unit is configured to obtain the first safe-passage probability according to the layer information of the first road section and the vehicle attribute information of the target vehicle; the driving-strategy acquiring unit is configured to obtain, according to the correspondence between driving strategies and safe-passage probabilities, the first driving strategy corresponding to the first safe-passage probability.
  • In one implementation, the safe-passage-probability acquiring unit is specifically configured to calculate the first safe-passage probability through a first model, where the first model includes at least one information item and a weight parameter corresponding to each information item, the information items are extracted from the layer information of the first road section and the vehicle attribute information of the target vehicle, and each weight parameter indicates the importance of its information item when determining the first safe-passage probability.
  • In one implementation, the first model is a model trained on at least one piece of sample data, where the sample data includes information items extracted from the layer information of a second road section and the vehicle attribute information of the target vehicle.
  • the second road section is a road section adjacent to the first road section, and the exit of the second road section is the entrance of the first road section.
  • In one implementation, the layer information of the first road section includes at least one of static layer information of the first road section and dynamic layer information of the first road section, where the static layer information indicates the infrastructure information of the first road section and the dynamic layer information indicates the dynamic traffic information of the first road section.
  • the static layer information of the first road section includes at least one of lane attributes, digital device information, and green belt information;
  • the dynamic layer information of the first road section includes at least one of: weather information, road surface information, the congestion situation of the first road section in a first time period, the probability of pedestrians and non-motorized vehicles passing through the first road section in the first time period, and the accident probability of a driving accident occurring on the first road section in the first time period.
  • the first road section is the road section where the target vehicle is driving;
  • the cloud server further includes: a third acquiring unit, configured to acquire the second driving strategy of the target vehicle driving on the first road section, where the second driving strategy is determined by the target vehicle based on sensor data obtained in real time; and a second sending unit, configured to send, to the target vehicle, prompt information for switching the second driving strategy to the first driving strategy in a case where the similarity between the first driving strategy and the second driving strategy is less than the first threshold.
  • an embodiment of the present application provides an automatic driving device, which is applied to a vehicle-mounted terminal on a target vehicle.
  • the device may include: a receiving unit for receiving a first driving strategy sent by a cloud server;
  • the first driving strategy is the first driving strategy obtained by the method described in the first aspect;
  • the control unit is configured to automatically drive the target vehicle according to the first driving strategy.
  • the device further includes: a second driving strategy acquisition unit, configured to acquire, through sensor data, the second driving strategy of the target vehicle driving on the first road section; the control unit is specifically configured to: automatically drive the target vehicle according to the first driving strategy and the second driving strategy.
  • the control unit is specifically configured to: in a case where it is determined that the similarity between the first driving strategy and the second driving strategy is greater than a first threshold, automatically drive the target vehicle according to the first driving strategy or according to the second driving strategy; and in a case where it is determined that the similarity between the first driving strategy and the second driving strategy is less than the first threshold, automatically drive the target vehicle according to the first driving strategy.
  • an embodiment of the present application provides a cloud server, which may include a memory and a processor; the memory is used to store a computer program that supports the cloud server in executing the above method, and the computer program includes program instructions.
  • the processor is configured to call the program instructions to execute the method of the first aspect described above.
  • the embodiments of the present application provide a vehicle-mounted terminal, which may include a memory and a processor; the memory is used to store a computer program that supports the vehicle-mounted terminal in executing the foregoing method, the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method of the second aspect described above.
  • the present application provides a chip system that can execute any method involved in the above-mentioned first aspect, so that related functions can be realized.
  • the chip system further includes a memory, and the memory is used to store necessary program instructions and data.
  • the chip system can be composed of chips, or include chips and other discrete devices.
  • the present application provides a chip system, which can execute any method involved in the above second aspect, so that related functions can be realized.
  • the chip system further includes a memory, and the memory is used to store necessary program instructions and data.
  • the chip system can be composed of chips, or include chips and other discrete devices.
  • an embodiment of the present application also provides a computer-readable storage medium, the computer storage medium stores a computer program, and the computer program includes program instructions that, when executed by a processor, cause the processor to execute the method of the first aspect described above.
  • an embodiment of the present application also provides a computer-readable storage medium that stores a computer program, and the computer program includes program instructions that, when executed by a processor, cause the processor to execute the method of the second aspect described above.
  • an embodiment of the present application also provides a computer program, the computer program includes computer software instructions, and when the computer software instructions are executed by a computer, the computer executes the automatic driving method of any one of the designs described in the first aspect.
  • the embodiments of the present application also provide a computer program, the computer program includes computer software instructions, and when the computer software instructions are executed by a computer, the computer executes the automatic driving method of any one of the designs described in the second aspect.
  • FIG. 1a is a schematic diagram of an automatic driving strategy layer provided by an embodiment of this application.
  • FIG. 1b is a schematic diagram of another automatic driving strategy layer provided by an embodiment of this application.
  • FIG. 1c is a schematic diagram of another automatic driving strategy layer provided by an embodiment of this application.
  • FIG. 1d is a schematic diagram of another automatic driving strategy layer provided by an embodiment of this application.
  • FIG. 1e is a schematic diagram of a network architecture of an automatic driving system provided by an embodiment of this application.
  • FIG. 1f is a functional block diagram of an automatic driving device 110 provided by an embodiment of this application.
  • FIG. 1g is a schematic structural diagram of an automatic driving system provided by an embodiment of this application.
  • FIG. 2 is a schematic flowchart of an automatic driving method provided by an embodiment of this application.
  • FIG. 3a is a schematic flowchart of another automatic driving method provided by an embodiment of this application.
  • FIG. 3b is a first schematic flowchart of a method for detecting and identifying a target provided by an embodiment of this application;
  • FIG. 3c is a second schematic flowchart of a method for detecting and identifying a target provided by an embodiment of the application;
  • FIG. 3d is a schematic structural diagram of a convolutional neural network model provided by an embodiment of this application.
  • FIG. 5 is a schematic structural diagram of a cloud server provided by an embodiment of this application.
  • FIG. 6 is a schematic structural diagram of an automatic driving device provided by an embodiment of the application.
  • FIG. 7 is a schematic structural diagram of another cloud server provided by an embodiment of the application.
  • any embodiment or design method described as “exemplary” or “for example” in the embodiments of the present application should not be construed as being superior or advantageous over other embodiments or design solutions.
  • words such as “exemplary” or “for example” are used to present related concepts in a specific manner.
  • “A and/or B” means A and B, or A, or B.
  • “A, and/or B, and/or C” means any one of A, B, and C, or any two of A, B, and C, or A, B, and C.
  • an autonomous vehicle is also called an unmanned vehicle, a computer-driven vehicle, or a wheeled mobile robot, which is an intelligent vehicle that realizes unmanned driving through a computer system.
  • autonomous vehicles rely on artificial intelligence, visual computing, radar, monitoring devices, and global positioning systems to cooperate to allow computer equipment to automatically and safely operate motor vehicles without any human active operation.
  • the automatic driving strategy layer is a subset of the automatic driving map and can be used to instruct the vehicle to perform automatic driving.
  • the automatic driving strategy layer may include static layer information or dynamic layer information.
  • Each layer can be considered as a specific map.
  • for example, the static layer information can be: the connection relationship between roads, the position of lane lines, the number of lane lines, other objects around the road, and so on; for another example, the static layer information can be: traffic sign information (for example, the location and height of traffic lights, and the content of signs such as speed limit, continuous detour, and slow driving signs), trees around the road, building information, and so on.
  • the dynamic layer information may be dynamic traffic information, and the information may be associated with a time point (or time period), or may not be related to a time point (or time period).
  • the format of the dynamic layer information may be: timestamp (or time period) + road section + information, for example, the weather information of road section 1 at a certain time or within a certain period of time, or the road surface information of road section 1 at a certain time or within a certain period of time (for example, road interruption, road maintenance, road spillage, or road water accumulation), and so on.
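The timestamp (or time period) + road section + information format could, purely as an illustration, be carried in a record such as the following (the field names are assumptions of this sketch, not part of the embodiments):

```python
from dataclasses import dataclass

@dataclass
class DynamicLayerInfo:
    """One dynamic layer entry: timestamp (or period) + road section + information."""
    timestamp: str     # a time point or a start/end time period
    road_section: str  # e.g. "section 1"
    info_type: str     # e.g. "weather" or "road_surface"
    info_value: str    # e.g. "road water accumulation"

entry = DynamicLayerInfo("2020-09-10T08:00/09:00", "section 1",
                         "road_surface", "road water accumulation")
```

Each such record associates one piece of dynamic traffic information with a road section and a time reference, as the format above describes.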
  • the layer information contained in the automatic driving strategy layer may be two-dimensional information or three-dimensional information.
  • two-dimensional information is also called vector information.
  • the so-called vector is a quantity that has both magnitude and direction.
  • the two-dimensional information may be coordinate information of the obstacle in the road.
  • three-dimensional information refers to some abstract information based on two-dimensional information, and the abstract information is used to reflect the characteristics of an object.
  • the three-dimensional information may be the coordinate information of the obstacle on the road and the size of the obstacle.
  • the use object of the automatic driving strategy layer is often a vehicle with automatic driving capability.
  • FIGS. 1a to 1d are schematic diagrams of an automatic driving strategy layer provided by an embodiment of the present application.
  • the automatic driving strategy layer contains road static layer information such as lane line information, number of lanes, road boundary information, and road driving parameters.
  • the automatic driving strategy layer contains static road layer information such as lane line information, number of lanes, road boundary information, and green belt information, as well as dynamic layer information such as trees fallen across the road.
  • Take road section 3 as an example: as shown in Figure 1c, the automatic driving strategy layer contains static road layer information such as lane line information, number of lanes, road boundary information, and green belt information, as well as dynamic layer information such as weather information (for example, at time T1, light snow turning to heavy snow). Take road section 4 as an example: as shown in Figure 1d, the automatic driving strategy layer contains static road layer information such as lane line information, number of lanes, road boundary information, green belt information, and digital device information, and also contains dynamic layer information such as weather information (at time T1, sunny turning to cloudy), a historical pedestrian and non-motor-vehicle crossing probability of 60%, and moderate congestion.
  • the autonomous driving strategy layer can be regarded as an extension of traditional hardware sensors (for example, radar, laser rangefinder, or camera): it contains more data and is not affected by the environment, obstacles, or interference.
  • the static layer information of the road ensures that the automatic driving can efficiently determine the driving path and avoid obstacles
  • the dynamic layer information of the road ensures that the automatic driving can respond to emergencies in a timely manner.
  • This implementation method can effectively overcome the perception defects of traditional hardware sensors. For example, the perception defects of the sensor are reflected in the limited data acquired by the sensor, the limited detection range of the sensor, and the detection of the sensor is easily affected by the environment.
  • the static layer information of the road can be regarded as a priori information.
  • the prior information refers to some information that can be collected in advance and will not change in a short time. In other words, these things exist objectively and will not change with changes in external things, so they can be collected in advance and passed as a priori information to autonomous vehicles to make decisions.
  • the automatic driving strategy may include the automatic driving level, may include instructions instructing the vehicle to accelerate, decelerate, advance, stop, and start, may include indications of the vehicle's speed, acceleration, movement direction, position, and so on, and may also include the specific driving strategy to be used (for example, which automatic driving level to use).
  • the automatic driving method provided in the embodiments of the present application is applied to other devices (for example, a cloud server) with the function of controlling automatic driving, or applied to a vehicle with the function of automatic driving, which will be specifically introduced below:
  • the cloud server is used to implement the automatic driving method provided in the embodiments of the present application: through the automatic driving map stored in the cloud server, or through the automatic driving strategy layer transferred to the cloud server from other devices, it obtains, according to the itinerary information of the target vehicle, the first driving strategy corresponding to the first road section (the first road section is any road section in the itinerary information), and sends the first driving strategy to the target vehicle.
  • the first driving strategy is used to instruct the target vehicle to carry out autonomous driving according to the first driving strategy.
  • the cloud server can also obtain the second driving strategy of the target vehicle driving on the first road segment, where the second driving strategy is determined by the target vehicle based on real-time sensor data.
  • when the cloud server determines that the similarity between the first driving strategy and the second driving strategy satisfies the preset condition, the cloud server sends to the target vehicle a prompt message for switching the second driving strategy to the first driving strategy.
  • the similarity of the first driving strategy and the second driving strategy meeting the preset condition may include the similarity of the first driving strategy and the second driving strategy satisfying a set first threshold, or may include the similarity of the first driving strategy and the second driving strategy satisfying a functional relationship, which is not specifically limited in the embodiments of the present application. In this implementation, the risk of automatic driving of the target vehicle can be reduced.
  • the target vehicle may receive the autonomous driving map sent by the cloud server, obtain from the autonomous driving map the first driving strategy corresponding to the first road section on which the target vehicle is traveling, and then automatically drive according to the first driving strategy.
  • the target vehicle can also obtain the second driving strategy through real-time sensor data.
  • in a case where it is judged that the similarity between the first driving strategy and the second driving strategy is greater than the set first threshold, the target vehicle can be automatically driven according to the first driving strategy, or the target vehicle can be automatically driven according to the second driving strategy; in a case where it is judged that the similarity between the first driving strategy and the second driving strategy is less than the set first threshold, the target vehicle is automatically driven according to the first driving strategy. In this implementation, the risk of automatic driving of the target vehicle can be reduced.
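The threshold rule described above can be sketched as follows; the similarity metric and the threshold value of 0.8 are assumptions for illustration, since the embodiments do not fix a particular measure:

```python
def choose_strategy(first_strategy: str, second_strategy: str,
                    similarity: float, first_threshold: float = 0.8) -> str:
    """Pick the strategy to drive by, following the threshold rule above.

    similarity -- a value in [0, 1] comparing the cloud-issued first driving
                  strategy with the second driving strategy the vehicle
                  derived from real-time sensor data
    """
    if similarity > first_threshold:
        # The strategies broadly agree: the vehicle may keep its own strategy
        # (driving by the first strategy would be equally acceptable here).
        return second_strategy
    # The strategies diverge: switch to the cloud-issued first strategy.
    return first_strategy

chosen = choose_strategy("decelerate and change lanes", "keep lane", similarity=0.3)
```

When the two strategies diverge, deferring to the cloud-issued strategy reflects the text's rationale that the strategy layer sees more than the vehicle's own sensors, which is how this rule reduces the risk of automatic driving.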
  • Fig. 1e is a schematic diagram of a network architecture of an automatic driving system provided by an embodiment of the present application.
  • the autopilot system architecture includes a vehicle 10 (that is, vehicle 1), a vehicle 2, and a vehicle M (M is an integer greater than 0), as well as a cloud server 20.
  • the number of vehicles shown in the figure is an exemplary description and should not constitute a limitation.
  • the cloud server 20 may establish a communication connection with multiple vehicles 10 through a wired network or a wireless network.
  • the vehicle 10 includes an automatic driving device 110.
  • the cloud server 20 can control the vehicle 1, the vehicle 2, and the vehicle M through the multi-dimensional data contained in the automatic driving strategy layer by running its stored programs for controlling the automatic driving of automobiles (for example, instructing the vehicle how to drive through a driving strategy).
  • Programs related to controlling auto-driving cars can be programs that manage the interaction between autonomous vehicles and road obstacles, programs that control the route or speed of autonomous vehicles, and programs that control interaction between autonomous vehicles and other autonomous vehicles on the road.
  • the cloud server obtains, from the automatic driving strategy layer, the layer information of the road section (for example, the first road section) on which the vehicle is driving, and then, according to the layer information of the first road section, sends the vehicle a driving strategy suggested by the driving situation on the first road section.
  • For example, the cloud server may determine the presence of an obstacle ahead through the dynamic layer information of the first road section (for example, the dynamic layer information contains spilled objects) and tell the autonomous vehicle how to drive around it; for another example, it may determine the road surface water condition through the dynamic layer information of the first road section (for example, the dynamic layer information contains road surface information) and inform the autonomous vehicle how to drive on the waterlogged road.
  • the cloud server sends the autonomous vehicle a response indicating how the vehicle should travel in a given scene. For example, based on the layer information of the first road section, the cloud server can confirm the existence of a temporary stop sign ahead and inform the autonomous vehicle how to bypass it. Correspondingly, the cloud server sends a suggested operation mode (for example, instructing the vehicle to change lanes onto another road) for the autonomous vehicle to pass through a closed road section (or an obstacle). In practical applications, the cloud server can also send the suggested operation mode to other vehicles in the area that may encounter the same obstacle, so as to help those vehicles not only recognize the closed lane but also know how to pass it.
  • FIG. 1f is a functional block diagram of the automatic driving device 100 provided by an embodiment of the present application.
  • the automatic driving device 100 may be configured in a fully automatic driving mode, a partially automatic driving mode, or a manual driving mode.
  • the fully automatic driving mode can be L5, which means that all driving operations are completed by the vehicle, and the human driver does not need to maintain attention
  • the partially automatic driving modes can be L1, L2, L3, or L4, where L1 indicates that the vehicle handles one of the steering and acceleration/deceleration operations while the human driver is responsible for the remaining driving operations; L2 indicates that the vehicle handles several of the steering and acceleration/deceleration operations while the human driver is responsible for the rest; L3 means that most driving operations are completed by the vehicle, and the human driver needs to stay attentive for emergencies; L4 means that all driving operations are completed by the vehicle and the human driver does not need to maintain attention, but the road and environmental conditions are restricted; in the manual driving mode, the driving operations are performed by the human driver.
  • the automatic driving device 100 can control itself while in the automatic driving mode, and can, through human operation, determine the current state of the vehicle and its surrounding environment, determine the possible behavior of at least one other vehicle in the surrounding environment, determine the confidence level corresponding to the possibility that the other vehicle performs the possible behavior, and control the automatic driving device based on the determined information.
  • the automatic driving device 110 may be set to operate without human interaction.
  • the automatic driving device 110 may include various subsystems, for example, a traveling system 102, a sensor system 104, a control system 106, one or more peripheral devices 108 and a power supply 110, a computer system 112, and a user interface 116.
  • the autonomous driving device 110 may include more or fewer subsystems, and each subsystem may include multiple elements.
  • each subsystem and element of the automatic driving device 110 may be interconnected by wire or wirelessly.
  • the traveling system 102 may include components that provide power movement for the automatic driving device 110.
  • the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
  • the engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, for example, a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine. In practical applications, the engine 118 converts the energy source 119 into mechanical energy.
  • the energy source 119 may include, but is not limited to: gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, or other power sources.
  • the energy source 119 may also provide energy for other systems of the automatic driving device 110.
  • the transmission device 120 can transmit the mechanical power from the engine 118 to the wheels 121.
  • the transmission device 120 may include a gearbox, a differential, and a drive shaft.
  • the transmission device 120 may also include other devices, such as a clutch.
  • the drive shaft includes one or more shafts that can be coupled to one or more wheels 121.
  • the sensor system 104 may include several sensors that sense information about the environment around the automatic driving device 110.
  • the sensor system 104 may include a positioning system 122 (here, the positioning system may be a GPS system, a Beidou system, or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130.
  • the sensor system 104 may also include sensors that monitor the internal systems of the automatic driving device 110, for example, an in-vehicle air quality monitor, a fuel gauge, and an oil temperature gauge.
  • sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (e.g., position, shape, direction, speed, etc.); such detection and identification is a key function for the safe operation of the automatic driving device 110.
  • the positioning system 122 can be used to estimate the geographic location of the automatic driving device 110; the IMU 124 is used to sense the position and orientation changes of the automatic driving device 110 based on inertial acceleration.
  • the IMU 124 may be a combination of an accelerometer and a gyroscope.
  • the radar 126 may use radio signals to sense objects in the surrounding environment of the automatic driving device 110. In some implementations, in addition to sensing the object, the radar 126 may also be used to sense the speed and/or direction of the object.
  • the laser rangefinder 128 can use laser light to sense objects in the environment where the automatic driving device 110 is located.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more monitors, as well as other system components.
  • the camera 130 may be used to capture multiple images of the surrounding environment of the automatic driving device 110.
  • the camera 130 may be a still camera or a video camera, which is not specifically limited in the embodiment of the present application.
  • control system 106 can control the operation of the automatic driving device 110 and components.
  • the control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system.
  • the steering system 132 is operable to adjust the forward direction of the automatic driving device 110.
  • it may be a steering wheel system in one embodiment.
  • the throttle 134 is used to control the operating speed of the engine 118 and further control the speed of the automatic driving device 110.
  • the braking unit 136 is used to control the speed of the automatic driving device 110.
  • the braking unit 136 may use friction to slow down the wheels 121.
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current.
  • the braking unit 136 may also take other forms to slow down the rotation speed of the wheels 121 to control the speed of the automatic driving device 110.
  • the computer vision system 140 may be operated to process and analyze the images captured by the camera 130 to identify objects and/or features in the surrounding environment of the automatic driving device 110.
  • the objects and/or features mentioned here may include, but are not limited to: traffic signals, road boundaries, and obstacles.
  • the computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, visual tracking, and other computer vision technologies.
  • the computer vision system 140 can be used to map the environment, track objects, estimate the speed of objects, and so on.
  • the route control system 142 is used to determine the driving route of the automatic driving device 110.
  • the route control system 142 may combine data from sensors, the positioning system 122, and one or more predetermined maps to determine the driving route for the autonomous driving device 110.
  • the obstacle avoidance system 144 is used to identify, evaluate and avoid or otherwise surpass potential obstacles in the environment of the automatic driving device 110.
  • the control system 106 may additionally or alternatively include components other than those shown and described in FIG. 1f, or some of the components shown above may be omitted.
  • the automatic driving device 110 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral device 108.
  • the peripheral device 108 may include a wireless communication system 146, an onboard computer 148, a microphone 150, and/or a speaker 152.
  • the peripheral device 108 provides a means for the user of the autonomous driving device 110 to interact with the user interface 116.
  • the onboard computer 148 may provide information to the user of the automatic driving device 110.
  • the user interface 116 can also operate the onboard computer 148 to receive user input.
  • the on-board computer 148 can be operated through a touch screen.
  • the peripheral device 108 may provide a means for the autonomous driving device 110 to communicate with other devices in the vehicle.
  • the microphone 150 may receive audio from the user of the autonomous driving device 110, such as a voice command or other audio input.
  • the speaker 152 may output audio to the user of the automatic driving device 110.
  • the wireless communication system 146 may wirelessly communicate with one or more devices directly or via a communication network.
  • the wireless communication system 146 may use 3G cellular communication, such as CDMA, EVDO, or GSM/GPRS; 4G cellular communication, such as LTE; or 5G cellular communication.
  • the wireless communication system 146 may communicate with a wireless local area network (Wireless local area network, WLAN) by using WIFI.
  • the wireless communication system 146 can directly communicate with the device using an infrared link, Bluetooth, or ZigBee.
  • other wireless protocols may also be used, for example, various vehicle communication systems; for instance, the wireless communication system 146 may include one or more dedicated short-range communications (DSRC) devices.
  • the power supply 110 may provide power to various components of the automatic driving device 110.
  • the power source 110 may be a rechargeable lithium ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the automatic driving device 110.
  • the power source 110 and the energy source 119 may be implemented together, for example, the two are configured together as in some all-electric vehicles.
  • the computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable storage medium such as the data storage device 114.
  • the computer system 112 may also be a plurality of computing devices that control individual components or subsystems of the automatic driving apparatus 110 in a distributed manner.
  • the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU), or other general-purpose processors or digital signal processors (Digital Signal Processors, DSP), Application Specific Integrated Circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • Although FIG. 1f functionally shows the processor, memory, and other elements in the same physical housing, those of ordinary skill in the art should understand that the processor, computer system, or memory may actually comprise multiple processors, computer systems, or memories that are not stored in the same physical housing.
  • For example, the memory may be a hard disk drive, or another storage medium located in a different physical enclosure. Therefore, a reference to a processor or computer system will be understood to include a reference to a collection of processors, computer systems, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only calculations related to the function of that specific component.
  • the processor 113 may be located away from the vehicle and communicate with the vehicle wirelessly. In other aspects, some of the processes described herein are executed on a processor arranged in the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
  • the data storage device 114 may include instructions 115 (for example, program logic), and the instructions 115 may be executed by the processor 113 to perform various functions of the automatic driving device 110, including those described above.
  • the data storage device 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the traveling system 102, the sensor system 104, the control system 106, and the peripheral device 108.
  • the data storage device 114 may also store data, such as road maps, route messages, the location, direction, and speed of the vehicle, and other vehicle data, as well as other information.
  • the above-mentioned information may be used by the automatic driving device 110 and the computer system 112 during the operation of the automatic driving device 110 in the autonomous, semi-autonomous, and/or manual mode.
  • the data storage device 114 obtains the environmental information of the vehicle from the sensor 104 or other components of the automatic driving device 110.
  • the environmental information can be, for example, lane line information, number of lanes, road boundary information, road driving parameters, traffic signals, green belt information, whether there are pedestrians, vehicles, etc. in the environment where the vehicle is currently located.
  • the data storage device 114 may also store state information of the vehicle itself and state information of other vehicles that interact with the vehicle.
  • the status information may include, but is not limited to: the speed, acceleration, and heading angle of the vehicle. For example, based on the speed measurement and distance measurement functions of the radar 126, the vehicle obtains the distance between itself and other vehicles, the speed of other vehicles, and so on.
  • the processor 113 may obtain the aforementioned vehicle data from the data storage device 114, and determine a driving strategy that meets safety requirements based on the environmental information in which the vehicle is located.
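The determination of a driving strategy meeting safety requirements from environmental information can be illustrated with a minimal sketch. The function and field names below are assumptions for illustration only and do not appear in the patent:

```python
# Hypothetical sketch: select a driving strategy level (higher = more
# automated) from environment information read from the data storage device.
def select_driving_strategy(env):
    """Return a strategy level for the given environment information dict."""
    level = 5  # start from full automation
    if env.get("pedestrians_present"):
        level = min(level, 3)   # reduce automation near pedestrians
    if env.get("weather") in ("heavy_rain", "snow"):
        level = min(level, 2)   # poor weather degrades sensor reliability
    if env.get("road_maintenance"):
        level = min(level, 1)   # drive carefully or hand over to the driver
    return level
```

In use, a clear road with no pedestrians keeps the highest level, while detected pedestrians or bad weather lower it; the thresholds and levels here are purely illustrative.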
  • the user interface 116 is used to provide information to or receive information from the user of the automatic driving device 110.
  • the user interface 116 may include one or more input/output devices within the set of peripheral devices 108, such as one or more of the wireless communication system 146, the onboard computer 148, the microphone 150, and the speaker 152.
  • the computer system 112 may control the functions of the automatic driving device 110 based on inputs received from various subsystems (for example, the traveling system 102, the sensor system 104, and the control system) and the user interface 116.
  • the computer system 112 may use input from the control system 106 to control the steering system 132 so as to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144.
  • the computer system 112 is operable to provide control of many aspects of the autonomous driving device 110 and its subsystems.
  • one or more of the aforementioned components may be installed or associated with the autonomous driving device 110 separately.
  • the data storage device 114 may exist partially or completely separately from the automatic driving device 110.
  • the above-mentioned components may be communicatively coupled together in a wired and/or wireless manner.
  • the above-mentioned components are just an example. In actual applications, the components in the above-mentioned modules may be added or deleted according to actual needs, and FIG. 1f should not be understood as a limitation to the embodiments of the present application.
  • An automatic driving vehicle traveling on a road may recognize objects in its surrounding environment to determine whether to adjust the current speed of the automatic driving device 110.
  • the object may be other vehicles, traffic control equipment, or other types of objects.
  • each recognized object can be considered independently, and the speed to be adjusted by the autonomous vehicle can be determined based on the respective characteristics of the object, for example, its current speed, acceleration, and distance from the autonomous vehicle.
  • the automatic driving device 110, or the computer equipment associated with the automatic driving device 110, may predict the behavior of the identified object based on the characteristics of the identified object and the state of the surrounding environment (for example, traffic, rain, ice on the road, etc.).
  • the behaviors of the recognized objects may depend on each other; therefore, all recognized objects can also be considered together to predict the behavior of a single recognized object.
  • the automatic driving device 110 can adjust its speed based on the predicted behavior of the recognized object.
  • the automatic driving device 110 can determine what stable state the vehicle needs to adjust to based on the predicted behavior of the object (for example, the adjustment operation may include accelerating, decelerating, or stopping). In this process, other factors may also be considered to determine the speed of the automatic driving device 110, such as the lateral position of the automatic driving device 110 on the traveling road, the curvature of the road, and the proximity of static and dynamic objects.
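The speed-adjustment decision described above can be sketched as a simple rule over the predicted gap to the object ahead. The function name, gap thresholds, and return labels are illustrative assumptions, not part of the patent:

```python
# Hypothetical sketch: choose an adjustment operation (accelerate,
# decelerate, stop, or keep speed) from the predicted gap to the
# recognized object ahead.
def adjust_speed(current_speed_kmh, predicted_gap_m, min_safe_gap_m=30.0):
    """Return the adjustment operation for the given predicted gap."""
    if predicted_gap_m <= 5.0:
        return "stop"            # object is predicted to be too close
    if predicted_gap_m < min_safe_gap_m:
        return "decelerate"      # widen the gap back to a safe distance
    # gap is safe: accelerate only if below an illustrative cruise speed
    return "accelerate" if current_speed_kmh < 60 else "keep"
```

A real implementation would also fold in the lateral position, road curvature, and proximity of static and dynamic objects mentioned above.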
  • the computer device may also provide instructions to modify the steering angle of the vehicle 100 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near the autonomous vehicle (for example, cars in adjacent lanes on the road).
  • the above-mentioned automatic driving device 110 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a tram, a golf cart, a train, a trolley, etc., which is not particularly limited in the embodiments of the present application.
  • the automatic driving device 110 may further include a hardware structure and/or a software module, and the above functions are realized in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a certain one of the above-mentioned functions is executed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application of the technical solution and the related constraint conditions.
  • FIG. 1f illustrates a functional block diagram of the automatic driving device 110, and the automatic driving system 101 in the automatic driving device 110 is introduced below.
  • Fig. 1g is a schematic structural diagram of an automatic driving system provided by an embodiment of the present application.
  • Fig. 1f and Fig. 1g describe the automatic driving device 110 from different perspectives.
  • the computer system 101 in Fig. 1g is the computer system 112 in Fig. 1f.
  • the computer system 101 includes a processor 103, and the processor 103 is coupled to a system bus 105.
  • the processor 103 may be one or more processors, where each processor may include one or more processor cores.
  • a display adapter (video adapter) 107, which can drive a display 109, the display 109 being coupled to the system bus 105.
  • the system bus 105 is coupled with an input/output (I/O) bus 113 through a bus bridge 111.
  • the I/O interface 115 is coupled to the I/O bus.
  • the I/O interface 115 communicates with various I/O devices, such as an input device 117 (such as a keyboard, a mouse, a touch screen, etc.), and a media tray 121 (such as a CD-ROM, a multimedia interface, etc.).
  • Transceiver 123 can send and/or receive radio communication signals
  • the camera 155 can capture static scenes and dynamic digital video images
  • an external USB interface 125.
  • the interface connected to the I/O interface 115 may be a USB interface.
  • the processor 103 may be any conventional processor, including a reduced instruction set computing ("RISC") processor, a complex instruction set computing ("CISC") processor, or a combination of the foregoing.
  • the processor may be a dedicated device such as an application specific integrated circuit (“ASIC").
  • the processor 103 may be a neural network processor or a combination of a neural network processor and the foregoing traditional processors.
  • the computer system 101 may be located far away from the autonomous vehicle and may communicate with the autonomous vehicle 100 wirelessly.
  • some of the processes described herein are executed on a processor provided in an autonomous vehicle, and others are executed by a remote processor, including taking actions required to perform a single manipulation.
  • the computer 101 can communicate with the software deployment server 149 through the network interface 129.
  • the network interface 129 is a hardware network interface, such as a network card.
  • the network 127 may be an external network, such as the Internet, or an internal network, such as an Ethernet or a virtual private network (VPN).
  • the network 127 may also be a wireless network, such as a WiFi network, a cellular network, and so on.
  • the hard disk drive interface is coupled to the system bus 105.
  • the hard disk drive interface is connected with the hard disk drive.
  • the system memory 135 is coupled to the system bus 105.
  • the software running in the system memory 135 may include the operating system 137 and application programs 143 of the computer 101.
  • the operating system includes a shell (Shell) 139 and a kernel (kernel) 141.
  • Shell 139 is an interface between the user and the kernel of the operating system.
  • the shell is the outermost layer of the operating system. The shell manages the interaction between the user and the operating system: it waits for the user's input, interprets the user's input to the operating system, and processes the output of the operating system.
  • the kernel 141 is composed of those parts of the operating system that manage memory, files, peripherals, and system resources, and it interacts directly with the hardware.
  • the operating system kernel usually runs processes and provides inter-process communication, providing CPU time slice management, interrupts, memory management, IO management, and so on.
  • the application programs 141 include programs related to controlling automatic driving cars, such as programs that manage the interaction between autonomous vehicles and obstacles on the road, programs that control the route or speed of autonomous vehicles, and programs that control the interaction between autonomous vehicles and other autonomous vehicles on the road.
  • the application program 141 also exists on the system of the deploying server 149. In one embodiment, when the application program 141 needs to be executed, the computer system 101 may download the application program 141 from the deploying server 149.
  • the application program 141 may be an application program that controls the vehicle to calculate a driving strategy based on sensor data obtained in real time.
  • the sensor data acquired in real time may include environmental information, the state information of the target vehicle itself, and the state information of the target vehicle's potential interaction target objects.
  • the environmental information is information about the current environment of the target vehicle (for example, the distribution of green belts, lanes, traffic lights, etc.), and the status information may include, but is not limited to, the speed, acceleration, and heading angle of the vehicle.
  • for example, based on the speed measurement and distance measurement functions of the radar, the vehicle obtains the distance between itself and other vehicles, the speed of other vehicles, and so on.
  • the processor 103 of the computer system 101 can call the application 141 to obtain the second driving strategy.
  • the static layer information of the road ensures that automatic driving can efficiently determine the driving path and avoid obstacles
  • the dynamic layer information of the road ensures that automatic driving can respond to emergencies in a timely manner.
  • this implementation can overcome the perception defects of the sensors. It can be seen that the accuracy of the first driving strategy is often higher than that of the second driving strategy.
  • the application program 141 may also be an application program that controls the vehicle to determine the final driving strategy according to the first driving strategy sent by the cloud server and the second driving strategy (here, the second driving strategy is determined by the vehicle based on sensor data obtained in real time).
  • in the case that the similarity between the first driving strategy and the second driving strategy is greater than or equal to a set first threshold, the application program 141 determines that the first driving strategy or the second driving strategy is the final driving strategy; in the case that the similarity between the first driving strategy and the second driving strategy is less than the set first threshold, the application program 141 determines that the first driving strategy is the final driving strategy.
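The arbitration between the cloud-determined first driving strategy and the locally determined second driving strategy can be sketched as follows. The function name, parameter names, and the default threshold are illustrative assumptions:

```python
# Hypothetical sketch: arbitrate between the cloud strategy (first) and
# the on-vehicle strategy (second) using a similarity threshold.
def choose_final_strategy(first, second, similarity, threshold=0.8):
    """Return the final driving strategy."""
    if similarity >= threshold:
        # The two strategies agree closely; either is acceptable,
        # so keep the locally computed one.
        return second
    # They disagree: trust the cloud strategy, whose accuracy is
    # often higher than that of the sensor-based strategy.
    return first
```

The threshold value and the choice of returning the local strategy when the two agree are design decisions not fixed by the text; only the "use the first strategy when similarity is below the threshold" rule is stated.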
  • the sensor 153 is associated with the computer system 101.
  • the sensor 153 is used to detect the environment around the computer 101.
  • the sensor 153 can detect animals, cars, obstacles, and crosswalks. Further, the sensor can also detect the environment surrounding the above-mentioned animals, cars, obstacles, and crosswalks, for example: the environment around an animal, such as other animals that appear around it, weather conditions, the brightness of the surrounding environment, and so on.
  • the sensor may be a camera, an infrared sensor, a chemical detector, a microphone, etc.
  • when the sensor 153 is activated, it senses information at preset intervals and provides the sensed information to the computer system 101 in real time.
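The periodic sensing loop described above can be sketched as follows; the function and parameter names are illustrative assumptions:

```python
import time

# Hypothetical sketch: sample a sensor at a preset interval and deliver
# each reading to the computer system immediately (in "real time").
def sense_periodically(read_sensor, deliver, interval_s, n_samples):
    """Sample `read_sensor` every `interval_s` seconds, pushing each
    reading to the `deliver` callback; return all readings."""
    readings = []
    for _ in range(n_samples):
        value = read_sensor()   # one sensing operation
        deliver(value)          # provide the reading without buffering
        readings.append(value)
        time.sleep(interval_s)  # wait for the preset interval
    return readings
```

In a real system the loop would run indefinitely on its own thread, and `deliver` would forward the reading to the computer system 101 over the transceiver.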
  • the computer system 101 may be located far away from the automatic driving device 110, and may perform wireless communication with the automatic driving device 110.
  • the transceiver 123 can send automatic driving tasks, sensor data collected by the sensor 153, and other data to the computer system 101; and can also receive control instructions sent by the computer system 101.
  • the automatic driving device can execute the control instructions from the computer system 101 received by the transceiver 123, and perform corresponding driving operations.
  • some of the processes described herein are configured to be executed on a processor in an autonomous vehicle, and others are executed by a remote processor, including taking actions required to perform a single operation.
  • Based on the system architecture shown in FIG. 1e, the flow diagram of an automatic driving method provided by an embodiment of the present application, shown in FIG. 2, is described below to specifically explain how the automatic driving of a vehicle is realized in the embodiment of the present application. The method includes, but is not limited to, the following steps:
  • Step S200 The target vehicle reports its own vehicle attribute information and itinerary information to the cloud server.
  • Step S202 The cloud server receives the vehicle attribute information reported by the target vehicle and the travel information of the target vehicle; wherein the vehicle attribute information of the target vehicle is used to generate an automatic driving strategy.
  • the target vehicle is any one of multiple autonomous vehicles in the system architecture shown in FIG. 1e.
  • the itinerary information is used to indicate the itinerary of the target vehicle.
  • the itinerary information may include the current position of the target vehicle.
  • the current position is the coordinate value (X, Y) represented by a two-dimensional array, where X is the longitude value and Y is the latitude value.
  • in an example, the travel information can include the starting position and the destination position.
  • the starting position is the coordinate value (X1, Y1) represented by a two-dimensional array, and the destination position is the coordinate value (X2, Y2) represented by a two-dimensional array, where X1 and X2 are longitude values and Y1 and Y2 are latitude values
  • the itinerary information can also be the planned driving section from the starting position to the destination position, and the planned driving section is a continuous and directional vector line
  • the starting position is A
  • the destination position is B
  • the planned driving section is the continuous and directional vector line A-C-D-E-B.
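The planned driving section A-C-D-E-B can be represented as an ordered list of (longitude, latitude) waypoints. The coordinate values below are made-up illustrative numbers, not from the patent:

```python
# Hypothetical sketch: a planned driving section as an ordered,
# directional list of named (longitude, latitude) waypoints.
route = [
    ("A", (116.30, 39.90)),   # starting position (X1, Y1)
    ("C", (116.32, 39.91)),
    ("D", (116.34, 39.93)),
    ("E", (116.36, 39.95)),
    ("B", (116.38, 39.97)),   # destination position (X2, Y2)
]

def endpoints(route):
    """Return the starting and destination coordinates of the section."""
    return route[0][1], route[-1][1]
```

The ordering of the list captures the direction of the vector line: reversing it would describe the trip from B back to A.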
  • the vehicle-mounted terminal may send the itinerary information obtained by the location sensor to the server.
  • the target vehicle can send the itinerary information determined by the car-mounted navigation (for example, a GPS system, a Beidou system, or another positioning system) to the server.
  • the user inputs the starting position and the target position in the car navigation of the target vehicle, and the car navigation determines the starting position and the target position input by the user as the itinerary information.
  • the user selects a planned driving section from the starting position to the destination position in the car navigation of the target vehicle, and the car navigation determines the planned driving section selected by the user as the itinerary information.
  • the vehicle attribute information is used to indicate an automatic driving strategy.
  • the automatic driving strategy may include an automatic driving strategy that the target vehicle is capable of supporting.
  • the vehicle attribute information of the target vehicle may include at least one of the automatic driving capability of the target vehicle, sensor distribution information of the target vehicle, and the driving state of the driver in the target vehicle.
  • the automatic driving capability of the target vehicle refers to the highest level of automatic driving that the target vehicle can support.
  • the sensor distribution information of the target vehicle may include the type, number, and placement location of the sensors, and so on.
  • the driving state of the driver may include the fatigue degree of the driver in the vehicle, the driving ability of the driver in the vehicle, and so on.
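The vehicle attribute information described above can be gathered into a single reportable structure. The class and field names below are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the vehicle attribute information that the target
# vehicle reports to the cloud server.
@dataclass
class VehicleAttributes:
    max_autonomy_level: int                  # highest automatic driving level supported
    sensor_distribution: dict = field(default_factory=dict)  # sensor type -> count
    driver_fatigue: int = 0                  # 0 good, 1 mild, 2 moderate, 3 severe
    driver_ability: str = "00"               # "00" high, "01" medium, "02" low
```

A real report would also carry placement locations for each sensor; here the distribution is reduced to per-type counts for brevity.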
  • the target vehicle contains one or more vehicle speed sensors, which can be distributed inside the target vehicle to detect the driving speed of the vehicle; the target vehicle contains one or more acceleration sensors, which can be distributed inside the target vehicle to detect the acceleration of the vehicle during driving.
  • the acceleration is the acceleration in a sudden braking state
  • the target vehicle contains one or more video sensors, which can be distributed outside the target vehicle to obtain and monitor image data of the surrounding environment of the vehicle
  • the target vehicle contains one or more radar sensors, which can be distributed outside the entire target vehicle to obtain and monitor the electromagnetic wave data of the surrounding environment of the vehicle: a radar sensor mainly transmits electromagnetic waves and then, by receiving the electromagnetic waves reflected by surrounding objects, detects the distance between the surrounding objects and the vehicle, the shape of the surrounding objects, and other data.
  • a subset of multiple radar sensors is coupled to the front of the vehicle to locate objects in front of the vehicle.
  • One or more other radar sensors may be located at the rear of the vehicle to locate objects behind the vehicle when the vehicle is backing up.
  • Other radar sensors can be located on the side of the vehicle to locate objects, such as other vehicles, that approach the vehicle from the side.
  • a lidar (light detection and ranging, LIDAR) sensor may be installed on a vehicle; for example, the LIDAR sensor may be mounted in a rotating structure on the top of the vehicle. The rotating LIDAR sensor 120 can then transmit light signals around the vehicle in a 360° mode, so as to continuously map all objects around the vehicle as the vehicle moves.
  • the target vehicle contains imaging sensors such as cameras, video cameras, or other similar image acquisition sensors, which can be installed on the vehicle to capture images as the vehicle moves.
  • the imaging sensor can not only capture visible spectrum images, but also infrared spectrum images.
  • the target vehicle contains a Global Positioning System (GPS) sensor (or Beidou system sensor), which can be located on the vehicle to provide the controller with geographic coordinates and coordinate generation time related to the location of the vehicle .
  • GPS includes an antenna for receiving GPS satellite signals and a GPS receiver coupled to the antenna. For example, when an object is observed in an image or by another sensor, GPS can provide the geographic coordinates and time of the discovery of the object.
  • the fatigue degree of the driver in the target vehicle may be one of mild fatigue, moderate fatigue, severe fatigue, and good state.
  • the driver's facial image can be acquired first, and then the acquired facial image of the driver can be recognized to determine the driver's fatigue level. After the classification result of the driver's facial image is recognized, the driving state of different types of drivers can be characterized by means of enumerated values. For example, 0 means good condition, 1 means mild fatigue, 2 means moderate fatigue, and 3 means severe fatigue.
  • the driving ability of the target vehicle driver may include one of high, medium, and low. In practical applications, the driving ability of the driver can be characterized by indication information.
  • 00 indicates that the driving ability of the driver is "high”
  • 01 indicates that the driving ability of the driver is “medium”
  • 02 indicates that the driving ability of the driver is "low". The driver's driving ability can also be characterized by joint indication information. It should be noted that the above are only examples and should not constitute a limitation.
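The enumerated encodings for driver state described above can be sketched as lookup tables; the decoding function name is an illustrative assumption:

```python
# Encodings stated in the text: fatigue degree as an enumerated value,
# driving ability as an indication code.
FATIGUE = {0: "good", 1: "mild fatigue", 2: "moderate fatigue", 3: "severe fatigue"}
ABILITY = {"00": "high", "01": "medium", "02": "low"}

def describe_driver(fatigue_code, ability_code):
    """Decode the reported driver-state codes into readable labels."""
    return FATIGUE[fatigue_code], ABILITY[ability_code]
```

For example, a report of fatigue code 2 and ability code "01" decodes to moderate fatigue with medium driving ability.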
  • Step S204 The cloud server obtains the layer information of the first road segment on which the target vehicle is traveling in the automatic driving strategy layer according to the itinerary information.
  • a road section refers to a continuous and directional vector line from point A to point B.
  • point A is the starting position in the travel information
  • point B is the destination position in the travel information.
  • the cloud server determines at least one road section to travel according to the itinerary information reported by the target vehicle (for example, the road section is the first road section, and the first road section is only an exemplary description and should not constitute a limitation), and then, Obtain the layer information of the first road segment in the automatic driving strategy layer.
  • the road section on which the target vehicle travels may include a road section that is currently being driven on, and may also include a road section that is planned to be driven on, which is not specifically limited in the embodiment of the present application.
  • the layer information of the first road section may include static layer information of the first road section, where the static layer information of the first road section is used to indicate the infrastructure information of the first road section.
  • the layer information of the first link includes static layer information of the first link and dynamic layer information of the first link, wherein the static layer information of the first link is used to indicate the basis of the first link Facility information, the dynamic layer information of the first road section is used to indicate the dynamic traffic information of the first road section.
  • the infrastructure information of the road section refers to the infrastructure planned and constructed by the relevant department (for example, the transportation department or the government department) to meet the normal travel of vehicles.
  • the infrastructure information of the first section can include roads (including road grades, such as expressways, trunk roads, branch roads), bridges (including culverts), tunnels (including tunnel mechanical and electrical facilities, such as monitoring facilities, communication facilities, and lighting At least one of facilities, etc.) and traffic safety engineering facilities (including signs, markings, guardrails, etc.).
  • the indicator R can be used to characterize “roads”
  • the indicator L can be used to characterize “bridges”
  • the indicator U can be used to characterize “tunnels”
  • the indicator J can be used to characterize “traffic safety engineering facilities”.
  • the static layer information of the first road section may include at least one of the following: lane attributes, road digital equipment information, and green belt information.
  • the lane attribute is used to indicate the standardized information of the basic road
  • the digital equipment information of the road is used to indicate the hardware facility information of the road
  • the green belt information is used to indicate the location of the green belt.
  • lane attributes may include at least one of the following: lane line information, number of lanes, road boundary information, road driving parameters, traffic signals, and obstacles.
  • the lane line information is used to indicate the position of the lane line.
  • the lane line information may include lane width, lane line angle and curvature, etc.; the number of lanes is used to indicate how many lanes there are: for example, from the information of the four lane lines (left-left, left, right, and right-right), three lanes can be distinguished, so that it can be determined that road XX includes three lanes;
  • the road boundary information is used to indicate the position of the road boundary line, and the road driving parameters are used to indicate the maximum speed limit allowed on the current road section and the driving speed that can be supported; for example, the speed limit of road section YY is 60 km/h, the driving speed supported by road section XY is 100-120 km/h, and the driving speed level supported by road section XY is level 3;
  • traffic signals are used to indicate the direction of travel of the vehicle (for example, stop at a red light, go at a green light, turn left, turn right, U-turn, etc.), and obstacles are used to indicate the driving boundary of the vehicle.
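The static layer information above can be sketched as a simple record for one road section. The dictionary keys and the sample values are illustrative assumptions; the "n lane lines delimit n - 1 lanes" rule follows the four-lane-lines/three-lanes example in the text:

```python
# Hypothetical sketch: static layer information for one road section.
static_layer = {
    "lane_lines": ["left-left", "left", "right", "right-right"],
    "speed_limit_kmh": 60,                 # maximum allowed speed
    "supported_speed_kmh": (100, 120),     # supported driving speed range
    "green_belt_positions": [],            # locations of green belts, if any
}

def lane_count(layer):
    """n lane lines delimit n - 1 lanes."""
    return len(layer["lane_lines"]) - 1
```

Here the four lane lines yield three lanes, matching the road-XX example.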
  • the digital device information of the road may include at least one of the following: a vehicle-to-outside information exchange system and a roadside unit.
  • the information exchange between the vehicle and the outside is vehicle-to-everything, that is, Vehicle to X (V2X), a key technology of the intelligent transportation system that enables vehicles to communicate with other vehicles, base stations, roadside units, cloud servers, and pedestrians, so that a series of traffic information such as real-time road conditions, road information, and pedestrian information can be obtained.
  • the automatic driving device 110 can obtain richer real-time data, which can be used for analysis of real-time traffic information, selection of the best driving route, and so on.
  • a Road Side Unit (RSU) is a device that is installed on the roadside and uses short-range communication technology (for example, Cellular-V2X technology) to communicate with an on-board unit (OBU).
  • the target vehicle can interact with digital devices on the road (such as roadside units) and, through the network, with cloud servers to update the environmental information of the driving section in time, so as to prevent the target vehicle from driving outside the permitted operating area into a non-driving area, for example, continuing to drive on a road section that has come under traffic control.
  • the dynamic traffic information of the road section refers to the traffic information with changing attributes when the vehicle is traveling.
  • the dynamic layer information of the first road section may be information associated with a time point (or time period), or information that is not associated with a time point (or time period).
  • the dynamic layer information of the first road section may include at least one of the following: weather information, road surface information, the congestion situation of the first road section in a certain time period (for example, the first time period, which may specifically be 8:00-9:00 in the morning), the probability of pedestrians and non-motorized vehicles passing through the first road section in the first time period, and the accident probability of driving accidents on the first road section in the first time period.
  • the congestion situation of the first road segment in the first time period may be a historical congestion situation, or may be a congestion situation in a certain time period in the future predicted based on the historical congestion situation.
  • the passing probability of pedestrians and non-motorized vehicles on the first road segment in the first time period may be the historical passing probability, or the passing probability in a certain time period in the future predicted based on the historical passing probability.
  • the accident probability of a driving accident occurring on the first road section in the first time period may be the historical accident probability, or the accident probability in a certain time period in the future predicted based on the historical accident probability.
  • the weather information may include sunny days, cloudy days, heavy rain, light rain, snowing, and so on.
  • the road surface information may include road interruption conditions, road maintenance conditions, road spillage conditions, stagnant water conditions, and so on.
• road interruption means that the road section cannot be driven on.
• road maintenance means that the road section must be driven on carefully or detoured around.
• if the size of a leftover object is less than a certain threshold, its impact on the automatic driving strategy is minimal or negligible. For example, if the debris is paper scraps, it has almost no effect on the driving of the autonomous vehicle. If the size of the debris is greater than a certain threshold, for example if the debris is a stone, it has a greater impact on the driving of the autonomous vehicle. In this case, for the purpose of safe driving, it is often necessary to switch from an automatic driving strategy with a high degree of automation to an automatic driving strategy with a low degree of automation.
• the congestion situation of the first road section in a certain time period (for example, the first time period, which may specifically be 8:00-9:00 in the morning), the probability of pedestrians and non-motorized vehicles passing through the first road section in the first time period, and the accident probability of driving accidents on the first road section in the first time period all belong to the statistical values of the first road section.
• the so-called statistical value (also called a sample value) is a comprehensive description of a certain variable in a sample, or the comprehensive quantitative expression of a certain characteristic of all elements in the sample. A statistical value is calculated from the sample and serves as an estimate of the corresponding parameter value.
• time-sharing statistics can be performed on the vehicles on the first road section, for example, counting the number of vehicles driving on the first road section in a certain time period and the distribution of the automatic driving strategies of those vehicles. Time-sharing statistics can also be performed on the traffic participants on the first road section other than vehicles, for example, counting the number of pedestrians and non-motorized vehicles on the first road section in a certain time period, so as to obtain the statistical values of the first road section.
• road congestion can be defined by judging whether the traffic flow speed of the road section is less than a set threshold.
• for example, the threshold may be 10 km/h: if the traffic flow speed of the road section is less than 10 km/h, the road is congested. For another example, the threshold may be 20 vehicles/lane: if the number of vehicles per lane on the road section exceeds 20, the road is congested.
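The congestion test above can be sketched in a few lines. This is an illustrative sketch only: the function name `is_congested`, its parameters, and the use of the two example thresholds from the text (10 km/h, 20 vehicles/lane) are assumptions, not part of the patent.

```python
def is_congested(flow_speed_kmh, vehicles_per_lane,
                 speed_threshold=10.0, density_threshold=20):
    """A road section counts as congested when traffic flow speed falls
    below the speed threshold, or vehicle density exceeds the per-lane
    count threshold (both thresholds are the text's examples)."""
    return flow_speed_kmh < speed_threshold or vehicles_per_lane > density_threshold

print(is_congested(8.0, 5))    # slow traffic -> True
print(is_congested(40.0, 25))  # dense traffic -> True
print(is_congested(40.0, 5))   # free-flowing -> False
```

Either criterion alone marks the section as congested; a deployment could equally require both, which is a design choice the text leaves open.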
  • Step S206 The cloud server obtains a first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle.
• obtaining the first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle may include: obtaining a first safe passage probability according to the layer information of the first road section and the vehicle attribute information of the target vehicle, and then obtaining the first driving strategy corresponding to the first safe passage probability according to the correspondence between driving strategies and safe passage probabilities.
• the cloud server stores the correspondence between driving strategies and safe passage probabilities. For example, the cloud server sets the driving strategy corresponding to a safe passage probability of 70% as driving strategy 1, the driving strategy corresponding to 75% as driving strategy 2, the driving strategy corresponding to 80% as driving strategy 3, the driving strategy corresponding to 85% as driving strategy 4, the driving strategy corresponding to 90% as driving strategy 5, the driving strategy corresponding to 95% as driving strategy 6, and so on.
• suppose the first safe passage probability determined by the cloud server according to the layer information of the first road section and the vehicle attribute information of the target vehicle is 90%; the cloud server then finds, according to the preset correspondence between driving strategies and safe passage probabilities, that the driving strategy corresponding to a safe passage probability of 90% is driving strategy 5.
• in one embodiment, the correspondence between driving strategies and safe passage probabilities stored in the cloud server includes the correspondence between safe passage probability intervals and driving strategies. In this case, obtaining the driving strategy corresponding to the first safe passage probability according to the correspondence is specifically: searching for the driving strategy corresponding to the interval that contains the first safe passage probability. Specifically, the cloud server divides the safe passage probability into different probability intervals, each corresponding to one driving strategy. For example, the cloud server sets the driving strategy corresponding to the interval of safe passage probabilities greater than or equal to 70% and less than 75% as driving strategy 1, and so on.
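The interval-based lookup described above might be sketched as follows, assuming the 5% intervals and numbered strategies from the example; the `BOUNDS` table and the `strategy_for` helper are hypothetical names introduced here for illustration.

```python
from bisect import bisect_right

# Lower bounds of the safe passage probability intervals from the text's
# example: [70%, 75%) -> strategy 1, ..., >= 95% -> strategy 6.
BOUNDS = [0.70, 0.75, 0.80, 0.85, 0.90, 0.95]

def strategy_for(prob):
    """Look up the driving strategy whose probability interval contains prob."""
    idx = bisect_right(BOUNDS, prob)  # number of interval bounds <= prob
    if idx == 0:
        return None  # below 70%: no automated strategy recommended (assumption)
    return f"driving strategy {idx}"

print(strategy_for(0.90))  # driving strategy 5
print(strategy_for(0.72))  # driving strategy 1
```

Using sorted lower bounds plus binary search keeps the table compact and makes the interval membership test O(log n).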
  • the realization process of obtaining the first safe passage probability according to the layer information of the first road section and the vehicle attribute information of the target vehicle may include: calculating the first safe passage probability through the first model.
  • the first model may be a formula, a function model, a trained neural network model, and so on. It should be understood that there may be multiple ways of calculating the probability of safe passage.
  • the calculation of the probability of the first safe passage may satisfy a formula or a function model.
  • the first model includes at least one information item and a weight parameter corresponding to the at least one information item.
  • the at least one information item is extracted based on the layer information of the first road section and the vehicle attribute information of the target vehicle.
  • the weight parameter is used to indicate the importance of the information item when it is used to determine the first safe passage probability.
• the first model can be expressed as: P = ω1·A1 + ω2·A2 + … + ωn·An, where P represents the first safe passage probability, A1 represents the first information item, ω1 represents the weight parameter corresponding to the first information item, A2 represents the second information item, ω2 represents the weight parameter corresponding to the second information item, An represents the nth information item, and ωn represents the weight parameter corresponding to the nth information item.
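Under the weighted-sum form of the first model, computing the safe passage probability reduces to a dot product of information items and their weights. The sketch below clamps the result into a valid probability range; the item values, weights, and the clamping step are invented for illustration and are not specified by the patent.

```python
def safe_passage_probability(items, weights):
    """Weighted sum of information items A1..An with weights w1..wn,
    clamped to [0, 1] so the result is a valid probability."""
    assert len(items) == len(weights)
    p = sum(w * a for a, w in zip(items, weights))
    return min(max(p, 0.0), 1.0)

# hypothetical normalised items: road condition, weather, sensor coverage
items = [0.9, 0.8, 1.0]
weights = [0.5, 0.3, 0.2]
print(round(safe_passage_probability(items, weights), 2))  # 0.89
```

In practice the weights would be fitted from sample data so that more important information items (as the text puts it) contribute more to the probability.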
• the first model is associated with sample data; in one embodiment, the first model is trained based on at least one piece of sample data.
  • the first model may be a neural network model.
• the neural network model may be a deep neural network (Deep Neural Networks, DNN) model, a recurrent neural network (Recurrent Neural Networks, RNN) model, and so on.
• the hidden layer of the RNN contains multiple hidden layer nodes, each corresponding to an information item and each having a weight parameter. It is worth noting that the key to training the recurrent neural network model RNN is determining the training samples.
• the sample data of the recurrent neural network model RNN includes at least one information item extracted from the layer information of the second road section and the vehicle attribute information of the target vehicle, where the second road section is a road section adjacent to the first road section and the exit of the second road section is the entrance of the first road section.
• the first model trained based on the layer information of the second road section and the vehicle attribute information of the target vehicle can improve the accuracy of calculating the first safe passage probability.
• specifically, a trained recurrent neural network model RNN can be obtained from multiple pieces of sample data of the second road section; the trained RNN characterizes the correspondence between the layer information of the second road section, the vehicle attribute information of the target vehicle, and the safe passage probability.
• then, the layer information of the first road section and the vehicle attribute information of the target vehicle can be input into the trained recurrent neural network model to obtain the safe passage probability of the target vehicle driving on the first road section.
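As a rough illustration of the inference step above, the sketch below runs extracted information items through a toy recurrent model and squashes the final hidden state into a safe passage probability. The shapes, random weights, and feature encoding are all assumptions made here; a real deployment would load parameters trained on the second road section's sample data.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))   # input -> hidden (3 features, 4 hidden nodes)
W_h = rng.normal(size=(4, 4))    # hidden -> hidden recurrence
w_out = rng.normal(size=4)       # hidden -> scalar output

def rnn_safe_passage(sequence):
    """Feed a sequence of information-item vectors through the recurrence,
    then map the final hidden state to a probability via a sigmoid."""
    h = np.zeros(4)
    for x in sequence:
        h = np.tanh(W_in @ x + W_h @ h)
    logit = w_out @ h
    return 1.0 / (1.0 + np.exp(-logit))

# one time step of three hypothetical normalised features
p = rnn_safe_passage([np.array([0.9, 0.8, 1.0])])
print(0.0 <= p <= 1.0)  # True
```

The sigmoid guarantees a value in (0, 1), so the output can be fed directly into the probability-to-strategy lookup described earlier.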
• in one embodiment, the layer information of the first road section includes the static layer information of the first road section.
• in this case, the step of obtaining the first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle may include: obtaining the first driving strategy according to the static layer information of the first road section and the vehicle attribute information of the target vehicle.
• when the safe passage probability is calculated by the first model (for example, when the first model is a formula), the information items are extracted from the static layer information of the first road section and the vehicle attribute information of the target vehicle; once the weight parameter corresponding to each information item is obtained, the safe passage probability can be calculated.
• when the safe passage probability is calculated by the first model (for example, when the first model is a trained neural network model), the static layer information of the first road section and the vehicle attribute information of the target vehicle are input into the first model to obtain the safe passage probability of the target vehicle driving on the first road section.
• in another embodiment, the layer information of the first road section includes both the static layer information and the dynamic layer information of the first road section. In this case, the step of obtaining the first driving strategy may include: obtaining the first driving strategy according to the static layer information of the first road section, the dynamic layer information of the first road section, and the vehicle attribute information of the target vehicle.
• when the safe passage probability is calculated by the first model (for example, when the first model is a formula), the information items are extracted from the static layer information of the first road section, the dynamic layer information of the first road section, and the vehicle attribute information of the target vehicle; once the weight parameter corresponding to each information item is obtained, the safe passage probability can be calculated.
• when the first model is a trained neural network model, the static layer information of the first road section, the dynamic layer information of the first road section, and the vehicle attribute information of the target vehicle are input into the first model to obtain the safe passage probability of the target vehicle driving on the first road section.
  • Step S208 The cloud server sends the first driving strategy to the target vehicle.
• after the cloud server obtains the first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle, the cloud server delivers the first driving strategy to the automatic driving apparatus of the target vehicle, so that the target vehicle can drive automatically according to the first driving strategy.
  • the cloud server can obtain the layer information of the first road segment in the automatic driving strategy layer through the itinerary information reported by the target vehicle, and then, according to the layer information of the first road segment And the attribute information of the vehicle to obtain the driving strategy that meets the safety requirements, so that the target vehicle can drive automatically according to the driving strategy determined by the cloud server.
• this implementation can overcome the perception defects of sensors (for example, the data acquired by a sensor is limited, the detection range of a sensor is limited, and a sensor's detection is easily affected by the environment), thereby improving the accuracy of determining a driving strategy that meets safety requirements and reducing the risk of autonomous driving.
• in one embodiment, when the target vehicle is driving on the first road section, the target vehicle may report to the cloud server the second driving strategy used for driving on the first road section, where the second driving strategy is determined by the target vehicle through the sensor data obtained in real time by the sensor 104 shown in Figure 1f.
• the cloud server can then determine whether the similarity between the first driving strategy and the second driving strategy meets a set preset condition. When the similarity between the first driving strategy and the second driving strategy satisfies the preset condition, a prompt message for switching from the second driving strategy to the first driving strategy is sent to the target vehicle.
• the prompt information may take the form of voice, light, graphics and text, or other forms.
• as voice, the prompt information may be: "The safety of the current driving strategy is low. Please switch from the second driving strategy to the first driving strategy", and the voice may be loud and stern.
• as light, the prompt may be brighter and flickering.
• as graphics and text, the prompt may read "high-risk driving".
• the aforementioned condition that the similarity between the first driving strategy and the second driving strategy satisfies the preset condition may be that the similarity is less than a set first threshold (for example, the first threshold may be 0.8), or that the similarity satisfies a certain functional relationship, and so on.
• the safety of the first driving strategy, which the cloud server determines based on the automatic driving strategy layer, is higher than that of the second driving strategy, which the target vehicle determines based on real-time sensor data. For the purpose of safe driving, when the cloud server determines that the similarity between the first driving strategy and the second driving strategy is less than the set threshold, it sends the target vehicle a prompt to switch from the second driving strategy to the first driving strategy, so that the target vehicle adopts the safer first driving strategy for automatic driving, which can reduce the risk of automatic driving.
• FIG. 3a shows an automatic driving method provided by an embodiment of this application; specifically, the method follows step S208 above. As shown in FIG. 3a, the method may include the following steps:
  • Step S2010 The target vehicle receives the first driving strategy sent by the cloud server.
  • Step S2012 The target vehicle automatically drives the target vehicle according to the first driving strategy.
• in one embodiment, the target vehicle may determine, through the sensor data obtained by the sensors shown in FIG. 1f, the second driving strategy for driving on the first road section. In this case, performing automatic driving according to the first driving strategy may be implemented as: performing automatic driving on the target vehicle according to the first driving strategy and the second driving strategy.
  • the target vehicle determines the second driving strategy based on the acquired sensor data.
  • the sensor data is used to indicate the current environment information of the target vehicle.
• here, the sensor data is exemplified as image data of the driving environment obtained by a camera.
• the image data may be image data of a static target (for example, a green belt) or of a dynamic target (for example, the vehicle in front).
• Fig. 3b shows the realization process of detecting and identifying a target.
  • the image data is input into the feature extraction model, and the feature extraction model selects candidate regions in the image, and extracts the features of the candidate regions.
  • the feature extraction model outputs the extracted features, and inputs these features into the classifier.
  • the extracted features are classified and recognized by the classifier, and the classifier outputs the probability of being recognized as the i-th object.
• finally, a bounding box can be drawn around the recognized object.
  • Fig. 3b also shows the way to obtain the classifier. Specifically, the training samples are selected, and operations such as feature extraction are performed on the training samples, so that the training process of the training samples can be completed to obtain the classifier.
  • the training samples include positive samples and negative samples.
  • a positive sample refers to a sample that is related to the detection and identification object
• a negative sample refers to a sample that is unrelated, or only weakly related, to the detection and identification object.
• for example, if the target detection and recognition process shown in Figure 3b needs to detect whether an object is a vehicle, the detection and identification object is a vehicle.
• in this case, the positive samples are images of vehicles, and the negative samples are images of objects other than vehicles, such as images of lane lines or green belts.
• the above-mentioned feature extraction model may be a Convolutional Neural Network (CNN) model; of course, other learning models capable of extracting image data features may also be used.
  • the classifier may be a support vector machine (Support Vector Machine, SVM), or other types of classifiers may be used.
  • the embodiments of the present application do not limit the types of classifiers and feature extraction models.
• FIG. 3c is a schematic structural diagram of a convolutional neural network model provided by an embodiment of this application, in which the convolutional neural network model includes a convolutional layer, a pooling layer, and three fully connected layers.
• the target detection and recognition process is specifically as follows: the image is input into the convolutional neural network model, and the feature map of the candidate region of the image data is obtained through the convolutional layer of the model, where the feature map represents the features extracted from the candidate region.
• the pooling layer performs a pooling operation on the feature map output by the convolutional layer, retaining the main features of the candidate region while reducing the number of features to be computed and thus the computation load of the convolutional neural network model.
• the feature vector output by the pooling layer is input into the fully connected layers, which synthesize the features of the candidate regions to obtain the features of the entire image and output these image features to the classifier.
• the classifier can output the classification probability of the object in the image; for example, the classifier outputs a probability of 98% that the object in the image is a vehicle.
• finally, a bounding box is drawn around the recognized object, and the position of the bounding box is fine-tuned using a regressor.
• in the convolutional neural network model, there may be multiple convolutional layers, multiple pooling layers, and multiple fully connected layers.
• for example, the multiple convolutional layers/pooling layers shown in FIG. 3d are in parallel, and the features they each extract are all input into the neural network layer 130 for processing.
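The convolution → pooling → fully connected → classifier pipeline of Fig. 3c can be sketched minimally in numpy. The kernel, the fully connected weights, and the two-class setup ("vehicle" vs. "other") below are invented for illustration; a trained model would use learned parameters and far more layers.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Keep the dominant response in each size x size window."""
    h, w = fmap.shape
    fmap = fmap[:h - h % size, :w - w % size]
    return fmap.reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(1)
image = rng.random((6, 6))               # stand-in for a camera image patch
kernel = rng.normal(size=(3, 3))         # one convolutional filter
fc_weights = rng.normal(size=(2, 4))     # 2 classes, flattened 2x2 pooled map

features = max_pool(conv2d(image, kernel))       # 6x6 -> 4x4 -> 2x2
logits = fc_weights @ features.ravel()           # fully connected layer
probs = np.exp(logits) / np.exp(logits).sum()    # softmax classification
print(round(float(probs.sum()), 6))              # 1.0
```

The softmax at the end plays the role of the classifier: it turns the fully connected layer's outputs into class probabilities, matching the "98% vehicle" style of output described in the text.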
• the target vehicle may obtain the second driving strategy according to the acquired image data of one or more static targets and of one or more dynamic targets in the current driving environment.
  • the realization process of the automatic driving of the target vehicle according to the first driving strategy and the second driving strategy can refer to the two situations described below:
• Case 1: when it is judged that the similarity between the first driving strategy and the second driving strategy is greater than the first threshold, the target vehicle performs automatic driving according to the first driving strategy, or according to the second driving strategy.
  • similarity is also called similarity measure, which is a measure for comprehensively evaluating the similarity between two things. It is understandable that the closer two things are, the greater their similarity.
• for example, suppose the target vehicle determines through real-time sensor data that the automatic driving strategy for the first road section is fully automated driving L5,
• while the cloud server determines according to the automatic driving strategy layer that the automatic driving strategy for the first road section is highly automated driving L4.
• fully automated driving L5 means that all driving operations are completed by the vehicle and the human driver does not need to maintain attention.
• highly automated driving L4 likewise means that all driving operations are completed by the vehicle and the human driver does not need to maintain attention, but the road and environmental conditions are limited.
• then, a similarity calculation formula (for example, one based on the Euclidean distance) is used to determine the similarity between these two driving strategies.
• suppose the similarity is 0.85, which is greater than the set first threshold (for example, the first threshold is 0.8).
• in this case, the target vehicle can perform automatic driving according to the first driving strategy, or according to the second driving strategy.
• in other words, when it is determined that the similarity between the first driving strategy and the second driving strategy is greater than or equal to the first threshold, the target vehicle can perform automatic driving according to the first driving strategy or according to the second driving strategy; this indicates that the safety of the vehicle can be guaranteed under either strategy.
• Case 2: when it is determined that the similarity between the first driving strategy and the second driving strategy is less than the first threshold, the target vehicle performs automatic driving according to the first driving strategy.
• as noted above, sensors have perception defects: the data acquired by a sensor is limited, the detection range of a sensor is limited, and a sensor's detection is easily affected by the environment.
• these defects easily make the acquired sensor data insufficiently accurate, which in turn reduces the accuracy of a driving strategy determined based on that sensor data.
• the autonomous driving strategy layer can be regarded as an extension of traditional hardware sensors (for example, radar, laser rangefinders, or cameras): it contains more data and is not affected by the environment, obstacles, or interference. The static layer information of a road ensures that automatic driving can efficiently determine the driving path and avoid obstacles, and the dynamic layer information of a road ensures that automatic driving can respond to emergencies in time. It follows that the safety of the first driving strategy determined according to the automatic driving strategy layer is higher than that of the second driving strategy determined by the target vehicle through real-time sensor data. Based on this, when the automatic driving apparatus determines that the similarity between the first driving strategy and the second driving strategy is less than the first threshold, the target vehicle, for the purpose of safe driving, performs automatic driving according to the first driving strategy, thereby avoiding unnecessary driving risks.
• in other words, when it is determined that the similarity between the first driving strategy and the second driving strategy is less than the first threshold, the target vehicle performs automatic driving according to the first driving strategy; that is, to avoid accidental risks, the target vehicle chooses the safer first driving strategy for automatic driving.
• it should be noted that when the similarity between the first driving strategy and the second driving strategy is exactly equal to the first threshold, it may be treated as falling under either the first case or the second case described above; the embodiments of the present application do not specifically limit this.
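The two cases above can be sketched as a small decision routine: each driving strategy is encoded as a feature vector, Euclidean distance is converted into a similarity, and the vehicle keeps its own strategy only when the similarity clears the threshold. The vector encoding, the distance-to-similarity mapping, and the use of the 0.8 threshold follow the text's example but are otherwise hypothetical.

```python
import math

def similarity(a, b):
    """Map Euclidean distance to a similarity in (0, 1]:
    1.0 for identical strategies, approaching 0 as they diverge."""
    return 1.0 / (1.0 + math.dist(a, b))

def choose_strategy(first, second, threshold=0.8):
    """first: cloud-determined strategy vector; second: sensor-determined one."""
    if similarity(first, second) >= threshold:
        # Case 1: either strategy is considered safe; here we keep the
        # vehicle's own (second) strategy, though the first is equally valid.
        return second
    # Case 2: fall back to the safer cloud-determined first strategy.
    return first

cloud = [1.0, 0.9]    # hypothetical encoding, e.g. "L4, high caution"
onboard = [1.0, 0.8]  # hypothetical encoding, e.g. "L5, normal caution"
print(choose_strategy(cloud, onboard) is onboard)  # True: similar enough
```

Any monotone mapping from distance to similarity would do here; the reciprocal form is chosen only because it is bounded in (0, 1] like the 0.85 similarity in the worked example.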
  • FIG. 4 is another automatic driving method provided by an embodiment of this application. As shown in FIG. 4, the method may include the following steps:
  • Step S400 The target vehicle reports the itinerary information to the cloud server.
  • Step S402 The cloud server receives the itinerary information reported by the target vehicle.
  • Step S404 The cloud server obtains the layer information of the first road section on which the target vehicle is traveling in the automatic driving strategy layer according to the itinerary information.
  • the layer information of the first road section may include the static layer information of the first road section, where the static layer information of the first road section is used to indicate the infrastructure information of the first road section.
  • the layer information of the first link includes the static layer information of the first link and the dynamic layer information of the first link. Specifically, the static layer information of the first link is used to indicate the information of the first link. Infrastructure information, the dynamic layer information of the first road section is used to indicate the dynamic traffic information of the first road section.
  • Step S406 The cloud server sends the layer information of the first road section to the target vehicle.
  • Step S408 The target vehicle receives the layer information of the first road section, and obtains the first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle.
  • Step S4010 The target vehicle automatically drives the target vehicle according to the first driving strategy.
• further, the target vehicle may report the first driving strategy, obtained according to the layer information of the first road section and the vehicle attribute information of the target vehicle, to the cloud server, so that the cloud server can save it and update the layer information of the first road section.
  • the cloud server may send the first driving strategy (for example, the recommended operating mode) to other vehicles that are the same or similar to the target vehicle in the area, so as to assist other vehicles in driving.
  • the cloud server can obtain the layer information of the first road segment in the automatic driving strategy layer based on the itinerary information reported by the target vehicle, and then send the layer information of the first road segment to the target vehicle, and then The target vehicle can obtain a driving strategy that meets safety requirements according to the layer information of the first road section and the attribute information of the vehicle, so that it can perform automatic driving according to the determined driving strategy.
• this implementation can overcome the perception defects of sensors (for example, the data acquired by a sensor is limited, the detection range of a sensor is limited, and a sensor's detection is easily affected by the environment), thereby improving the accuracy of determining a driving strategy that meets safety requirements and reducing the risk of autonomous driving.
• the embodiments of the present application further provide a cloud server, which includes units for executing the method according to any one of the foregoing first aspects, so as to determine the driving strategy through the automatic driving strategy layer.
• Fig. 5 is a schematic block diagram of a cloud server 50 provided by an embodiment of the present application.
  • the cloud server 50 of the embodiment of the present application may include:
  • the receiving unit 500 is configured to receive vehicle attribute information reported by a target vehicle and travel information of the target vehicle; wherein the vehicle attribute information of the target vehicle is used to generate an automatic driving strategy;
  • the first acquiring unit 502 is configured to acquire the layer information of the first road segment on which the target vehicle is traveling in the automatic driving strategy layer according to the itinerary information;
  • the second obtaining unit 504 is configured to obtain a first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle;
  • the first sending unit 506 is configured to send the first driving strategy to the target vehicle.
  • the cloud server stores a corresponding relationship between a driving strategy and a safe passage probability;
  • the second obtaining unit 504 may include a safe passage probability obtaining unit and a driving strategy obtaining unit, where:
  • the safe passage probability obtaining unit is configured to obtain the first safe passage probability according to the layer information of the first road section and the vehicle attribute information of the target vehicle;
  • the driving strategy obtaining unit is configured to obtain a first driving strategy corresponding to the first safe passing probability according to the corresponding relationship between the driving strategy and the safe passing probability.
  • the safe passage probability obtaining unit is specifically configured to:
• the first safe passage probability is calculated using a first model, where the first model includes at least one information item and a weight parameter corresponding to the at least one information item; the at least one information item is extracted from the layer information of the first road section and the vehicle attribute information of the target vehicle, and the weight parameter is used to indicate the importance of the information item when determining the first safe passage probability.
• the first model is a model trained based on at least one piece of sample data, and the sample data includes at least one information item extracted from the layer information of the second road section and the vehicle attribute information of the target vehicle.
  • the second road section is a road section adjacent to the first road section, and the exit of the second road section is the entrance of the first road section.
  • the layer information of the first road section includes at least one of static layer information of the first road section and dynamic layer information of the first road section, where the static layer information of the first road section indicates infrastructure information of the first road section, and the dynamic layer information of the first road section indicates dynamic traffic information of the first road section.
  • the static layer information of the first road section includes at least one of lane attributes, digital device information, and green belt information; the dynamic layer information of the first road section includes at least one of weather information, road surface information, the congestion situation of the first road section within a first time period, the probability of pedestrians and non-motorized vehicles crossing the first road section within the first time period, and the probability of a driving accident occurring on the first road section within the first time period.
  • the first road section is the road section where the target vehicle is driving; the cloud server 50 further includes:
  • the third acquiring unit 508 is configured to acquire a second driving strategy of the target vehicle driving on the first road section; wherein, the second driving strategy is determined by the target vehicle according to sensor data acquired in real time;
  • the second sending unit 5010 is configured to, when the similarity between the first driving strategy and the second driving strategy is less than a first threshold, send to the target vehicle a prompt to switch from the second driving strategy to the first driving strategy.
  • the vehicle attribute information of the target vehicle includes at least one of the automatic driving capability of the target vehicle, sensor distribution information of the target vehicle, and the driving state of the driver in the target vehicle.
  • for the cloud server described in the embodiments of the present application, reference may be made to the related description of the automatic driving method in the method embodiments described in FIG. 3a and FIG. 4, which will not be repeated here.
  • the embodiments of the present application provide an automatic driving apparatus, which includes units for executing the method according to any one of the foregoing second aspects, so as to perform automatic driving according to the first driving strategy determined by the cloud server.
  • FIG. 6 is a schematic block diagram of an automatic driving device 60 according to an embodiment of the present application.
  • the automatic driving device 60 of the embodiment of the present application may include:
  • the receiving unit 600 is configured to receive a first driving strategy sent by a cloud server, wherein the first driving strategy is obtained by the method according to any one of the foregoing first aspects;
  • the control unit 602 is configured to automatically drive the target vehicle according to the first driving strategy.
  • the device 60 may further include:
  • the second driving strategy acquisition unit 604 is configured to acquire, through sensor data, the second driving strategy of the target vehicle driving on the first road section;
  • the control unit 602 is specifically configured to:
  • the target vehicle is automatically driven according to the first driving strategy and the second driving strategy.
  • the control unit 602 is specifically configured to:
  • when it is determined that the similarity between the first driving strategy and the second driving strategy is greater than a first threshold, automatically drive the target vehicle according to the first driving strategy or according to the second driving strategy;
  • when it is determined that the similarity between the first driving strategy and the second driving strategy is less than the first threshold, automatically drive the target vehicle according to the first driving strategy.
  • FIG. 7 is a schematic structural diagram of a cloud server provided by an embodiment of the present application.
  • the cloud server 70 includes at least one processor 701, at least one memory 702, and at least one communication interface 703.
  • the cloud server may also include general components such as antennas, which will not be described in detail here.
  • the processor 701 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs in the above scheme.
  • the communication interface 703 is used to communicate with other devices or communication networks.
  • the memory 702 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions.
  • the memory may also be an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory can exist independently and is connected to the processor through a bus.
  • the memory can also be integrated with the processor.
  • the memory 702 is used to store application program codes for executing the above solutions, and the processor 701 controls the execution.
  • the processor 701 is configured to execute application program codes stored in the memory 702.
  • the code stored in the memory 702 can execute the automatic driving method provided in FIG. 2 or FIG. 3a above.
  • the embodiments of the present application also provide a computer storage medium; the computer-readable storage medium stores instructions which, when run on a computer or a processor, cause the computer or the processor to execute the method described in any of the above embodiments.
  • if one or more steps, or each component module of the above device, is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, and the computer software product is stored in a computer-readable storage medium.
  • the foregoing computer-readable storage medium may be an internal storage unit of the device described in the foregoing embodiment, such as a hard disk or a memory.
  • the above-mentioned computer-readable storage medium may also be an external storage device of the above-mentioned device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, and the like.
  • the aforementioned computer-readable storage medium may also include both an internal storage unit of the aforementioned device and an external storage device.
  • the aforementioned computer-readable storage medium is used to store the aforementioned computer program and other programs and data required by the aforementioned device.
  • the aforementioned computer-readable storage medium can also be used to temporarily store data that has been output or will be output.
  • the computer program may be stored in a computer-readable storage medium, and when the program is executed, it may include the procedures of the embodiments of the above-mentioned methods.
  • the aforementioned storage medium includes: ROM, RAM, magnetic disk or optical disk and other media that can store program codes.
  • the modules in the device of the embodiment of the present application may be combined, divided, and deleted according to actual needs.
  • the computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium such as a data storage medium, or a communication medium including any medium that facilitates transfer of a computer program from one place to another (for example, according to a communication protocol).
  • a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, codes, and/or data structures for implementing the techniques described in this application.
  • the computer program product may include a computer-readable medium.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or other media that can store program code.


Abstract

An automatic driving method, related devices, and a computer-readable storage medium. The method is applied to a cloud server and includes: receiving vehicle attribute information reported by a target vehicle and itinerary information of the target vehicle (S202), where the vehicle attribute information of the target vehicle is used to generate an automatic driving strategy; acquiring, according to the itinerary information, layer information of a first road section on which the target vehicle travels from an automatic driving strategy layer (S204); acquiring a first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle (S206); and sending the first driving strategy to the target vehicle (S208). The method can reduce the risk of automatic driving.

Description

An automatic driving method, related devices, and a computer-readable storage medium

This application claims priority to Chinese Patent Application No. 201911425837.X, filed with the China National Intellectual Property Administration on December 31, 2019 and entitled "Automatic driving method, related device, and computer-readable storage medium", which is incorporated herein by reference in its entirety.

Technical Field

This application relates to the field of artificial intelligence technologies, and in particular, to an automatic driving method, related devices, and a computer-readable storage medium.
Background

Artificial Intelligence (AI) is a theory, method, technology, and application system that uses digital computers or machines controlled by digital computers to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making. Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, basic AI theory, and the like.

Autonomous driving is a mainstream application in the field of artificial intelligence. Autonomous driving technology relies on the cooperation of computer vision, radar, monitoring devices, global positioning systems, and the like, so that motor vehicles can drive automatically without active human operation. Autonomous vehicles use various computing systems to transport passengers from one location to another. Some autonomous vehicles may require some initial or continuous input from an operator (such as a driver or a passenger). An autonomous vehicle allows the operator to switch from a manual driving mode to an autonomous driving mode or a mode in between. Because autonomous driving technology does not require a human to drive the motor vehicle, it can theoretically avoid human driving errors, reduce traffic accidents, and improve highway transportation efficiency. Therefore, autonomous driving technology is receiving more and more attention.

In existing implementations, sensors are installed on an autonomous vehicle, and the sensors can acquire driving state data of surrounding vehicles (for example, the speed, acceleration, and heading angle of surrounding vehicles), so that the autonomous vehicle can determine a driving strategy that meets safety requirements according to the acquired sensor data. Since data needs to be obtained from the sensors and the data acquired by the sensors is often limited, when a driving strategy that meets safety requirements is determined according to the acquired data, a sensor that fails, is insufficiently sensitive, or is insufficiently accurate is likely to produce a driving strategy with poor safety, which undoubtedly increases the risk of autonomous driving. Therefore, how to improve the accuracy of determining a driving strategy that meets safety requirements, so as to improve the safety of vehicle driving, is a technical problem that urgently needs to be solved.
Summary

This application provides an automatic driving method, related devices, and a computer-readable storage medium, which can improve the accuracy of determining a driving strategy that meets safety requirements and reduce the risk of automatic driving.

According to a first aspect, an automatic driving method is provided. The method is applied to a cloud server and includes: receiving vehicle attribute information reported by a target vehicle and itinerary information of the target vehicle, where the vehicle attribute information of the target vehicle is used to generate an automatic driving strategy; acquiring, according to the itinerary information, layer information of a first road section on which the target vehicle travels from an automatic driving strategy layer; acquiring a first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle; and sending the first driving strategy to the target vehicle.

By implementing the embodiments of this application, the cloud server can acquire, according to the itinerary information reported by the target vehicle, the layer information of the first road section being traveled from the automatic driving strategy layer, and then acquire a driving strategy that meets safety requirements according to the layer information of the first road section and the attribute information of the vehicle, so that the target vehicle can drive automatically according to the driving strategy determined by the cloud server. Because the automatic driving strategy layer contains richer information and can overcome the perception defects of sensors (for example, the data acquired by a sensor is limited, the detection range of a sensor is limited, and sensor detection is easily affected by the environment), the accuracy of determining a driving strategy that meets safety requirements can be improved, and the risk of automatic driving is reduced.
In a possible implementation, the cloud server stores a correspondence between driving strategies and safe passage probabilities; and the acquiring a first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle includes: acquiring a first safe passage probability according to the layer information of the first road section and the vehicle attribute information of the target vehicle; and acquiring, according to the correspondence between driving strategies and safe passage probabilities, the first driving strategy corresponding to the first safe passage probability. By implementing the embodiments of this application, the cloud server can acquire the first safe passage probability according to the layer information of the first road section and the vehicle attribute information of the target vehicle, and then acquire the first driving strategy corresponding to the first safe passage probability through a preset correspondence between driving strategies and safe passage probabilities. Because the layer contains rich information and can overcome the perception defects of sensors, this implementation can improve the accuracy of determining a driving strategy that meets safety requirements and reduce the risk of automatic driving.
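The stored correspondence between driving strategies and safe passage probabilities could be as simple as a banded lookup table. The bands, threshold values, and level labels below are illustrative assumptions, not values fixed by this application:

```python
# Hypothetical correspondence between safe passage probability bands and
# driving strategies (expressed here as automatic driving levels).
STRATEGY_BANDS = [
    (0.8, "L4"),  # high probability: full automation on this road section
    (0.5, "L2"),  # medium probability: partial automation, driver supervises
    (0.0, "L0"),  # low probability: hand control back to the human driver
]

def strategy_for(probability: float) -> str:
    """Return the driving strategy whose band contains the given probability."""
    for threshold, strategy in STRATEGY_BANDS:
        if probability >= threshold:
            return strategy
    return "L0"
```

A real deployment might store this correspondence per road section or per vehicle type; the sketch only shows the lookup step.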
In a possible implementation, the acquiring a first safe passage probability according to the layer information of the first road section and the vehicle attribute information of the target vehicle includes: calculating the first safe passage probability using a first model, where the first model includes at least one information item and a weight parameter corresponding to the at least one information item, the at least one information item is extracted from the layer information of the first road section and the vehicle attribute information of the target vehicle, and the weight parameter indicates the importance of an information item when it is used to determine the first safe passage probability.
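A minimal sketch of the first model described above: the safe passage probability is computed as a weighted combination of normalized information items extracted from the road-section layer information and the vehicle attribute information. The item names and weight values are illustrative assumptions, not parameters from this application:

```python
def safe_passage_probability(items: dict, weights: dict) -> float:
    """Combine normalized information items (each in [0, 1]) into a single
    probability using per-item weights that reflect their importance."""
    total_weight = sum(weights[k] for k in items)
    score = sum(items[k] * weights[k] for k in items) / total_weight
    return max(0.0, min(1.0, score))

# Illustrative items: the first three come from layer information, the last
# from vehicle attribute information (sensor distribution).
items = {
    "road_surface": 0.9,         # dry, intact pavement
    "congestion": 0.6,           # moderate congestion in the time window
    "pedestrian_crossing": 0.4,  # high historical crossing probability
    "sensor_coverage": 0.8,      # vehicle attribute: sensor distribution
}
weights = {
    "road_surface": 0.3,
    "congestion": 0.2,
    "pedestrian_crossing": 0.3,
    "sensor_coverage": 0.2,
}

p = safe_passage_probability(items, weights)
```

The application trains such a model on sample data; the fixed weights here simply stand in for the learned weight parameters.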
In a possible implementation, the first model is a model trained on at least one piece of sample data, where the sample data includes at least one information item extracted from the layer information of a second road section and the vehicle attribute information of the target vehicle, the second road section is a road section adjacent to the first road section, and the exit of the second road section is the entrance of the first road section. By implementing the embodiments of this application, since adjacent road sections have the same or similar characteristics, training the first model based on the layer information of the second road section and the vehicle attribute information of the target vehicle can improve the accuracy with which the first model calculates the first safe passage probability.
In a possible implementation, the layer information of the first road section includes at least one of static layer information of the first road section and dynamic layer information of the first road section, where the static layer information of the first road section indicates infrastructure information of the first road section, and the dynamic layer information of the first road section indicates dynamic traffic information of the first road section. By implementing the embodiments of this application, the cloud server can acquire the layer information of the first road section being traveled from the automatic driving strategy layer according to the itinerary information reported by the target vehicle, and then acquire a driving strategy that meets safety requirements according to the layer information of the first road section and the attribute information of the vehicle, so that the target vehicle can drive automatically according to the driving strategy determined by the cloud server. Because the automatic driving strategy layer contains richer information, the static layer information of the road ensures that automatic driving can efficiently determine a travel path and avoid obstacles, and the dynamic layer information of the road ensures that automatic driving can respond to emergencies in time. This implementation can overcome the perception defects of sensors (for example, the data acquired by a sensor is limited, the detection range of a sensor is limited, and sensor detection is easily affected by the environment), thereby improving the accuracy of determining a driving strategy that meets safety requirements and reducing the risk of automatic driving.

In a possible implementation, the static layer information of the first road section includes at least one of lane attributes, digital device information, and green belt information; and the dynamic layer information of the first road section includes at least one of weather information, road surface information, the congestion situation of the first road section within a first time period, the probability of pedestrians and non-motorized vehicles crossing the first road section within the first time period, and the probability of a driving accident occurring on the first road section within the first time period. By implementing the embodiments of this application, the static layer information of the road ensures that automatic driving can efficiently determine a travel path and avoid obstacles, and the dynamic layer information of the road ensures that automatic driving can respond to emergencies in time. This implementation can overcome the perception defects of sensors, thereby improving the accuracy of determining a driving strategy that meets safety requirements and reducing the risk of automatic driving.

In a possible implementation, the first road section is the road section on which the target vehicle is currently driving; and the method further includes: acquiring a second driving strategy of the target vehicle driving on the first road section, where the second driving strategy is determined by the target vehicle according to sensor data acquired in real time; and when the similarity between the first driving strategy and the second driving strategy is less than a first threshold, sending to the target vehicle a prompt to switch from the second driving strategy to the first driving strategy. By implementing the embodiments of this application, because the automatic driving strategy layer contains rich information and can overcome the perception defects of sensors, the safety of the first driving strategy determined by the cloud server according to the automatic driving strategy layer is higher than that of the second driving strategy determined by the target vehicle according to sensor data acquired in real time. For the purpose of safe driving, when the cloud server determines that the similarity between the first driving strategy and the second driving strategy is less than the set threshold, it sends to the target vehicle a prompt to switch from the second driving strategy to the first driving strategy, so that the target vehicle drives automatically using the safer first driving strategy, which can reduce the risk of automatic driving.
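The switching rule described above can be sketched as follows. The similarity measure (fraction of matching strategy fields) and the threshold value are illustrative assumptions; the application does not fix a particular measure:

```python
FIRST_THRESHOLD = 0.8  # hypothetical value of the "first threshold"

def similarity(a: dict, b: dict) -> float:
    """A toy similarity: the fraction of fields on which two strategies agree."""
    keys = set(a) | set(b)
    return sum(1 for k in keys if a.get(k) == b.get(k)) / len(keys)

def should_prompt_switch(first: dict, second: dict) -> bool:
    """The cloud server prompts a switch to the first (layer-derived) strategy
    when the two strategies diverge, i.e. similarity falls below the threshold."""
    return similarity(first, second) < FIRST_THRESHOLD

# Illustrative strategies: the cloud-side and sensor-side strategies disagree
# on speed and lane behavior, so the server would send the switch prompt.
first = {"level": "L3", "speed_kph": 60, "lane": "keep"}
second = {"level": "L3", "speed_kph": 80, "lane": "change_left"}
prompt = should_prompt_switch(first, second)
```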
In a possible implementation, the vehicle attribute information of the target vehicle includes at least one of the automatic driving capability of the target vehicle, sensor distribution information of the target vehicle, and the driving state of the driver in the target vehicle.
According to a second aspect, an embodiment of this application provides an automatic driving method. The method is applied to an in-vehicle terminal on a target vehicle and includes: receiving a first driving strategy sent by a cloud server, where the first driving strategy is the first driving strategy obtained by the method according to the first aspect; and automatically driving the target vehicle according to the first driving strategy.

By implementing the embodiments of this application, after the cloud server determines the first driving strategy according to the automatic driving strategy layer, it sends the first driving strategy to the target vehicle, so that the target vehicle can drive automatically according to the first driving strategy determined by the cloud server. Because the automatic driving strategy layer contains richer information and can overcome the perception defects of sensors (for example, the data acquired by a sensor is limited, the detection range of a sensor is limited, and sensor detection is easily affected by the environment), the accuracy of determining a driving strategy that meets safety requirements can be improved, and the risk of automatic driving is reduced.

In a possible implementation, the method further includes: acquiring, through sensor data, a second driving strategy of the target vehicle driving on the first road section; and the automatically driving the target vehicle according to the first driving strategy includes: automatically driving the target vehicle according to the first driving strategy and the second driving strategy. By implementing the embodiments of this application, when the target vehicle is driving on the first road section, the target vehicle can acquire the second driving strategy according to sensor data acquired in real time and drive automatically based on the first driving strategy and the second driving strategy, which can prevent the target vehicle from driving automatically using a low-safety driving strategy and can reduce the risk of automatic driving.

In a possible implementation, the automatically driving the target vehicle according to the first driving strategy and the second driving strategy includes: when it is determined that the similarity between the first driving strategy and the second driving strategy is greater than a first threshold, automatically driving the target vehicle according to the first driving strategy or according to the second driving strategy; and when it is determined that the similarity between the first driving strategy and the second driving strategy is less than the first threshold, automatically driving the target vehicle according to the first driving strategy. By implementing the embodiments of this application, because the automatic driving strategy layer contains rich information and can overcome the perception defects of sensors, the safety of the first driving strategy determined by the cloud server according to the automatic driving strategy layer is higher than that of the second driving strategy determined by the target vehicle according to sensor data acquired in real time. For the purpose of safe driving, when the target vehicle determines that the similarity between the first driving strategy and the second driving strategy is greater than the set threshold, it drives automatically according to the first driving strategy or according to the second driving strategy; when it determines that the similarity is less than the set threshold, it drives automatically according to the first driving strategy. This implementation can prevent the target vehicle from driving automatically using a low-safety driving strategy and can reduce the risk of automatic driving.
According to a third aspect, an embodiment of this application provides a cloud server, which may include:

a receiving unit, configured to receive vehicle attribute information reported by a target vehicle and itinerary information of the target vehicle, where the vehicle attribute information of the target vehicle is used to generate an automatic driving strategy; a first acquiring unit, configured to acquire, according to the itinerary information, layer information of a first road section on which the target vehicle travels from an automatic driving strategy layer; a second acquiring unit, configured to acquire a first driving strategy according to the layer information of the first road section and the vehicle attribute information of the target vehicle; and a first sending unit, configured to send the first driving strategy to the target vehicle.

By implementing the embodiments of this application, the cloud server can acquire, according to the itinerary information reported by the target vehicle, the layer information of the first road section being traveled from the automatic driving strategy layer, and then acquire a driving strategy that meets safety requirements according to the layer information of the first road section and the attribute information of the vehicle, so that the target vehicle can drive automatically according to the driving strategy determined by the cloud server. Because the automatic driving strategy layer contains richer information and can overcome the perception defects of sensors (for example, the data acquired by a sensor is limited, the detection range of a sensor is limited, and sensor detection is easily affected by the environment), the accuracy of determining a driving strategy that meets safety requirements can be improved, and the risk of automatic driving is reduced.
In a possible implementation, the cloud server stores a correspondence between driving strategies and safe passage probabilities; and the second acquiring unit includes a safe passage probability acquiring unit and a driving strategy acquiring unit, where the safe passage probability acquiring unit is configured to acquire a first safe passage probability according to the layer information of the first road section and the vehicle attribute information of the target vehicle, and the driving strategy acquiring unit is configured to acquire, according to the correspondence between driving strategies and safe passage probabilities, the first driving strategy corresponding to the first safe passage probability.

In a possible implementation, the safe passage probability acquiring unit is specifically configured to: calculate the first safe passage probability using a first model, where the first model includes at least one information item and a weight parameter corresponding to the at least one information item, the at least one information item is extracted from the layer information of the first road section and the vehicle attribute information of the target vehicle, and the weight parameter indicates the importance of an information item when it is used to determine the first safe passage probability.

In a possible implementation, the first model is a model trained on at least one piece of sample data, where the sample data includes at least one information item extracted from the layer information of a second road section and the vehicle attribute information of the target vehicle, the second road section is a road section adjacent to the first road section, and the exit of the second road section is the entrance of the first road section.

In a possible implementation, the layer information of the first road section includes at least one of static layer information of the first road section and dynamic layer information of the first road section, where the static layer information of the first road section indicates infrastructure information of the first road section, and the dynamic layer information of the first road section indicates dynamic traffic information of the first road section.

In a possible implementation, the static layer information of the first road section includes at least one of lane attributes, digital device information, and green belt information; and the dynamic layer information of the first road section includes at least one of weather information, road surface information, the congestion situation of the first road section within a first time period, the probability of pedestrians and non-motorized vehicles crossing the first road section within the first time period, and the probability of a driving accident occurring on the first road section within the first time period.

In a possible implementation, the first road section is the road section on which the target vehicle is currently driving; and the cloud server further includes: a third acquiring unit, configured to acquire a second driving strategy of the target vehicle driving on the first road section, where the second driving strategy is determined by the target vehicle according to sensor data acquired in real time; and a second sending unit, configured to, when the similarity between the first driving strategy and the second driving strategy is less than a first threshold, send to the target vehicle a prompt to switch from the second driving strategy to the first driving strategy.
According to a fourth aspect, an embodiment of this application provides an automatic driving apparatus. The apparatus is applied to an in-vehicle terminal on a target vehicle and may include: a receiving unit, configured to receive a first driving strategy sent by a cloud server, where the first driving strategy is the first driving strategy obtained by the method according to the first aspect; and a control unit, configured to automatically drive the target vehicle according to the first driving strategy.

In a possible implementation, the apparatus further includes: a second driving strategy acquiring unit, configured to acquire, through sensor data, a second driving strategy of the target vehicle driving on a first road section; and the control unit is specifically configured to: automatically drive the target vehicle according to the first driving strategy and the second driving strategy.

In a possible implementation, the control unit is specifically configured to: when it is determined that the similarity between the first driving strategy and the second driving strategy is greater than a first threshold, automatically drive the target vehicle according to the first driving strategy or according to the second driving strategy; and when it is determined that the similarity between the first driving strategy and the second driving strategy is less than the first threshold, automatically drive the target vehicle according to the first driving strategy.
According to a fifth aspect, an embodiment of this application provides a cloud server. The cloud server may include a memory and a processor, where the memory is configured to store a computer program that supports the device in executing the above method, the computer program includes program instructions, and the processor is configured to invoke the program instructions to execute the method of the first aspect.

According to a sixth aspect, an embodiment of this application provides an in-vehicle terminal. The in-vehicle terminal may include a memory and a processor, where the memory is configured to store a computer program that supports the device in executing the above method, the computer program includes program instructions, and the processor is configured to invoke the program instructions to execute the method of the second aspect.

According to a seventh aspect, this application provides a chip system. The chip system can execute any method involved in the first aspect, so that the related functions are implemented. In a possible design, the chip system further includes a memory configured to store necessary program instructions and data. The chip system may consist of chips, or may include chips and other discrete components.

According to an eighth aspect, this application provides a chip system. The chip system can execute any method involved in the second aspect, so that the related functions are implemented. In a possible design, the chip system further includes a memory configured to store necessary program instructions and data. The chip system may consist of chips, or may include chips and other discrete components.

According to a ninth aspect, an embodiment of this application further provides a computer-readable storage medium. The computer storage medium stores a computer program, the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute the method of the first aspect.

According to a tenth aspect, an embodiment of this application further provides a computer-readable storage medium. The computer storage medium stores a computer program, the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute the method of the second aspect.

According to an eleventh aspect, an embodiment of this application further provides a computer program. The computer program includes computer software instructions which, when executed by a computer, cause the computer to execute any automatic driving method according to the first aspect.

According to a twelfth aspect, an embodiment of this application further provides a computer program. The computer program includes computer software instructions which, when executed by a computer, cause the computer to execute any automatic driving method according to the second aspect.
Brief Description of the Drawings

FIG. 1a is a schematic diagram of an automatic driving strategy layer according to an embodiment of this application;

FIG. 1b is a schematic diagram of another automatic driving strategy layer according to an embodiment of this application;

FIG. 1c is a schematic diagram of another automatic driving strategy layer according to an embodiment of this application;

FIG. 1d is a schematic diagram of another automatic driving strategy layer according to an embodiment of this application;

FIG. 1e is a schematic diagram of the network architecture of an automatic driving system according to an embodiment of this application;

FIG. 1f is a functional block diagram of an automatic driving apparatus 110 according to an embodiment of this application;

FIG. 1g is a schematic structural diagram of an automatic driving system according to an embodiment of this application;

FIG. 2 is a schematic flowchart of an automatic driving method according to an embodiment of this application;

FIG. 3a is a schematic flowchart of another automatic driving method according to an embodiment of this application;

FIG. 3b is a first schematic flowchart of a method for detecting and recognizing a target according to an embodiment of this application;

FIG. 3c is a second schematic flowchart of a method for detecting and recognizing a target according to an embodiment of this application;

FIG. 3d is a schematic structural diagram of a convolutional neural network model according to an embodiment of this application;

FIG. 4 is a schematic flowchart of another automatic driving method according to an embodiment of this application;

FIG. 5 is a schematic structural diagram of a cloud server according to an embodiment of this application;

FIG. 6 is a schematic structural diagram of an automatic driving apparatus according to an embodiment of this application;

FIG. 7 is a schematic structural diagram of another cloud server according to an embodiment of this application.
Detailed Description

The embodiments of this application are described below with reference to the accompanying drawings in the embodiments of this application.

The terms "first" and "second" in the specification and drawings of this application are used to distinguish different objects, or to distinguish different processing of the same object, rather than to describe a specific order of objects. In addition, the terms "include" and "have" and any variations thereof mentioned in the description of this application are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes other steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device. It should be noted that in the embodiments of this application, words such as "exemplarily" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplarily" or "for example" in the embodiments of this application should not be construed as being preferable to or more advantageous than other embodiments or designs. Rather, the use of words such as "exemplarily" or "for example" is intended to present related concepts in a specific manner. In the embodiments of this application, "A and/or B" means both "A and B" and "A or B". "A, and/or B, and/or C" means any one of A, B, and C, or any two of A, B, and C, or A and B and C.

To facilitate understanding of the technical solutions described in this application, some terms used in this application are explained below:
(1) Autonomous vehicles (self-piloting automobiles)

In the embodiments of this application, an autonomous vehicle, also called a driverless car, a computer-driven car, or a wheeled mobile robot, is an intelligent vehicle that realizes driverless operation through a computer system. In practical applications, an autonomous vehicle relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices, and a global positioning system, so that a computer device can operate the motor vehicle automatically and safely without any active human operation.
(2) Automatic driving strategy layer

In the embodiments of this application, the automatic driving strategy layer is a subset of an automatic driving map and can be used to guide a vehicle in automatic driving. Specifically, the automatic driving strategy layer may contain static layer information and may also contain dynamic layer information. Each layer can be regarded as a specific map. For example, the static layer information may be: connection relationships between roads, positions of lane lines, the number of lane lines, other objects around the road, and the like. For another example, the static layer information may be: traffic sign information (for example, the position and height of traffic lights and the content of signs, such as speed limit signs, continuous curves, and slow-down signs), trees around the road, building information, and the like. For example, the dynamic layer information may be dynamic traffic information, which may be associated with a time point (or time period) or may be independent of a time point (or time period). In some implementations, the format of the dynamic layer information may be: timestamp (or time period) + road section + information, for example, the weather information of road section 1 at a certain moment or within a certain time period, or the road surface information of road section 1 at a certain moment or within a certain time period (for example, road interruption, road maintenance, objects dropped on the road, and water accumulation on the road), and so on.

In some implementations, the layer information contained in the automatic driving strategy layer may be two-dimensional information or three-dimensional information. In the embodiments of this application, two-dimensional information, also called vector information, is a quantity that has both magnitude and direction. Exemplarily, the two-dimensional information may be the coordinate information of an obstacle in the road. In the embodiments of this application, three-dimensional information refers to information that, in addition to the two-dimensional information, also includes some abstract information that reflects the characteristics of an object. Exemplarily, the three-dimensional information may be the coordinate information of an obstacle in the road together with the size of the obstacle.
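The distinction between two-dimensional and three-dimensional layer information above can be illustrated with a small record type: the two-dimensional part is the obstacle's position, and the three-dimensional part adds an abstract attribute such as its size. The class and field names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Obstacle2D:
    """Two-dimensional (vector) layer information: position in the road."""
    x: float
    y: float

@dataclass
class Obstacle3D(Obstacle2D):
    """Three-dimensional layer information: position plus an abstract
    attribute reflecting the object's characteristics, here its size."""
    size_m: float

# e.g. a fallen tree recorded in the dynamic layer of a road section
tree = Obstacle3D(x=12.5, y=3.0, size_m=4.2)
```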
In the embodiments of this application, the users of the automatic driving strategy layer are usually vehicles with automatic driving capability.

In the embodiments of this application, FIG. 1a to FIG. 1d are schematic diagrams of automatic driving strategy layers provided by the embodiments of this application. Taking road section 1 as an example, as shown in FIG. 1a, the automatic driving strategy layer contains static layer information of the road such as lane line information, the number of lanes, road boundary information, and road driving parameters. Taking road section 2 as an example, as shown in FIG. 1b, the automatic driving strategy layer contains static layer information of the road such as lane line information, the number of lanes, road boundary information, and green belt information, and also contains dynamic layer information such as trees lying across the road surface. Taking road section 3 as an example, as shown in FIG. 1c, the automatic driving strategy layer contains static layer information of the road such as lane line information, the number of lanes, road boundary information, and green belt information, and also contains dynamic layer information such as weather information (for example, at time point T1, light snow turning to heavy snow). Taking road section 4 as an example, as shown in FIG. 1d, the automatic driving strategy layer contains static layer information of the road such as lane line information, the number of lanes, road boundary information, green belt information, and digital information devices, and also contains dynamic layer information such as weather information (at time point T1, clear turning to overcast), a historical pedestrian and non-motorized vehicle crossing probability of 60%, and moderate congestion.

In the embodiments of this application, the automatic driving strategy layer can be regarded as an extension of traditional hardware sensors (for example, radar, laser rangefinders, or cameras); the data it contains is richer and is not affected by the environment, obstacles, or interference. Specifically, the static layer information of the road ensures that automatic driving can efficiently determine a travel path and avoid obstacles, and the dynamic layer information of the road ensures that automatic driving can respond to emergencies in time. This implementation can effectively overcome the perception defects of traditional hardware sensors; for example, the data acquired by a sensor is limited, the detection range of a sensor is limited, and sensor detection is easily affected by the environment.

In practical applications, the static layer information of the road can be regarded as prior information. Specifically, prior information refers to information that can be collected in advance and will not change in a short time. In other words, such things exist objectively and do not change with external events, so they can be collected in advance and passed to the autonomous vehicle as prior information for decision-making.
(3) Automatic driving strategy

At present, there are two grading standards for automatic driving technology recognized by the global automobile industry, proposed respectively by the National Highway Traffic Safety Administration (NHTSA) of the United States and SAE International (SAE). The existing distribution of automatic driving levels can be shown in Table 1:

Table 1: Automatic driving level table
[Table 1 is provided as an image in the original application. Per the SAE levels described below: L0, the human driver performs all driving; L1, the vehicle assists with one of steering or acceleration/deceleration; L2, the vehicle assists with multiple of steering and acceleration/deceleration; L3, the vehicle performs most driving while the human driver remains attentive; L4, the vehicle performs all driving without human attention, within limited roads and environmental conditions; L5, the vehicle performs all driving without human attention.]
In the embodiments of this application, the automatic driving strategy may include an automatic driving level, may include instructions directing the vehicle to accelerate, decelerate, move forward, stop, or start, and may also include indications of the vehicle's speed, acceleration, direction of motion, position, and the like. In actual driving, which specific driving strategy to use, for example, which automatic driving level to adopt, needs to be determined in combination with the static layer information and/or the dynamic layer information of the road.

The automatic driving method provided in the embodiments of this application is applied in other devices with the function of controlling automatic driving (for example, a cloud server), or applied in a vehicle with automatic driving capability, as specifically introduced below:

In one implementation, a cloud server is used to implement the automatic driving method provided in the embodiments of this application. Through the automatic driving map stored by the cloud server, or through an automatic driving strategy layer transferred from another device and received by the cloud server, the cloud server acquires the first driving strategy corresponding to the first road section on which the target vehicle travels (the first road section is any road section in the itinerary information), and sends the first driving strategy to the target vehicle. Here, the first driving strategy is used to instruct the target vehicle to drive automatically according to the first driving strategy. In addition, when the first road section is the road section on which the target vehicle is currently driving, the cloud server can also acquire a second driving strategy of the target vehicle driving on the first road section, where the second driving strategy is determined by the target vehicle according to sensor data acquired in real time; when the cloud server determines that the similarity between the first driving strategy and the second driving strategy satisfies a preset condition, it sends to the target vehicle a prompt to switch from the second driving strategy to the first driving strategy. In the embodiments of this application, the similarity between the first driving strategy and the second driving strategy satisfying the preset condition may include the similarity satisfying a set first threshold, or may include the similarity satisfying a functional relationship, which is not specifically limited in the embodiments of this application. In this implementation, the risk of automatic driving of the target vehicle can be reduced.

In one implementation, the target vehicle can receive the automatic driving map sent by the cloud server, acquire from the automatic driving map the first driving strategy corresponding to the first road section on which the target vehicle travels, and then drive automatically according to the first driving strategy. In addition, the target vehicle can also acquire a second driving strategy through sensor data acquired in real time. When it is determined that the similarity between the first driving strategy and the second driving strategy is greater than the set first threshold, the target vehicle can drive automatically according to the first driving strategy or according to the second driving strategy; when it is determined that the similarity is less than the set first threshold, the target vehicle drives automatically according to the first driving strategy. In this implementation, the risk of automatic driving of the target vehicle can be reduced.
FIG. 1e is a schematic diagram of the network architecture of an automatic driving system provided by an embodiment of this application. As shown in FIG. 1e, the automatic driving system architecture includes vehicle 10 (that is, vehicle 1), vehicle 2, and vehicle M (M is an integer greater than 0; the number of vehicles shown is an exemplary description and should not constitute a limitation) and a cloud server 20. In practical applications, the cloud server 20 can establish communication connections with multiple vehicles 10 through a wired or wireless network.

In the embodiments of this application, as shown in FIG. 1e, the vehicle 10 includes an automatic driving apparatus 110.

In the embodiments of this application, the cloud server 20 can control vehicle 1, vehicle 2, and vehicle M by running its stored programs related to controlling automatic driving of vehicles, using the multi-dimensional data contained in the automatic driving strategy layer (for example, instructing the vehicles how to drive through driving strategies). The programs related to controlling automatic driving of vehicles may be: a program that manages the interaction between autonomous vehicles and obstacles on the road, a program that controls the route or speed of an autonomous vehicle, or a program that controls the interaction between an autonomous vehicle and other autonomous vehicles on the road.

In some examples, the cloud server acquires, from the automatic driving layer, the layer information of the road section on which a vehicle travels (for example, the first road section), and then, according to the layer information of the first road section, sends to the autonomous vehicle a driving strategy suggested for the driving situation on the first road section. For example, it determines from the dynamic layer information of the first road section (for example, the dynamic layer information contains dropped objects) that there is an obstacle ahead, and tells the autonomous vehicle how to drive around it. For another example, it determines the water accumulation on the road surface from the dynamic layer information of the first road section (for example, the dynamic layer information contains road surface information), and tells the autonomous vehicle how to drive on the waterlogged road surface. The cloud server sends the autonomous vehicle a response indicating how the vehicle should proceed in a given scenario. For example, based on the layer information of the first road section, the cloud server may confirm the presence of a temporary stop sign ahead on the road and tell the autonomous vehicle how to bypass that road. Correspondingly, the cloud server sends a suggested operation mode for the autonomous vehicle to pass the closed road section (or the obstacle), for example, instructing the vehicle to change lanes onto another road. In practical applications, the cloud server can also send the suggested operation mode to other vehicles in the area that may encounter the same obstacle, so as to help the other vehicles not only recognize the closed lane but also know how to pass it.
FIG. 1f is a functional block diagram of the automatic driving apparatus 100 provided in an embodiment of this application. In some implementations, the automatic driving apparatus 100 may be configured in a fully automatic driving mode, a partially automatic driving mode, or a manual driving mode. Taking the automatic driving levels proposed by SAE in Table 1 as an example, the fully automatic driving mode may be L5, meaning the vehicle performs all driving operations and the human driver does not need to stay attentive. The partially automatic driving modes may be L1, L2, L3, and L4, where L1 means the vehicle provides driving for one of the operations of steering and acceleration/deceleration and the human driver is responsible for the remaining driving operations; L2 means the vehicle provides driving for multiple of the operations of steering and acceleration/deceleration and the human driver is responsible for the remaining driving actions; L3 means the vehicle performs the vast majority of driving operations and the human driver needs to stay attentive in case of need; and L4 means the vehicle performs all driving operations and the human driver does not need to stay attentive, but roads and environmental conditions are limited. The manual driving mode may be L0, meaning the human driver drives the car entirely.
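The SAE levels summarized above can be encoded as a small lookup, which is useful when checking that a recommended strategy does not exceed a vehicle's automatic driving capability (the highest level it supports, as described later in the vehicle attribute information). The descriptions paraphrase the text; the helper function is an illustrative assumption:

```python
# Paraphrased SAE driving automation levels (see Table 1 and the text above).
SAE_LEVELS = {
    0: "human driver performs all driving",
    1: "vehicle assists with one of steering or acceleration/deceleration",
    2: "vehicle assists with multiple of steering and acceleration/deceleration",
    3: "vehicle performs most driving; human must stay ready to intervene",
    4: "vehicle performs all driving, but only on limited roads/conditions",
    5: "vehicle performs all driving; no human attention needed",
}

def clamp_to_capability(recommended_level: int, max_supported_level: int) -> int:
    """A recommended automatic driving level must not exceed the highest
    level the target vehicle supports."""
    return min(recommended_level, max_supported_level)
```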
In practical applications, the automatic driving apparatus 100 can control itself while in the automatic driving mode, and can determine, through human operation, the current state of the vehicle and its surrounding environment, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the likelihood that the other vehicle will perform the possible behavior, and control the automatic driving apparatus 110 based on the determined information. When the automatic driving apparatus 110 is in the fully automatic driving mode, the automatic driving apparatus 110 can be set to operate without human interaction.

The automatic driving apparatus 110 may include various subsystems, for example, a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, a power supply 110, a computer system 112, and a user interface 116. In some implementations, the automatic driving apparatus 110 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, the subsystems and elements of the automatic driving apparatus 110 may be interconnected by wire or wirelessly.

In the embodiments of this application, the travel system 102 may include components that provide powered motion for the automatic driving apparatus 110. In some implementations, the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121. The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, for example, a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine. In practical applications, the engine 118 converts the energy source 119 into mechanical energy.

In the embodiments of this application, the energy source 119 may include, but is not limited to: gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, or other sources of electric power. The energy source 119 may also provide energy for other systems of the automatic driving apparatus 110.

In the embodiments of this application, the transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft. In some implementations, the transmission 120 may also include other components, such as a clutch. The drive shaft includes one or more shafts that can be coupled to one or more wheels 121.
In the embodiments of this application, the sensor system 104 may include several sensors that sense environmental information around the automatic driving apparatus 110. For example, the sensor system 104 may include a positioning system 122 (the positioning system may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may also include sensors that monitor the internal systems of the automatic driving apparatus 110, for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, and the like. Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (for example, position, shape, direction, speed, etc.). Such detection and recognition are key functions for the safe operation of the autonomous automatic driving apparatus 110.

In the embodiments of this application, the positioning system 122 can be used to estimate the geographic position of the automatic driving apparatus 110; the IMU 124 is used to sense changes in the position and orientation of the automatic driving apparatus 110 based on inertial acceleration. In some implementations, the IMU 124 may be a combination of an accelerometer and a gyroscope.

In the embodiments of this application, the radar 126 may use radio signals to sense objects in the surrounding environment of the automatic driving apparatus 110. In some implementations, in addition to sensing objects, the radar 126 may also be used to sense the speed and/or heading of the objects.

In the embodiments of this application, the laser rangefinder 128 may use laser light to sense objects in the environment in which the automatic driving apparatus 110 is located. In some implementations, the laser rangefinder 128 may include one or more laser sources, a laser scanner, one or more detectors, and other system components.

In the embodiments of this application, the camera 130 may be used to capture multiple images of the surrounding environment of the automatic driving apparatus 110. In some implementations, the camera 130 may be a still camera or a video camera, which is not specifically limited in the embodiments of this application.
In the embodiments of this application, the control system 106 may control the operation of the automatic driving apparatus 110 and its components. The control system 106 may include various elements, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.

In the embodiments of this application, the steering system 132 is operable to adjust the heading of the automatic driving apparatus 110. For example, in one embodiment, it may be a steering wheel system.

In the embodiments of this application, the throttle 134 is used to control the operating speed of the engine 118 and, in turn, the speed of the automatic driving apparatus 110.

In the embodiments of this application, the braking unit 136 is used to control the speed of the automatic driving apparatus 110. The braking unit 136 may use friction to slow the wheels 121. In some implementations, the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current. The braking unit 136 may also take other forms to slow the rotation of the wheels 121, thereby controlling the speed of the automatic driving apparatus 110.

In the embodiments of this application, the computer vision system 140 is operable to process and analyze the images captured by the camera 130 in order to recognize objects and/or features in the surrounding environment of the automatic driving apparatus 110. In some implementations, the objects and/or features mentioned here may include, but are not limited to: traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, visual tracking, and other computer vision techniques. In some implementations, the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and the like.

In the embodiments of this application, the route control system 142 is used to determine the travel route of the automatic driving apparatus 110. In some implementations, the route control system 142 may combine data from the sensors, the positioning system 122, and one or more predetermined maps to determine a travel route for the automatic driving apparatus 110.

In the embodiments of this application, the obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise pass potential obstacles in the environment of the automatic driving apparatus 110.

It can be understood that, in some implementations, the control system 106 may additionally or alternatively include components other than those shown and described in FIG. 1f, or some of the components shown above may be omitted.
In the embodiments of this application, the automatic driving apparatus 110 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral devices 108. The peripheral devices 108 may include a wireless communication system 146, an on-board computer 148, a microphone 150, and/or a speaker 152.

In some implementations, the peripheral devices 108 provide a means for the user of the automatic driving apparatus 110 to interact with the user interface 116. For example, the on-board computer 148 may provide information to the user of the automatic driving apparatus 110. The user interface 116 may also operate the on-board computer 148 to receive user input. The on-board computer 148 may be operated through a touch screen. In other cases, the peripheral devices 108 may provide a means for the automatic driving apparatus 110 to communicate with other devices in the vehicle. For example, the microphone 150 may receive audio from the user of the automatic driving apparatus 110, for example, a voice command or other audio input. Similarly, the speaker 152 may output audio to the user of the automatic driving apparatus 110.

In the embodiments of this application, the wireless communication system 146 may communicate wirelessly with one or more devices, directly or via a communication network. For example, the wireless communication system 146 may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. In some implementations, the wireless communication system 146 may communicate with a wireless local area network (WLAN) using WiFi. In some implementations, the wireless communication system 146 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee, or using other wireless protocols, such as various vehicle communication systems; for example, the wireless communication system 146 may include one or more dedicated short-range communications (DSRC) devices.

In the embodiments of this application, the power supply 110 may provide power to various components of the automatic driving apparatus 110. In some implementations, the power supply 110 may be a rechargeable lithium-ion or lead-acid battery. One or more battery packs of such batteries may be configured as the power supply to provide power to the various components of the automatic driving apparatus 110. In some implementations, the power supply 110 and the energy source 119 may be implemented together, for example, configured together as in some all-electric vehicles.
In the embodiments of this application, some or all of the functions of the automatic driving apparatus 110 are controlled by the computer system 112. The computer system 112 may include at least one processor 113, and the processor 113 executes instructions 115 stored in a non-transitory computer-readable storage medium such as the data storage device 114. The computer system 112 may also be multiple computing devices that control individual components or subsystems of the automatic driving apparatus 110 in a distributed manner.

In some implementations, the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. Although FIG. 1f functionally shows the processor, the memory, and other elements in the same physical housing, those of ordinary skill in the art should understand that the processor, computer system, or memory may actually include multiple processors, computer systems, or memories that may or may not be stored in the same physical housing. For example, the memory may be a hard disk drive, or another storage medium located in a different physical housing. Therefore, a reference to a processor or computer system will be understood to include a reference to a collection of processors, computer systems, or memories that may operate in parallel, or a collection that may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering component and the deceleration component, may each have its own processor that performs only the calculations related to the function of that particular component.

In the various aspects described here, the processor 113 may be located far away from the vehicle and communicate wirelessly with the vehicle. In other aspects, some of the processes described here are executed on a processor arranged in the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single operation.
In some implementations, the data storage device 114 may include instructions 115 (for example, program logic), and the instructions 115 may be executed by the processor 113 to perform various functions of the automatic driving apparatus 110, including the functions described above. The data storage device 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripheral devices 108.

In addition to the instructions 115, the data storage device 114 may also store data, for example, road maps, route information, the position, direction, and speed of the vehicle and other vehicle data, as well as other information. Such information may be used by the automatic driving apparatus 110 and the computer system 112 while the automatic driving apparatus 110 operates in autonomous, semi-autonomous, and/or manual modes.

For example, in the embodiments of this application, the data storage device 114 acquires environmental information of the vehicle from the sensor system 104 or other components of the automatic driving apparatus 110. The environmental information may be, for example, lane line information, the number of lanes, road boundary information, road driving parameters, traffic signals, green belt information, and whether there are pedestrians or vehicles in the environment in which the vehicle is currently located. The data storage device 114 may also store state information of the vehicle itself and state information of other vehicles that interact with the vehicle. The state information may include, but is not limited to: the speed, acceleration, and heading angle of the vehicle. For example, based on the speed measurement and distance measurement functions of the radar 126, the vehicle obtains the distance between other vehicles and itself, the speed of other vehicles, and so on. In this case, the processor 113 may acquire the above vehicle data from the data storage device 114 and determine a driving strategy that meets safety requirements based on the environmental information of the vehicle.
In the embodiments of this application, the user interface 116 is used to provide information to or receive information from the user of the automatic driving apparatus 110. In some implementations, the user interface 116 may include one or more input/output devices within the set of peripheral devices 108, for example, one or more of the wireless communication system 146, the on-board computer 148, the microphone 150, and the speaker 152.

In the embodiments of this application, the computer system 112 may control the functions of the automatic driving apparatus 110 based on inputs received from the various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may use input from the control system 106 to control the steering system 132 so as to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some implementations, the computer system 112 is operable to provide control over many aspects of the automatic driving apparatus 110 and its subsystems.

In some implementations, one or more of the above components may be installed separately from or associated with the automatic driving apparatus 110. For example, the data storage device 114 may exist partially or completely separately from the automatic driving apparatus 110. The above components may be communicatively coupled together in a wired and/or wireless manner.

In some implementations, the above components are only an example. In practical applications, components in the above modules may be added or deleted according to actual needs, and FIG. 1f should not be understood as a limitation on the embodiments of this application.
An autonomous vehicle traveling on a road, for example, the automatic driving apparatus 110, can recognize objects in its surrounding environment to determine whether to adjust its current speed. The objects may be other vehicles, traffic control devices, or other types of objects. In some implementations, each recognized object may be considered independently, and the speed to which the autonomous vehicle is to be adjusted may be determined based on the respective characteristics of each object, for example, its current driving data, its acceleration, its distance from the vehicle, and so on.

In some implementations, the automatic driving apparatus 110, or a computer device associated with the automatic driving apparatus 110 (for example, the computer system 112, the computer vision system 140, or the data storage device 114 shown in FIG. 1f), may predict the behavior of a recognized object based on the characteristics of the recognized object and the state of the surrounding environment (for example, traffic, rain, ice on the road, etc.). In some implementations, the recognized objects depend on one another's behavior, so all recognized objects may also be considered together to predict the behavior of a single recognized object. The automatic driving apparatus 110 can adjust its speed based on the predicted behavior of the recognized objects. In other words, the automatic driving apparatus 110 can determine, based on the predicted behavior of the objects, what stable state the vehicle will need to adjust to (for example, the adjustment operation may include accelerating, decelerating, or stopping). In this process, other factors may also be considered to determine the speed of the automatic driving apparatus 110, for example, the lateral position of the automatic driving apparatus 110 on the road it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so on.

In addition to providing instructions to adjust the speed of the autonomous vehicle, the computer device may also provide instructions to modify the steering angle of the vehicle 100, so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near the autonomous vehicle (for example, cars in adjacent lanes on the road).

In the embodiments of this application, the above automatic driving apparatus 110 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, a cart, or the like, which is not specifically limited in the embodiments of this application.

In some implementations, the automatic driving apparatus 110 may also include hardware structures and/or software modules, and the above functions are implemented in the form of hardware structures, software modules, or hardware structures plus software modules. Whether one of the above functions is performed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
FIG. 1f introduced the functional block diagram of the automatic driving apparatus 110; the automatic driving system 101 in the automatic driving apparatus 110 is introduced below. FIG. 1g is a schematic structural diagram of an automatic driving system provided by an embodiment of this application. FIG. 1f and FIG. 1g describe the automatic driving apparatus 110 from different perspectives; for example, the computer system 101 in FIG. 1g is the computer system 112 in FIG. 1f. As shown in FIG. 1g, the computer system 101 includes a processor 103, and the processor 103 is coupled to a system bus 105. The processor 103 may be one or more processors, each of which may include one or more processor cores. A video adapter 107 may drive a display 109, and the display 109 is coupled to the system bus 105. The system bus 105 is coupled to an input/output (I/O) bus 113 through a bus bridge 111. An I/O interface 115 is coupled to the I/O bus. The I/O interface 115 communicates with various I/O devices, for example, an input device 117 (such as a keyboard, a mouse, or a touch screen), a media tray 121 (for example, a CD-ROM, a multimedia interface, etc.), a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture static and dynamic digital video images), and an external USB interface 125. Optionally, the interface connected to the I/O interface 115 may be a USB interface.

The processor 103 may be any conventional processor, including a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, or a combination thereof. Optionally, the processor may be a dedicated device such as an application-specific integrated circuit (ASIC). Optionally, the processor 103 may be a neural network processor or a combination of a neural network processor and the above conventional processors.

Optionally, in the various embodiments described here, the computer system 101 may be located far away from the autonomous vehicle and may communicate wirelessly with the autonomous vehicle 100. In other aspects, some of the processes described here are executed on a processor arranged in the autonomous vehicle, and others are executed by a remote processor, including taking the actions required to perform a single maneuver.

The computer 101 may communicate with a software deployment server 149 through a network interface 129. The network interface 129 is a hardware network interface, for example, a network card. The network 127 may be an external network, such as the Internet, or an internal network, such as the Ethernet or a virtual private network (VPN). Optionally, the network 127 may also be a wireless network, such as a WiFi network or a cellular network.
A hard disk drive interface is coupled to the system bus 105. The hard disk drive interface is connected to a hard disk drive. A system memory 135 is coupled to the system bus 105. The data running in the system memory 135 may include the operating system 137 and application programs 143 of the computer 101.

The operating system includes a shell 139 and a kernel 141. The shell 139 is an interface between the user and the kernel of the operating system. The shell is the outermost layer of the operating system. The shell manages the interaction between the user and the operating system: it waits for the user's input, interprets the user's input to the operating system, and handles the various output results of the operating system.

The kernel 141 consists of the parts of the operating system that manage memory, files, peripherals, and system resources. Interacting directly with the hardware, the operating system kernel usually runs processes and provides inter-process communication, CPU time-slice management, interrupts, memory management, IO management, and so on.

The application programs 141 include programs related to controlling automatic driving of vehicles, for example, a program that manages the interaction between autonomous vehicles and obstacles on the road, a program that controls the route or speed of an autonomous vehicle, and a program that controls the interaction between an autonomous vehicle and other autonomous vehicles on the road. The application programs 141 also exist on the system of the deploying server 149. In one embodiment, when the application programs 141 need to be executed, the computer system 101 may download the application programs 141 from the deploying server 149.
又比如,应用程序143可以是控制车辆根据实时获取的传感器数据计算驾驶策略的应用程序。其中,实时获取的传感器数据可以包括环境信息、目标车辆的自身状态信息以及目标车辆潜在交互目标对象的状态信息。具体地,环境信息为目标车辆当前所处环境的信息(例如,绿化带分布情况、车道、交通信号灯等),状态信息可以包括但不限于车辆的速度、加速度、航向角。例如,车辆基于雷达126的测速、测距功能,得到其他车辆与自身之间的距离、其他车辆的速度等等。计算机系统101的处理器103可以调用应用程序143,得到第二驾驶策略。
在一些实现方式中,由于自动驾驶策略图层中包含的信息更为丰富,道路的静态图层信息保证了自动驾驶能够高效地确定行驶路径和规避障碍物,道路的动态化图层信息保证了自动驾驶能够及时地应对突发状况,这一实现方式能够克服传感器的感知缺陷,从而可以知道的是,第一驾驶策略的准确性往往要高于第二驾驶策略。那么,在这种情况下,应用程序143还可以是控制车辆根据云端服务器发送的第一驾驶策略以及第二驾驶策略(这里,第二驾驶策略为车辆通过实时获取的传感器数据确定的)来确定最终驾驶策略的应用程序。具体地,在第一驾驶策略和第二驾驶策略的相似度大于设定好的第一阈值的情况下,应用程序143确定第一驾驶策略或第二驾驶策略为最终的驾驶策略;在第一驾驶策略和第二驾驶策略的相似度小于设定好的第一阈值的情况下,应用程序143确定第一驾驶策略为最终的驾驶策略。
传感器153和计算机系统101关联。传感器153用于探测计算机101周围的环境。举例来说,传感器153可以探测动物、汽车、障碍物和人行横道等,进一步地,传感器153还可以探测上述动物、汽车、障碍物和人行横道等物体周围的环境,比如:动物周围出现的其他动物、天气条件、周围环境的光亮度等。可选地,如果计算机101位于自动驾驶的汽车上,传感器可以是摄像头、红外线感应器、化学检测器、麦克风等。传感器153在激活时,按照预设间隔感测信息并实时地将所感测到的信息提供给计算机系统101。
可选的,在本文所述的各种实施例中,计算机系统101可位于远离自动驾驶装置110的地方,并且可以与自动驾驶装置110进行无线通信。收发器123可将自动驾驶任务、传感器153采集的传感器数据和其他数据发送给计算机系统101;还可以接收计算机系统101发送的控制指令。自动驾驶装置可执行收发器123接收的来自计算机系统101的控制指令,并执行相应的驾驶操作。在其他方面,本文所述的一些过程在设置在自动驾驶车辆内的处理器上执行,其他由远程处理器执行,包括采取执行单个操作所需的动作。
基于图1e所示的系统架构,下面结合图2所示的本申请实施例提供的一种自动驾驶方法的流程示意图,具体说明在本申请实施例中,是如何实现车辆的自动驾驶的,可以包括但不限于如下步骤:
步骤S200、目标车辆向云端服务器上报自身的车辆属性信息和行程信息。
步骤S202、云端服务器接收目标车辆上报的车辆属性信息和所述目标车辆的行程信息;其中,所述目标车辆的车辆属性信息用于生成自动驾驶策略。
在本申请实施例中,目标车辆为图1e所示的系统架构中多个自动驾驶车辆中的任意一个。
在本申请实施例中,行程信息用于指示目标车辆的行程。在一个示例中,行程信息可以包含目标车辆的当前位置,例如,当前位置为通过二维数组表示的坐标值(X,Y),其中,X为经度值,Y为纬度值;在一个示例中,行程信息可以包含起始位置、目的位置,例如,起始位置为通过二维数组表示的坐标值(X1,Y1),目的位置为通过二维数组表示的坐标值(X2,Y2),其中,X1、X2为经度值,Y1、Y2为纬度值;在一个示例中,该行程信息也可以为起始位置至目的位置的计划行驶路段,该计划行驶路段为一条连续的、有方向的矢量线,例如,起始位置为A,目的位置为B,计划行驶路段为A-C-D-E-B的一条连续的、有方向的矢量线。当行程信息为目标车辆的当前位置时,车载终端可以将位置传感器获取的行程信息发送给服务器。当行程信息为目标车辆的起始位置和目的位置,或者是起始位置至目的位置的计划行驶路段时,目标车辆可以将车载导航(如,GPS系统,或者是北斗系统,亦或者是其他定位系统)确定的行程信息发送给服务器。例如,用户在目标车辆的车载导航中输入起始位置和目的位置,车载导航将用户输入的起始位置和目的位置确定为行程信息。又例如,用户在目标车辆的车载导航中选择了起始位置至目的位置的一条计划行驶路段,车载导航将用户选择的计划行驶路段确定为行程信息。
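上述三种行程信息的形式(当前位置、起始位置与目的位置、计划行驶路段)可以用如下Python草图表示。需要说明的是,这只是一种示意性的数据组织方式,字段名与坐标取值均为假设,并非本申请限定的格式:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Coordinate = Tuple[float, float]  # (经度X, 纬度Y)

@dataclass
class TripInfo:
    """行程信息的一种示意性表示(字段名为假设)。"""
    current_position: Optional[Coordinate] = None   # 当前位置(X, Y)
    start_position: Optional[Coordinate] = None     # 起始位置(X1, Y1)
    dest_position: Optional[Coordinate] = None      # 目的位置(X2, Y2)
    # 计划行驶路段:一条连续的、有方向的矢量线,如A-C-D-E-B
    planned_route: List[Coordinate] = field(default_factory=list)

# 例:起始位置A至目的位置B,途经C、D、E的计划行驶路段(坐标为假设值)
trip = TripInfo(
    start_position=(116.30, 39.90),
    dest_position=(116.40, 39.95),
    planned_route=[(116.30, 39.90), (116.32, 39.91), (116.35, 39.93),
                   (116.38, 39.94), (116.40, 39.95)],
)
```

车载终端可以将这样的结构序列化后上报给云端服务器;矢量线的方向由列表中坐标点的先后顺序体现。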
在本申请实施例中,车辆属性信息用于指示自动驾驶策略。示例性地,该自动驾驶策略可以包括目标车辆有能力支持的自动驾驶策略。具体地,目标车辆的车辆属性信息可以包括目标车辆的自动驾驶能力、目标车辆的传感器分布信息、目标车辆中驾驶员的驾驶状态中的至少一种。其中,目标车辆的自动驾驶能力,是指目标车辆可以支持的最高自动驾驶等级。目标车辆的传感器分布信息可以包括传感器的种类、数量和放置位置等等。驾驶员的驾驶状态可以包括车辆中驾驶员的疲劳程度、车辆中驾驶员的驾驶能力等。
接下来对目标车辆的传感器分布信息进行详细阐述,例如,目标车辆包含有一个或多个车辆速度传感器,该车辆速度传感器可以分布在目标车辆的内部,用于检测车辆的行驶 速度;目标车辆包含有一个或多个加速度传感器,该加速度传感器可以分布在目标车辆的内部,用于检测车辆在行驶过程中的加速度,例如,该加速度为急刹车状态下的加速度;目标车辆包含有一个或多个视频传感器,该视频传感器可以分布在目标车辆的外部,用于获取并监测车辆周围环境的图像数据;目标车辆包含有一个或多个雷达传感器,该雷达传感器可以分布在整个目标车辆的外部,用于获取并监测车辆周围环境的电磁波数据,主要通过发射电磁波,然后通过接收周围物体反射的电磁波来检测周围物体与车辆的距离、周围物体的外形等各项数据。以多个雷达传感器分布在整个目标车辆的外部为例,多个雷达传感器的子集耦合到车辆的前部,从而定位车辆前方的对象。一个或多个其他雷达传感器可位于车辆的后部,从而在车辆后退时定位车辆后方的对象。其他雷达传感器可位于车辆的侧面,从而定位从侧面靠近车辆的例如其他车辆等对象。例如,激光雷达(light detection and ranging,LIDAR)传感器可安装在车辆上,例如,将LIDAR传感器安装在车辆顶部安装的旋转结构中。然后旋转LIDAR传感器120能以360°模式传输车辆周围的光信号,从而随着车辆移动不断映射车辆周围所有对象。例如,目标车辆包含有相机、摄像机或其他类似图像采集传感器等成像传感器,该成像传感器可以安装在车辆上,从而随着车辆移动捕捉图像。可以在车辆的所有侧面放置多个成像传感器,从而以360°模式捕捉车辆周围的图像。成像传感器不仅可以捕捉可见光谱图像,还可以捕捉红外光谱图像。例如,目标车辆包含有全球定位系统(Global Positioning System,GPS)传感器(亦或者是北斗系统传感器),该传感器可位于车辆上,从而向控制器提供与车辆的位置相关的地理坐标和坐标生成时间。GPS包括用于接收GPS卫星信号的天线以及耦合到天线的GPS接收器。例如,当在图像中或另一传感器观察到对象时,GPS可提供发现物体的地理坐标和时间。
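上述传感器分布信息(种类、数量和放置位置)可以组织为如下列表结构。这是一份示意性草图,其中的传感器数量与放置位置均为假设示例,并非本申请限定的配置:

```python
# 目标车辆传感器分布信息的一种示意性描述(数量、位置均为假设)
sensor_layout = [
    {"type": "speed",  "count": 1, "position": "内部"},        # 车辆速度传感器
    {"type": "accel",  "count": 2, "position": "内部"},        # 加速度传感器
    {"type": "camera", "count": 4, "position": "外部四周"},    # 360°成像传感器
    {"type": "radar",  "count": 6, "position": "前部/后部/侧面"},  # 雷达传感器
    {"type": "lidar",  "count": 1, "position": "顶部旋转结构"},    # LIDAR传感器
    {"type": "gps",    "count": 1, "position": "车身"},        # 定位传感器
]

def count_by_position(layout, keyword):
    """统计放置位置包含某关键字的传感器总数。"""
    return sum(s["count"] for s in layout if keyword in s["position"])
```

云端服务器拿到这类结构后,即可据此评估目标车辆的感知覆盖范围。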
以驾驶员的驾驶状态为例,目标车辆中驾驶员的疲劳程度可以为轻度疲劳、中度疲劳、严重疲劳以及状态良好中的一种。在实际应用中,可以先获取驾驶员的面部图像,然后,对获取到的驾驶员的面部图像进行识别,以判定驾驶员的疲劳程度。当识别出驾驶员的面部图像的分类结果之后,可以通过枚举值的方式来表征不同类型的驾驶员的驾驶状态。例如,0表示状态良好、1表示轻度疲劳、2表示中度疲劳、3表示严重疲劳。目标车辆驾驶员的驾驶能力可以包括高、中、低中的一种。在实际应用中,可以通过指示信息来表征驾驶员的驾驶能力,例如,00表示驾驶员的驾驶能力为“高”,01表示驾驶员的驾驶能力为“中”,02表示驾驶员的驾驶能力为“低”,还可以通过联合指示信息来表征驾驶员的驾驶能力。需要说明的是,上述举例均只是一种示例,不应构成限定。
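上述通过枚举值表征驾驶员驾驶状态的方式可以写成如下草图。其中疲劳程度与驾驶能力的取值沿用上文举例,而联合指示的具体编码格式为假设,仅作示意:

```python
# 疲劳程度与驾驶能力的枚举值(取值与上文举例一致)
FATIGUE = {0: "状态良好", 1: "轻度疲劳", 2: "中度疲劳", 3: "严重疲劳"}
ABILITY = {"00": "高", "01": "中", "02": "低"}

def encode_driver_state(fatigue_level: int, ability_code: str) -> str:
    """将疲劳程度与驾驶能力联合编码为一个指示串(编码格式为假设)。"""
    if fatigue_level not in FATIGUE or ability_code not in ABILITY:
        raise ValueError("未定义的驾驶状态取值")
    return f"{fatigue_level}-{ability_code}"

# 例:中度疲劳、驾驶能力为"中",联合编码为"2-01"
```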
步骤S204、云端服务器根据所述行程信息在自动驾驶策略图层中获取所述目标车辆行驶的第一路段的图层信息。
在本申请实施例中,路段是指A点到B点之间连续的、有方向的一条矢量线。例如,A点为行程信息中的起始位置,B点为行程信息中的目的位置。
在本申请实施例中,云端服务器根据目标车辆上报的行程信息确定至少一条行驶的路段(例如,该路段为第一路段,第一路段只是一种示例性描述,不应构成限定),然后,在自动驾驶策略图层中获取第一路段的图层信息。这里,行驶的路段可以包括正在行驶的路段,也可以包括计划行驶的路段,本申请实施例不作具体限定。
在一些实现方式中,第一路段的图层信息可以包括第一路段的静态图层信息,其中, 第一路段的静态图层信息用于指示第一路段的基础设施信息。在一些实现方式中,第一路段的图层信息包括第一路段的静态图层信息和第一路段的动态图层信息,其中,第一路段的静态图层信息用于指示第一路段的基础设施信息,第一路段的动态图层信息用于指示第一路段的动态交通信息。
具体来说,路段的基础设施信息是指,相关部门(例如,交通部门或政府部门)为满足车辆的正常出行而规划建设的基本设施。例如,第一路段的基础设施信息可以包括道路(包含道路等级等,如快速路、主干路、支路)、桥梁(包含涵洞)、隧道(包含隧道机电设施,如监控设施、通信设施、照明设施等)和交通安全工程设施(包括标志、标线、护栏等)中的至少一种。在实际应用中,可以通过指示符R表征“道路”,可以通过指示符L表征“桥梁”,可以通过指示符U表征“隧道”,可以通过指示符J表征“交通安全工程设施”。示例性地,第一路段的静态图层信息可以包括以下至少一项:车道属性、道路的数字化设备信息,绿化带信息。其中,车道属性用于指示基础道路的规范化信息;道路的数字化设备信息用于指示道路的硬件设施信息;绿化带信息用于指示绿化带位置。
例如,车道属性可以包括以下至少一项:车道线信息、车道数、道路边界信息、道路行驶参数、交通信号和障碍物。其中,车道线信息用于指示车道线位置,例如,车道线信息可以包括车道宽度、车道线角度以及曲率等;车道数用于指示车道的数量,例如,可以通过左左,左,右,右右四条车道线信息,区分出三条车道,从而可以确定XX路包括三条车道;道路边界信息用于指示道路边界线的位置,道路行驶参数用于指示当前路段允许的最大限额速度、可以支持的驾驶速度,例如,YY路段限速60KM/h,XY路段可以支持的驾驶速度为100KM/h-120KM/h,XY路段可以支持的驾驶速度等级为驾驶速度等级3;交通信号用于指示车辆的前进方向(例如,红灯停、绿灯行,左转、右转、掉头等),障碍物用于指示车辆的行驶边界。
例如,道路的数字化设备信息可以包括以下至少一项:车对外界的信息交换系统、路测单元。其中,车对外界的信息交换意为vehicle to everything,即Vehicle to X(V2X),是智能交通运输系统的关键技术,使得车与车、车与基站、车与路侧单元、车与云端服务器、车与行人等之间能够通信,从而可以获得实时路况、道路信息、行人信息等一系列交通信息。通过V2X系统,自动驾驶装置110可以获取更丰富的实时数据,可用于进行实时交通信息的分析,最佳行驶路线选择等。路侧单元(Road Side Unit,RSU),是安装在路侧,采用短程通信技术(例如,Cellular-V2X技术)技术,与车载单元(On Board Unit,OBU)进行通讯的装置。具体来说,目标车辆可以通过与道路上的数字化设备(如路侧单元),以及通过网络与云端服务器进行信息交互,可以及时更新行驶路段的环境信息,避免目标车辆行驶在允许运行区域之外的非驾驶区域,例如,在出现交通管制的道路而目标车辆依旧行驶在管制的道路中。
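上述静态图层信息的各组成项(车道属性、道路的数字化设备信息、绿化带信息)可以组织为如下结构。其中的具体数值(车道宽度、限速、坐标等)均为假设示例:

```python
# 第一路段静态图层信息的一种示意性组织方式(数值为假设)
static_layer = {
    "lane": {
        "lane_count": 3,                    # 由左左/左/右/右右四条车道线区分出三条车道
        "lane_width_m": 3.5,                # 车道宽度(假设值)
        "speed_limit_kmh": 60,              # 当前路段允许的最高限速
        "supported_speed_kmh": (100, 120),  # 可支持的驾驶速度区间(示例)
    },
    "digital_devices": ["V2X", "RSU"],      # 车对外界的信息交换系统、路侧单元
    "green_belt": [(116.31, 39.905)],       # 绿化带位置(示意坐标)
}

def lanes_from_lines(n_lines: int) -> int:
    """n条车道线可区分出n-1条车道。"""
    return max(n_lines - 1, 0)
```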
具体来说,路段的动态交通信息是指,车辆出行时,具有变化属性的交通信息。在本申请实施例中,第一路段的动态图层信息可以是与时间点(或时间段)相关联的信息,也可以是与时间点(或时间段)无关联的信息。示例性地,第一路段的动态图层信息可以包括以下至少一项:天气信息、路面信息、第一路段在某个时间段(例如,第一时间段,具体地,第一时间段可以为上午8:00-9:00的时间段)的拥堵情况、第一路段在第一时间段内行人和非机动车的穿行概率、第一路段在第一时间段内发生驾驶事故的事故概率。这里,第一路段在第一时间段内的拥堵情况可以为历史拥堵情况,也可以为根据历史拥堵情况预测得到的未来某个时间段内的拥堵情况。类似地,第一路段在第一时间段内行人和非机动车的穿行概率可以为历史穿行概率,也可以为根据历史穿行概率预测得到的未来某个时间段内的穿行概率。类似地,第一路段在第一时间段内发生驾驶事故的事故概率可以为历史事故概率,也可以为根据历史事故概率预测得到的未来某个时间段内的事故概率。
示例性地,天气信息可以包括晴天、阴天、大雨、小雨、下雪等等。
示例性地,路面信息可以包括道路中断情况、道路维修情况、道路遗撒情况、积水情况等等。例如,若存在道路中断,表示该路段不可驾驶。例如,若存在道路维修,表示该路段需要小心驾驶或绕道驾驶,在这种情况下,出于安全行驶的目的,往往需要将自动化程度高的自动驾驶策略切换为自动化程度低的自动驾驶策略,例如,将完全自动驾驶策略切换为部分地自动驾驶策略。例如,若存在道路遗撒情况,则进一步获取遗撒物的尺寸是否大于某一阈值,若遗撒物的尺寸小于某一阈值,表示该遗撒物对自动驾驶策略产生的影响极小,或可以忽略不计。例如,若遗撒物为纸屑时,此时发生的道路遗撒对自动驾驶策略的行驶几乎没有影响。若遗撒物的尺寸大于某一阈值,例如,若遗撒物为石头时,此时发生的道路遗撒对自动驾驶车辆的行驶的影响比较大。在这种情况下,出于安全行驶的目的,往往需要将自动化程度高的自动驾驶策略切换为自动化程度低的自动驾驶策略。
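上述"遗撒物尺寸超过阈值时将自动化程度高的自动驾驶策略切换为自动化程度低的自动驾驶策略"的判断,可以写成如下草图。其中尺寸阈值0.3米为假设值,自动驾驶等级以0-5的数值示意(数值越大自动化程度越高):

```python
def adjust_policy_for_debris(debris_size_m: float, current_level: int,
                             size_threshold_m: float = 0.3) -> int:
    """遗撒物尺寸超过阈值时,将自动驾驶等级下调一级(阈值0.3m为假设值)。

    current_level按等级0-5表示,例如5表示完全自动驾驶L5。
    """
    if debris_size_m > size_threshold_m and current_level > 0:
        return current_level - 1  # 例如由完全自动驾驶L5切换为高度自动驾驶L4
    return current_level          # 纸屑等小尺寸遗撒物影响可忽略,维持原策略
```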
在本申请实施例中,上述所提及的第一路段在某个时间段(例如,第一时间段,具体地,第一时间段可以为上午8:00-9:00的时间段)的拥堵情况,第一路段在第一时间内行人和非机动车的穿行概率、第一路段在第一时间段内发生驾驶事故的事故概率均属于第一路段的统计值。所谓统计值(也称为样本值),它是关于样本中某一变量的综合描述,或者说是样本中所有元素的某种特征的综合数量表现。统计值是从样本中计算出来的,它是相应的参数值的估计量。在一些实现方式中,可以通过对第一路段的车辆进行分时统计,例如,统计在某个时间段内第一路段上行驶的车辆的数量,统计在某个时间段内第一路段上行驶的车辆的自动驾驶策略分布情况;也可以对第一路段的除车辆之外其他交通参与者进行分时统计,例如,统计在某个时间段内第一路段上行人和非机动车的穿行数量等等,从而得到第一路段的统计值。
在本申请实施例中,关于道路拥堵情况的界定,可以通过判断路段的车流速度是否小于设定好的某一阈值来实现,例如,该阈值可以为10km/h,若路段的车流速度小于10km/h,表明该道路出现拥堵;又例如,该阈值可以为20辆/车道,若路段上单车道行驶的车辆数量大于20辆/车道,表明该道路出现拥堵。
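上述两种拥堵界定方式可以合并为一个判断函数,如下草图所示。其中10km/h与20辆/车道沿用文中示例阈值,"或"的组合方式为假设:

```python
def is_congested(flow_speed_kmh: float, vehicles_per_lane: int,
                 speed_threshold: float = 10.0,
                 density_threshold: int = 20) -> bool:
    """车流速度低于10km/h,或单车道车辆数达到20辆/车道,即判定为拥堵。

    两个阈值沿用文中示例;两个条件的组合方式(逻辑或)为示意性假设。
    """
    return (flow_speed_kmh < speed_threshold
            or vehicles_per_lane >= density_threshold)
```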
步骤S206、云端服务器根据所述第一路段的图层信息和所述目标车辆的车辆属性信息获取第一驾驶策略。
在一些实现方式中,根据第一路段的图层信息和目标车辆的车辆属性信息获取第一驾驶策略的实现步骤可以包括:根据第一路段的图层信息和目标车辆的车辆属性信息获取第一安全通行概率,之后,根据驾驶策略与安全通行概率的对应关系获取第一安全通行概率对应的第一驾驶策略。具体地,云端服务器存储有驾驶策略与安全通行概率的对应关系,例如,云端服务器将安全通行概率70%对应的驾驶策略设置为驾驶策略1,将安全通行概率75%对应的驾驶策略设置为驾驶策略2,将安全通行概率80%对应的驾驶策略设置为驾驶策略3,将安全通行概率85%对应的驾驶策略设置为驾驶策略4,将安全通行概率90%对应的驾驶策略设置为驾驶策略5,将安全通行概率95%对应的驾驶策略设置为驾驶策略6,等等。例如,若云端服务器根据第一路段的图层信息和目标车辆的车辆属性信息确定的第一安全通行概率为90%,则云端服务器根据预置的驾驶策略与安全通行概率的对应关系,查找到第一安全通行概率90%对应的驾驶策略为驾驶策略5。
在一些实现方式中,云端服务器存储的驾驶策略与安全通行概率的对应关系中包括安全通行概率区间段与驾驶策略的对应关系,那么,在这种情况下,根据驾驶策略与安全通行概率的对应关系获取第一安全通行概率对应的驾驶策略具体为:根据驾驶策略与安全通行概率区间段的对应关系,查找第一安全通行概率所对应的驾驶策略。具体地,云端服务器将安全通行概率划分为不同的概率区间段,每一个概率区间段对应一个驾驶策略。例如,云端服务器将安全通行概率为大于等于70%且小于75%的概率区间段对应的驾驶策略设置为驾驶策略1,等等。
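上述"安全通行概率区间段与驾驶策略的对应关系"可以写成如下查找草图。区间划分沿用上文示例(大于等于70%且小于75%对应驾驶策略1),其余区间的划分方式为按同样规律给出的假设:

```python
# 安全通行概率区间段与驾驶策略的对应关系(区间划分沿用文中示例并按规律补全)
POLICY_INTERVALS = [
    (0.70, 0.75, "驾驶策略1"),
    (0.75, 0.80, "驾驶策略2"),
    (0.80, 0.85, "驾驶策略3"),
    (0.85, 0.90, "驾驶策略4"),
    (0.90, 0.95, "驾驶策略5"),
    (0.95, 1.01, "驾驶策略6"),   # 上界取1.01以包含概率恰为100%的情况
]

def lookup_policy(p: float) -> str:
    """根据第一安全通行概率查找对应的第一驾驶策略。"""
    for low, high, policy in POLICY_INTERVALS:
        if low <= p < high:      # 区间左闭右开,与"大于等于70%且小于75%"一致
            return policy
    raise ValueError("安全通行概率低于最低区间,无对应的自动驾驶策略")
```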
在本申请实施例中,根据第一路段的图层信息和目标车辆的车辆属性信息获取第一安全通行概率的实现过程可以包括:通过第一模型计算第一安全通行概率。示例性地,第一模型可以为公式,也可以为函数模型,也可以为训练好的神经网络模型等等。应理解,这里的计算安全通行概率的方式可以有多种,一个实施方式中,第一安全通行概率的计算可以满足一个公式或函数模型。
在一种可能的实现方式中,第一模型包括至少一个信息项和至少一个信息项对应的权重参数,这里,至少一个信息项为根据第一路段的图层信息和目标车辆的车辆属性信息提取得到的信息项,权重参数用于指示信息项被用于确定第一安全通行概率时的重要程度。具体地,第一模型可以表示为:
P=A1ω1+A2ω2+......+Anωn  (1)
其中,A1表示第一个信息项,ω1表示第一个信息项对应的权重参数;A2表示第二个信息项,ω2表示第二个信息项对应的权重参数;An表示第n个信息项,ωn表示第n个信息项对应的权重参数。需要说明的是,信息项对应的权重参数越大,表示该信息项被用于确定安全通行概率时的重要程度越高。
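公式(1)的加权求和计算可以写成如下草图,其中信息项取值与权重均为假设的示例数值:

```python
def safe_pass_probability(items, weights):
    """按公式(1)计算第一安全通行概率:P = A1*ω1 + A2*ω2 + ... + An*ωn。

    items为从第一路段的图层信息和目标车辆的车辆属性信息提取的信息项取值,
    weights为对应的权重参数;权重越大表示该信息项越重要。
    """
    if len(items) != len(weights):
        raise ValueError("信息项与权重参数的个数不一致")
    return sum(a * w for a, w in zip(items, weights))

# 例:三个信息项(取值与权重均为假设)
p = safe_pass_probability([0.9, 0.8, 1.0], [0.5, 0.3, 0.2])  # 0.45+0.24+0.20
```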
在一种可能的实现方式中,第一模型与样本数据相关联。一个实施方式中,所述第一模型与至少一个样本数据相关联。在实际应用中,该第一模型可以为神经网络模型。示例性地,该神经网络模型可以为深度学习神经网络模型(Deep Neural Networks,DNN),也可以为循环神经网络模型(Recurrent Neural Networks,RNN)等等。以第一模型为循环神经网络模型RNN为例,RNN的隐含层中包含多个隐含层节点,每一个隐含层节点分别对应一个信息项。每一个隐含层节点具有一个权重参数。值得注意的是,训练循环神经网络模型RNN的关键在于确定训练样本,考虑到相邻路段往往具有相似的交通特性,循环神经网络模型RNN的样本数据包括从第二路段的图层信息和目标车辆的车辆属性信息提取得到的至少一个信息项,其中,第二路段为与第一路段相邻的路段,且第二路段的出口为第一路段的入口。在这一实现方式中,由于相邻的路段之间具有相同或相似的交通特性,基于第二路段的图层信息和目标车辆的车辆属性信息得到训练好的第一模型,可以提高第一模型计算第一安全通行概率的准确性。
在本申请实施例中,通过第二路段的多个样本数据可以得到训练好的循环神经网络模型RNN,该训练好的循环神经网络模型RNN可以用于表征第二路段的图层信息、目标车辆的车辆属性信息与安全驾驶概率之间的对应关系,在确定目标车辆行驶在另一路段(例如,第一路段)的安全通行概率时,可以将第一路段的图层信息和目标车辆的车辆属性信息输入训练好的循环神经网络模型,以得到第一路段的安全通行概率。
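"将路段信息项序列依次输入、由隐状态累积上下文、再将末状态映射为安全通行概率"的RNN前向过程,可以用如下单隐藏单元的极简草图示意。其中权重取值为假设,实际的第一模型应由第二路段的样本数据训练得到,且隐状态通常为向量而非标量:

```python
import math

def rnn_forward(xs, w_x=0.6, w_h=0.3, b=0.0):
    """单隐藏单元RNN前向传播的极简草图(权重为假设值,仅作示意)。

    xs为依次输入的信息项取值序列;h为随序列逐步更新的隐状态。
    """
    h = 0.0
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)   # 标准RNN递推:h_t = tanh(Wx*x + Wh*h + b)
    return 1 / (1 + math.exp(-h))              # sigmoid将末状态映射为(0,1)内的概率
```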
需要说明的是,在上述阐述的根据第一路段的图层信息和目标车辆的车辆属性信息获取第一驾驶策略的实现过程中,在一些实现方式中,在第一路段的图层信息包括第一路段的静态图层信息的情况下,根据第一路段的图层信息和目标车辆的车辆属性信息获取第一驾驶策略的实现步骤可以包括:根据第一路段的静态图层信息和目标车辆的车辆属性信息获取第一驾驶策略。此时,当通过第一模型(例如,第一模型为公式)计算安全通行概率时,在第一路段的静态图层信息和目标车辆的车辆属性信息中提取信息项,然后,获取每个信息项对应的权重参数,以计算得到安全通行概率。当通过第一模型(例如,第一模型为训练好的神经网络模型)计算安全通行概率时,将第一路段的静态图层信息和目标车辆的车辆属性信息输入第一模型,以得到目标车辆行驶的第一路段的安全通行概率。在一些实现方式中,在第一路段的图层信息包括第一路段的静态图层信息和第一路段的动态图层信息的情况下,根据第一路段的图层信息和目标车辆的车辆属性信息获取第一驾驶策略的实现步骤可以包括:根据第一路段的静态图层信息、第一路段的动态图层信息和目标车辆的车辆属性信息获取第一驾驶策略。此时,当通过第一模型(例如,第一模型为公式)计算安全通行概率时,在第一路段的静态图层信息、第一路段的动态图层信息和目标车辆的车辆属性信息中提取信息项,然后,获取每个信息项对应的权重参数,以计算得到安全通行概率。当通过第一模型(例如,第一模型为训练好的神经网络模型)计算安全通行概率时,将第一路段的静态图层信息、第一路段的动态图层信息和目标车辆的车辆属性信息输入第一模型,以得到目标车辆行驶的第一路段的安全通行概率。
步骤S208、云端服务器将所述第一驾驶策略发送给所述目标车辆。
在本申请实施例中,在云端服务器根据第一路段的图层信息和目标车辆的车辆属性信息获取第一驾驶策略之后,云端服务器将第一驾驶策略下发给目标车辆的自动驾驶装置,以便目标车辆根据第一驾驶策略进行自动驾驶。
实施本申请实施例所描述的自动驾驶方法,云端服务器可以通过目标车辆上报的行程信息在自动驾驶策略图层中获取行驶的第一路段的图层信息,之后,根据第一路段的图层信息和车辆的属性信息来获取满足安全要求的驾驶策略,以便目标车辆可以根据云端服务器确定好的驾驶策略进行自动驾驶。由于自动驾驶策略图层中包含的信息更为丰富,道路的静态图层信息保证了自动驾驶能够高效地确定行驶路径和规避障碍物,道路的动态化图层信息保证了自动驾驶能够及时地应对突发状况,这一实现方式能够克服传感器的感知缺陷(例如,传感器的感知缺陷体现在:传感器获取的数据有限、传感器检测的范围有限、传感器的检测易受环境影响等等),从而可以提高确定满足安全要求的驾驶策略的准确性,降低了自动驾驶的风险。
在一些实现方式中,当目标车辆正行驶在第一路段时,目标车辆可以将行驶在第一路段的第二驾驶策略上报给云端服务器,其中,第二驾驶策略为目标车辆通过图1f所示的传感器104实时获取的传感器数据确定的,那么,在这种情况下,云端服务器可以判断第一驾驶策略和第二驾驶策略的相似度是否满足设定好的预设条件,在第一驾驶策略和第二驾驶策略的相似度满足设定好的预设条件的情况下,向目标车辆发送将第二驾驶策略切换为第一驾驶策略的提示信息。
在本申请实施例中,提示信息的形式可以是语音或光线或图文显示等其他形式。示例性地,以提示信息的形式为语音为例,该提示信息可以为:"当前行驶的第二驾驶策略的安全性低,请将第二驾驶策略切换为第一驾驶策略"。该语音的语调可以高昂严厉。以提示信息的形式为光线为例,该提示信息可以为光线较亮且带有闪烁感。以提示信息的形式为图文为例,该图文可以为"高危驾驶"。
示例性地,上述所提及的第一驾驶策略和第二驾驶策略的相似度满足预设条件可以为第一驾驶策略和第二驾驶策略的相似度小于设定好的第一阈值(例如,该第一阈值可以为0.8),第一驾驶策略和第二驾驶策略的相似度满足预设条件还可以为第一驾驶策略和第二驾驶策略的相似度满足函数关系等等。
在这一实现方式中,由于自动驾驶策略图层包含着丰富的信息,且能够克服传感器的感知缺陷,这也意味着,云端服务器根据自动驾驶策略图层确定好的第一驾驶策略的安全性要高于目标车辆根据实时获取的传感器数据确定好的第二驾驶策略。出于安全驾驶的目的,云端服务器在判断出第一驾驶策略和第二驾驶策略的相似度小于设定好的阈值的情况下,向目标车辆发送将第二驾驶策略切换为第一驾驶策略的提示信息,以便目标车辆采用安全性更好的第一驾驶策略进行自动驾驶,可以降低自动驾驶的风险。
图2所示的方法中阐述了云端服务器可以通过目标车辆上报的行程信息在自动驾驶策略图层中获取行驶的第一路段的图层信息,之后,根据第一路段的图层信息和车辆的属性信息来获取满足安全要求的驾驶策略,下面介绍一种应用于车载终端的自动驾驶方法,具体阐述车载终端是如何根据第一驾驶策略进行自动驾驶的。图3a为本申请实施例提供的一种自动驾驶方法,具体地,该方法在上述步骤S208之后执行,如图3a所示,该方法可以包括如下步骤:
步骤S2010、目标车辆接收云端服务器发送的第一驾驶策略。
步骤S2012、目标车辆根据所述第一驾驶策略对所述目标车辆进行自动驾驶。
在一些实现方式中,目标车辆可以通过图1f所示的传感器获取到的传感器数据确定目标车辆行驶在第一路段上的第二驾驶策略,在这种情况下,根据第一驾驶策略对目标车辆进行自动驾驶的实现步骤可以包括:根据第一驾驶策略和第二驾驶策略对目标车辆进行自动驾驶。
接下来具体阐述目标车辆如何基于获取到的传感器数据确定第二驾驶策略的方法,示例性地,该传感器数据用于指示目标车辆当前行驶的环境信息。
在本申请实施例中,以传感器数据为摄像头获取的行驶环境中的图像数据为例,具体地,该图像数据可以为静态目标(例如,绿化带)的图像数据,也可以为动态目标(例如,前方车辆)的图像数据。
在识别静态目标或动态目标的过程中,首先,需要检测识别出该目标是什么类型的物体。具体地,参见图3b,为检测识别目标的实现流程。将图像数据输入特征提取模型,由特征提取模型在图像中选择候选区域,并提取候选区域的特征。之后,特征提取模型输出提取的特征,并将这些特征输入分类器,通过分类器对提取的特征进行分类识别,由分类器输出识别为第i类物体的概率。进一步地,可对识别出的物体进行框选表示。图3b还示出了获取分类器的方式,具体地,选择训练样本,并对训练样本进行特征提取等操作,从而可以完成对训练样本的训练过程,得到分类器。
其中,训练样本包括正样本和负样本。正样本是指与检测识别对象相关的样本,负样本是指与检测识别对象不相关或相关较低的样本。举例来说,图3b所示的目标检测识别流程需检测某一物体是否为车辆,即检测识别对象为车辆,此时,正样本为车辆的图像,负样本为除车辆以外的其他物体的图像,比如,负样本为车道线的图像,或者,绿化带的图像等。
在一些实现方式中,上述特征提取模型可以为卷积神经网络模型(Convolutional Neural Network,CNN),当然,也可以为其他具有图像特征提取功能的学习模型。分类器可以为支持向量机(Support Vector Machine,SVM),或者,采用其他类型的分类器。本申请实施例不对分类器和特征提取模型的类型进行限制。
以使用深度学习算法检测识别目标,且特征提取模型为卷积神经网络模型为例,参见图3c,为本申请实施例提供的一种卷积神经网络模型的结构示意图,其中,该卷积神经网络模型包括一个卷积层、一个池化层和三个全连接层。如图3c所示,目标检测识别的流程具体为:将图像输入卷积神经网络模型,通过该模型的卷积层得到图像数据的候选区域的特征图,该特征图用于表示从该候选区域提取的特征。之后,池化层根据卷积层输出的特征图进行池化操作,保留候选区域的主要特征,减少所需计算特征的数量,以减少卷积神经网络模型的计算量。之后,池化层输出的特征向量输入全连接层,由全连接层将各个候选区域的特征综合起来,得到整个图像的特征,并将图像特征输出到分类器。分类器可以输出图像中物体的分类概率。比如,分类器输出图像中物体为车辆的概率为98%。在一些实现方式中,在得到物体的分类概率之后,对识别出的物体进行框选,并使用回归器精细修整框选的位置。
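上文提到的池化操作"保留候选区域的主要特征、减少所需计算特征的数量",可以用如下纯Python的2x2最大池化草图示意(特征图数值为假设,实际模型中该操作由深度学习框架实现):

```python
def max_pool2d(feature_map, k=2):
    """2x2最大池化的纯Python草图:每个k*k窗口只保留最大响应,
    将特征图尺寸按k倍缩小,从而减少后续全连接层的计算量。"""
    rows, cols = len(feature_map), len(feature_map[0])
    return [[max(feature_map[r + dr][c + dc]
                 for dr in range(k) for dc in range(k))
             for c in range(0, cols - k + 1, k)]
            for r in range(0, rows - k + 1, k)]

# 例:4x4特征图经2x2最大池化后变为2x2(数值为假设)
fmap = [[1, 3, 2, 0],
        [4, 2, 1, 5],
        [0, 1, 3, 2],
        [2, 6, 1, 1]]
pooled = max_pool2d(fmap)
```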
需要说明的是,在卷积神经网络模型中,可能存在多个卷积层、多个池化层,以及多个全连接层。例如,如图3d所示的多个卷积层/池化层并行,将分别提取的特征均输入给神经网络层130进行处理。
那么,在这种情况下,目标车辆可以根据获取到的当前行驶环境中的一个或多个静态目标的图像数据和一个或多个动态目标图像数据获取第二驾驶策略。
在本申请实施例中,根据第一驾驶策略和第二驾驶策略对目标车辆进行自动驾驶的实现过程可以参见下面所阐述的两种情形:
情形一:在判断出第一驾驶策略和第二驾驶策略的相似度大于第一阈值的情况下,根据第一驾驶策略对目标车辆进行自动驾驶,或,根据第二驾驶策略对目标车辆进行自动驾驶。
在本申请实施例中,相似度,又称相似性度量,是综合评定两个事物之间相近程度的一种度量。可以理解的是,两个事物越接近,它们的相似度也就越大。示例性地,目标车辆通过实时获取的传感器数据确定目标车辆行驶的第一路段的自动驾驶策略为完全自动驾驶L5,云端服务器根据自动驾驶策略图层确定行驶的第一路段的自动驾驶策略为高度自动驾驶L4。由表1可以知道的是,完全自动驾驶L5,是指由车辆完成所有驾驶操作,人类驾驶员无需保持注意力。高度自动驾驶L4,是指由车辆完成所有驾驶操作,人类驾驶员无需保持注意力,但限定道路和环境条件。这两个驾驶策略之间的不同之处体现在:高度自动驾驶L4限定道路和环境条件,而完全自动驾驶L5不限定道路和环境条件。可以理解的是,高度自动驾驶L4与完全自动驾驶L5之间的相似度极高,示例性地,通过相似度计算公式(例如,欧几里德距离公式)确定这两个驾驶策略之间的相似度为0.85,该相似度大于设定好的第一阈值(例如,第一阈值为0.8),在这种情况下,目标车辆可以根据第一驾驶策略对目标车辆进行自动驾驶,或者,目标车辆根据第二驾驶策略对目标车辆进行自动驾驶。
需要说明的是,在一些实现方式中,在判断出第一驾驶策略和第二驾驶策略的相似度大于等于第一阈值的情况下,目标车辆可以根据第一驾驶策略对目标车辆进行自动驾驶,或,目标车辆可以根据第二驾驶策略对目标车辆进行自动驾驶,这表明目标车辆通过第一驾驶策略或通过第二驾驶策略进行自动驾驶时,均可以保证车辆的安全性。
情形二:在判断出第一驾驶策略和第二驾驶策略的相似度小于第一阈值的情况下,根据第一驾驶策略对目标车辆进行自动驾驶。
在本申请实施例中,由于传感器存在感知缺陷,例如,该感知缺陷体现在:传感器获取的数据有限、传感器检测的范围有限、传感器的检测易受环境影响等等,该感知缺陷容易导致获取到的传感器数据不够准确,进而容易降低根据传感器数据确定的驾驶策略的准确性。而自动驾驶策略图层可以看作是传统硬件传感器(例如,雷达、激光测距仪或摄像头)的延伸,其包含的数据更为丰富,且不受环境、障碍或者干扰的影响,其中,道路的静态图层信息保证了自动驾驶能够高效地确定行驶路径和规避障碍物,道路的动态化图层信息保证了自动驾驶能够及时地应对突发状况,从而可以知道的是,根据自动驾驶策略图层确定的第一驾驶策略的安全性要高于目标车辆通过实时获取的传感器数据确定的第二驾驶策略。基于此,在自动驾驶装置判断出第一驾驶策略和第二驾驶策略的相似度小于第一阈值的情况下,出于安全行驶的目的,根据第一驾驶策略对目标车辆进行自动驾驶,以规避不必要的驾驶风险。
需要说明的是,在一些实现方式中,在判断出第一驾驶策略和第二驾驶策略的相似度小于等于第一阈值的情况下,目标车辆根据第一驾驶策略对目标车辆进行自动驾驶,也即为了避免车辆的偶然风险,目标车辆选择安全性更高的第一驾驶策略进行自动驾驶。
可以理解的是,在本申请实施例中,在“第一驾驶策略和第二驾驶策略的相似度等于第一阈值”的情况下,可以认为其满足上述所描述的情形一,也可以认为其满足上述所描述的情形二,本申请实施例并不作具体限定。
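上述情形一与情形二的选择逻辑可以写成如下草图。需要说明的是,将驾驶策略向量化的方式为假设,这里以余弦相似度代替文中举例的欧几里德距离公式,仅作相似性度量的示意;相似度恰好等于第一阈值时归入情形二,也是一种示意性的处理:

```python
import math

def cosine_similarity(a, b):
    """计算两个策略特征向量的余弦相似度(向量化方式为假设)。"""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def choose_policy(policy1, policy2, vec1, vec2, threshold=0.8):
    """情形一:相似度大于第一阈值时,二者均可保证安全性,任选其一;
    情形二:否则选用安全性更高的第一驾驶策略。"""
    if cosine_similarity(vec1, vec2) > threshold:
        return policy2   # 情形一:这里示例性地沿用车辆自身确定的第二驾驶策略
    return policy1       # 情形二:以云端服务器确定的第一驾驶策略为准
```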
图2所示的方法中阐述了云端服务器可以通过目标车辆上报的行程信息在自动驾驶策略图层中获取行驶的第一路段的图层信息,之后,根据第一路段的图层信息和车辆的属性信息来获取满足安全要求的驾驶策略,下面介绍一种应用于车载终端(也可以称为自动驾 驶装置)的自动驾驶方法,具体阐述车载终端如何根据云端服务器发送的自动驾驶策略图层获取驾驶策略的,图4为本申请实施例提供的另一种自动驾驶方法,如图4所示,该方法可以包括如下步骤:
步骤S400、目标车辆向云端服务器上报行程信息。
在本申请实施例中,行程信息的具体描述以及关于目标车辆如何向云端服务器上报行程信息的具体实现请参考前述方法实施例图2的相关描述,此处不多加赘述。
步骤S402、云端服务器接收目标车辆上报的行程信息。
步骤S404、云端服务器根据所述行程信息在自动驾驶策略图层中获取所述目标车辆行驶的第一路段的图层信息。
在一些实现方式中,第一路段的图层信息可以包括第一路段的静态图层信息,其中,第一路段的静态图层信息用于指示第一路段的基础设施信息。在一些实现方式中,第一路段的图层信息包括第一路段的静态图层信息和第一路段的动态图层信息,具体地,第一路段的静态图层信息用于指示第一路段的基础设施信息,第一路段的动态图层信息用于指示第一路段的动态交通信息。
步骤S406、云端服务器将第一路段的图层信息发送给目标车辆。
步骤S408、目标车辆接收第一路段的图层信息,并根据所述第一路段的图层信息和所述目标车辆的车辆属性信息获取第一驾驶策略。
在本申请实施例中,关于目标车辆根据第一路段的图层信息和目标车辆的车辆属性信息获取第一驾驶策略的具体实现可以参考前述步骤S206的相关描述,此处不多加赘述。
步骤S4010、目标车辆根据第一驾驶策略对目标车辆进行自动驾驶。
在一些实现方式中,目标车辆可以将根据第一路段的图层信息和目标车辆的车辆属性信息获取得到的第一驾驶策略上报给云端服务器,以便云端服务器将其保存,并更新第一路段的自动驾驶策略图层中的相应图层信息。在实际应用中,云端服务器可以将第一驾驶策略(例如,建议操作模式)发送到该区域内与目标车辆相同或相似的其它车辆,以便辅助其它车辆行驶。
实施本申请实施例,云端服务器可以通过目标车辆上报的行程信息在自动驾驶策略图层中获取行驶的第一路段的图层信息,之后,将第一路段的图层信息发送给目标车辆,继而目标车辆可以根据第一路段的图层信息和车辆的属性信息来获取满足安全要求的驾驶策略,从而可以根据确定好的驾驶策略进行自动驾驶。由于自动驾驶策略图层中包含的信息更为丰富,道路的静态图层信息保证了自动驾驶能够高效地确定行驶路径和规避障碍物,道路的动态化图层信息保证了自动驾驶能够及时地应对突发状况,这一实现方式能够克服传感器的感知缺陷(例如,传感器的感知缺陷体现在:传感器获取的数据有限、传感器检测的范围有限、传感器的检测易受环境影响等等),从而可以提高确定满足安全要求的驾驶策略的准确性,降低了自动驾驶的风险。
为了便于更好地实施本申请实施例的上述方案,本申请实施例提供了一种云端服务器,该云端服务器包括用于执行前述第一方面任一项所述的方法的单元,以通过自动驾驶策略图层确定驾驶策略。具体地,请参见图5,图5是本申请实施例提供的一种云端服务器50的示意框图。本申请实施例的云端服务器50可以包括:
接收单元500,用于接收目标车辆上报的车辆属性信息和所述目标车辆的行程信息;其中,所述目标车辆的车辆属性信息用于生成自动驾驶策略;
第一获取单元502,用于根据所述行程信息在自动驾驶策略图层中获取所述目标车辆行驶的第一路段的图层信息;
第二获取单元504,用于根据所述第一路段的图层信息和所述目标车辆的车辆属性信息获取第一驾驶策略;
第一发送单元506,用于将所述第一驾驶策略发送给所述目标车辆。
在一种可能的实现方式中,所述云端服务器存储有驾驶策略与安全通行概率的对应关系;所述第二获取单元504可以包括安全通行概率获取单元和驾驶策略获取单元,其中,
所述安全通行概率获取单元,用于根据所述第一路段的图层信息和所述目标车辆的车辆属性信息获取第一安全通行概率;
所述驾驶策略获取单元,用于根据所述驾驶策略与安全通行概率的对应关系获取所述第一安全通行概率对应的第一驾驶策略。
在一种可能的实现方式中,所述安全通行概率获取单元具体用于:
通过第一模型计算所述第一安全通行概率,其中,所述第一模型包括至少一个信息项和所述至少一个信息项对应的权重参数,所述至少一个信息项为根据所述第一路段的图层信息和所述目标车辆的车辆属性信息提取得到的信息项,所述权重参数用于指示信息项被用于确定所述第一安全通行概率时的重要程度。
在一种可能的实现方式中,所述第一模型为基于至少一个样本数据训练得到的模型,所述样本数据包括从第二路段的图层信息和所述目标车辆的车辆属性信息提取到的至少一个信息项,所述第二路段为与所述第一路段相邻的路段,且所述第二路段的出口为所述第一路段的入口。
在一种可能的实现方式中,所述第一路段的图层信息包括所述第一路段的静态图层信息、所述第一路段的动态图层信息中的至少一个;其中,所述第一路段的静态图层信息用于指示所述第一路段的基础设施信息;所述第一路段的动态图层信息用于指示所述第一路段的动态交通信息。
在一种可能的实现方式中,所述第一路段的静态图层信息包括车道属性、数字化设备信息、绿化带信息中的至少一种;所述第一路段的动态图层信息包括天气信息、路面信息、所述第一路段在第一时间段内的拥堵情况、所述第一路段在所述第一时间段内行人和非机动车的穿行概率、所述第一路段在所述第一时间段内发生驾驶事故的事故概率中的至少一种。
在一种可能的实现方式中,所述第一路段为所述目标车辆正在行驶的路段;所述云端服务器50还包括:
第三获取单元508,用于获取所述目标车辆行驶在所述第一路段上的第二驾驶策略;其中,所述第二驾驶策略为所述目标车辆根据实时获取的传感器数据确定的;
第二发送单元5010,用于在所述第一驾驶策略和所述第二驾驶策略的相似度小于第一阈值的情况下,向所述目标车辆发送将所述第二驾驶策略切换为所述第一驾驶策略的提示信息。
在一种可能的实现方式中,所述目标车辆的车辆属性信息包括所述目标车辆的自动驾驶能力、所述目标车辆的传感器分布信息、所述目标车辆中驾驶员的驾驶状态中的至少一种。
需要说明的是,本申请实施例中所描述的云端服务器可参见上述图3a和图4中所述的方法实施例中的自动驾驶方法的相关描述,此处不再赘述。
为了便于更好地实施本申请实施例的上述方案,本申请实施例提供了一种自动驾驶装置,该自动驾驶装置包括用于执行前述第二方面任一项所述的方法的单元,以根据云端服务器确定的第一驾驶策略进行自动驾驶。具体地,请参见图6,图6是本申请实施例提供的一种自动驾驶装置60的示意框图。本申请实施例的自动驾驶装置60可以包括:
接收单元600,用于接收云端服务器发送的第一驾驶策略;其中,所述第一驾驶策略为上述第一方面任一项所述的方法获取的第一驾驶策略;
控制单元602,用于根据所述第一驾驶策略对所述目标车辆进行自动驾驶。
在一种可能的实现方式中,所述装置60还可以包括:
第二驾驶策略获取单元604,用于通过传感器数据获取所述目标车辆行驶在第一路段上的第二驾驶策略;
所述控制单元602具体用于:
根据所述第一驾驶策略和所述第二驾驶策略对所述目标车辆进行自动驾驶。
在一种可能的实现方式中,所述控制单元602具体用于:
在判断出所述第一驾驶策略和所述第二驾驶策略的相似度大于第一阈值的情况下,根据所述第一驾驶策略对所述目标车辆进行自动驾驶,或,根据所述第二驾驶策略对所述目标车辆进行自动驾驶;
在判断出所述第一驾驶策略和所述第二驾驶策略的相似度小于所述第一阈值的情况下,根据所述第一驾驶策略对所述目标车辆进行自动驾驶。
需要说明的是,本申请实施例中所描述的自动驾驶装置可参见上述图4中所述的方法实施例中的自动驾驶方法的相关描述,此处不再赘述。
请参见图7,图7是本申请实施例提供的一种云端服务器的结构示意图。该云端服务器70包括至少一个处理器701,至少一个存储器702、至少一个通信接口703。此外,该云端服务器还可以包括天线等通用部件,在此不再详述。
处理器701可以是通用中央处理器(CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制以上方案程序执行的集成电路。
通信接口703,用于与其他设备或通信网络通信。
存储器702,可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(Electrically  Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器可以是独立存在,通过总线与处理器相连接。存储器也可以和处理器集成在一起。
其中,所述存储器702用于存储执行以上方案的应用程序代码,并由处理器701来控制执行。所述处理器701用于执行所述存储器702中存储的应用程序代码。例如,存储器702存储的代码可执行以上图2或者图3a提供的自动驾驶方法。
需要说明的是,本申请实施例中所描述的设备70的功能可参见上述图2和图3a中的所述的方法实施例中的相关描述,此处不再赘述。
本申请实施例还提供了一种计算机存储介质,该计算机可读存储介质中存储有指令,当其在计算机或处理器上运行时,使得计算机或处理器执行上述任一个实施例所述方法中的一个或多个步骤。上述装置的各组成模块如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在所述计算机可读存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在计算机可读存储介质中。
上述计算机可读存储介质可以是前述实施例所述的设备的内部存储单元,例如硬盘或内存。上述计算机可读存储介质也可以是上述设备的外部存储设备,例如配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,上述计算机可读存储介质还可以既包括上述设备的内部存储单元也包括外部存储设备。上述计算机可读存储介质用于存储上述计算机程序以及上述设备所需的其他程序和数据。上述计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。
本领域普通技术人员可以理解,实现上述实施例方法中的全部或部分流程,可以通过计算机程序来指令相关的硬件来完成,该计算机程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可存储程序代码的介质。
本申请实施例方法中的步骤可以根据实际需要进行顺序调整、合并和删减。
本申请实施例装置中的模块可以根据实际需要进行合并、划分和删减。
可以理解,本领域普通技术人员可以意识到,结合本申请各个实施例中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本领域技术人员能够领会,结合本申请各个实施例中公开描述的各种说明性逻辑框、模块和算法步骤所描述的功能可以硬件、软件、固件或其任何组合来实施。如果以软件来实施,那么各种说明性逻辑框、模块、和步骤描述的功能可作为一或多个指令或代码在计算机可读媒体上存储或传输,且由基于硬件的处理单元执行。计算机可读媒体可包含计算机可读存储媒体,其对应于有形媒体,例如数据存储媒体,或包括任何促进将计算机程序从一处传送到另一处的媒体(例如,根据通信协议)的通信媒体。以此方式,计算机可读媒体大体上可对应于(1)非暂时性的有形计算机可读存储媒体,或(2)通信媒体,例如信号或载波。数据存储媒体可为可由一或多个计算机或一或多个处理器存取以检索用于实施本申请中描述的技术的指令、代码和/或数据结构的任何可用媒体。计算机程序产品可包含计算机可读媒体。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (28)

  1. 一种自动驾驶方法,其特征在于,所述方法应用于云端服务器,所述方法包括:
    接收目标车辆上报的车辆属性信息和所述目标车辆的行程信息;其中,所述目标车辆的车辆属性信息用于生成自动驾驶策略;
    根据所述行程信息在自动驾驶策略图层中获取所述目标车辆行驶的第一路段的图层信息;
    根据所述第一路段的图层信息和所述目标车辆的车辆属性信息获取第一驾驶策略;
    将所述第一驾驶策略发送给所述目标车辆。
  2. 如权利要求1所述的方法,其特征在于,所述云端服务器存储有驾驶策略与安全通行概率的对应关系;所述根据所述第一路段的图层信息和所述目标车辆的车辆属性信息获取第一驾驶策略,包括:
    根据所述第一路段的图层信息和所述目标车辆的车辆属性信息获取第一安全通行概率;
    根据所述驾驶策略与安全通行概率的对应关系获取所述第一安全通行概率对应的第一驾驶策略。
  3. 如权利要求2所述的方法,其特征在于,所述根据所述第一路段的图层信息和所述目标车辆的车辆属性信息获取第一安全通行概率,包括:
    通过第一模型计算所述第一安全通行概率,其中,所述第一模型包括至少一个信息项和所述至少一个信息项对应的权重参数,所述至少一个信息项为根据所述第一路段的图层信息和所述目标车辆的车辆属性信息提取得到的信息项,所述权重参数用于指示信息项被用于确定所述第一安全通行概率时的重要程度。
  4. 如权利要求3所述的方法,其特征在于,所述第一模型为基于至少一个样本数据训练得到的模型,所述样本数据包括从第二路段的图层信息和所述目标车辆的车辆属性信息提取到的至少一个信息项,所述第二路段为与所述第一路段相邻的路段,且所述第二路段的出口为所述第一路段的入口。
  5. 如权利要求1-4任一项所述的方法,其特征在于,所述第一路段的图层信息包括所述第一路段的静态图层信息、所述第一路段的动态图层信息中的至少一个;其中,所述第一路段的静态图层信息用于指示所述第一路段的基础设施信息;所述第一路段的动态图层信息用于指示所述第一路段的动态交通信息。
  6. 如权利要求5所述的方法,其特征在于,所述第一路段的静态图层信息包括车道属性、数字化设备信息、绿化带信息中的至少一种;所述第一路段的动态图层信息包括天气信息、路面信息、所述第一路段在第一时间段内的拥堵情况、所述第一路段在所述第一时间段内行人和非机动车的穿行概率、所述第一路段在所述第一时间段内发生驾驶事故的事故概率中的至少一种。
  7. 如权利要求1所述的方法,其特征在于,所述第一路段为所述目标车辆正在行驶的路段;所述方法还包括:
    获取所述目标车辆行驶在所述第一路段上的第二驾驶策略;其中,所述第二驾驶策略为所述目标车辆根据实时获取的传感器数据确定的;
    在所述第一驾驶策略和所述第二驾驶策略的相似度小于第一阈值的情况下,向所述目标车辆发送将所述第二驾驶策略切换为所述第一驾驶策略的提示信息。
  8. 如权利要求1-7任一项所述的方法,其特征在于,所述目标车辆的车辆属性信息包括所述目标车辆的自动驾驶能力、所述目标车辆的传感器分布信息、所述目标车辆中驾驶员的驾驶状态中的至少一种。
  9. 一种自动驾驶方法,其特征在于,所述方法应用于目标车辆上的车载终端;所述方法包括:
    接收云端服务器发送的第一驾驶策略;其中,所述第一驾驶策略为如权利要求1-8任一项所述的方法获取的第一驾驶策略;
    根据所述第一驾驶策略对所述目标车辆进行自动驾驶。
  10. 如权利要求9所述的方法,其特征在于,所述方法还包括:通过传感器数据获取所述目标车辆行驶在第一路段上的第二驾驶策略;
    所述根据所述第一驾驶策略对所述目标车辆进行自动驾驶,包括:
    根据所述第一驾驶策略和所述第二驾驶策略对所述目标车辆进行自动驾驶。
  11. 如权利要求10所述的方法,其特征在于,所述根据所述第一驾驶策略和所述第二驾驶策略对所述目标车辆进行自动驾驶,包括:
    在判断出所述第一驾驶策略和所述第二驾驶策略的相似度大于第一阈值的情况下,根据所述第一驾驶策略对所述目标车辆进行自动驾驶,或,根据所述第二驾驶策略对所述目标车辆进行自动驾驶;
    在判断出所述第一驾驶策略和所述第二驾驶策略的相似度小于所述第一阈值的情况下,根据所述第一驾驶策略对所述目标车辆进行自动驾驶。
  12. 一种云端服务器,其特征在于,所述云端服务器包括:
    接收单元,用于接收目标车辆上报的车辆属性信息和所述目标车辆的行程信息;其中,所述目标车辆的车辆属性信息用于生成自动驾驶策略;
    第一获取单元,用于根据所述行程信息在自动驾驶策略图层中获取所述目标车辆行驶的第一路段的图层信息;
    第二获取单元,用于根据所述第一路段的图层信息和所述目标车辆的车辆属性信息获取第一驾驶策略;
    第一发送单元,用于将所述第一驾驶策略发送给所述目标车辆。
  13. 如权利要求12所述的云端服务器,其特征在于,所述云端服务器存储有驾驶策略与安全通行概率的对应关系;所述第二获取单元包括安全通行概率获取单元和驾驶策略获取单元,其中,
    所述安全通行概率获取单元,用于根据所述第一路段的图层信息和所述目标车辆的车辆属性信息获取第一安全通行概率;
    所述驾驶策略获取单元,用于根据所述驾驶策略与安全通行概率的对应关系获取所述第一安全通行概率对应的第一驾驶策略。
  14. 如权利要求13所述的云端服务器,其特征在于,所述安全通行概率获取单元具体用于:
    通过第一模型计算所述第一安全通行概率,其中,所述第一模型包括至少一个信息项和所述至少一个信息项对应的权重参数,所述至少一个信息项为根据所述第一路段的图层信息和所述目标车辆的车辆属性信息提取得到的信息项,所述权重参数用于指示信息项被用于确定所述第一安全通行概率时的重要程度。
  15. 如权利要求14所述的云端服务器,其特征在于,所述第一模型为基于至少一个样本数据训练得到的模型,所述样本数据包括从第二路段的图层信息和所述目标车辆的车辆属性信息提取到的至少一个信息项,所述第二路段为与所述第一路段相邻的路段,且所述第二路段的出口为所述第一路段的入口。
  16. 如权利要求12-15任一项所述的云端服务器,其特征在于,所述第一路段的图层信息包括所述第一路段的静态图层信息、所述第一路段的动态图层信息中的至少一个;其中,所述第一路段的静态图层信息用于指示所述第一路段的基础设施信息;所述第一路段的动态图层信息用于指示所述第一路段的动态交通信息。
  17. 如权利要求16所述的云端服务器,其特征在于,所述第一路段的静态图层信息包括车道属性、数字化设备信息、绿化带信息中的至少一种;所述第一路段的动态图层信息包括天气信息、路面信息、所述第一路段在第一时间段内的拥堵情况、所述第一路段在所述第一时间段内行人和非机动车的穿行概率、所述第一路段在所述第一时间段内发生驾驶事故的事故概率中的至少一种。
  18. 如权利要求12所述的云端服务器,其特征在于,所述第一路段为所述目标车辆正在行驶的路段;所述云端服务器还包括:
    第三获取单元,用于获取所述目标车辆行驶在所述第一路段上的第二驾驶策略;其中,所述第二驾驶策略为所述目标车辆根据实时获取的传感器数据确定的;
    第二发送单元,用于在所述第一驾驶策略和所述第二驾驶策略的相似度小于第一阈值的情况下,向所述目标车辆发送将所述第二驾驶策略切换为所述第一驾驶策略的提示信息。
  19. 如权利要求12-18任一项所述的云端服务器,其特征在于,所述目标车辆的车辆属性信息包括所述目标车辆的自动驾驶能力、所述目标车辆的传感器分布信息、所述目标车辆中驾驶员的驾驶状态中的至少一种。
  20. 一种自动驾驶装置,其特征在于,所述装置应用于目标车辆上的车载终端;所述装置包括:
    接收单元,用于接收云端服务器发送的第一驾驶策略;其中,所述第一驾驶策略为如权利要求1-8任一项所述的方法获取的第一驾驶策略;
    控制单元,用于根据所述第一驾驶策略对所述目标车辆进行自动驾驶。
  21. 如权利要求20所述的装置,其特征在于,所述装置还包括:
    第二驾驶策略获取单元,用于通过传感器数据获取所述目标车辆行驶在第一路段上的第二驾驶策略;
    所述控制单元具体用于:
    根据所述第一驾驶策略和所述第二驾驶策略对所述目标车辆进行自动驾驶。
  22. 如权利要求21所述的装置,其特征在于,所述控制单元具体用于:
    在判断出所述第一驾驶策略和所述第二驾驶策略的相似度大于第一阈值的情况下,根据所述第一驾驶策略对所述目标车辆进行自动驾驶,或,根据所述第二驾驶策略对所述目标车辆进行自动驾驶;
    在判断出所述第一驾驶策略和所述第二驾驶策略的相似度小于所述第一阈值的情况下,根据所述第一驾驶策略对所述目标车辆进行自动驾驶。
  23. 一种云端服务器,其特征在于,包括处理器和存储器,所述处理器和存储器相互连接,其中,所述存储器用于存储计算机程序,所述计算机程序包括程序指令,所述处理器被配置用于调用所述程序指令,执行如权利要求1-8任一项所述的方法。
  24. 一种车载终端,其特征在于,包括处理器和存储器,所述处理器和存储器相互连接,其中,所述存储器用于存储计算机程序,所述计算机程序包括程序指令,所述处理器被配置用于调用所述程序指令,执行如权利要求9-11任一项所述的方法。
  25. 一种芯片,所述芯片包括处理器、存储器和通信接口,其特征在于,所述芯片被配置用于执行权利要求1至8任意一项所述的方法。
  26. 一种芯片,所述芯片包括处理器、存储器和通信接口,其特征在于,所述芯片被配置用于执行权利要求9至11任意一项所述的方法。
  27. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被处理器执行时使所述处理器执行如权利要求1-8任一项所述的方法。
  28. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被处理器执行时使所述处理器执行如权利要求9-11任一项所述的方法。
PCT/CN2020/114265 2019-12-31 2020-09-09 一种自动驾驶方法、相关设备及计算机可读存储介质 WO2021135371A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP20910434.8A EP4071661A4 (en) 2019-12-31 2020-09-09 AUTOMATIC DRIVING METHOD, ASSOCIATED DEVICE AND COMPUTER READABLE STORAGE MEDIA
JP2022540325A JP2023508114A (ja) 2019-12-31 2020-09-09 自動運転方法、関連装置及びコンピュータ読み取り可能記憶媒体
US17/855,253 US20220332348A1 (en) 2019-12-31 2022-06-30 Autonomous driving method, related device, and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911425837.XA CN113128303A (zh) 2019-12-31 2019-12-31 一种自动驾驶方法、相关设备及计算机可读存储介质
CN201911425837.X 2019-12-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/855,253 Continuation US20220332348A1 (en) 2019-12-31 2022-06-30 Autonomous driving method, related device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021135371A1 true WO2021135371A1 (zh) 2021-07-08

Family

ID=76686866

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/114265 WO2021135371A1 (zh) 2019-12-31 2020-09-09 一种自动驾驶方法、相关设备及计算机可读存储介质

Country Status (5)

Country Link
US (1) US20220332348A1 (zh)
EP (1) EP4071661A4 (zh)
JP (1) JP2023508114A (zh)
CN (1) CN113128303A (zh)
WO (1) WO2021135371A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113867360A (zh) * 2021-10-19 2021-12-31 北京三快在线科技有限公司 一种基于远程油门控制无人驾驶设备的方法及装置
CN113869224A (zh) * 2021-09-29 2021-12-31 泰州市华达机电设备有限公司 基于目标判断的辅助驾驶系统
CN114454899A (zh) * 2022-04-07 2022-05-10 新石器慧通(北京)科技有限公司 车辆驾驶方法及装置
US11780463B2 (en) * 2019-02-19 2023-10-10 Baidu Online Network Technology (Beijing) Co., Ltd. Method, apparatus and server for real-time learning of travelling strategy of driverless vehicle
US11878717B2 (en) 2022-01-24 2024-01-23 International Business Machines Corporation Mirage detection by autonomous vehicles

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
US11700533B2 (en) 2020-05-01 2023-07-11 Digital Global Systems, Inc. System, method, and apparatus for providing dynamic, prioritized spectrum management and utilization
US11395149B2 (en) 2020-05-01 2022-07-19 Digital Global Systems, Inc. System, method, and apparatus for providing dynamic, prioritized spectrum management and utilization
US11653213B2 (en) 2020-05-01 2023-05-16 Digital Global Systems. Inc. System, method, and apparatus for providing dynamic, prioritized spectrum management and utilization
US20210354728A1 (en) * 2020-05-12 2021-11-18 Toyota Research Institute, Inc. Autonomous driving requirements deficiency determination
JP7429172B2 (ja) * 2020-09-03 2024-02-07 本田技研工業株式会社 車両制御装置、車両制御方法、およびプログラム
JP2022044155A (ja) * 2020-09-07 2022-03-17 株式会社Subaru 画像処理装置
CN112541475B (zh) * 2020-12-24 2024-01-19 北京百度网讯科技有限公司 感知数据检测方法及装置
CN113741459A (zh) * 2021-09-03 2021-12-03 阿波罗智能技术(北京)有限公司 确定训练样本的方法和自动驾驶模型的训练方法、装置
CN113807236B (zh) * 2021-09-15 2024-05-17 北京百度网讯科技有限公司 车道线检测的方法、装置、设备、存储介质及程序产品
CN114407915A (zh) * 2021-12-14 2022-04-29 高德软件有限公司 运行设计域odd的处理方法、装置及存储介质
CN116909202B (zh) * 2023-09-14 2023-12-29 毫末智行科技有限公司 车云协同自动驾驶车辆控制方法、装置、设备及介质

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108010360A (zh) * 2017-12-27 2018-05-08 中电海康集团有限公司 一种基于车路协同的自动驾驶环境感知系统
CN109733391A (zh) * 2018-12-10 2019-05-10 北京百度网讯科技有限公司 车辆的控制方法、装置、设备、车辆及存储介质
CN110057373A (zh) * 2019-04-22 2019-07-26 上海蔚来汽车有限公司 用于生成高精细语义地图的方法、装置和计算机存储介质
CN110356401A (zh) * 2018-04-05 2019-10-22 北京图森未来科技有限公司 一种自动驾驶车辆及其变道控制方法和系统

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US11283877B2 (en) * 2015-11-04 2022-03-22 Zoox, Inc. Software application and logic to modify configuration of an autonomous vehicle
CN108009475A (zh) * 2017-11-03 2018-05-08 东软集团股份有限公司 驾驶行为分析方法、装置、计算机可读存储介质及电子设备
CN115384486A (zh) * 2018-03-20 2022-11-25 御眼视觉技术有限公司 用于导航主车辆的导航系统和方法
US11086317B2 (en) * 2018-03-30 2021-08-10 Intel Corporation Emotional adaptive driving policies for automated driving vehicles
CN109901574B (zh) * 2019-01-28 2021-08-13 华为技术有限公司 自动驾驶方法及装置
US10953873B2 (en) * 2019-03-29 2021-03-23 Intel Corporation Extension to safety protocols for autonomous vehicle operation
CN110264720B (zh) * 2019-06-28 2023-01-06 腾讯科技(深圳)有限公司 驾驶模式提示方法、装置、设备及存储介质


Non-Patent Citations (1)

Title
See also references of EP4071661A4

Cited By (6)

Publication number Priority date Publication date Assignee Title
US11780463B2 (en) * 2019-02-19 2023-10-10 Baidu Online Network Technology (Beijing) Co., Ltd. Method, apparatus and server for real-time learning of travelling strategy of driverless vehicle
CN113869224A (zh) * 2021-09-29 2021-12-31 泰州市华达机电设备有限公司 基于目标判断的辅助驾驶系统
CN113867360A (zh) * 2021-10-19 2021-12-31 北京三快在线科技有限公司 一种基于远程油门控制无人驾驶设备的方法及装置
US11878717B2 (en) 2022-01-24 2024-01-23 International Business Machines Corporation Mirage detection by autonomous vehicles
CN114454899A (zh) * 2022-04-07 2022-05-10 新石器慧通(北京)科技有限公司 车辆驾驶方法及装置
CN114454899B (zh) * 2022-04-07 2022-08-02 新石器慧通(北京)科技有限公司 车辆驾驶方法及装置

Also Published As

Publication number Publication date
EP4071661A4 (en) 2023-01-25
EP4071661A1 (en) 2022-10-12
CN113128303A (zh) 2021-07-16
US20220332348A1 (en) 2022-10-20
JP2023508114A (ja) 2023-02-28

Similar Documents

Publication Publication Date Title
WO2021135371A1 (zh) 一种自动驾驶方法、相关设备及计算机可读存储介质
WO2022027304A1 (zh) 一种自动驾驶车辆的测试方法及装置
WO2021000800A1 (zh) 道路可行驶区域推理方法及装置
WO2021103511A1 (zh) 一种设计运行区域odd判断方法、装置及相关设备
CN113968216B (zh) 一种车辆碰撞检测方法、装置及计算机可读存储介质
WO2021102955A1 (zh) 车辆的路径规划方法以及车辆的路径规划装置
CN112512887B (zh) 一种行驶决策选择方法以及装置
WO2021189210A1 (zh) 一种车辆换道方法及相关设备
EP4280129A1 (en) Trajectory prediction method and apparatus, and map
WO2022062825A1 (zh) 车辆的控制方法、装置及车辆
US20230222914A1 (en) Vehicle reminding method and system, and related device
EP4307251A1 (en) Mapping method, vehicle, computer readable storage medium, and chip
WO2022051951A1 (zh) 车道线检测方法、相关设备及计算机可读存储介质
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
CN115100630B (zh) 障碍物检测方法、装置、车辆、介质及芯片
CN116135654A (zh) 一种车辆行驶速度生成方法以及相关设备
WO2021110166A1 (zh) 道路结构检测方法及装置
WO2021254000A1 (zh) 车辆纵向运动参数的规划方法和装置
CN115042814A (zh) 交通灯状态识别方法、装置、车辆及存储介质
CN115205848A (zh) 目标检测方法、装置、车辆、存储介质及芯片
WO2021159397A1 (zh) 车辆可行驶区域的检测方法以及检测装置
WO2022061725A1 (zh) 交通元素的观测方法和装置
CN115082886B (zh) 目标检测的方法、装置、存储介质、芯片及车辆
WO2022001432A1 (zh) 推理车道的方法、训练车道推理模型的方法及装置
WO2023102827A1 (zh) 一种路径约束方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20910434

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022540325

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020910434

Country of ref document: EP

Effective date: 20220705

NENP Non-entry into the national phase

Ref country code: DE