WO2024114426A1 - Human-machine co-driving control method, device, system and vehicle - Google Patents
Human-machine co-driving control method, device, system and vehicle
- Publication number
- WO2024114426A1 (PCT application PCT/CN2023/132573)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- driver
- human
- machine
- control method
- driving
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 62
- 230000008859 change Effects 0.000 claims abstract description 13
- 230000008447 perception Effects 0.000 claims description 25
- 230000006870 function Effects 0.000 description 11
- 238000004590 computer program Methods 0.000 description 7
- 230000035945 sensitivity Effects 0.000 description 6
- 238000010586 diagram Methods 0.000 description 5
- 230000006399 behavior Effects 0.000 description 4
- 230000007613 environmental effect Effects 0.000 description 4
- 238000012544 monitoring process Methods 0.000 description 4
- 238000001514 detection method Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000012805 post-processing Methods 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000012827 research and development Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/06—Road conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
Definitions
- the present invention relates to human-machine co-driving technology, and specifically provides a human-machine co-driving control method, device, system and vehicle.
- the human-machine co-driving mode is an effective way to cover the transition stage from Level 3 or below (assistance functions) to Level 4 or Level 5 (automated functions). The human-machine co-driving mode can not only improve the user experience but also compensate for the technical limitations of the autonomous driving system. However, the current human-machine co-driving mode lacks scenario-handling considerations: the ego-vehicle perception system cannot correctly detect all objects and may miss objects, so it cannot warn the driver in time to take defensive action, which may lead to accident risks.
- the present invention is proposed to provide a human-machine co-driving control method, device, system and vehicle that solve, or at least partially solve, the technical problem of how to reduce the accident rate of autonomous driving.
- the present invention provides a human-machine co-driving control method, comprising:
- obtaining a driver state, a confidence of a perceived object, and a risk scenario identifier of a preset lane;
- issuing a warning to the driver according to the risk scenario identifier;
- according to the driver state, increasing the confidence of the perceived object and controlling a change of the driving strategy.
- the obtaining of the driver state includes: determining whether the driver is in a distracted state according to whether the driver pays attention to a non-distraction area within a preset time.
- the increasing of the confidence of the perceived object and the controlling of the change of the driving strategy according to the driver state include:
- controlling the driving strategy according to the confidence of the perceived object;
- if the driver is in a distracted state, increasing the confidence of the perceived object and controlling the driving strategy to change to a defensive driving strategy.
- the determining of whether the driver is in a distracted state according to whether the driver pays attention to the non-distraction area within a preset time includes: if the driver does not pay attention to the non-distraction area within the preset time, the driver is in a distracted state;
- the method also includes: if the driver is in a distracted state, increasing the intensity of the warning issued to the driver.
- the method also includes changing the length of the preset time according to the risk scenario identifier.
- the confidence of the perceived object is the confidence value output by the ego-vehicle perception system after identifying the object.
- the step of obtaining the risk scenario identifier of the preset lane includes:
- obtaining moving-vehicle and lane information on the preset lane;
- determining whether the preset lane is a risk scenario according to the moving-vehicle and lane information;
- identifying the risk scenario to obtain a risk scenario tag.
- the present invention provides a human-machine co-driving control device, comprising a memory, one or more processors, and one or more application programs, wherein the one or more application programs are stored in the memory and are configured such that, when called by the one or more processors, they cause the one or more processors to execute the method described in any technical solution of the first aspect.
- the present invention provides a control system, the system comprising the control device as described in the second aspect.
- the present invention provides a vehicle, comprising the control device as described in the second aspect.
- lane risk warnings, the driver state and the results of the ego-vehicle perception system are combined to make different control decisions according to the risk scenario, namely warning the driver, increasing the confidence of the perceived object and controlling the change of the driving strategy. This can effectively avoid missed detection of objects by the ego-vehicle perception system, comprehensively considers the driver state, reduces the accident rate of autonomous driving and improves the user experience.
- FIG. 1 is a schematic flow chart of the main steps of a human-machine co-driving control method according to an embodiment of the present invention;
- FIG. 2 is a schematic flow chart of the main steps of obtaining a risk scenario identifier according to an embodiment of the present invention;
- FIG. 3 is a schematic flow chart of the main steps of the function logic in a risk scenario according to an embodiment of the present invention;
- FIG. 4 is a schematic diagram of the main structure of a human-machine co-driving control system according to an embodiment of the present invention.
- "module" and "processor" may include hardware, software or a combination of the two.
- a module may include hardware circuits, various suitable sensors, communication ports and memory, and may also include software parts such as program code, or a combination of software and hardware.
- the processor may be a central processing unit, a microprocessor, an image processor, a digital signal processor or any other suitable processor.
- the processor has data and/or signal processing functions.
- the processor may be implemented in software, hardware or a combination of the two.
- Non-transitory computer-readable storage media include any suitable medium that can store program code, such as a magnetic disk, a hard disk, an optical disk, a flash memory, a read-only memory, a random access memory, and the like.
- a and/or B means all possible combinations of A and B, such as just A, just B or A and B.
- the term “at least one A or B” or “at least one of A and B” has a similar meaning to “A and/or B” and may include just A, just B or A and B.
- the singular terms “one” and “the” may also include plural forms.
- Ego-vehicle perception system: the key technologies of autonomous driving are perception, planning and control.
- the autonomous driving system obtains the vehicle's own information and surrounding environment information through the perception system, and the processor analyzes, computes and processes the collected data to make decisions and control the execution system to perform actions such as acceleration, deceleration and steering.
- Perception mainly performs positioning through environmental perception.
- Environmental perception refers to the ability to understand the scene of the environment, such as the semantic classification of data such as the type of obstacles, road signs and markings, detection of pedestrians and vehicles, and traffic signals.
- Positioning is the post-processing of the perception results; the positioning function helps the autonomous vehicle understand its position relative to the environment.
- the autonomous driving system has the following problems: perception cannot correctly detect all objects, the positioning system cannot meet lane-level positioning accuracy in real time, and the control system does not consider the handling of all scenarios.
- These technical limitations may lead to accident risks. For example, if the above-mentioned perception system misses an object (perceived object), the ego vehicle is effectively "blind" to it, which causes serious safety risks.
- FIG. 1 is a schematic flow chart of the main steps of a human-machine co-driving control method according to an embodiment of the present invention.
- a human-machine co-driving control method in an embodiment of the present invention mainly includes the following steps S101 to S103.
- Step S101: obtaining the driver state, the confidence of the perceived object, and the risk scenario identifier of the preset lane;
- the driver status refers to whether the driver is in a distracted state;
- the confidence of the perceived object is the confidence value output by the above-mentioned ego-vehicle perception system after it identifies an object.
- each object in the object list has a confidence value.
- the vehicle is controlled and driven according to the confidence values of the identified objects. If an object's confidence value is low, the system may ignore the object and continue the original driving behavior, yet the object may still pose a certain risk; missing its detection may therefore bring accident risks. The risk scenario identifier of the preset lane indicates whether the lane to be driven is a risk scenario.
- whether the driver is in a distracted state can be determined by monitoring whether the driver's eye gaze and head pose attend to the non-distraction area within a certain period of time, where the non-distraction area refers to the front windshield area directly in front of the cab; that is, whether the driver is distracted is determined by judging whether the driver pays attention to the road ahead.
- the driver's state can be obtained by monitoring the driver through a DMS (driver monitoring system) camera.
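To make the check concrete, here is a minimal Python sketch (illustration only, not code from the patent): a driver is flagged as distracted when no DMS sample within the preset time window shows attention to the non-distraction area. The names `GazeSample` and `is_distracted` are hypothetical, and a real DMS would derive the on-road flag from gaze and head-pose estimation.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp: float  # seconds since some reference point
    on_road: bool     # True if gaze/head pose falls in the non-distraction area

def is_distracted(samples: list[GazeSample], now: float, preset_time: float) -> bool:
    """Flag the driver as distracted if no sample inside the preset time
    window shows attention to the non-distraction area (the road ahead)."""
    window = [s for s in samples if now - s.timestamp <= preset_time]
    return not any(s.on_road for s in window)

# Example: no on-road gaze during the last 4 seconds -> distracted
samples = [GazeSample(0.0, True), GazeSample(2.0, False), GazeSample(5.0, False)]
print(is_distracted(samples, now=6.0, preset_time=4.0))  # True
```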
- referring to FIG. 2, the risk scenario identifier of a preset road can be obtained through the following steps S201 to S203:
- Step S201: acquiring the moving-vehicle and lane information on the preset lane;
- the cloud-based big data engine receives reports on the status of vehicles on the road, which include the vehicle and lane information of the road, such as construction roads, ODD boundaries, road accidents, and lanes where the function is often inactive or the driver always takes over control.
- Step S202: determining whether the preset lane is a risk scenario according to the moving-vehicle and lane information;
- the cloud-based big data engine can comprehensively determine whether the lane is a risk scenario lane based on the above information.
- Step S203: identifying the risk scenario to obtain a risk scenario tag.
- the lane ID and its corresponding attribute are formed by combining the risk scenario judgment with the corresponding lane.
- the attribute indicates whether the lane is a risk scenario. If it is a risk scenario, it is marked and the risk scenario identifier of the road is obtained, so as to decide whether to warn the driver.
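As a hedged sketch of how such a cloud-side tagger might produce the lane ID/attribute pairs, consider the following Python; the risk-factor field names (`construction`, `odd_boundary`, `accident`, `frequent_takeover`) and the any-factor rule are assumptions, since the text leaves the big data engine's exact criteria open.

```python
def tag_risk_lanes(lane_reports: dict[str, dict]) -> dict[str, bool]:
    """Return a lane ID -> risk attribute mapping: a lane is tagged as a
    risk scenario if any reported risk factor is present."""
    risk_factors = ("construction", "odd_boundary", "accident", "frequent_takeover")
    return {
        lane_id: any(report.get(factor, False) for factor in risk_factors)
        for lane_id, report in lane_reports.items()
    }

reports = {
    "lane_001": {"construction": True},  # construction road -> risk scenario
    "lane_002": {},                      # nothing reported -> no risk
}
print(tag_risk_lanes(reports))  # {'lane_001': True, 'lane_002': False}
```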
- Step S102: issuing a warning to the driver according to the risk scenario identifier;
- the risk scenario identifier of the preset lane indicates whether there is a risk on the road on which the vehicle is about to drive automatically. If there is a risk, a warning is issued to the driver in time to remind the driver not to be distracted.
- a request for attention may be sent to the driver via the vehicle's HMI (human-machine interface) to prevent the driver from being distracted in risky scenarios; the warning may take the form of an audio warning or a display warning.
- the sensitivity of driver state monitoring, i.e. the length of time the driver may look away from the non-distraction area, can be changed according to the risk scenario identifier.
- the sensor sensitivity of the DMS can be divided into three levels: high, medium and low. If a risk scenario identifier appears, the sensor sensitivity is adjusted to high; if there is no risk, it is set to low. At high sensitivity, if the driver does not look at the non-distraction area for 4 seconds (or even less), the driver is flagged as distracted; at low sensitivity, the driver is flagged as distracted only after not looking at the non-distraction area for 8 seconds (longer than at high sensitivity; used here only as an example).
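The sensitivity switch can be pictured as a simple lookup from risk identifier to the preset attention window; in this sketch the 4-second and 8-second values follow the example above, while the 6-second value for the medium level is an assumed placeholder that the text does not specify.

```python
# Hypothetical mapping from DMS sensitivity level to the preset time (seconds)
# after which a driver who has not looked at the non-distraction area is
# flagged as distracted.
THRESHOLDS = {"high": 4.0, "medium": 6.0, "low": 8.0}

def preset_time(risk_scenario: bool) -> float:
    """Risk scenario -> high sensitivity (short window); otherwise low."""
    level = "high" if risk_scenario else "low"
    return THRESHOLDS[level]

print(preset_time(True))   # 4.0
print(preset_time(False))  # 8.0
```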
- Step S103: according to the driver state, increasing the confidence of the perceived object and controlling the change of the driving strategy.
- if the preset lane is a risk scenario, decision control is performed according to the driver state; the decision control includes increasing the confidence of the perceived object and controlling the change of the driving strategy. As introduced above, if an object's confidence value is low, the system may ignore the object and continue the original driving behavior, yet the object may still pose a certain risk. Therefore, in a risk scenario, in addition to warning the driver, it is also necessary to decide, according to the driver state, whether to increase the confidence of the perceived object, so as to change the original driving strategy, take defensive action, and prevent the risk from occurring.
- if, in a risk scenario, the driver is still in a distracted state, the confidence of the perceived object is increased, thereby changing the original driving strategy. For example, an object that would originally have been ignored may, after its confidence is increased, cause the driving strategy to change to a defensive driving strategy so as to avoid the risk.
- the defensive driving strategy in the present invention can be longitudinal control, lateral control, deceleration or even braking control.
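A minimal sketch of the step S103 decision follows, under the assumption that "increasing the confidence" means adding a fixed boost capped at 1.0; the boost amount and the strategy labels are illustrative, as the text does not fix them.

```python
from dataclasses import dataclass

@dataclass
class PerceivedObject:
    object_id: int
    confidence: float  # confidence value output by the ego-vehicle perception system

def decide(objects: list[PerceivedObject], risk_scenario: bool,
           distracted: bool, boost: float = 0.3) -> tuple[list[PerceivedObject], str]:
    """Step S103 sketch: in a risk scenario with a distracted driver, raise
    every object's confidence (so low-confidence objects are no longer
    ignored) and switch to a defensive driving strategy; otherwise keep the
    original strategy."""
    if risk_scenario and distracted:
        for obj in objects:
            obj.confidence = min(1.0, obj.confidence + boost)
        # defensive = longitudinal/lateral control, deceleration or braking
        return objects, "defensive"
    return objects, "original"

objs = [PerceivedObject(object_id=1, confidence=0.35)]
print(decide(objs, risk_scenario=True, distracted=True))
```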
- in the function logic of FIG. 3, the lane IDs and attributes for the 3 kilometres ahead are fed to the function logic (they may come from the cloud map). If a risk scenario is marked, the HMI sends a warning to the driver. If the driver remains distracted, the function logic also changes the perception result, increasing the confidence value of each object to avoid missing detected objects, and changes the control logic to defensive behavior to avoid any accident.
- when the driver is in a distracted state, in addition to increasing the confidence of the perceived object and controlling the change of the driving strategy, it is also necessary to increase the warning intensity, for example by raising the audio warning volume or strobing the screen, to remind the driver not to be distracted in risky scenarios.
- the method of this embodiment combines lane risk prompts, the driver state and the results of the ego-vehicle perception system, and makes different control decisions according to the risk scenario, namely warning the driver, increasing the confidence of the perceived object and controlling the change of the driving strategy. This can effectively avoid missed detection of objects by the ego-vehicle perception system, comprehensively considers the driver state, reduces the accident rate of autonomous driving, and improves the user experience.
- the present invention provides a human-machine co-driving control device, including a storage device and a processor, wherein the storage device can be configured to store a program for executing the human-machine co-driving control method of the above method embodiment, and the processor can be configured to execute the program in the storage device, which includes but is not limited to the program for executing the human-machine co-driving control method of the above method embodiment.
- the control device may be a control apparatus formed of various electronic devices.
- the computer device may include multiple storage devices and multiple processors.
- the program for executing the human-machine co-driving control method of the above method embodiment can be divided into multiple subprograms, and each subprogram can be loaded and run by the processor to execute different steps of the human-machine co-driving control method of the above method embodiment.
- each subprogram can be stored in a different storage device, and each processor can be configured to execute programs in one or more storage devices, so that each processor executes different steps of the human-machine co-driving control method of the above method embodiment and the processors jointly implement that method.
- the above-mentioned multiple processors may be processors deployed on the same device.
- the above-mentioned computer device may be a high-performance device composed of multiple processors, and the above-mentioned multiple processors may be processors configured on the high-performance device.
- the above-mentioned multiple processors may also be processors deployed on different devices.
- the above-mentioned computer device may be a server cluster, and the above-mentioned multiple processors may be processors on different servers in the server cluster.
- the above-mentioned human-machine co-driving control device is used to execute the embodiment of the human-machine co-driving control method shown in FIG. 1.
- the technical principles of the two, the technical problems they solve and the technical effects they produce are similar.
- those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working process and related description of the human-machine co-driving control device, reference may be made to the contents described in the embodiment of the human-machine co-driving control method, which will not be repeated here.
- all or part of the processes in the methods of the above embodiments of the present invention may also be completed by instructing the relevant hardware through a computer program.
- the computer program can be stored in a computer-readable storage medium, and the computer program can implement the steps of the above-mentioned various method embodiments when executed by the processor.
- the computer program includes computer program code, and the computer program code can be in source code form, object code form, executable file or some intermediate form.
- the computer-readable storage medium may include: any entity or device capable of carrying the computer program code, as well as media such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electric carrier signal, a telecommunication signal and a software distribution medium.
- the content contained in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction;
- for example, in some jurisdictions, according to legislation and patent practice, computer-readable storage media do not include electric carrier signals and telecommunication signals.
- the present invention also provides a control system, the system comprising the control device as described above.
- the human-machine co-driving control system receives images from the DMS camera and the environmental perception camera and produces two outputs through the perception system: eye gaze and head pose, which are used to judge the driver state and determine whether the driver is distracted.
- the perception system also outputs the confidence of the perceived object.
- each object in the object list has a confidence value that serves as an input to the function logic, which combines the confidence input, the driver state and the cloud map.
- the risk identifier sent by the cloud map (a lane ID and its corresponding attribute, the attribute indicating whether it is a risk scenario) determines different decision responses, such as warning the driver, changing the control logic to a defensive manoeuvre, and increasing the confidence value of the perceived object.
- the function logic executes the scheme of steps S101-S103 of the above-mentioned human-machine co-driving control method.
- the system integrates the DMS perception results and map data and cooperates with the ego-vehicle perception system to ensure that the driver is in a normal driving state in certain high-risk scenarios and in extreme situations of the autonomous driving function, and to increase the confidence and take defensive measures when the driver is not monitoring the environment, thereby reducing the accident rate of the autonomous driving function and improving the user's self-driving experience.
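Pulling the pieces together, the function logic of FIG. 4 might be wired roughly as below; the decision strings are placeholders for the vehicle's actual HMI, perception and control interfaces, which the text does not name, and the confidence boost is the same illustrative assumption as in the earlier sketch.

```python
def function_logic(object_confidences: dict[int, float],
                   driver_distracted: bool,
                   lane_risk_attribute: bool,
                   boost: float = 0.3) -> tuple[dict[int, float], list[str]]:
    """Combine perception confidences, the DMS driver state and the
    cloud-map risk identifier into adjusted confidences plus decisions."""
    decisions = []
    if lane_risk_attribute:
        decisions.append("warn_driver_via_hmi")
        if driver_distracted:
            decisions.append("increase_warning_intensity")
            # Raise every object's confidence so no object is dropped as a miss.
            object_confidences = {k: min(1.0, v + boost)
                                  for k, v in object_confidences.items()}
            decisions.append("switch_to_defensive_control")
    return object_confidences, decisions

confs, acts = function_logic({1: 0.35, 2: 0.80},
                             driver_distracted=True,
                             lane_risk_attribute=True)
print(confs)  # {1: 0.65, 2: 1.0}
print(acts)   # ['warn_driver_via_hmi', 'increase_warning_intensity', 'switch_to_defensive_control']
```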
- the present invention also provides a vehicle, comprising the human-machine co-driving control device or human-machine co-driving control system of the above embodiment.
- the human-machine co-driving control method described above can be implemented on a vehicle.
- the present invention also provides a computer-readable storage medium.
- the computer-readable storage medium can be configured to store a program for executing a human-machine co-driving control method of the above-mentioned method embodiment, and the program can be loaded and run by a processor to implement the above-mentioned human-machine co-driving control method.
- the computer-readable storage medium may be a storage device including various electronic devices.
- the computer-readable storage medium in the embodiment of the present invention is a non-transitory computer-readable storage medium.
- the arrangement of the modules is only intended to illustrate the functional modules of the device of the present invention;
- the physical devices corresponding to these modules may be the processor itself, or a part of the software in the processor, a part of the hardware, or a part of a combination of software and hardware. Therefore, the number of each module in the figure is only schematic.
- modules in the device can be adaptively split or merged. Such splitting or merging of specific modules will not cause the technical solution to deviate from the principle of the present invention, and therefore, the technical solutions after splitting or merging will fall within the protection scope of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Traffic Control Systems (AREA)
Abstract
The present invention relates to the technical field of human-machine co-driving, and specifically provides a human-machine co-driving control method, device, system and vehicle, aiming to solve the problem of how to reduce the accident rate of autonomous driving. To this end, a human-machine co-driving control method of the present invention comprises: obtaining a driver state, a confidence of a perceived object, and a risk scenario identifier of a preset lane; issuing a warning to the driver according to the risk scenario identifier; and, according to the driver state, increasing the confidence of the perceived object and controlling a change of the driving strategy.
Description
This application claims priority to the Chinese patent application CN202211525287.0, filed on November 30, 2022 and entitled "Human-machine co-driving control method, device, system and vehicle" (人机共驾控制方法、装置、系统及车辆), the entire content of which is incorporated herein by reference.
The present invention relates to human-machine co-driving technology, and specifically provides a human-machine co-driving control method, device, system and vehicle.
At present, due to the technical limitations of autonomous driving systems, the accident risk rate is relatively high, and after long-term research and development the industry has found that highly automated driving functions (Level 4 and Level 5) are complex and difficult to achieve. The human-machine co-driving mode is an effective way to cover the transition stage from Level 3 or below (assistance functions) to Level 4 or Level 5 (automated functions). The human-machine co-driving mode can not only improve the user experience but also compensate for the technical limitations of the autonomous driving system. However, the current human-machine co-driving mode lacks scenario-handling considerations: the ego-vehicle perception system cannot correctly detect all objects and may miss objects, so it cannot warn the driver in time to take defensive action, which may lead to accident risks.
Summary of the Invention
To overcome the above defects, the present invention is proposed to provide a human-machine co-driving control method, device, system and vehicle that solve, or at least partially solve, the technical problem of how to reduce the accident rate of autonomous driving.
In a first aspect, the present invention provides a human-machine co-driving control method, comprising:
obtaining a driver state, a confidence of a perceived object, and a risk scenario identifier of a preset lane;
issuing a warning to the driver according to the risk scenario identifier;
according to the driver state, increasing the confidence of the perceived object and controlling a change of the driving strategy.
In one technical solution of the above human-machine co-driving control method,
the obtaining of the driver state comprises: determining whether the driver is in a distracted state according to whether the driver pays attention to a non-distraction area within a preset time.
In one technical solution of the above human-machine co-driving control method,
the increasing of the confidence of the perceived object and the controlling of the change of the driving strategy according to the driver state comprise:
controlling the driving strategy according to the confidence of the perceived object;
if the driver is in a distracted state, increasing the confidence of the perceived object and controlling the driving strategy to change to a defensive driving strategy.
In one technical solution of the above human-machine co-driving control method,
the determining of whether the driver is in a distracted state according to whether the driver pays attention to the non-distraction area within the preset time comprises: if the driver does not pay attention to the non-distraction area within the preset time, the driver is in a distracted state;
the method further comprises: if the driver is in a distracted state, increasing the intensity of the warning issued to the driver.
In one technical solution of the above human-machine co-driving control method,
the method further comprises changing the length of the preset time according to the risk scenario identifier.
In one technical solution of the above human-machine co-driving control method,
the confidence of the perceived object is the confidence value output by the ego-vehicle perception system after identifying the object.
In one technical solution of the above human-machine co-driving control method,
the obtaining of the risk scenario identifier of the preset lane comprises:
obtaining moving-vehicle and lane information on the preset lane;
determining whether the preset lane is a risk scenario according to the moving-vehicle and lane information;
identifying the risk scenario to obtain a risk scenario tag.
In a second aspect, the present invention provides a human-machine co-driving control device, comprising a memory, one or more processors, and one or more application programs, wherein the one or more application programs are stored in the memory and are configured such that, when called by the one or more processors, they cause the one or more processors to execute the method according to any one of the technical solutions of the first aspect.
In a third aspect, the present invention provides a control system, the system comprising the control device according to the second aspect.
In a fourth aspect, the present invention provides a vehicle, the vehicle comprising the control device according to the second aspect.
One or more of the above technical solutions of the present invention have at least one or more of the following beneficial effects:
In implementing the technical solutions of the present invention, lane risk prompts, the driver state and the results of the ego-vehicle perception system are combined, and different control decisions are made according to the risk scenario, namely warning the driver, increasing the confidence of the perceived object and controlling the change of the driving strategy. This can effectively avoid missed detection of objects by the ego-vehicle perception system, comprehensively considers the driver state, reduces the accident rate of autonomous driving and improves the user experience.
The disclosure of the present invention will become easier to understand with reference to the accompanying drawings. Those skilled in the art will readily appreciate that these drawings are for illustrative purposes only and are not intended to limit the protection scope of the present invention. Moreover, similar reference numerals in the figures denote similar components, wherein:
FIG. 1 is a schematic flow chart of the main steps of a human-machine co-driving control method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of the main steps of obtaining a risk scenario identifier according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of the main steps of the function logic in a risk scenario according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the main structure of a human-machine co-driving control system according to an embodiment of the present invention.
Some embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments are only used to explain the technical principles of the present invention and are not intended to limit its protection scope.
In the description of the present invention, "module" and "processor" may include hardware, software or a combination of the two. A module may include hardware circuits, various suitable sensors, communication ports and memory, and may also include software parts such as program code, or may be a combination of software and hardware. The processor may be a central processing unit, a microprocessor, an image processor, a digital signal processor or any other suitable processor. The processor has data and/or signal processing functions. The processor may be implemented in software, in hardware or in a combination of the two. Non-transitory computer-readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, random access memory, and the like. The term "A and/or B" denotes all possible combinations of A and B, such as only A, only B, or A and B. The term "at least one A or B" or "at least one of A and B" has a meaning similar to "A and/or B" and may include only A, only B, or A and B. The singular terms "a", "an" and "the" may also include the plural forms.
The orientation terms used herein, such as "front", "front side", "front portion", "rear", "rear side" and "rear portion", are all based on the front-rear direction of the vehicle after the components are mounted on the vehicle. The terms "longitudinal", "longitudinal direction" and "longitudinal section" are likewise based on the front-rear direction after the components are mounted on the vehicle, while "transverse", "transverse direction" and "cross section" denote directions perpendicular to the longitudinal direction.
Some terms involved in the present invention are explained first.
Ego-vehicle perception system: the key technologies of autonomous driving are perception, planning and control. The autonomous driving system obtains the vehicle's own information and surrounding environment information through the perception system, and the processor analyzes, computes and processes the collected data to make decisions and control the execution system to perform actions such as acceleration, deceleration and steering. Perception mainly performs positioning through environmental perception. Environmental perception refers to the ability to understand the scene of the environment, such as the semantic classification of data including obstacle types, road signs and markings, detection of pedestrians and vehicles, and traffic signals. Positioning is the post-processing of the perception results; the positioning function helps the autonomous vehicle understand its position relative to its environment.
At present, autonomous driving systems have the following problems: perception cannot correctly detect all objects, the positioning system cannot meet lane-level positioning accuracy in real time, and the control system does not consider the handling of all scenarios. These technical limitations may lead to accident risks; for example, if the above perception system misses an object (perceived object), the ego vehicle is effectively "blind" to it, which causes serious safety risks.
In view of this, a new human-machine co-driving control method is needed in the art to solve the above problems.
Referring to FIG. 1, FIG. 1 is a schematic flow chart of the main steps of a human-machine co-driving control method according to an embodiment of the present invention. As shown in FIG. 1, the human-machine co-driving control method in this embodiment of the present invention mainly includes the following steps S101 to S103.
Step S101: obtaining the driver state, the confidence of the perceived object, and the risk scenario identifier of the preset lane.
In this embodiment, the driver state refers to whether the driver is in a distracted state. The confidence of the perceived object is the confidence value output by the above ego-vehicle perception system after identifying an object; after the perception system identifies objects, each object in the object list has a confidence value. Specifically, in the autonomous driving system, the vehicle is controlled and driven according to the confidence values of the identified objects. If an object's confidence value is low, the system may ignore the object and continue the original driving behavior, yet the object may still pose a certain risk; missing its detection may therefore bring accident risks. The risk scenario identifier of the preset lane indicates whether the lane to be driven is a risk scenario.
In one implementation, whether the driver is in a distracted state can be determined by monitoring whether the driver's eye gaze and head pose attend to the non-distraction area within a certain period of time, where the non-distraction area refers to the front windshield area directly in front of the cab; that is, whether the driver is distracted is determined by judging whether the driver pays attention to the road ahead. In this embodiment, the driver state can be obtained by monitoring the driver with a DMS (driver monitoring system) camera.
In one implementation, referring to FIG. 2, the risk scenario identifier of the preset road can be obtained through the following steps S201 to S203:
Step S201: obtaining the moving-vehicle and lane information on the preset lane.
Specifically, the cloud-based big data engine receives reports on the status of vehicles travelling on the road, which include the vehicle and lane information of the road, such as construction roads, ODD boundaries, road accidents, and lanes where the function is often inactive or the driver always takes over control.
Step S202: determining whether the preset lane is a risk scenario according to the moving-vehicle and lane information.
Specifically, the cloud-based big data engine can comprehensively determine whether the lane is a risk scenario lane based on the above information.
Step S203: identifying the risk scenario to obtain a risk scenario tag.
Specifically, the risk scenario judgment is combined with the corresponding lane to form a lane ID and its corresponding attribute, where the attribute indicates whether the lane is a risk scenario. If it is a risk scenario, it is marked, and the risk scenario identifier of the road is obtained, so as to decide whether to warn the driver.
Step S102: issuing a warning to the driver according to the risk scenario identifier.
In this embodiment, the obtained risk scenario identifier of the preset lane indicates whether there is a risk on the road on which the vehicle is about to drive automatically. If there is a risk, a warning is issued to the driver in time to remind the driver not to be distracted.
In one implementation, an attention request may be sent to the driver via the vehicle's HMI (human-machine interface) to prevent the driver from being distracted in risk scenarios; the warning may take the form of an audio warning or a display warning.
In one implementation, the sensitivity of driver state monitoring, i.e. the length of time for which the driver must pay attention to the non-distraction area, can be changed according to the risk scenario identifier. For example, the sensor sensitivity of the DMS can be divided into three levels: high, medium and low. If a risk scenario identifier appears, the sensor sensitivity is adjusted to high; if there is no risk, it is set to low. At high sensitivity, if the driver does not look at the non-distraction area for 4 seconds (or even less), the driver is flagged as distracted; at low sensitivity, the driver is flagged as distracted only after not looking at the non-distraction area for 8 seconds (longer than at high sensitivity; used here only as an example).
Step S103: according to the driver state, increasing the confidence of the perceived object and controlling the change of the driving strategy.
In this embodiment, if the preset lane is a risk scenario, decision control is performed according to the driver state; the decision control includes increasing the confidence of the perceived object and controlling the change of the driving strategy. According to the foregoing introduction to confidence, if an object's confidence value is low, the system may ignore the object and continue the original driving behavior, yet the object may still pose a certain risk. Therefore, in a risk scenario, in addition to warning the driver, it is also necessary to decide, according to the driver state, whether to increase the confidence of the perceived object, so as to change the original driving strategy, take defensive action, and prevent the risk from occurring.
In one implementation, if the driver is still in a distracted state in a risk scenario, the confidence of the perceived object is increased, thereby changing the original driving strategy. For example, an object that would originally have been ignored may, after its confidence is increased, cause the driving strategy to change to a defensive driving strategy so as to avoid the risk. In the present invention, the defensive driving strategy may be longitudinal control, lateral control, deceleration or even braking control.
For example, referring to the function logic of FIG. 3, the lane IDs and attributes for the 3 kilometres ahead are fed to the function logic (they may come from the cloud map). If a risk scenario is marked, the HMI sends an attention request to the driver to prevent the driver from being distracted in this scenario. If the driver remains distracted, the function logic also changes the perception result, increasing the confidence value of each object to avoid missing detected objects, and changes the control logic to defensive behavior to avoid any accident.
In one implementation, when the driver is in a distracted state, in addition to increasing the confidence of the perceived object and controlling the change of the driving strategy, the warning intensity also needs to be increased, for example by raising the audio warning volume or strobing the screen, to remind the driver not to be distracted in the risk scenario.
Based on the above steps S101 to S103, the method of this embodiment combines lane risk prompts, the driver state and the results of the ego-vehicle perception system, and makes different control decisions according to the risk scenario, namely warning the driver, increasing the confidence of the perceived object and controlling the change of the driving strategy. This can effectively avoid missed detection of objects by the ego-vehicle perception system, comprehensively considers the driver state, reduces the accident rate of autonomous driving and improves the user experience.
It should be pointed out that, although the steps in the above embodiments are described in a specific order, those skilled in the art will understand that, to achieve the effects of the present invention, the different steps do not necessarily have to be executed in this order; they may be executed simultaneously (in parallel) or in other orders, and these variations all fall within the protection scope of the present invention.
Further, the present invention provides a human-machine co-driving control device, comprising a storage device and a processor, wherein the storage device may be configured to store a program for executing the human-machine co-driving control method of the above method embodiment, and the processor may be configured to execute the program in the storage device, which includes but is not limited to the program for executing the human-machine co-driving control method of the above method embodiment. For ease of description, only the parts related to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present invention.
In the embodiments of the present invention, the control device may be a control apparatus formed of various electronic devices. In some possible implementations, the computer device may include multiple storage devices and multiple processors. The program for executing the human-machine co-driving control method of the above method embodiment may be divided into multiple subprograms, and each subprogram may be loaded and run by a processor to execute different steps of that method. Specifically, each subprogram may be stored in a different storage device, and each processor may be configured to execute programs in one or more storage devices, so that each processor executes different steps of the human-machine co-driving control method of the above method embodiment and the processors jointly implement that method.
The above multiple processors may be processors deployed on the same device; for example, the above computer device may be a high-performance device composed of multiple processors, and the above multiple processors may be processors configured on that high-performance device. Alternatively, the above multiple processors may be processors deployed on different devices; for example, the above computer device may be a server cluster, and the above multiple processors may be processors on different servers in the server cluster.
The above human-machine co-driving control device is used to execute the embodiment of the human-machine co-driving control method shown in FIG. 1. The technical principles of the two, the technical problems they solve and the technical effects they produce are similar. Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working process and related description of the human-machine co-driving control device, reference may be made to the contents described in the embodiment of the human-machine co-driving control method, which will not be repeated here.
Those skilled in the art can understand that all or part of the processes in the methods of the above embodiments of the present invention may also be completed by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above method embodiments may be implemented. The computer program includes computer program code, which may be in source code form, object code form, an executable file or some intermediate form. The computer-readable storage medium may include any entity or device capable of carrying the computer program code, as well as media such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electric carrier signal, a telecommunication signal and a software distribution medium. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable storage media do not include electric carrier signals and telecommunication signals.
Further, the present invention also provides a control system, the system comprising the control device as described above. For ease of description, only the parts related to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present invention.
In one implementation, referring to the system block diagram of FIG. 4, the human-machine co-driving control system receives images from the DMS camera and the environmental perception camera and produces two kinds of outputs through the perception system: eye gaze and head pose, which are used by the driver-state component to decide whether the driver is distracted. The perception system also outputs the confidence of the perceived objects; each object in the object list has a confidence value that serves as an input to the function logic, which combines the confidence input, the driver state and the risk identifier sent by the cloud map (a lane ID and its corresponding attribute, the attribute indicating whether it is a risk scenario) to decide among different decision responses, such as warning the driver, changing the control logic to a defensive manoeuvre, and increasing the confidence value of the perceived object. The function logic executes the scheme of steps S101 to S103 of the above human-machine co-driving control method. The system integrates the DMS perception results and map data and cooperates with the ego-vehicle perception system to ensure that the driver is in a normal driving state in certain high-risk scenarios and in extreme situations of the autonomous driving function, and to increase the confidence and take defensive action when the driver is not monitoring the environment, thereby reducing the accident rate of the autonomous driving function and improving the user's self-driving experience.
Further, the present invention also provides a vehicle, comprising the human-machine co-driving control device or the human-machine co-driving control system of the above embodiments. According to some embodiments, the human-machine co-driving control method described above may be implemented on a vehicle.
Further, the present invention also provides a computer-readable storage medium. In an embodiment of a computer-readable storage medium according to the present invention, the computer-readable storage medium may be configured to store a program for executing the human-machine co-driving control method of the above method embodiment, and the program may be loaded and run by a processor to implement the above human-machine co-driving control method. For ease of description, only the parts related to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present invention. The computer-readable storage medium may be a storage apparatus formed of various electronic devices; optionally, the computer-readable storage medium in the embodiments of the present invention is a non-transitory computer-readable storage medium.
Further, it should be understood that, since the configuration of the modules is only intended to illustrate the functional modules of the device of the present invention, the physical devices corresponding to these modules may be the processor itself, or a part of the software, a part of the hardware, or a part of a combination of software and hardware in the processor. Therefore, the number of modules in the figures is only schematic.
Those skilled in the art can understand that the modules in the device can be adaptively split or merged. Such splitting or merging of specific modules will not cause the technical solutions to deviate from the principles of the present invention; therefore, the technical solutions after splitting or merging will all fall within the protection scope of the present invention.
Thus far, the technical solutions of the present invention have been described with reference to the preferred embodiments shown in the accompanying drawings. However, those skilled in the art will readily appreciate that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art can make equivalent changes or substitutions to the relevant technical features, and the technical solutions after these changes or substitutions will all fall within the protection scope of the present invention.
Claims (10)
- A human-machine co-driving control method, characterized by comprising: obtaining a driver state, a confidence of a perceived object, and a risk scenario identifier of a preset lane; issuing a warning to the driver according to the risk scenario identifier; and, according to the driver state, increasing the confidence of the perceived object and controlling a change of the driving strategy.
- The human-machine co-driving control method according to claim 1, characterized in that the obtaining of the driver state comprises: determining whether the driver is in a distracted state according to whether the driver pays attention to a non-distraction area within a preset time.
- The human-machine co-driving control method according to claim 2, characterized in that the increasing of the confidence of the perceived object and the controlling of the change of the driving strategy according to the driver state comprise: controlling the driving strategy according to the confidence of the perceived object; and, if the driver is in a distracted state, increasing the confidence of the perceived object and controlling the driving strategy to change to a defensive driving strategy.
- The human-machine co-driving control method according to claim 2, characterized in that the determining of whether the driver is in a distracted state according to whether the driver pays attention to the non-distraction area within the preset time comprises: if the driver does not pay attention to the non-distraction area within the preset time, the driver is in a distracted state; and the method further comprises: if the driver is in a distracted state, increasing the intensity of the warning issued to the driver.
- The human-machine co-driving control method according to claim 2, characterized in that the method further comprises changing the length of the preset time according to the risk scenario identifier.
- The human-machine co-driving control method according to claim 1, characterized in that the confidence of the perceived object is the confidence value output by the ego-vehicle perception system after identifying the object.
- The human-machine co-driving control method according to any one of claims 1 to 6, characterized in that the obtaining of the risk scenario identifier of the preset lane comprises: obtaining moving-vehicle and lane information on the preset lane; determining whether the preset lane is a risk scenario according to the moving-vehicle and lane information; and identifying the risk scenario to obtain a risk scenario tag.
- A human-machine co-driving control device, characterized by comprising a memory, one or more processors, and one or more application programs, wherein the one or more application programs are stored in the memory and are configured such that, when called by the one or more processors, they cause the one or more processors to execute the method according to any one of claims 1 to 7.
- A control system, characterized in that the system comprises the control device according to claim 8.
- A vehicle, characterized in that the vehicle comprises the control device according to claim 8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211525287.0 | 2022-11-30 | ||
CN202211525287.0A CN115923809A (zh) | 2022-11-30 | 2022-11-30 | Human-machine co-driving control method, device, system and vehicle
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024114426A1 (zh) | 2024-06-06 |
Family
ID=86555268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/132573 WO2024114426A1 (zh) | Human-machine co-driving control method, device, system and vehicle | 2022-11-30 | 2023-11-20 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115923809A (zh) |
WO (1) | WO2024114426A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115923809A (zh) * | 2022-11-30 | 2023-04-07 | NIO Software Technology (Shanghai) Co., Ltd. | Human-machine co-driving control method, device, system and vehicle |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN110178104A (zh) * | 2016-11-07 | 2019-08-27 | Nauto, Inc. | System and method for determining driver distraction |
- CN110654379A (zh) * | 2018-06-28 | 2020-01-07 | Mando Corporation | Collision avoidance apparatus, collision avoidance method and driving support apparatus |
- CN111862680A (zh) * | 2019-04-25 | 2020-10-30 | GM Global Technology Operations LLC | Dynamic forward collision alert system |
US20210063179A1 (en) * | 2019-09-03 | 2021-03-04 | Allstate Insurance Company | Systems and Methods of Connected Driving Based on Dynamic Contextual Factors |
US20210380115A1 (en) * | 2021-06-15 | 2021-12-09 | Nauto, Inc. | Devices and methods for predicting collisions and/or intersection violations |
- CN114469097A (zh) * | 2021-12-13 | 2022-05-13 | Tongji University | Human-machine co-driving takeover state test method |
- CN115923809A (zh) * | 2022-11-30 | 2023-04-07 | NIO Software Technology (Shanghai) Co., Ltd. | Human-machine co-driving control method, device, system and vehicle |
- 2022-11-30 CN CN202211525287.0A patent/CN115923809A/zh active Pending
- 2023-11-20 WO PCT/CN2023/132573 patent/WO2024114426A1/zh unknown
Also Published As
Publication number | Publication date |
---|---|
CN115923809A (zh) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3865363B1 (en) | Vehicle control method and apparatus, electronic device and storage medium | |
CN108509832B (zh) | 用于产生虚拟车道的方法和装置 | |
US10259457B2 (en) | Traffic light anticipation | |
- JP6698945B2 (ja) | Dangerous vehicle prediction device, dangerous vehicle warning system, and dangerous vehicle prediction method | |
US9368031B2 (en) | Vehicle surface tinting for visual indication of environmental conditions | |
CN110660256B (zh) | 一种信号灯状态的估计方法及装置 | |
KR20200110702A (ko) | 기본 미리 보기 영역 및 시선 기반 운전자 주의 산만 검출 | |
US20210089049A1 (en) | Vehicle control method and device | |
US20200027351A1 (en) | In-vehicle device, control method, and program | |
WO2018004858A2 (en) | Road condition heads up display | |
US10861336B2 (en) | Monitoring drivers and external environment for vehicles | |
- WO2024114426A1 (zh) | Human-machine co-driving control method, device, system and vehicle | |
US20200180635A1 (en) | Apparatus for controlling driving of a vehicle, a system having the same and a method thereof | |
- JP6558356B2 (ja) | Automated driving system | |
US11524697B2 (en) | Computer-assisted driving method and apparatus including automatic mitigation of potential emergency | |
KR102598953B1 (ko) | 차량 제어 장치, 그를 포함한 시스템 및 그 방법 | |
US9478137B1 (en) | Detecting and communicating lane splitting maneuver | |
KR102534960B1 (ko) | 자율주행 차량들을 위한 행렬들의 검출 및 그에 대한 대응 | |
- JP2017087923A (ja) | Driving assistance device | |
US20220363266A1 (en) | Systems and methods for improving driver attention awareness | |
US20220092313A1 (en) | Method for deep neural network functional module deduplication | |
- JP2019144971A (ja) | Control system and control method for mobile body | |
- CN104097587A (zh) | Driving prompt control device and method | |
US11687155B2 (en) | Method for vehicle eye tracking system | |
- CN114511834A (zh) | Method and apparatus for determining prompt information, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23896578 Country of ref document: EP Kind code of ref document: A1 |