WO2023071024A1 - Driving assistance mode switching method, apparatus, device, and storage medium - Google Patents

Driving assistance mode switching method, apparatus, device, and storage medium

Info

Publication number
WO2023071024A1
WO2023071024A1 · PCT/CN2022/080961 · CN2022080961W
Authority
WO
WIPO (PCT)
Prior art keywords
driving assistance
area
assistance mode
driver
sight
Prior art date
Application number
PCT/CN2022/080961
Other languages
English (en)
French (fr)
Inventor
罗文
常健
覃毅哲
赵芸
覃远航
李帅
Original Assignee
东风柳州汽车有限公司 (Dongfeng Liuzhou Motor Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东风柳州汽车有限公司 (Dongfeng Liuzhou Motor Co., Ltd.)
Publication of WO2023071024A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0098Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0818Inactivity or incapacity of driver
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/225Direction of gaze
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/80Technologies aiming to reduce greenhouse gasses emissions common to all road transportation technologies
    • Y02T10/84Data processing systems or methods, management, administration

Definitions

  • the present application relates to the technical field of vehicle control, and in particular to a driving assistance mode switching method, apparatus, device, and storage medium.
  • existing fatigue monitoring methods detect driver status indicators such as whether the driver is on the phone, whether the driver is smoking, and the number of blinks within a time period. None of these indicators can reliably detect the driver's state: some drivers habitually smoke or blink frequently, so the existing fatigue monitoring methods do not apply to them. Conversely, a driver may appear to be in a normal state while the gaze is unfocused; in such cases, the system should recognize that the driver is fatigued.
  • moreover, at present the driving assistance mode is switched according to the driver's manual setting, whereas the appropriate driving assistance mode is strongly related to the driver's state and the environment surrounding the vehicle.
  • when the driver's state is poor and the vehicle is in a high-risk environment of which the driver is unaware, the driving assistance mode should be adaptively adjusted to the highest level to ensure the safety of the driver and the vehicle to the greatest extent.
  • the main purpose of the present application is to provide a driving assistance mode switching method, which aims to solve the technical problem in the prior art of how to accurately judge the driver's driving state and switch the driving mode according to that state.
  • the present application provides a driving assistance mode switching method, the method includes the following steps:
  • determining the attention area according to the environment image includes: determining an initial global saliency threshold and an initial search radius according to the environment image; searching the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result; and determining the attention area according to the search result.
  • searching the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result includes: determining a search area in the environment image according to the initial search radius; comparing the pixel value of each pixel in the search area with the initial global saliency threshold to obtain a comparison value; when the comparison value is within a preset threshold interval, reducing the initial search radius by a preset reduction value and searching the environment image according to the reduced radius; and when the comparison value equals the preset threshold, generating a search result according to the initial search radius corresponding to the comparison value.
  • determining the driver's sight area according to the driver's facial image includes: segmenting the driver's facial image into a plurality of facial candidate areas; determining the gray value of each facial candidate area; taking the facial candidate areas whose gray values are greater than a gray value threshold as pupil candidate areas; determining a pupil center feature according to the pupil candidate areas; and determining the driver's sight area according to the pupil center feature.
  • determining the driver's sight area according to the pupil center feature includes: determining a pupil center feature vector and a gaze direction vector according to the pupil center feature; determining a target mapping relationship between the pupil center feature vector and the gaze direction vector; and determining the driver's sight area from the target mapping relationship and the pupil center feature vector.
  • determining the target mapping relationship between the pupil center feature vector and the gaze direction vector includes: establishing a target loss function of the two vectors; differentiating the target loss function to obtain a first-order derivative function; and determining the target mapping relationship between the pupil center feature vector and the gaze direction vector according to the first-order derivative function and a preset value.
  • determining the target driving assistance mode according to the attention area and the driver's sight area includes: if the attention area equals the driver's sight area, taking the first driving assistance mode as the target driving assistance mode; if the attention area lies within the driver's sight area, taking the second driving assistance mode as the target driving assistance mode; and if the attention area does not lie within the driver's sight area, taking the third driving assistance mode as the target driving assistance mode.
  • the present application also proposes a driving assistance mode switching device, and the driving assistance mode switching device includes:
  • the facial acquisition module is configured to acquire the environment image around the vehicle and the facial image of the driver;
  • an area determination module configured to determine an attention area according to the environment image;
  • a line of sight determination module configured to determine the driver's line of sight area according to the driver's facial image
  • a mode determination module configured to determine a target driving assistance mode according to the attention area and the driver's sight area;
  • a mode switching module configured to switch the current driving assistance mode to the target driving assistance mode.
  • the present application also proposes a driving assistance mode switching device, which includes: a memory, a processor, and a driving assistance mode switching program stored in the memory and operable on the processor, the driving assistance mode switching program being configured to implement the steps of the driving assistance mode switching method described above.
  • the present application also proposes a storage medium on which a driving assistance mode switching program is stored; when the driving assistance mode switching program is executed by a processor, the steps of the driving assistance mode switching method described above are implemented.
  • the present application acquires an environment image around the vehicle and the driver's facial image; determines the attention area according to the environment image; determines the driver's sight area according to the driver's facial image; determines the target driving assistance mode according to the attention area and the driver's sight area; and switches the current driving assistance mode to the target driving assistance mode.
  • FIG. 1 is a schematic structural diagram of a driving assistance mode switching device in a hardware operating environment involved in an embodiment of the present application
  • FIG. 2 is a schematic flowchart of the first embodiment of the driving assistance mode switching method of the present application
  • FIG. 3 is a schematic flowchart of the second embodiment of the driving assistance mode switching method of the present application.
  • FIG. 4 is a structural block diagram of the first embodiment of the driving assistance mode switching apparatus of the present application.
  • FIG. 1 is a schematic structural diagram of a driving assistance mode switching device in a hardware operating environment involved in an embodiment of the present application.
  • the driving assistance mode switching device may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is configured to realize connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wireless-Fidelity (Wi-Fi) interface).
  • the memory 1005 may be a high-speed random access memory (Random Access Memory, RAM), or a stable non-volatile memory (Non-Volatile Memory, NVM), such as a disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • those skilled in the art will understand that the structure shown in Figure 1 does not constitute a limitation on the driving assistance mode switching device, which may include more or fewer components than shown, or combine certain components, or use a different arrangement of components.
  • the memory 1005 as a storage medium may include an operating system, a network communication module, a user interface module, and a driving assistance mode switching program.
  • in the driving assistance mode switching device of the present application, the network interface 1004 is mainly configured for data communication with a network server, and the user interface 1003 is mainly configured for data interaction with the user; the processor 1001 and the memory 1005 can be set in the driving assistance mode switching device, which calls the driving assistance mode switching program stored in the memory 1005 through the processor 1001 and executes the driving assistance mode switching method provided by the embodiments of the present application.
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for switching a driving assistance mode of the present application.
  • the driving assistance mode switching method includes the following steps:
  • Step S10 Obtain the environment image around the vehicle and the driver's facial image.
  • the execution subject of this embodiment is a vehicle-mounted terminal, which can perform analysis and calculation based on the data collected by the vehicle's sensors so as to realize the corresponding functions.
  • a first camera is arranged above the cab of the vehicle and is configured to capture the driver's face image in real time.
  • a second camera is arranged at the front of the vehicle's exterior; it can capture images of the environment around the vehicle that the driver is able to observe from the driver's cab.
  • the second camera can be a wide-angle camera.
  • after the first camera and the second camera are installed, the two cameras need to be calibrated so that the image content they capture can be converted into the same world coordinate system.
  • during calibration, the positions of the two cameras are first transformed into the same world coordinate system, and then the transformation of each camera's captured content into that world coordinate system is calculated, as illustrated by the sketch below.
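  • A minimal sketch of this coordinate unification, assuming each camera's extrinsics (rotation R and translation t, camera frame to world frame) were obtained offline, e.g. from a checkerboard calibration; all numeric values below are illustrative placeholders, not parameters from the present application.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_point_to_world(T_cam_to_world, p_cam):
    """Map a 3D point from a camera's frame into the shared world frame."""
    p = np.append(p_cam, 1.0)  # homogeneous coordinates
    return (T_cam_to_world @ p)[:3]

# Example: a point 5 m ahead of the exterior (second) camera.
R2 = np.eye(3)                      # assumed extrinsic rotation
t2 = np.array([0.0, 1.2, 2.0])      # assumed mounting offset in metres
T2 = make_pose(R2, t2)
print(camera_point_to_world(T2, np.array([0.0, 0.0, 5.0])))
```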
  • Step S20 Determine the attention area according to the environment image.
  • after the vehicle terminal acquires the environment image around the vehicle from the second camera, it first identifies the objects of interest in the environment image.
  • the objects of interest are physical objects that affect driving, such as vehicles, pedestrians, lane lines, and traffic lights.
  • to identify the objects of interest, the environment image can be input into a trained object recognition model; after recognition, the objects of interest are marked, and the continuous region formed by all the marked positions is taken as the attention area.
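  • As an illustration of forming the attention area from recognition output, the sketch below takes the union of detected bounding boxes as a binary mask; the (x1, y1, x2, y2) box format is an assumption, and the recognition model itself is out of scope here.

```python
import numpy as np

def attention_area_from_detections(image_shape, boxes):
    """Union of detected object boxes as a binary attention mask."""
    mask = np.zeros(image_shape[:2], dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = True   # mark every detected object's region
    return mask

# Example with two hypothetical detections on a 720x1280 image.
mask = attention_area_from_detections((720, 1280),
                                      [(100, 300, 260, 420), (900, 350, 1100, 500)])
```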
  • Step S30 Determine the driver's sight area according to the driver's facial image.
  • step S30 includes: segmenting the driver's facial image into a plurality of facial candidate areas; determining the gray value of each facial candidate area; taking the facial candidate areas whose gray values are greater than the gray value threshold as pupil candidate areas; determining the pupil center feature according to the pupil candidate areas; and determining the driver's sight area according to the pupil center feature.
  • the threshold segmentation method can be one of the Otsu method, adaptive thresholding, maximum-entropy thresholding, or iterative thresholding.
  • Otsu's method (the maximum between-class variance method) uses the idea of clustering: it divides the image's gray levels into two parts so that the gray value difference between the two parts is maximal and the gray difference within each part is minimal, using the between-class variance to find a suitable gray level at which to split; the Otsu algorithm can therefore be used to select the binarization threshold automatically.
  • the iterative threshold method first guesses an initial threshold and then refines it through repeated computation on the image: the image is repeatedly thresholded and divided into several classes, and the gray levels within each class are used to improve the threshold.
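  • For reference, a minimal NumPy sketch of Otsu's method as described above; this is a generic between-class-variance threshold selector, not code from the present application, and it assumes an 8-bit grayscale image.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level that maximizes the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()          # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0       # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t  # binarize with: gray > best_t
```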
  • after the driver's facial image is threshold-segmented into facial candidate areas, the gray value of each facial candidate area is calculated.
  • in this embodiment, analysis of the grayscale histograms of a large number of driver facial images shows that the gray value of the driver's pupil area is generally stable above 220 and occupies a small proportion of the facial image, lying near the right side of the histogram's cumulative distribution function.
  • therefore, 220 is selected from the cumulative distribution of the gray histogram as the gray value threshold, and the facial candidate areas whose gray values exceed this threshold are taken as the pupil candidate areas.
  • the pupil center is the position with the minimum cost among all position points in the pupil candidate area, so the relationship between the pupil center position and all position points in the area is:
  • c = argmin_c Σ_{i=1}^{N} cost(x_i, c)   (Formula 1)
  • where x_i represents a pixel position in the pupil region, i ∈ {1, 2, ..., N}, and c is the pupil center position; the pupil center feature at the pupil center position c is obtained through the minimum cost function.
  • determining the driver's sight area according to the pupil center feature includes: determining the pupil center feature vector and the gaze direction vector according to the pupil center feature; determining the target mapping relationship between the pupil center feature vector and the gaze direction vector; and determining the driver's sight area from the target mapping relationship and the pupil center feature vector.
  • determining the target mapping relationship between the pupil center feature vector and the gaze direction vector includes: establishing the target loss function of the pupil center feature vector and the gaze direction vector; differentiating the target loss function to obtain a first-order derivative function; and determining the target mapping relationship between the pupil center feature vector and the gaze direction vector according to the first-order derivative function and a preset value.
  • given X = [x_1, x_2, ..., x_n] the pupil center feature vectors and Y = [y_1, y_2, ..., y_n] the gaze direction vectors, the feature regression method uses linear regression to learn the best mapping from X to Y (i.e., the target mapping relationship), where β′ is obtained by minimizing the loss function; the minimum loss function (i.e., the target loss function) is:
  • E(β) = ‖Y − βᵀX‖² + λ‖β‖²   (Formula 2)
  • in Formula 2, E(β) is the target loss function, λ is the regularization parameter, and β is an intermediate variable in the process of computing the target mapping relationship.
  • differentiating the target loss function and setting the first-order derivative function equal to the preset value 0 gives:
  • β′ = (XXᵀ + λI)⁻¹XYᵀ   (Formula 3)
  • in Formula 3, β′ is the target mapping relationship; from the target mapping relationship and the pupil center feature vector, the driver's sight area can be obtained:
  • Y = Xβ′   (Formula 4)
  • in Formula 4, X is the pupil center feature vector and Y is the driver's sight area.
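  • A minimal sketch of Formulas 2 to 4 as closed-form ridge regression; the column-per-sample layout and the regularization value are assumptions, and the prediction is written here as β′ᵀX, which the text writes compactly as Y = Xβ′.

```python
import numpy as np

def fit_gaze_mapping(X, Y, lam=1e-2):
    """Formula 3: beta' = (X X^T + lam * I)^-1 X Y^T.
    X: (d, n) pupil center features, one column per sample;
    Y: (k, n) gaze direction vectors; lam is the regularization parameter."""
    d = X.shape[0]
    return np.linalg.solve(X @ X.T + lam * np.eye(d), X @ Y.T)  # (d, k)

def predict_gaze(beta, X):
    """Formula 4: map pupil center features to gaze directions."""
    return beta.T @ X  # (k, n)
```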
  • Step S40 Determine a target driving assistance mode according to the attention area and the driver's sight area.
  • in a specific implementation, the attention area is compared with the driver's sight area to obtain the relationship between the two; under different relationships, the vehicle hard-switches to different driving assistance modes.
  • step S40 includes: if the attention area equals the driver's sight area, taking the first driving assistance mode as the target driving assistance mode; if the attention area lies within the driver's sight area, taking the second driving assistance mode as the target driving assistance mode; and if the attention area does not lie within the driver's sight area, taking the third driving assistance mode as the target driving assistance mode.
  • the driving assistance modes are divided into a first driving assistance mode, a second driving assistance mode, and a third driving assistance mode, corresponding to Level I, Level II, and Level III respectively.
  • Level I corresponds to the slow mode, which lowers the driving assistance thresholds: braking distances for functions such as emergency braking and adaptive cruise are adjusted to the minimum, i.e., the lower boundary of the safety distance.
  • Level II corresponds to the normal mode, in which the driving assistance thresholds are set to medium: braking distances for emergency braking, adaptive cruise, and the like are adjusted to medium.
  • Level III corresponds to the emergency mode, in which the driving assistance thresholds are raised: braking distances for emergency braking, adaptive cruise, and the like are adjusted to the maximum, i.e., the upper boundary of the safety distance.
  • the mode switching logic is as follows:
  • Mode = I if Y = r; Mode = II if Y ∩ r ≠ ∅ and Y ≠ r; Mode = III if Y ∩ r = ∅   (Formula 5)
  • in Formula 5, Mode is the target driving assistance mode, I is the first driving assistance mode, II is the second driving assistance mode, III is the third driving assistance mode, Y is the driver's sight area, and r is the attention area; for example, when Y = r, the target driving assistance mode is the first driving assistance mode.
  • when the driver's sight area completely overlaps the attention area, that is, within a short time the driver's gaze stays on the attention area, the current driver's state is considered excellent and the driver is attending to all targets that require attention, so the current driving assistance mode is adjusted to the slow mode; when the driver's sight area does not completely overlap the attention area but partially intersects it, the current driver's state is considered good and the driver is attending to the targets that require attention, so the current driving assistance mode is adjusted to the normal mode; when there is no intersection between the driver's sight area and the attention area, that is, the driver's gaze is not on the attention area at all, the current driver's state is considered poor and the driver is not attending to the targets that require attention, so the current driving assistance mode is adjusted to the emergency mode (a sketch of this selection logic follows below).
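  • The overlap-based selection above can be sketched as follows, treating both areas as sets of pixel coordinates; the set representation is an illustrative assumption.

```python
def target_mode(sight_area, attention_area):
    """Formula 5 as set logic: Y is the driver's sight area,
    r is the attention area, both given as iterables of pixel coordinates."""
    Y, r = set(sight_area), set(attention_area)
    if Y == r:     # complete overlap: driver state excellent -> slow mode
        return "I"
    if Y & r:      # partial overlap: driver state good -> normal mode
        return "II"
    return "III"   # no intersection: driver state poor -> emergency mode
```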
  • Step S50 Switch the current driving assistance mode to the target driving assistance mode.
  • if the current driving assistance mode is not the target driving assistance mode, the current driving assistance mode is switched to the target driving assistance mode; if the current driving assistance mode is already the target driving assistance mode, no switching is required.
  • in this way, the driver's sight area is determined from the driver's facial image, the area requiring attention is derived from the current environment around the vehicle, and whether the driving assistance mode needs to be switched is judged from the overlap between the sight area and the attention area.
  • FIG. 3 is a schematic flowchart of a second embodiment of a driving assistance mode switching method of the present application.
  • the driving assistance mode switching method in this embodiment includes:
  • Step S21 Determine an initial global saliency threshold and an initial search radius according to the environment image.
  • the maximum gray value of the environment image is calculated first, and the maximum gray value is used as the initial global saliency threshold.
  • the initial search radius determines the search range for the first search in the environment image, for example: when the initial search radius is 100 pixels, the search range is a circle with a radius of 100 pixels.
  • Step S22 Search the environment image according to the initial global saliency threshold and the initial search radius, and obtain a search result.
  • step S22 includes: determining a search area in the environment image according to the initial search radius; comparing the pixel value of each pixel in the search area with the initial global saliency threshold to obtain a comparison value; when the comparison value is within a preset threshold interval, reducing the initial search radius by a preset reduction value and searching the environment image according to the reduced initial search radius; and when the comparison value equals the preset threshold, generating a search result according to the initial search radius corresponding to the comparison value.
  • in this embodiment, the initial search radius is set to 1/2 of the side length of the environment image, and the search area is searched in the environment image according to the initial search radius and the initial global saliency threshold:
  • k(r, T) = Num{P(x, y) ∈ R(r) | P(x, y) > T} / Num{P(x, y) ∈ R(r)}   (Formula 6)
  • in Formula 6, Num() counts the number of pixels concerned, R(r) represents the search area with search radius r, P(x, y) represents a pixel, T is the global saliency threshold, and k(r, T) represents the proportion of pixels in the search area whose values are above T, i.e., the comparison value; k(r, T) ranges from 0 to 1.
  • when k(r, T) = 1, all pixel values in the search area are above the global saliency threshold; when 0 < k(r, T) < 1, not all pixels in the search area have gray values above the threshold, so the search area contains a certain proportion of non-salient pixels, and the filtered area is not the required search area.
  • in that case, the initial search radius is reduced by the preset reduction value, and the environment image is searched according to the reduced initial search radius until k(r, T) approaches 1; at that point, the area is considered the part the driver should pay attention to.
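  • A minimal sketch of this shrinking-radius search, assuming a disc-shaped search area centred on the image and a tolerance standing in for 'k(r, T) approaches 1'; the step size and tolerance values are illustrative.

```python
import numpy as np

def salient_region_radius(gray, T, shrink=5.0, tol=0.999):
    """Shrink the search radius until k(r, T) (Formula 6) approaches 1."""
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2.0, xx - w / 2.0)  # distance to image centre
    r = min(h, w) / 2.0              # initial radius: half the side length
    while r > shrink:
        mask = dist <= r             # search area R(r)
        k = np.mean(gray[mask] > T)  # proportion above threshold, Formula 6
        if k >= tol:
            return r                 # area the driver should pay attention to
        r -= shrink                  # reduce by the preset reduction value
    return r
```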
  • Step S23 Determine the attention area according to the search result.
  • the search result includes the final search radius, and the area within that radius is the attention area.
  • in this embodiment, an initial global saliency threshold and an initial search radius are determined according to the environment image; the environment image is searched according to the initial global saliency threshold and the initial search radius to obtain a search result; and the attention area is determined according to the search result.
  • in this way, the environment image is searched according to the threshold to obtain the area the driver should pay attention to, so that the part requiring attention can be extracted from the environment.
  • the embodiment of the present application also proposes a storage medium on which a driving assistance mode switching program is stored; when the driving assistance mode switching program is executed by a processor, the steps of the driving assistance mode switching method described above are implemented.
  • since the storage medium adopts all the technical solutions of all the above embodiments, it has at least all the functions brought by those technical solutions, which are not repeated here.
  • FIG. 4 is a structural block diagram of the first embodiment of the driving assistance mode switching device of the present application.
  • the driving assistance mode switching device proposed in the embodiment of the present application includes:
  • the facial acquisition module 10 is configured to acquire the environment image around the vehicle and the facial image of the driver.
  • the area determining module 20 is configured to determine an area that should be paid attention to according to the environment image.
  • the line of sight determining module 30 is configured to determine the driver's line of sight area according to the driver's facial image.
  • the mode determination module 40 is configured to determine a target driving assistance mode according to the attention area and the driver's sight area.
  • the mode switching module 50 is configured to switch the current driving assistance mode to the target driving assistance mode.
  • the area determination module 20 is further configured to determine an initial global saliency threshold and an initial search radius according to the environment image; search the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result; and determine the attention area according to the search result.
  • the area determination module 20 is further configured to determine a search area in the environment image according to the initial search radius; compare the pixel value of each pixel in the search area with the initial global saliency threshold to obtain a comparison value; when the comparison value is within a preset threshold interval, reduce the initial search radius by a preset reduction value and search the environment image according to the reduced initial search radius; and when the comparison value equals the preset threshold, generate a search result according to the initial search radius corresponding to the comparison value.
  • the sight determination module 30 is further configured to segment the driver's facial image into a plurality of facial candidate areas; determine the gray value of each facial candidate area; take the facial candidate areas whose gray values are greater than the gray value threshold as pupil candidate areas; determine the pupil center feature according to the pupil candidate areas; and determine the driver's sight area according to the pupil center feature.
  • the sight determination module 30 is further configured to determine a pupil center feature vector and a gaze direction vector according to the pupil center feature; determine the target mapping relationship between the pupil center feature vector and the gaze direction vector; and determine the driver's sight area from the target mapping relationship and the pupil center feature vector.
  • the sight determination module 30 is further configured to establish a target loss function of the pupil center feature vector and the gaze direction vector; differentiate the target loss function to obtain a first-order derivative function; and determine the target mapping relationship between the pupil center feature vector and the gaze direction vector according to the first-order derivative function and a preset value.
  • the mode determination module 40 is further configured to take the first driving assistance mode as the target driving assistance mode if the attention area equals the driver's sight area; take the second driving assistance mode as the target driving assistance mode if the attention area lies within the driver's sight area; and take the third driving assistance mode as the target driving assistance mode if the attention area does not lie within the driver's sight area.
  • through the above, the driver's sight area is determined from the driver's facial image, the area requiring attention is derived from the current environment around the vehicle, and whether the driving assistance mode needs to be switched is judged from the overlap between the sight area and the attention area.
  • the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a read-only memory (ROM)/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A driving assistance mode switching method, comprising: acquiring an environment image around the vehicle and the driver's facial image (S10); determining an attention area according to the environment image (S20); determining the driver's sight area according to the driver's facial image (S30); determining a target driving assistance mode according to the attention area and the driver's sight area (S40); and switching the current driving assistance mode to the target driving assistance mode (S50). Also disclosed are a driving assistance mode switching apparatus, a driving assistance mode switching device, and a storage medium.

Description

Driving assistance mode switching method, apparatus, device, and storage medium
Related Application
This application claims priority to Chinese patent application No. 202111251279.7, filed on October 26, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of vehicle control, and in particular to a driving assistance mode switching method, apparatus, device, and storage medium.
Background
Existing fatigue monitoring methods detect driver status indicators such as whether the driver is on the phone, whether the driver is smoking, and the number of blinks within a time period. None of these indicators can reliably detect the driver's state: some drivers habitually smoke or blink frequently, so the existing fatigue monitoring methods do not apply to them. A driver may also appear normal while the gaze is unfocused: for example, when passing through an intersection the driver should check both sides of the crosswalk for pedestrians, yet keeps staring straight ahead; in such a situation the system should recognize that the driver is fatigued.
Moreover, at present the driving assistance mode is switched by the driver's manual setting, whereas the appropriate driving assistance mode is strongly related to the driver's state and the environment surrounding the vehicle. If the driver's state is poor and the vehicle is in a high-risk environment of which the driver is not yet aware, the driving assistance mode should be adaptively adjusted to the highest level to ensure the safety of the driver and the vehicle to the greatest extent.
The above content is only intended to assist understanding of the technical solution of the present application and does not constitute an admission that it is prior art.
Summary of the Application
The main purpose of the present application is to provide a driving assistance mode switching method, which aims to solve the technical problem in the prior art of how to accurately judge the driver's driving state and switch the driving mode according to that state.
To achieve the above purpose, the present application provides a driving assistance mode switching method, the method comprising the following steps:
acquiring an environment image around the vehicle and the driver's facial image;
determining an attention area according to the environment image;
determining the driver's sight area according to the driver's facial image;
determining a target driving assistance mode according to the attention area and the driver's sight area;
switching the current driving assistance mode to the target driving assistance mode.
In an embodiment, determining the attention area according to the environment image comprises:
determining an initial global saliency threshold and an initial search radius according to the environment image;
searching the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result;
determining the attention area according to the search result.
In an embodiment, searching the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result comprises:
determining a search area in the environment image according to the initial search radius;
comparing the pixel value of each pixel in the search area with the initial global saliency threshold to obtain a comparison value;
when the comparison value is within a preset threshold interval, reducing the initial search radius by a preset reduction value, and searching the environment image according to the reduced initial search radius;
when the comparison value equals the preset threshold, generating a search result according to the initial search radius corresponding to the comparison value.
In an embodiment, determining the driver's sight area according to the driver's facial image comprises:
segmenting the driver's facial image into a plurality of facial candidate areas;
determining the gray value of each facial candidate area;
taking the facial candidate areas whose gray values are greater than a gray value threshold as pupil candidate areas;
determining a pupil center feature according to the pupil candidate areas;
determining the driver's sight area according to the pupil center feature.
In an embodiment, determining the driver's sight area according to the pupil center feature comprises:
determining a pupil center feature vector and a gaze direction vector according to the pupil center feature;
determining a target mapping relationship between the pupil center feature vector and the gaze direction vector;
determining the driver's sight area from the target mapping relationship and the pupil center feature vector.
In an embodiment, determining the target mapping relationship between the pupil center feature vector and the gaze direction vector comprises:
establishing a target loss function of the pupil center feature vector and the gaze direction vector;
differentiating the target loss function to obtain a first-order derivative function;
determining the target mapping relationship between the pupil center feature vector and the gaze direction vector according to the first-order derivative function and a preset value.
In an embodiment, determining the target driving assistance mode according to the attention area and the driver's sight area comprises:
if the attention area equals the driver's sight area, taking a first driving assistance mode as the target driving assistance mode;
if the attention area lies within the driver's sight area, taking a second driving assistance mode as the target driving assistance mode;
if the attention area does not lie within the driver's sight area, taking a third driving assistance mode as the target driving assistance mode.
In addition, to achieve the above purpose, the present application further proposes a driving assistance mode switching apparatus, comprising:
a facial acquisition module configured to acquire an environment image around the vehicle and the driver's facial image;
an area determination module configured to determine an attention area according to the environment image;
a sight determination module configured to determine the driver's sight area according to the driver's facial image;
a mode determination module configured to determine a target driving assistance mode according to the attention area and the driver's sight area;
a mode switching module configured to switch the current driving assistance mode to the target driving assistance mode.
In addition, to achieve the above purpose, the present application further proposes a driving assistance mode switching device, comprising: a memory, a processor, and a driving assistance mode switching program stored in the memory and executable on the processor, the driving assistance mode switching program being configured to implement the steps of the driving assistance mode switching method described above.
In addition, to achieve the above purpose, the present application further proposes a storage medium on which a driving assistance mode switching program is stored; when executed by a processor, the driving assistance mode switching program implements the steps of the driving assistance mode switching method described above.
The present application acquires an environment image around the vehicle and the driver's facial image; determines an attention area according to the environment image; determines the driver's sight area according to the driver's facial image; determines a target driving assistance mode according to the attention area and the driver's sight area; and switches the current driving assistance mode to the target driving assistance mode. In this way, the driver's sight area is determined from the driver's facial image, the area requiring attention is derived from the current environment around the vehicle, and whether the driving assistance mode needs to be switched is judged from the overlap between the sight area and the attention area.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a driving assistance mode switching device in the hardware operating environment involved in an embodiment of the present application;
FIG. 2 is a schematic flowchart of a first embodiment of the driving assistance mode switching method of the present application;
FIG. 3 is a schematic flowchart of a second embodiment of the driving assistance mode switching method of the present application;
FIG. 4 is a structural block diagram of a first embodiment of the driving assistance mode switching apparatus of the present application.
The realization of the purpose, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of the driving assistance mode switching device in the hardware operating environment involved in an embodiment of the present application.
As shown in FIG. 1, the driving assistance mode switching device may include: a processor 1001, such as a central processing unit (CPU); a communication bus 1002; a user interface 1003; a network interface 1004; and a memory 1005. The communication bus 1002 is configured to realize connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (RAM) or a stable non-volatile memory (NVM), such as disk storage; optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the structure shown in FIG. 1 does not constitute a limitation on the driving assistance mode switching device, which may include more or fewer components than shown, or combine certain components, or use a different arrangement of components.
As shown in FIG. 1, the memory 1005, as a storage medium, may include an operating system, a network communication module, a user interface module, and a driving assistance mode switching program.
In the driving assistance mode switching device shown in FIG. 1, the network interface 1004 is mainly configured for data communication with a network server, and the user interface 1003 is mainly configured for data interaction with the user. The processor 1001 and the memory 1005 of the driving assistance mode switching device of the present application may be provided in the device, which calls the driving assistance mode switching program stored in the memory 1005 through the processor 1001 and executes the driving assistance mode switching method provided by the embodiments of the present application.
An embodiment of the present application provides a driving assistance mode switching method. Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of the driving assistance mode switching method of the present application.
In this embodiment, the driving assistance mode switching method includes the following steps:
Step S10: Acquire an environment image around the vehicle and the driver's facial image.
It should be noted that the execution subject of this embodiment is a vehicle-mounted terminal, which can perform analysis and calculation based on the data collected by the vehicle's sensors so as to realize the corresponding functions. In this embodiment, a first camera is arranged above the vehicle's cab and is configured to capture the driver's facial image in real time. A second camera is arranged at the front of the vehicle's exterior; it can capture images of the environment around the vehicle that the driver is able to observe from the cab, and it may be a wide-angle camera.
It should be noted that after the first camera and the second camera are installed, the two cameras need to be calibrated so that the image content they capture can be converted into the same world coordinate system. During calibration, the positions of the two cameras are first transformed into the same world coordinate system, and then the transformation of each camera's captured content into that world coordinate system is calculated.
Step S20: Determine the attention area according to the environment image.
It should be noted that after the vehicle terminal obtains the environment image around the vehicle from the second camera, it first identifies the objects of interest in the environment image; the objects of interest are physical objects that affect driving, such as vehicles, pedestrians, lane lines, and traffic lights. To identify the objects of interest, the environment image can be input into a trained object recognition model.
It can be understood that after recognition, the objects of interest in the environment image are marked, and the continuous region formed by all the marked positions is taken as the attention area.
Step S30: Determine the driver's sight area according to the driver's facial image.
It should be noted that the driver's sight area refers to the area the driver's eyes attend to while driving. To determine the driver's sight area more accurately from the driver's facial image, step S30 includes: segmenting the driver's facial image into a plurality of facial candidate areas; determining the gray value of each facial candidate area; taking the facial candidate areas whose gray values exceed a gray value threshold as pupil candidate areas; determining a pupil center feature according to the pupil candidate areas; and determining the driver's sight area according to the pupil center feature.
First, the driver's facial image is threshold-segmented into a plurality of facial candidate areas. The threshold segmentation method may be one of the Otsu method, adaptive thresholding, maximum-entropy thresholding, or iterative thresholding. Otsu's method (the maximum between-class variance method) uses the idea of clustering: it divides the image's gray levels into two parts such that the gray value difference between the two parts is maximal and the gray difference within each part is minimal, using the between-class variance to find a suitable gray level at which to split; the Otsu algorithm can therefore be used to select the binarization threshold automatically. The iterative threshold method first guesses an initial threshold and then refines it through repeated computation on the image: the image is repeatedly thresholded and divided into several classes, and the gray levels within each class are used to improve the threshold.
After the driver's facial image is threshold-segmented into facial candidate areas, the gray value of each facial candidate area is calculated. In this embodiment, analysis of the grayscale histograms of a large number of driver facial images shows that the gray value of the driver's pupil area is generally stable above 220 and occupies a small proportion of the facial image, lying near the right side of the histogram's cumulative distribution function. Therefore, 220 is selected from the cumulative distribution of the gray histogram as the gray value threshold, and the facial candidate areas whose gray values exceed this threshold are taken as the pupil candidate areas.
It can be understood that the pupil center is the position with the minimum cost among all position points in the pupil candidate area, so the relationship between the pupil center position and all position points in the area is:
c = argmin_c Σ_{i=1}^{N} cost(x_i, c)   (Formula 1)
In Formula 1, x_i represents a pixel position in the pupil region, i ∈ {1, 2, ..., N}, and c is the pupil center position. The pupil center feature at the pupil center position c is obtained through the minimum cost function.
Further, determining the driver's sight area according to the pupil center feature includes: determining a pupil center feature vector and a gaze direction vector according to the pupil center feature; determining the target mapping relationship between the pupil center feature vector and the gaze direction vector; and determining the driver's sight area from the target mapping relationship and the pupil center feature vector.
It should be noted that the feature regression from the driver's pupil center features to the gaze angle can be viewed as establishing a target mapping relationship between the image feature space and the gaze direction space; the target mapping relationship therefore needs to be determined. Let X = [x_1, x_2, ..., x_n] be the pupil center feature vectors and Y = [y_1, y_2, ..., y_n] the gaze direction vectors.
Further, determining the target mapping relationship between the pupil center feature vector and the gaze direction vector includes: establishing a target loss function of the pupil center feature vector and the gaze direction vector; differentiating the target loss function to obtain a first-order derivative function; and determining the target mapping relationship according to the first-order derivative function and a preset value. The feature regression method uses linear regression to learn the best mapping from X to Y (i.e., the target mapping relationship), where β′ is obtained using the minimum loss function; the minimum loss function (i.e., the target loss function) is:
E(β) = ‖Y − βᵀX‖² + λ‖β‖²   (Formula 2)
In Formula 2, E(β) is the target loss function, λ is the regularization parameter, and β is an intermediate variable in the process of computing the target mapping relationship.
After the target loss function is differentiated to obtain the first-order derivative function, the first-order derivative function is set equal to the preset value, which is 0, giving:
β′ = (XXᵀ + λI)⁻¹XYᵀ   (Formula 3)
In Formula 3, β′ is the target mapping relationship.
From the pupil center feature vector and the target mapping relationship, the driver's sight area can be obtained:
Y = Xβ′   (Formula 4)
In Formula 4, X is the pupil center feature vector and Y is the driver's sight area.
Step S40: Determine the target driving assistance mode according to the attention area and the driver's sight area.
In a specific implementation, the attention area is compared with the driver's sight area to obtain the relationship between the two; under different relationships, the vehicle hard-switches to different driving assistance modes.
Further, step S40 includes: if the attention area equals the driver's sight area, taking the first driving assistance mode as the target driving assistance mode; if the attention area lies within the driver's sight area, taking the second driving assistance mode as the target driving assistance mode; and if the attention area does not lie within the driver's sight area, taking the third driving assistance mode as the target driving assistance mode.
It can be understood that the driving assistance modes are divided into a first, a second, and a third driving assistance mode, corresponding to Level I, Level II, and Level III respectively. Level I corresponds to the slow mode, in which the driving assistance thresholds are lowered: braking distances for functions such as emergency braking and adaptive cruise are adjusted to the minimum, i.e., the lower boundary of the safety distance. Level II corresponds to the normal mode, in which the driving assistance thresholds are set to medium: braking distances for emergency braking, adaptive cruise, and the like are adjusted to medium. Level III corresponds to the emergency mode, in which the driving assistance thresholds are raised: braking distances for emergency braking, adaptive cruise, and the like are adjusted to the maximum, i.e., the upper boundary of the safety distance. The mode switching logic is as follows:
Mode = I if Y = r; Mode = II if Y ∩ r ≠ ∅ and Y ≠ r; Mode = III if Y ∩ r = ∅   (Formula 5)
In Formula 5, Mode is the target driving assistance mode, I is the first driving assistance mode, II is the second driving assistance mode, III is the third driving assistance mode, Y is the driver's sight area, r is the attention area, and "if" denotes conditional selection; for example, when Y = r, the target driving assistance mode is the first driving assistance mode.
It can be understood that when the driver's sight area overlaps the attention area, that is, within a short time the driver's gaze stays on the attention area, the current driver's state is considered excellent and the driver is attending to all targets that require attention, so the current driving assistance mode is adjusted to the slow mode. When the driver's sight area does not completely overlap the attention area but partially intersects it, the current driver's state is considered good and the driver is attending to the targets that require attention, so the current driving assistance mode is adjusted to the normal mode. When there is no intersection between the driver's sight area and the attention area, that is, the driver's gaze is not on the attention area at all, the current driver's state is considered poor and the driver is not attending to the targets that require attention, so the current driving assistance mode is adjusted to the emergency mode.
Step S50: Switch the current driving assistance mode to the target driving assistance mode.
It should be noted that if the current driving assistance mode is not the target driving assistance mode, the current driving assistance mode is switched to the target driving assistance mode; if the current driving assistance mode is already the target driving assistance mode, no switching is required.
In this embodiment, an environment image around the vehicle and the driver's facial image are acquired; an attention area is determined according to the environment image; the driver's sight area is determined according to the driver's facial image; a target driving assistance mode is determined according to the attention area and the driver's sight area; and the current driving assistance mode is switched to the target driving assistance mode. In this way, the driver's sight area is determined from the driver's facial image, the area requiring attention is derived from the current environment around the vehicle, and whether the driving assistance mode needs to be switched is judged from the overlap between the sight area and the attention area.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a second embodiment of the driving assistance mode switching method of the present application.
Based on the first embodiment above, in this embodiment step S20 of the driving assistance mode switching method includes:
Step S21: Determine an initial global saliency threshold and an initial search radius according to the environment image.
In a specific implementation, the maximum gray value of the environment image is first calculated and used as the initial global saliency threshold. The initial search radius determines the search range of the first search in the environment image; for example, when the initial search radius is 100 pixels, the search range is a circle with a radius of 100 pixels.
Step S22: Search the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result.
Further, step S22 includes: determining a search area in the environment image according to the initial search radius; comparing the pixel value of each pixel in the search area with the initial global saliency threshold to obtain a comparison value; when the comparison value is within a preset threshold interval, reducing the initial search radius by a preset reduction value and searching the environment image according to the reduced initial search radius; and when the comparison value equals the preset threshold, generating a search result according to the initial search radius corresponding to the comparison value.
In this embodiment, the initial search radius is set to 1/2 of the side length of the environment image, and the search area is searched in the environment image according to the initial search radius and the initial global saliency threshold:
k(r, T) = Num{P(x, y) ∈ R(r) | P(x, y) > T} / Num{P(x, y) ∈ R(r)}   (Formula 6)
In Formula 6, Num() counts the number of pixels concerned, R(r) represents the search area with search radius r, P(x, y) represents a pixel, and k(r, T) represents the proportion of pixels in the search area whose values are above the global saliency threshold T, i.e., the comparison value; k(r, T) ranges from 0 to 1. When k(r, T) = 1, all pixel values in the search area are above the global saliency threshold. When 0 < k(r, T) < 1, not all pixels in the search area have gray values above the global saliency threshold, so the search area contains a certain proportion of non-salient pixels, and the filtered area is not the required search area. In that case, the initial search radius is reduced by the preset reduction value, and the environment image is searched according to the reduced initial search radius, until k(r, T) approaches 1 arbitrarily closely; at that point, the area is considered the part the driver should pay attention to.
Step S23: Determine the attention area according to the search result.
It should be noted that the search result includes the final search radius, and the area within that radius is the attention area.
In this embodiment, an initial global saliency threshold and an initial search radius are determined according to the environment image; the environment image is searched according to the initial global saliency threshold and the initial search radius to obtain a search result; and the attention area is determined according to the search result. In this way, the environment image is searched according to the threshold to obtain the area the driver should pay attention to, so that the part requiring attention can be extracted from the environment.
In addition, an embodiment of the present application further proposes a storage medium on which a driving assistance mode switching program is stored; when executed by a processor, the driving assistance mode switching program implements the steps of the driving assistance mode switching method described above.
Since this storage medium adopts all the technical solutions of all the above embodiments, it has at least all the functions brought by those technical solutions, which are not repeated here.
Referring to FIG. 4, FIG. 4 is a structural block diagram of a first embodiment of the driving assistance mode switching apparatus of the present application.
As shown in FIG. 4, the driving assistance mode switching apparatus proposed by the embodiment of the present application includes:
a facial acquisition module 10 configured to acquire an environment image around the vehicle and the driver's facial image;
an area determination module 20 configured to determine an attention area according to the environment image;
a sight determination module 30 configured to determine the driver's sight area according to the driver's facial image;
a mode determination module 40 configured to determine a target driving assistance mode according to the attention area and the driver's sight area;
a mode switching module 50 configured to switch the current driving assistance mode to the target driving assistance mode.
In an embodiment, the area determination module 20 is further configured to determine an initial global saliency threshold and an initial search radius according to the environment image; search the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result; and determine the attention area according to the search result.
In an embodiment, the area determination module 20 is further configured to determine a search area in the environment image according to the initial search radius; compare the pixel value of each pixel in the search area with the initial global saliency threshold to obtain a comparison value; when the comparison value is within a preset threshold interval, reduce the initial search radius by a preset reduction value and search the environment image according to the reduced initial search radius; and when the comparison value equals the preset threshold, generate a search result according to the initial search radius corresponding to the comparison value.
In an embodiment, the sight determination module 30 is further configured to segment the driver's facial image into a plurality of facial candidate areas; determine the gray value of each facial candidate area; take the facial candidate areas whose gray values are greater than the gray value threshold as pupil candidate areas; determine the pupil center feature according to the pupil candidate areas; and determine the driver's sight area according to the pupil center feature.
In an embodiment, the sight determination module 30 is further configured to determine a pupil center feature vector and a gaze direction vector according to the pupil center feature; determine the target mapping relationship between the pupil center feature vector and the gaze direction vector; and determine the driver's sight area from the target mapping relationship and the pupil center feature vector.
In an embodiment, the sight determination module 30 is further configured to establish a target loss function of the pupil center feature vector and the gaze direction vector; differentiate the target loss function to obtain a first-order derivative function; and determine the target mapping relationship between the pupil center feature vector and the gaze direction vector according to the first-order derivative function and a preset value.
In an embodiment, the mode determination module 40 is further configured to take the first driving assistance mode as the target driving assistance mode if the attention area equals the driver's sight area; take the second driving assistance mode as the target driving assistance mode if the attention area lies within the driver's sight area; and take the third driving assistance mode as the target driving assistance mode if the attention area does not lie within the driver's sight area.
It should be understood that the above is only illustrative and does not limit the technical solution of the present application in any way; in specific applications, those skilled in the art may configure it as needed, and the present application places no restriction on this.
In this embodiment, an environment image around the vehicle and the driver's facial image are acquired; an attention area is determined according to the environment image; the driver's sight area is determined according to the driver's facial image; a target driving assistance mode is determined according to the attention area and the driver's sight area; and the current driving assistance mode is switched to the target driving assistance mode. In this way, the driver's sight area is determined from the driver's facial image, the area requiring attention is derived from the current environment around the vehicle, and whether the driving assistance mode needs to be switched is judged from the overlap between the sight area and the attention area.
It should be noted that the workflow described above is only illustrative and does not limit the protection scope of the present application; in practical applications, those skilled in the art may select part or all of it according to actual needs to achieve the purpose of this embodiment, without restriction here.
In addition, for technical details not exhaustively described in this embodiment, reference may be made to the driving assistance mode switching method provided by any embodiment of the present application, which is not repeated here.
Furthermore, it should be noted that, herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that includes the element.
The above serial numbers of the embodiments of the present application are for description only and do not represent the merits of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored on a storage medium (such as a read-only memory (ROM)/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present application.
The above are only optional embodiments of the present application and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (10)

  1. A driving assistance mode switching method, wherein the driving assistance mode switching method comprises:
    acquiring an environment image around the vehicle and the driver's facial image;
    determining an attention area according to the environment image;
    determining the driver's sight area according to the driver's facial image;
    determining a target driving assistance mode according to the attention area and the driver's sight area;
    switching the current driving assistance mode to the target driving assistance mode.
  2. The method according to claim 1, wherein determining the attention area according to the environment image comprises:
    determining an initial global saliency threshold and an initial search radius according to the environment image;
    searching the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result;
    determining the attention area according to the search result.
  3. The method according to claim 2, wherein searching the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result comprises:
    determining a search area in the environment image according to the initial search radius;
    comparing the pixel value of each pixel in the search area with the initial global saliency threshold to obtain a comparison value;
    when the comparison value is within a preset threshold interval, reducing the initial search radius by a preset reduction value, and searching the environment image according to the reduced initial search radius;
    when the comparison value equals the preset threshold, generating a search result according to the initial search radius corresponding to the comparison value.
  4. The method according to claim 1, wherein determining the driver's sight area according to the driver's facial image comprises:
    segmenting the driver's facial image into a plurality of facial candidate areas;
    determining the gray value of each facial candidate area;
    taking the facial candidate areas whose gray values are greater than a gray value threshold as pupil candidate areas;
    determining a pupil center feature according to the pupil candidate areas;
    determining the driver's sight area according to the pupil center feature.
  5. The method according to claim 4, wherein determining the driver's sight area according to the pupil center feature comprises:
    determining a pupil center feature vector and a gaze direction vector according to the pupil center feature;
    determining a target mapping relationship between the pupil center feature vector and the gaze direction vector;
    determining the driver's sight area according to the target mapping relationship and the pupil center feature vector.
  6. The method according to claim 5, wherein determining the target mapping relationship between the pupil center feature vector and the gaze direction vector comprises:
    establishing a target loss function of the pupil center feature vector and the gaze direction vector;
    differentiating the target loss function to obtain a first-order derivative function;
    determining the target mapping relationship between the pupil center feature vector and the gaze direction vector according to the first-order derivative function and a preset value.
  7. The method according to any one of claims 1 to 6, wherein determining the target driving assistance mode according to the attention area and the driver's sight area comprises:
    if the attention area equals the driver's sight area, taking a first driving assistance mode as the target driving assistance mode;
    if the attention area lies within the driver's sight area, taking a second driving assistance mode as the target driving assistance mode;
    if the attention area does not lie within the driver's sight area, taking a third driving assistance mode as the target driving assistance mode.
  8. A driving assistance mode switching apparatus, wherein the driving assistance mode switching apparatus comprises:
    a facial acquisition module configured to acquire an environment image around the vehicle and the driver's facial image;
    an area determination module configured to determine an attention area according to the environment image;
    a sight determination module configured to determine the driver's sight area according to the driver's facial image;
    a mode determination module configured to determine a target driving assistance mode according to the attention area and the driver's sight area;
    a mode switching module configured to switch the current driving assistance mode to the target driving assistance mode.
  9. A driving assistance mode switching device, wherein the device comprises: a memory, a processor, and a driving assistance mode switching program stored in the memory and executable on the processor, the driving assistance mode switching program being configured to implement the driving assistance mode switching method according to any one of claims 1 to 7.
  10. A storage medium, wherein a driving assistance mode switching program is stored on the storage medium, and when the driving assistance mode switching program is executed by a processor, the driving assistance mode switching method according to any one of claims 1 to 7 is implemented.
PCT/CN2022/080961 2021-10-26 2022-03-15 Driving assistance mode switching method, apparatus, device, and storage medium WO2023071024A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111251279.7A CN114162130B (zh) 2021-10-26 Driving assistance mode switching method, apparatus, device, and storage medium
CN202111251279.7 2021-10-26

Publications (1)

Publication Number Publication Date
WO2023071024A1 true WO2023071024A1 (zh) 2023-05-04

Family

ID=80477386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080961 WO2023071024A1 (zh) 2021-10-26 2022-03-15 Driving assistance mode switching method, apparatus, device, and storage medium

Country Status (2)

Country Link
CN (1) CN114162130B (zh)
WO (1) WO2023071024A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114162130B (zh) Driving assistance mode switching method, apparatus, device, and storage medium
CN115909254B (zh) DMS system based on raw camera images and image processing method therefor
CN117197786B (zh) Driving behavior detection method, control apparatus, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006172215A (ja) Driving support system
CN103770733A (zh) Driver safe driving state detection method and device
US20190126821A1 (en) Driving notification method and driving notification system
CN111169483A (zh) Driving assistance method, electronic device, and apparatus with storage function
US20210070359A1 (en) Driver assistance apparatus and method thereof
CN114162130A (zh) Driving assistance mode switching method, apparatus, device, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6778872B2 (ja) Driving support device and driving support method
WO2019028798A1 (zh) Driving state monitoring method and apparatus, and electronic device
CN109492514A (zh) Method and system for capturing the gaze direction of human eyes with a single camera
CN109664891A (zh) Driving assistance method, apparatus, device, and storage medium
CN111580522A (zh) Control method for a driverless vehicle, vehicle, and storage medium
CN111931579B (zh) Automated driving assistance system and method using eye tracking and gesture recognition
CN113378771B (zh) Driver state determination method and apparatus, driver monitoring system, and vehicle

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006172215A (ja) Driving support system
CN103770733A (zh) Driver safe driving state detection method and device
US20190126821A1 (en) Driving notification method and driving notification system
CN111169483A (zh) Driving assistance method, electronic device, and apparatus with storage function
US20210070359A1 (en) Driver assistance apparatus and method thereof
CN114162130A (zh) Driving assistance mode switching method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN114162130B (zh) 2023-06-20
CN114162130A (zh) 2022-03-11

Similar Documents

Publication Publication Date Title
WO2023071024A1 (zh) Driving assistance mode switching method, apparatus, device, and storage medium
EP3539054B1 (en) Neural network image processing apparatus
WO2021098796A1 (zh) Image processing method, apparatus and device, and computer-readable storage medium
CN111178245B (zh) Lane line detection method and apparatus, computer device, and storage medium
Arróspide et al. Image-based on-road vehicle detection using cost-effective histograms of oriented gradients
WO2019114036A1 (zh) Face detection method and apparatus, computer apparatus, and computer-readable storage medium
US9384401B2 (en) Method for fog detection
CN108629292B (zh) Curved lane line detection method, apparatus, and terminal
Kühnl et al. Monocular road segmentation using slow feature analysis
WO2021016873A1 (zh) Attention detection method based on cascaded neural networks, computer apparatus, and computer-readable storage medium
JP2018508078A (ja) System and method for object tracking
JP5223675B2 (ja) Vehicle detection device, vehicle detection method, and vehicle detection program
CN110789517A (zh) Lateral control method, apparatus, and device for automatic driving, and storage medium
WO2021184718A1 (zh) Card border recognition method, apparatus, device, and computer storage medium
WO2013116598A1 (en) Low-cost lane marker detection
CN114930402A (zh) Point cloud normal vector calculation method and apparatus, computer device, and storage medium
CN113159198A (zh) Target detection method, apparatus, device, and storage medium
CN108090425B (zh) Lane line detection method, apparatus, and terminal
CN113297939A (zh) Obstacle detection method and system, terminal device, and storage medium
CN112529011A (zh) Target detection method and related apparatus
JP2018124963A (ja) Image processing device, image recognition device, image processing program, and image recognition program
CN109543610B (zh) Vehicle detection and tracking method, apparatus, device, and storage medium
CN116486351A (zh) Driving early-warning method, apparatus, device, and storage medium
US11314968B2 (en) Information processing apparatus, control method, and program
WO2018143278A1 (ja) Image processing device, image recognition device, image processing program, and image recognition program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22884941

Country of ref document: EP

Kind code of ref document: A1