CN114162130A - Driving assistance mode switching method, device, equipment and storage medium - Google Patents

Driving assistance mode switching method, device, equipment and storage medium

Info

Publication number
CN114162130A
CN114162130A (application CN202111251279.7A)
Authority
CN
China
Prior art keywords
driving assistance
driver
determining
assistance mode
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111251279.7A
Other languages
Chinese (zh)
Other versions
CN114162130B (en)
Inventor
罗文�
常健
覃毅哲
赵芸
覃远航
李帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongfeng Liuzhou Motor Co Ltd
Original Assignee
Dongfeng Liuzhou Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongfeng Liuzhou Motor Co Ltd filed Critical Dongfeng Liuzhou Motor Co Ltd
Priority to CN202111251279.7A priority Critical patent/CN114162130B/en
Publication of CN114162130A publication Critical patent/CN114162130A/en
Priority to PCT/CN2022/080961 priority patent/WO2023071024A1/en
Application granted granted Critical
Publication of CN114162130B publication Critical patent/CN114162130B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0098Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0818Inactivity or incapacity of driver
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/225Direction of gaze
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/80Technologies aiming to reduce greenhouse gasses emissions common to all road transportation technologies
    • Y02T10/84Data processing systems or methods, management, administration

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention belongs to the technical field of vehicle control and discloses a driving assistance mode switching method, device, equipment and storage medium. The method comprises the following steps: acquiring an environment image around a vehicle and an image of the driver's face; determining a region to be focused on according to the environment image; determining the driver's sight-line region according to the face image; determining a target driving assistance mode according to the region to be focused on and the sight-line region; and switching the current driving assistance mode to the target driving assistance mode. In this manner, the driver's sight-line region is determined from the face image, the region requiring attention is derived from the environment around the vehicle, and whether the driving assistance mode needs to be switched is judged from the overlap between the two regions.

Description

Driving assistance mode switching method, device, equipment and storage medium
Technical Field
The present invention relates to the field of vehicle control technologies, and in particular, to a driving assistance mode switching method, device, apparatus, and storage medium.
Background
Existing fatigue monitoring methods detect the driver state through indicators such as whether the driver is on the phone, whether the driver is smoking, or the number of blinks within a time period. These indicators are not reliable: some drivers habitually smoke or naturally blink at a high rate, so such methods misjudge their state. They also miss the case where the driver looks normal but is inattentive. For example, at an intersection the driver should check both sides of the zebra crossing for pedestrians, yet keeps staring straight ahead; in this situation the system should recognize that the driver is in a fatigued, inattentive state.
Moreover, at the present stage the driving assistance mode is set actively by the driver, even though it is strongly related to the driver's state and to the environment around the vehicle. When the driver's state is poor and the vehicle is in a high-risk environment without the driver being aware of it, the driving assistance mode should be adaptively raised to the highest level to protect the driver and the vehicle to the maximum extent.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a driving assistance mode switching method, so as to solve the technical problem in the prior art of accurately judging the driver's state and switching the driving assistance mode accordingly.
To achieve the above object, the present invention provides a driving assistance mode switching method, including the steps of:
acquiring an environment image around a vehicle and a driver face image;
determining a region to be focused according to the environment image;
determining a driver sight line area according to the driver face image;
determining a target driving assistance mode according to the attention area and the driver sight area;
the current driving assistance mode is switched to the target driving assistance mode.
Optionally, the determining a region to be focused according to the environment image includes:
determining an initial global significance threshold value and an initial search radius according to the environment image;
searching the environment image according to the initial global significance threshold and the initial search radius to obtain a search result;
and determining the region to be focused according to the search result.
Optionally, searching the environment image according to the initial global significance threshold and the initial search radius to obtain a search result, including:
determining a search area in the environment image according to the initial search radius;
comparing the pixel value of each pixel point in the search area with the initial global significance threshold value to obtain a comparison value;
when the comparison value is in a preset threshold interval, reducing the initial search radius according to a preset reduction value, and searching the environment image according to the reduced initial search radius;
and when the comparison value is equal to a preset threshold value, generating a search result according to the initial search radius corresponding to the comparison value.
Optionally, determining a driver's sight line region from the driver face image includes:
segmenting the driver face image into a plurality of face candidate regions;
determining a gray value of each face candidate region;
taking the face candidate area corresponding to the gray value larger than the gray value threshold value as a pupil candidate area;
determining pupil center characteristics according to the pupil candidate area;
and determining the sight line area of the driver according to the pupil center characteristics.
Optionally, the determining the driver sight line area according to the pupil center feature includes:
determining a pupil center feature vector and a gaze direction vector according to the pupil center feature;
determining a target mapping relation between the pupil center characteristic vector and the gazing direction vector;
and determining a driver sight line area by the target mapping relation and the pupil center characteristic vector.
Optionally, the determining a target mapping relationship between the pupil center feature vector and the gaze direction vector includes:
establishing a target loss function of the pupil center characteristic vector and the gaze direction vector;
obtaining a first derivative function by derivation of the target loss function;
and determining the target mapping relation of the pupil center characteristic vector and the gazing direction vector according to the first derivative function and a preset value.
Optionally, the determining a target driving assistance mode according to the attention area and the driver sight line area includes:
if the attention area is equal to the driver sight line area, taking a first driving assistance mode as a target driving assistance mode;
if the attention area belongs to the driver sight line area, taking a second driving assistance mode as a target driving assistance mode;
and if the attention area does not belong to the driver sight line area, taking the third driving assistance mode as a target driving assistance mode.
Further, in order to achieve the above object, the present invention also proposes a driving assistance mode switching device including:
the face acquisition module is used for acquiring an environment image around the vehicle and a face image of a driver;
the area determining module is used for determining an area to be focused according to the environment image;
the sight line determining module is used for determining a sight line area of the driver according to the facial image of the driver;
the mode determining module is used for determining a target driving assistance mode according to the attention area and the driver sight area;
and the mode switching module is used for switching the current driving assistance mode to the target driving assistance mode.
Further, to achieve the above object, the present invention also proposes a driving assistance mode switching apparatus including: a memory, a processor and a driving assistance mode switching program stored on the memory and executable on the processor, the driving assistance mode switching program being configured to implement the steps of the driving assistance mode switching method as described above.
Further, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon a driving assistance mode switching program that, when executed by a processor, implements the steps of the driving assistance mode switching method as described above.
The method comprises the steps of obtaining an environment image around a vehicle and an image of the driver's face; determining a region to be focused on according to the environment image; determining the driver's sight-line region according to the face image; determining a target driving assistance mode according to the region to be focused on and the sight-line region; and switching the current driving assistance mode to the target driving assistance mode. In this manner, the driver's sight-line region is determined from the face image, the region requiring attention is derived from the environment around the vehicle, and whether the driving assistance mode needs to be switched is judged from the overlap between the two regions.
Drawings
Fig. 1 is a schematic structural diagram of a driving assistance mode switching apparatus in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first exemplary embodiment of a driving assistance mode switching method according to the present invention;
FIG. 3 is a flowchart illustrating a driving assistance mode switching method according to a second embodiment of the present invention;
fig. 4 is a block diagram showing the configuration of the driving assistance mode switching apparatus according to the first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a driving assistance mode switching device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the driving assistance mode switching apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM) such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the driving assistance mode switching apparatus, which may include more or fewer components than those shown, combine some components, or arrange the components differently.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a driving assistance mode switching program.
In the driving assistance mode switching apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The processor 1001 calls the driving assistance mode switching program stored in the memory 1005 and executes the driving assistance mode switching method provided by the embodiment of the present invention.
An embodiment of the present invention provides a driving assistance mode switching method, and referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of the driving assistance mode switching method according to the present invention.
In this embodiment, the driving assistance mode switching method includes the steps of:
step S10: an environmental image around the vehicle and an image of the face of the driver are acquired.
It should be noted that the execution subject of the present embodiment is a vehicle-mounted terminal, which performs analysis and calculation on data collected by the vehicle's sensors to implement the corresponding functions. In this embodiment, a first camera is arranged above the cab and captures the driver's face image in real time. A second camera, which may be a wide-angle camera, is mounted at the front exterior of the vehicle and captures the environment around the vehicle that the driver can observe from the cab.
It should be noted that after the first and second cameras are installed, they need to be calibrated so that the image content captured by both can be converted into the same world coordinate system. During calibration, the positions of the two cameras are first expressed in a common world coordinate system, and the transformation from each camera's image content into that world coordinate system is calculated separately.
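The calibration described above amounts to expressing both cameras' measurements in one world frame. A minimal NumPy sketch, assuming known extrinsics (the rotation and translation values below are purely illustrative, not from the patent):

```python
import numpy as np

def make_extrinsics(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_to_world(T_world_cam, p_cam):
    """Map a 3-D point from a camera's coordinates into the shared world frame."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coordinates
    return (T_world_cam @ p)[:3]

# Illustrative extrinsics: no rotation, camera mounted 1.2 m above the world origin.
T_world_cam = make_extrinsics(np.eye(3), np.array([0.0, 0.0, 1.2]))
p_world = camera_to_world(T_world_cam, [0.5, 0.0, 2.0])
```

With one such transform per camera, points seen by the interior and exterior cameras can be compared in the same coordinate system.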
Step S20: and determining a region to be focused according to the environment image.
It should be noted that, after the vehicle terminal acquires the environment image around the vehicle according to the second camera, the object to be paid attention to in the environment image is firstly identified, and the object to be paid attention may be a real object that may affect driving, such as a vehicle, a pedestrian, a lane line, a traffic signal lamp, and the like. When the object to be focused is recognized, the environment image may be input to the trained object recognition model for recognition.
It is understood that, after recognition, the positions of the objects to be focused on are marked in the environment image, and the contiguous area formed by all the marked positions is taken as the region to be focused on.
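Merging the marked positions into one region can be sketched as follows; representing each recognised object by a bounding box and taking the union bounding box is an illustrative simplification of "the continuous area formed by all marked positions":

```python
def attention_region(boxes):
    """Merge the bounding boxes of recognised objects (vehicles, pedestrians,
    lane lines, traffic lights) into a single region to be focused on.
    Each box is (x1, y1, x2, y2); the union bounding box is returned."""
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes)
    y2 = max(b[3] for b in boxes)
    return (x1, y1, x2, y2)

# Two overlapping detections merged into one attention region.
region = attention_region([(0, 0, 10, 10), (5, 5, 20, 15)])
```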
Step S30: and determining a driver sight line area according to the driver face image.
The driver's sight-line region is the region on which the driver's eyes focus while driving. In order to determine it more accurately from the driver's face image, step S30 includes: segmenting the driver face image into a plurality of face candidate regions; determining the gray value of each face candidate region; taking the face candidate regions whose gray value exceeds a gray-value threshold as pupil candidate regions; determining the pupil center feature from the pupil candidate regions; and determining the driver's sight-line region from the pupil center feature.
First, the driver face image is threshold-segmented into a plurality of face candidate regions; the threshold segmentation method may be one of Otsu thresholding, adaptive thresholding, maximum-entropy thresholding, or iterative thresholding. Otsu's method (also called the maximum between-class variance method) uses a clustering idea: it divides the image's gray levels into two classes so that the gray difference between the classes is largest and the difference within each class is smallest, finding the appropriate dividing gray level through a variance calculation. The Otsu algorithm can therefore select the binarization threshold automatically. Iterative thresholding first guesses an initial threshold and then refines it through repeated passes over the image: the image is repeatedly thresholded and segmented into classes, and the threshold is refined using the gray levels within each class.
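Otsu's between-class variance criterion, one of the segmentation options named above, can be sketched in plain NumPy (a textbook implementation, not the patent's own code):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the gray level that maximises between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2               # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal test image: half dark (50), half bright (200).
img = np.concatenate([np.full(100, 50), np.full(100, 200)]).astype(np.uint8)
t = otsu_threshold(img)
```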
After threshold-segmenting the driver face image into a plurality of face candidate regions, the gray value of each face candidate region is calculated. In this embodiment, analysis of a large number of driver-face gray histograms shows that the gray value of the pupil region is generally stable above 220, occupies a small proportion of the face image, and lies near the right tail of the histogram's cumulative distribution function. Therefore 220 is chosen as the gray-value threshold in the cumulative distribution, and the face candidate regions above this threshold are selected as pupil candidate regions.
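The selection step can be sketched as follows; using the mean gray value of each candidate region as its "gray value" is an assumption, since the patent does not specify how the per-region value is computed:

```python
import numpy as np

def pupil_candidate_regions(regions, gray_threshold=220):
    """Keep the face candidate regions whose mean gray value exceeds the
    empirical pupil threshold of 220 cited in the text. Each region is a
    2-D array of gray levels."""
    return [r for r in regions if r.mean() > gray_threshold]

bright = np.full((4, 4), 240, dtype=np.uint8)  # pupil-like candidate
dark = np.full((4, 4), 90, dtype=np.uint8)     # skin-like candidate
candidates = pupil_candidate_regions([bright, dark])
```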
It can be understood that the pupil center is the position with the minimum cost of all the position points in the pupil candidate region, so the relationship between the pupil center position and all the position points in the region is:
c = argmin_c Σ_{i=1}^{N} ||x_i − c||²   (formula 1)

In formula 1, x_i denotes a pupil position point, i ∈ {1, 2, ..., N}, and c is the pupil center position. The pupil center feature at the pupil center position c is obtained by minimising this cost function.
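For a squared-Euclidean cost (an assumption, since the patent does not spell the cost function out), the minimising point is simply the centroid of the candidate-region points:

```python
import numpy as np

def pupil_center(points):
    """Pupil center as the point c minimising the total cost over all position
    points x_i in the pupil candidate region; with a squared-Euclidean cost
    the minimiser is the centroid of the points."""
    return np.asarray(points, dtype=float).mean(axis=0)

# Four pixels forming a small square; their centroid is the center.
c = pupil_center([[10, 10], [12, 10], [10, 12], [12, 12]])
```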
Further, the determining the driver sight line area according to the pupil center feature includes: determining a pupil center feature vector and a gaze direction vector according to the pupil center feature; determining a target mapping relation between the pupil center characteristic vector and the gazing direction vector; and determining a driver sight line area by the target mapping relation and the pupil center characteristic vector.
It should be noted that the feature regression from the driver's pupil center feature to the gaze angle can be viewed as establishing a target mapping relationship between the image feature space and the gaze direction space, so this mapping must be determined. Let X = [x_1, x_2, ..., x_n] be the pupil center feature vector and Y = [y_1, y_2, ..., y_n] be the gaze direction vector.
Further, the determining the target mapping relationship between the pupil center feature vector and the gaze direction vector includes: establishing a target loss function of the pupil center feature vector and the gaze direction vector; differentiating the target loss function to obtain a first derivative function; and determining the target mapping relationship of the two vectors according to the first derivative function and a preset value. The feature regression method uses linear regression learning to obtain the best mapping β' from X to Y, i.e. the mapping that minimises the target loss function:

E(β) = ||Y − β^T X||² + λ||β||²   (formula 2)

In formula 2, E(β) is the target loss function, λ is the regularization parameter, and β is an intermediate variable in the process of calculating the target mapping relationship.
After differentiating the target loss function to obtain the first derivative function, the first derivative function is set equal to a preset value, here 0, which yields:

β' = (X X^T + λI)^{-1} X Y^T   (formula 3)

In formula 3, β' is the target mapping relationship.
The driver's sight-line region is then obtained from the pupil center feature vector and the target mapping relationship:

Y = β'^T X   (formula 4)

In formula 4, X is the pupil center feature vector and Y is the driver's sight-line region.
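Formulas 2 through 4 are the closed form of ridge regression. A NumPy sketch, assuming X holds one pupil-center feature per column (d × n) and Y the matching gaze directions (m × n); this column layout is an assumption about the patent's notation:

```python
import numpy as np

def fit_gaze_mapping(X, Y, lam=1e-2):
    """Formula 3: beta' = (X X^T + lam * I)^(-1) X Y^T."""
    d = X.shape[0]
    return np.linalg.solve(X @ X.T + lam * np.eye(d), X @ Y.T)

def predict_gaze(beta, X):
    """Formula 4: map pupil-center features through the learned relationship."""
    return beta.T @ X

# Synthetic check: recover a known linear mapping from noiseless data.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50))
true_beta = rng.normal(size=(3, 2))
Y = true_beta.T @ X
beta = fit_gaze_mapping(X, Y, lam=1e-8)
Y_hat = predict_gaze(beta, X)
```

With a tiny λ and noiseless data the fit recovers the mapping almost exactly; in practice λ trades fit quality against stability.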
Step S40: and determining a target driving assistance mode according to the attention area and the driver sight area.
In a specific implementation, the attention area and the driver sight line area are compared, so that the relationship between the attention area and the driver sight line area can be obtained, and when the attention area and the driver sight line area are in different relationships, the vehicle is switched to different driving assistance modes.
Further, step S40 includes: if the attention area is equal to the driver sight line area, taking a first driving assistance mode as a target driving assistance mode; if the attention area belongs to the driver sight line area, taking a second driving assistance mode as a target driving assistance mode; and if the attention area does not belong to the driver sight line area, taking the third driving assistance mode as a target driving assistance mode.
It is understood that the driving assistance modes are divided into a first, a second and a third driving assistance mode, corresponding to levels I, II and III respectively. Level I corresponds to a slow mode, in which the driving assistance thresholds, such as the braking distances for emergency braking and adaptive cruise, are adjusted to the minimum, i.e. the minimum boundary of the safety distance. Level II corresponds to a conventional mode, in which these thresholds are adjusted to a medium setting. Level III corresponds to an emergency mode, in which the thresholds are raised and the braking distances for emergency braking, adaptive cruise and the like are adjusted to the maximum, i.e. the maximum boundary of the safety distance. The mode switching logic is as follows:
Mode = I,   if Y = r
Mode = II,  if r ⊂ Y
Mode = III, if Y ∩ r = ∅   (formula 5)

In formula 5, Mode is the target driving assistance mode, I, II and III are the first, second and third driving assistance modes, Y is the driver's sight-line region, r is the region to be focused on, and "if" expresses the selection condition; for example, when Y equals r, the target driving assistance mode is the first driving assistance mode.
It can be understood that when the driver's sight-line region coincides with the region to be focused on, i.e. the driver has kept his sight on the region to be focused on within a short time, the current driver state is considered optimal, since the driver is paying attention to all targets requiring attention, and the current driving assistance mode is adjusted to the slow mode. When the two regions do not fully coincide but partially overlap, the current driver state is considered good, since the driver is paying attention to part of the targets requiring attention, and the current driving assistance mode is adjusted to the conventional mode. When the two regions have no intersection, i.e. the driver's sight is nowhere on the region to be focused on, the current driver state is considered poor, since the driver is not paying attention to the targets requiring attention, and the current driving assistance mode is adjusted to the emergency mode.
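The overlap logic of this paragraph can be sketched as follows; modelling each region as a set of grid cells is an illustrative choice, not the patent's representation:

```python
def select_mode(sight, attention):
    """Choose the target driving-assistance mode from the overlap between the
    driver's sight-line region and the region to be focused on."""
    if attention == sight:
        return "I"    # full overlap: slow mode
    if attention & sight:
        return "II"   # partial overlap: conventional mode
    return "III"      # no overlap: emergency mode
```

For example, a driver watching exactly the marked region gets mode I, a driver covering only part of it gets mode II, and a driver looking elsewhere entirely gets mode III.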
Step S50: the current driving assistance mode is switched to the target driving assistance mode.
It should be noted that, if the current driving assistance mode is not the target driving assistance mode, the current driving assistance mode is switched to the target driving assistance mode, and if the current driving assistance mode is the target driving assistance mode, the switching is not required.
This embodiment obtains the environment image around the vehicle and the driver's face image; determines the region to be focused on according to the environment image; determines the driver's sight-line region according to the face image; determines the target driving assistance mode according to the two regions; and switches the current driving assistance mode to the target driving assistance mode. In this manner, the driver's sight-line region is determined from the face image, the region requiring attention is derived from the current surroundings of the vehicle, and whether the driving assistance mode needs to be switched is judged from the overlap between the two regions.
Referring to fig. 3, fig. 3 is a flowchart illustrating a driving assistance mode switching method according to a second embodiment of the present invention.
Based on the first embodiment described above, the driving assistance mode switching method of the present embodiment includes, at the step S20:
step S21: an initial global saliency threshold and an initial search radius are determined from the environmental image.
In a specific implementation, the maximum grayscale value of the environment image is first calculated, and the maximum grayscale value is used as an initial global saliency threshold. The initial search radius determines the search range when searching for the first time in the environment image, for example: when the initial search radius is 100 pixels, the search range is a circle with a radius of 100 pixels.
Step S22: and searching the environment image according to the initial global significance threshold and the initial search radius to obtain a search result.
Further, step S22 includes: determining a search area in the environment image according to the initial search radius; comparing the pixel value of each pixel point in the search area with the initial global significance threshold value to obtain a comparison value; when the comparison value is in a preset threshold interval, reducing the initial search radius according to a preset reduction value, and searching the environment image according to the reduced initial search radius; and when the comparison value is equal to a preset threshold value, generating a search result according to the initial search radius corresponding to the comparison value.
In this embodiment, the initial search radius is set to 1/2 of the side length of the environment image, and a search area is searched for in the environment image according to the initial search radius and the initial global saliency threshold:
k(r, T) = Num{ P(x, y) ∈ R(r) : P(x, y) > T } / Num{ P(x, y) ∈ R(r) }    (formula 6)
in formula 6, Num () is used to calculate the number of pixels therein, r (r) represents a search area with a search radius r, P (x, y) represents a pixel point, k (r, T) represents a pixel proportion of a pixel value in the search area above a global significance threshold T, i.e., a comparison value, a value range of k (r, T) is 0 to 1, when k (r, T) is equal to 1, all pixel values in the search area are above the global significance threshold, when 0< k (r, T) <1, gray values of all pixels in the search area are not above the global significance threshold, at this time, an insignificant area including a certain proportion exists in the search area, i.e., the area at this time is not a search area required to be screened out. At this time, the initial search radius is reduced according to a preset reduction value, and the environment image is searched according to the reduced initial search radius until k (r, T) infinitely approaches 1, and the area is considered to be a part which the driver should pay attention to.
Step S23: and determining the region to be focused according to the search result.
It should be noted that the search result contains the final search radius, and the area within that radius is the region to be focused on.
This embodiment determines an initial global saliency threshold and an initial search radius according to the environment image; searches the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result; and determines the region to be focused on according to the search result. In this way, the area the driver should pay attention to is obtained by a threshold-based search of the environment image, so that the part requiring attention can be separated from the environment.
Furthermore, an embodiment of the present invention also proposes a storage medium having a driving assistance mode switching program stored thereon, which when executed by a processor implements the steps of the driving assistance mode switching method as described above.
Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.
Referring to fig. 4, fig. 4 is a block diagram showing the configuration of the driving assistance mode switching apparatus according to the first embodiment of the present invention.
As shown in fig. 4, a driving assistance mode switching apparatus according to an embodiment of the present invention includes:
the face acquisition module 10 is used for acquiring an environment image around the vehicle and a face image of the driver.
And the region determining module 20 is used for determining a region to be focused according to the environment image.
And the sight line determining module 30 is used for determining a sight line area of the driver according to the facial image of the driver.
A mode determination module 40, configured to determine a target driving assistance mode according to the attention area and the driver sight line area.
And the mode switching module 50 is used for switching the current driving assistance mode to the target driving assistance mode.
In an embodiment, the region determining module 20 is further configured to determine an initial global saliency threshold and an initial search radius according to the environment image; search the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result; and determine the region to be focused on according to the search result.
In an embodiment, the region determining module 20 is further configured to determine a search region in the environment image according to the initial search radius; compare the pixel value of each pixel point in the search area with the initial global saliency threshold to obtain a comparison value; when the comparison value is within a preset threshold interval, reduce the initial search radius by a preset reduction value and search the environment image with the reduced radius; and when the comparison value is equal to a preset threshold value, generate the search result from the initial search radius corresponding to that comparison value.
In an embodiment, the gaze determination module 30 is further configured to segment the driver facial image into a plurality of facial candidate regions; determining a gray value of each face candidate region; taking the face candidate area corresponding to the gray value larger than the gray value threshold value as a pupil candidate area; determining pupil center characteristics according to the pupil candidate area; and determining the sight line area of the driver according to the pupil center characteristics.
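The first steps of the gaze determination module can be sketched as below. The block size, threshold value, and helper name are assumptions for illustration; the patent only specifies segmenting the face image into candidate regions, computing each region's gray value, and keeping regions above a gray-value threshold as pupil candidates:

```python
import numpy as np

def pupil_candidates(face_gray, block=8, thresh=200):
    """Split the face image into block-sized candidate regions, compute each
    region's mean gray value, and keep regions above thresh as pupil
    candidates; each candidate's centre serves as a pupil centre feature."""
    h, w = face_gray.shape
    candidates = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            region = face_gray[y:y + block, x:x + block]
            if region.mean() > thresh:                    # gray value above threshold
                candidates.append((y + block // 2, x + block // 2))
    return candidates
```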
In an embodiment, the gaze determining module 30 is further configured to determine a pupil center feature vector and a gaze direction vector according to the pupil center feature; determining a target mapping relation between the pupil center characteristic vector and the gazing direction vector; and determining a driver sight line area by the target mapping relation and the pupil center characteristic vector.
In an embodiment, the gaze determining module 30 is further configured to establish an objective loss function of the pupil center feature vector and the gaze direction vector; obtaining a first derivative function by derivation of the target loss function; and determining the target mapping relation of the pupil center characteristic vector and the gazing direction vector according to the first derivative function and a preset value.
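One common reading of this step is a least-squares fit: stack calibration pupil-centre feature vectors as the rows of a matrix P and the matching gaze-direction vectors as the rows of G, then obtain the mapping by setting the first derivative of the squared loss to zero. The sketch below is written under that assumption and is not necessarily the patent's exact formulation:

```python
import numpy as np

def fit_gaze_mapping(P, G):
    # Loss: L(W) = ||P @ W - G||^2 over the calibration samples.
    # Setting the first derivative dL/dW = 2 P^T (P W - G) to zero
    # gives the normal equations P^T P W = P^T G, solved directly here.
    return np.linalg.solve(P.T @ P, P.T @ G)
```

New gaze directions then follow from the target mapping as `p @ W` for a pupil-centre feature vector `p`, from which the sight line area is determined.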
In an embodiment, the mode determining module 40 is further configured to take the first driving assistance mode as the target driving assistance mode if the attention area is equal to the driver sight line area; if the attention area belongs to the driver sight line area, taking a second driving assistance mode as a target driving assistance mode; and if the attention area does not belong to the driver sight line area, taking the third driving assistance mode as a target driving assistance mode.
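The three-way decision of the mode determination module can be sketched with each region modelled as a set of grid cells; the set model and the mode labels are assumptions for illustration, standing in for the patent's first, second, and third driving assistance modes:

```python
def target_mode(attention, gaze):
    """attention and gaze are sets of cells covered by each region."""
    if attention == gaze:
        return "first"       # sight line area exactly matches the attention area
    if attention <= gaze:    # attention area contained within the sight line area
        return "second"
    return "third"           # attention area not covered by the sight line area
```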
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
This embodiment obtains an environment image around the vehicle and a face image of the driver; determines a region to be focused on according to the environment image; determines the driver's sight line area according to the driver face image; determines a target driving assistance mode according to the attention area and the driver sight line area; and switches the current driving assistance mode to the target driving assistance mode. In this manner, the driver's sight line area is determined from the driver face image, the area requiring attention is analyzed from the environment around the vehicle, and whether the driving assistance mode needs to be switched is judged according to the overlap between the sight line area and the area requiring attention.
It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.
In addition, the technical details that are not elaborated in the embodiment may refer to the driving assistance mode switching method provided by any embodiment of the present invention, and are not described herein again.
Further, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A driving assistance mode switching method characterized by comprising:
acquiring an environment image around a vehicle and a driver face image;
determining a region to be focused according to the environment image;
determining a driver sight line area according to the driver face image;
determining a target driving assistance mode according to the attention area and the driver sight area;
and switching the current driving assistance mode to the target driving assistance mode.
2. The method of claim 1, wherein said determining regions of interest from the environmental image comprises:
determining an initial global saliency threshold and an initial search radius according to the environment image;
searching the environment image according to the initial global saliency threshold and the initial search radius to obtain a search result;
and determining the region to be focused according to the search result.
3. The method of claim 2, wherein searching the environmental image according to the initial global saliency threshold and the initial search radius, resulting in search results, comprises:
determining a search area in the environment image according to the initial search radius;
comparing the pixel value of each pixel point in the search area with the initial global saliency threshold to obtain a comparison value;
when the comparison value is in a preset threshold interval, reducing the initial search radius according to a preset reduction value, and searching the environment image according to the reduced initial search radius;
and when the comparison value is equal to a preset threshold value, generating a search result according to the initial search radius corresponding to the comparison value.
4. The method of claim 1, wherein determining a driver gaze area from the driver facial image comprises:
segmenting the driver face image into a plurality of face candidate regions;
determining a gray value of each face candidate region;
taking the face candidate area corresponding to the gray value larger than the gray value threshold value as a pupil candidate area;
determining pupil center characteristics according to the pupil candidate area;
and determining the sight line area of the driver according to the pupil center characteristics.
5. The method of claim 4, wherein said determining a driver gaze area from said pupil center feature comprises:
determining a pupil center feature vector and a gaze direction vector according to the pupil center feature;
determining a target mapping relation between the pupil center characteristic vector and the gazing direction vector;
and determining a driver sight line area by the target mapping relation and the pupil center characteristic vector.
6. The method of claim 5, wherein the determining the target mapping of the pupil center feature vector and the gaze direction vector comprises:
establishing a target loss function of the pupil center characteristic vector and the gaze direction vector;
obtaining a first derivative function by derivation of the target loss function;
and determining the target mapping relation of the pupil center characteristic vector and the gazing direction vector according to the first derivative function and a preset value.
7. The method according to any one of claims 1 to 6, wherein the determining a target driving assistance mode according to the attention area and the driver's sight line area includes:
if the attention area is equal to the driver sight line area, taking a first driving assistance mode as a target driving assistance mode;
if the attention area belongs to the driver sight line area, taking a second driving assistance mode as a target driving assistance mode;
and if the attention area does not belong to the driver sight line area, taking the third driving assistance mode as a target driving assistance mode.
8. A driving assistance mode switching device characterized by comprising:
the face acquisition module is used for acquiring an environment image around the vehicle and a face image of a driver;
the area determining module is used for determining an area to be focused according to the environment image;
the sight line determining module is used for determining a sight line area of the driver according to the facial image of the driver;
the mode determining module is used for determining a target driving assistance mode according to the attention area and the driver sight area;
and the mode switching module is used for switching the current driving assistance mode to the target driving assistance mode.
9. A driving assistance mode switching apparatus characterized in that the apparatus comprises: a memory, a processor, and a driving assistance mode switching program stored on the memory and executable on the processor, the driving assistance mode switching program being configured to implement the driving assistance mode switching method according to any one of claims 1 to 7.
10. A storage medium, characterized in that the storage medium has stored thereon a driving assistance mode switching program that, when executed by a processor, implements the driving assistance mode switching method according to any one of claims 1 to 7.
CN202111251279.7A 2021-10-26 2021-10-26 Driving assistance mode switching method, device, equipment and storage medium Active CN114162130B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111251279.7A CN114162130B (en) 2021-10-26 2021-10-26 Driving assistance mode switching method, device, equipment and storage medium
PCT/CN2022/080961 WO2023071024A1 (en) 2021-10-26 2022-03-15 Driving assistance mode switching method, apparatus, and device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111251279.7A CN114162130B (en) 2021-10-26 2021-10-26 Driving assistance mode switching method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114162130A true CN114162130A (en) 2022-03-11
CN114162130B CN114162130B (en) 2023-06-20

Family

ID=80477386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111251279.7A Active CN114162130B (en) 2021-10-26 2021-10-26 Driving assistance mode switching method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114162130B (en)
WO (1) WO2023071024A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115909254A (en) * 2022-12-27 2023-04-04 钧捷智能(深圳)有限公司 DMS system based on camera original image and image processing method thereof
WO2023071024A1 (en) * 2021-10-26 2023-05-04 东风柳州汽车有限公司 Driving assistance mode switching method, apparatus, and device, and storage medium
CN117197786A (en) * 2023-11-02 2023-12-08 安徽蔚来智驾科技有限公司 Driving behavior detection method, control device and storage medium

Citations (10)

Publication number Priority date Publication date Assignee Title
CN107539318A (en) * 2016-06-28 2018-01-05 松下知识产权经营株式会社 Drive assistance device and driving assistance method
WO2019029195A1 (en) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 Driving state monitoring method and device, driver monitoring system, and vehicle
CN109492514A (en) * 2018-08-28 2019-03-19 初速度(苏州)科技有限公司 A kind of method and system in one camera acquisition human eye sight direction
CN109664891A (en) * 2018-12-27 2019-04-23 北京七鑫易维信息技术有限公司 Auxiliary driving method, device, equipment and storage medium
US20190126821A1 (en) * 2017-11-01 2019-05-02 Acer Incorporated Driving notification method and driving notification system
CN111169483A (en) * 2018-11-12 2020-05-19 奇酷互联网络科技(深圳)有限公司 Driving assisting method, electronic equipment and device with storage function
CN111931579A (en) * 2020-07-09 2020-11-13 上海交通大学 Automatic driving assistance system and method using eye tracking and gesture recognition technology
DE102020123658A1 (en) * 2019-09-11 2021-03-11 Mando Corporation DRIVER ASSISTANCE DEVICE AND PROCEDURE FOR IT
CN112965502A (en) * 2020-05-15 2021-06-15 东风柳州汽车有限公司 Visual tracking confirmation method, device, equipment and storage medium
CN113378771A (en) * 2021-06-28 2021-09-10 济南大学 Driver state determination method and device, driver monitoring system and vehicle

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP2006172215A (en) * 2004-12-16 2006-06-29 Fuji Photo Film Co Ltd Driving support system
CN103770733B (en) * 2014-01-15 2017-01-11 中国人民解放军国防科学技术大学 Method and device for detecting safety driving states of driver
CN114162130B (en) * 2021-10-26 2023-06-20 东风柳州汽车有限公司 Driving assistance mode switching method, device, equipment and storage medium

Patent Citations (10)

Publication number Priority date Publication date Assignee Title
CN107539318A (en) * 2016-06-28 2018-01-05 松下知识产权经营株式会社 Drive assistance device and driving assistance method
WO2019029195A1 (en) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 Driving state monitoring method and device, driver monitoring system, and vehicle
US20190126821A1 (en) * 2017-11-01 2019-05-02 Acer Incorporated Driving notification method and driving notification system
CN109492514A (en) * 2018-08-28 2019-03-19 初速度(苏州)科技有限公司 A kind of method and system in one camera acquisition human eye sight direction
CN111169483A (en) * 2018-11-12 2020-05-19 奇酷互联网络科技(深圳)有限公司 Driving assisting method, electronic equipment and device with storage function
CN109664891A (en) * 2018-12-27 2019-04-23 北京七鑫易维信息技术有限公司 Auxiliary driving method, device, equipment and storage medium
DE102020123658A1 (en) * 2019-09-11 2021-03-11 Mando Corporation DRIVER ASSISTANCE DEVICE AND PROCEDURE FOR IT
CN112965502A (en) * 2020-05-15 2021-06-15 东风柳州汽车有限公司 Visual tracking confirmation method, device, equipment and storage medium
CN111931579A (en) * 2020-07-09 2020-11-13 上海交通大学 Automatic driving assistance system and method using eye tracking and gesture recognition technology
CN113378771A (en) * 2021-06-28 2021-09-10 济南大学 Driver state determination method and device, driver monitoring system and vehicle

Non-Patent Citations (6)

Title
宫德麟; 施家栋; 张广月; 王建中: "Design and Implementation of a Head-Mounted Eye-Movement Tracking System", Technology Innovation and Application (科技创新与应用), no. 31
常健; 龙玉林: "Research on the Correlation between the Commercial Vehicle Steering System and Vehicle Handling Stability", Shandong Industrial Technology (山东工业技术), no. 05
张立保; 李浩: "Image Region-of-Interest Detection Based on Adaptive Radius Search", Chinese Journal of Lasers (中国激光), no. 07
朱博; 迟健男; 张天侠: "Gaze Point Compensation Method for Gaze Tracking Systems under Head Movement", Journal of Highway and Transportation Research and Development (公路交通科技), no. 10
李直龙; 左军成; 纪棋严; 罗凤云; 庄圆: "Reconstruction of a 3D Gridded Temperature Field Based on Argo Profiles, SST and SLA Data", Marine Forecasts (海洋预报), no. 04
毛云丰; 沈文忠; 滕童: "Research on Gaze Tracking Technology Based on Deep Neural Networks", Modern Electronics Technique (现代电子技术), no. 16

Cited By (5)

Publication number Priority date Publication date Assignee Title
WO2023071024A1 (en) * 2021-10-26 2023-05-04 东风柳州汽车有限公司 Driving assistance mode switching method, apparatus, and device, and storage medium
CN115909254A (en) * 2022-12-27 2023-04-04 钧捷智能(深圳)有限公司 DMS system based on camera original image and image processing method thereof
CN115909254B (en) * 2022-12-27 2024-05-10 钧捷智能(深圳)有限公司 DMS system based on camera original image and image processing method thereof
CN117197786A (en) * 2023-11-02 2023-12-08 安徽蔚来智驾科技有限公司 Driving behavior detection method, control device and storage medium
CN117197786B (en) * 2023-11-02 2024-02-02 安徽蔚来智驾科技有限公司 Driving behavior detection method, control device and storage medium

Also Published As

Publication number Publication date
WO2023071024A1 (en) 2023-05-04
CN114162130B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
CN114162130A (en) Driving assistance mode switching method, device, equipment and storage medium
CN109635685B (en) Target object 3D detection method, device, medium and equipment
CN108725440B (en) Forward collision control method and apparatus, electronic device, program, and medium
CN111178245B (en) Lane line detection method, lane line detection device, computer equipment and storage medium
US8773535B2 (en) Adaptation for clear path detection using reliable local model updating
US8005266B2 (en) Vehicle surroundings monitoring apparatus
Wang et al. Applying fuzzy method to vision-based lane detection and departure warning system
US9384401B2 (en) Method for fog detection
CN110789517A (en) Automatic driving lateral control method, device, equipment and storage medium
US8681222B2 (en) Adaptation for clear path detection with additional classifiers
JP2008021034A (en) Image recognition device, image recognition method, pedestrian recognition device and vehicle controller
CN111158491A (en) Gesture recognition man-machine interaction method applied to vehicle-mounted HUD
CN114170826B (en) Automatic driving control method and device, electronic device and storage medium
CN111158457A (en) Vehicle-mounted HUD (head Up display) human-computer interaction system based on gesture recognition
CN114930402A (en) Point cloud normal vector calculation method and device, computer equipment and storage medium
US20120189161A1 (en) Visual attention apparatus and control method based on mind awareness and display apparatus using the visual attention apparatus
CN113297939B (en) Obstacle detection method, obstacle detection system, terminal device and storage medium
CN108090425B (en) Lane line detection method, device and terminal
JP2017165345A (en) Object recognition device, object recognition method and object recognition program
CN112800989A (en) Method and device for detecting zebra crossing
Venkateswaran et al. Deep learning based robust forward collision warning system with range prediction
US11886995B2 (en) Recognition of objects in images with equivariance or invariance in relation to the object size
US20200279103A1 (en) Information processing apparatus, control method, and program
Heidarizadeh Preprocessing Methods of Lane Detection and Tracking for Autonomous Driving
JP2013149146A (en) Object detection device, object detection method and computer program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant