CN117197786B - Driving behavior detection method, control device and storage medium - Google Patents

Driving behavior detection method, control device and storage medium

Info

Publication number: CN117197786B
Application number: CN202311445561.8A
Authority: CN (China)
Prior art keywords: current, target, attention, driver, focused
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN117197786A
Inventor: 周航
Assignee (current and original): Anhui Weilai Zhijia Technology Co Ltd
Application filed by Anhui Weilai Zhijia Technology Co Ltd; published as CN117197786A, granted and published as CN117197786B

Abstract

The invention relates to the technical field of automatic driving, and in particular to a driving behavior detection method, a control device and a storage medium. It aims to solve the technical problem that existing driving behavior detection is too coarse to adequately meet the needs of assisted driving. To this end, the invention provides a method comprising: acquiring a current sight line attention range of a driver based on a collected facial image of the driver; obtaining, based on the current sight line attention range, a current attention target of the driver and a current non-attention target located within a preset range around the vehicle; and obtaining a concentration detection result for the driver over a preset time period based on the current attention target and the current non-attention target. The technical scheme of the invention can detect the driver's driving behavior more accurately and at a finer granularity, thereby optimizing the assisted driving function.

Description

Driving behavior detection method, control device and storage medium
Technical Field
The invention relates to the technical field of automatic driving, and particularly provides a driving behavior detection method, a control device and a storage medium.
Background
Driver behavior detection systems play an increasingly important role in the field of assisted driving. Current driving behavior detection can only determine whether the driver is distracted or driving while fatigued. In particular, distraction is judged by detecting whether the driver's line of sight dwells in the cabin area for a long time.
However, in practice, when the driver is focusing on conditions outside the cabin, his line of sight does not dwell in the cabin area for a long time, yet he is not necessarily distracted. In addition, as assisted driving functions become more complex, they increasingly need to be linked with the driver's intention, which requires a fine-grained judgment of the driver's attention to specific targets inside and outside the cabin; the current detection approach clearly cannot meet this requirement.
Based on this, there is a need in the art for a new driving behavior detection scheme to solve the above-mentioned problems.
Disclosure of Invention
In order to solve the technical problems, the invention provides a driving behavior detection method, a control device and a storage medium, which can more accurately and finely detect the driving behavior of a driver so as to optimize the auxiliary driving function.
In order to achieve the above purpose, the technical scheme of the invention is realized as follows:
in a first aspect, the present invention provides a driving behavior detection method, the method comprising:
acquiring a current sight line attention range of a driver based on the acquired face image of the driver;
obtaining, based on the current sight line attention range, a current attention target of the driver and a current non-attention target located within a preset range around the vehicle;
and obtaining a concentration detection result for the driver over a preset time period based on the current attention target and the current non-attention target.
In one aspect of the above driving behavior detection method, obtaining the concentration detection result of the driver over the preset time period based on the current attention target and the current non-attention target includes:
for each frame of the facial image within the preset time period, performing the following operations to obtain an attention score for each target within the preset range: acquiring the target focused by the driver in that frame of the facial image as the current focused target, and increasing the attention score of the current focused target by a first predetermined value on the basis of its attention score in the previous frame of the facial image; acquiring a target within the preset range that the driver does not focus on in that frame of the facial image as the current non-focused target, and reducing the attention score of the current non-focused target by a second predetermined value on the basis of its attention score in the previous frame of the facial image;
and obtaining the concentration detection result of the driver over the preset time period based on the attention score of each target.
In one aspect of the above driving behavior detection method, obtaining the concentration detection result of the driver over the preset time period based on the current attention target and the current non-attention target further includes:
acquiring motion state information of each target;
and adjusting the first predetermined value and/or the second predetermined value in real time based on the motion state information of each target.
In one aspect of the above driving behavior detection method, the motion state information of each target includes the distance between the current attention target and the vehicle, and adjusting the first predetermined value and/or the second predetermined value in real time based on the motion state information of each target includes:
adjusting the first predetermined value in real time based on the distance between the current attention target and the vehicle; wherein the first predetermined value is inversely proportional to the distance.
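The distance-based adjustment of the first predetermined value can be sketched as follows; this is a minimal illustration, and every constant (base value, reference distance, clamping bounds) is an assumption for the example rather than a value from the patent:

```python
def first_value_for_distance(distance_m, base_value=10.0, ref_distance_m=20.0,
                             min_value=1.0, max_value=30.0):
    # Score increment for a currently focused target: inversely proportional
    # to its distance from the host vehicle, so nearby targets accumulate
    # attention faster. All constants here are illustrative assumptions.
    if distance_m <= 0:
        return max_value
    value = base_value * ref_distance_m / distance_m
    return max(min_value, min(max_value, value))
```

The clamp keeps the increment bounded for extremely near or far targets, mirroring the patent's general concern with keeping computed quantities in a normal interval.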
In one aspect of the above driving behavior detection method, the motion state information of each target includes speed information of the current non-focused target, and adjusting the first predetermined value and/or the second predetermined value in real time based on the motion state information of each target includes:
adjusting the second predetermined value in real time based on the speed information of the current non-focused target.
In one aspect of the above driving behavior detection method, the adjusting the second predetermined value in real time based on the speed information of the current non-focused target includes:
judging whether the current non-focused target was focused by the driver in the previous frame of the facial image;
when the current non-focused target was focused by the driver in the previous frame of the facial image, acquiring the speed that the current non-focused target had in that previous frame as a first speed;
acquiring the current speed of the current non-focused target as a second speed;
calculating a ratio of change between the second speed and the first speed;
adjusting the second predetermined value in real time based on the change ratio; wherein the second predetermined value is proportional to the ratio of change.
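The speed-change adjustment above can be sketched in a few lines; the base value, the exact scaling, and the zero-speed fallback are assumptions for illustration, since the patent only states that the second predetermined value is proportional to the change ratio:

```python
def second_value_from_speed_change(first_speed, second_speed, base_value=5.0):
    # Score decrement for a target that was focused in the previous frame but
    # is not focused now. It grows with the target's speed-change ratio, so a
    # target whose motion changed sharply loses attention score faster.
    if first_speed == 0:
        return base_value  # assumed fallback when the previous speed is zero
    change_ratio = abs(second_speed - first_speed) / abs(first_speed)
    return base_value * (1.0 + change_ratio)
```

A target that suddenly accelerated or braked since it was last watched is thus penalized more heavily, which is the risk intuition behind making the decrement proportional to the change ratio.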
In one aspect of the above driving behavior detection method, the obtaining the concentration condition detection result of the driver in the preset time period based on the current target of interest and the current target of no interest further includes:
acquiring the type of the current non-focused target;
reducing the attention score of the current non-focused target by a second predetermined value on the basis of its attention score in the previous frame of the facial image comprises:
reducing, based on the type of the current non-focused target, its attention score by the second predetermined value corresponding to that type, on the basis of its attention score in the previous frame of the facial image.
In one aspect of the above driving behavior detection method, the types of the current non-focused target include: a current unseen target, a current blocked target, and a current sight-line-invalid target; the second predetermined value comprises third, fourth and fifth values; and reducing the attention score of the current non-focused target by the second predetermined value corresponding to its type, on the basis of its attention score in the previous frame of the facial image, comprises:
when the type of the current non-focused target is the current unseen target, reducing its attention score by the third value on the basis of its attention score in the previous frame of the facial image;
when the type of the current non-focused target is the current blocked target, reducing its attention score by the fourth value on the basis of its attention score in the previous frame of the facial image;
when the type of the current non-focused target is the current sight-line-invalid target, reducing its attention score by the fifth value on the basis of its attention score in the previous frame of the facial image; wherein the third value is less than the fourth value, and the fourth value is less than the fifth value.
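The type-dependent decrement can be captured with a simple lookup; the numeric values below are illustrative assumptions, since the patent only fixes the ordering (third < fourth < fifth), not the magnitudes:

```python
# Illustrative third/fourth/fifth values; only their order is prescribed.
TYPE_DECREMENT = {
    "unseen": 1.0,         # third value: target merely outside the gaze cone
    "blocked": 2.0,        # fourth value: target hidden behind another object
    "sight_invalid": 4.0,  # fifth value: no valid line-of-sight estimate
}

def attention_decrement(target_type):
    # Look up the per-frame score decrement for a non-focused target's type.
    return TYPE_DECREMENT[target_type]
```

The ordering reflects severity: a target that is simply outside the gaze cone may still be in peripheral vision, while an invalid sight estimate gives no evidence of attention at all.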
In one aspect of the above driving behavior detection method, the current blocked target is determined as follows:
modeling each target to obtain a model corresponding to each target, wherein each model comprises a plurality of corner points;
acquiring the model of a target that the driver can focus on as the occlusion model;
acquiring the lines connecting the driver's sight origin with all corner points of the occlusion model, so as to form an occlusion region;
for each model other than the occlusion model, detecting whether each corner point of the model is located in the occlusion region; when all the corner points of the model are located in the occlusion region, determining that the target corresponding to the model is the current blocked target.
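A simplified two-dimensional sketch of this corner-point occlusion test is shown below (a bird's-eye view, with each model reduced to its footprint corners). The angular-interval test and the nearest-corner depth comparison are simplifying assumptions of this illustration, not the patent's exact 3-D construction:

```python
import math

def _angle(origin, pt):
    # Bearing of a point as seen from the gaze origin.
    return math.atan2(pt[1] - origin[1], pt[0] - origin[0])

def is_fully_occluded(origin, occluder_corners, target_corners):
    # The occluder's corners, joined to the gaze origin, bound a shadow
    # region; a target counts as "currently blocked" only when ALL of its
    # corners fall inside that region. Assumes bearings do not wrap across
    # +/-pi, and approximates "behind the occluder" by comparing distances
    # against the occluder's nearest corner.
    occ_angles = [_angle(origin, c) for c in occluder_corners]
    lo, hi = min(occ_angles), max(occ_angles)
    occ_near = min(math.dist(origin, c) for c in occluder_corners)
    for corner in target_corners:
        if not (lo <= _angle(origin, corner) <= hi):
            return False  # this corner is outside the shadow cone
        if math.dist(origin, corner) <= occ_near:
            return False  # this corner is in front of the occluder
    return True
```

For example, a box directly behind the occluder along the line of sight is reported as blocked, while a box off to the side is not.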
In one aspect of the above driving behavior detection method, the obtaining the current sight line attention range of the driver based on the collected face image of the driver includes:
based on the facial image, obtaining a current sight line three-dimensional vector of the driver;
and obtaining the current sight line attention range of the driver based on the current sight line three-dimensional vector and the preset sight angle.
In one aspect of the above driving behavior detection method, the method further includes:
acquiring the current focus angle of the driver;
and obtaining the current sight line attention range of the driver based on the collected facial image of the driver comprises:
obtaining a current three-dimensional sight vector of the driver based on the facial image;
and obtaining the current sight line attention range of the driver based on the current three-dimensional sight vector and the current focus angle.
In one aspect of the above driving behavior detection method, obtaining the current focus angle of the driver includes:
acquiring the current speed of the vehicle;
and obtaining the current focus angle based on the current vehicle speed.
In one aspect of the above driving behavior detection method, obtaining the current focus angle based on the current vehicle speed includes:
inputting the current vehicle speed into a pre-established linear interpolation function so that the linear interpolation function outputs the current focus angle; the linear interpolation function is established based on a preset maximum focus angle and a preset minimum focus angle, and the current vehicle speed is inversely related to the current focus angle.
In one aspect of the above driving behavior detection method, obtaining the current focus angle based on the current vehicle speed further includes:
judging whether the current focus angle is larger than the preset maximum focus angle;
when the current focus angle is larger than the preset maximum focus angle, taking the preset maximum focus angle as the current focus angle;
judging whether the current focus angle is smaller than the preset minimum focus angle;
and when the current focus angle is smaller than the preset minimum focus angle, taking the preset minimum focus angle as the current focus angle.
In one aspect of the driving behavior detection method, the obtaining the current attention target of the driver based on the current sight line attention range includes:
Acquiring a current three-dimensional position area of each target in the preset range;
for each of the targets, the following operations are performed: judging whether the current three-dimensional position area of the target and the current sight line attention range have an overlapping area or not; and when the current three-dimensional position area of the target and the current sight line attention range have overlapping areas, determining that the target is the current attention target of the driver.
In a second aspect, the present invention provides a control device, which includes a processor and a storage device, the storage device being adapted to store a plurality of program codes, the program codes being adapted to be loaded and executed by the processor to perform the driving behavior detection method according to any one of the above-mentioned driving behavior detection methods.
In a third aspect, the present invention provides a computer-readable storage medium having stored therein a plurality of program codes adapted to be loaded and executed by a processor to perform the driving behavior detection method according to any one of the above-described technical aspects of the driving behavior detection method.
According to the driving behavior detection method, control device and storage medium of the invention, the driver's current sight line attention range is obtained from the collected facial image; the driver's current attention target and the current non-attention target within a preset range around the vehicle are obtained from that range; and the driver's concentration detection result over a preset time period is obtained from the current attention and non-attention targets. The system can thus automatically detect, at every moment, which targets the driver is and is not focusing on, capture the driver's driving behavior more accurately and at a finer granularity, infer the driver's current action intention, and further optimize the assisted driving function.
Drawings
The present disclosure will become more readily understood with reference to the accompanying drawings. As will be readily appreciated by those skilled in the art: the drawings are only for the purpose of illustrating the invention and are not intended to limit the scope of the invention. Moreover, like numerals in the figures are used to designate like parts, wherein:
fig. 1 is a flow chart of main steps of a driving behavior detection method according to an embodiment of the present invention;
FIG. 2 is a flow chart of driver concentration detection in an embodiment of the present invention;
fig. 3 a-3 c are schematic views of driver sight line switching in an application scenario according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a target being occluded in another application scenario according to an embodiment of the present invention;
FIG. 5 is a flow chart of a method for calculating the current focus angle in an embodiment of the present invention;
FIG. 6 is a schematic diagram of a method for determining a currently occluded object according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a resolution mechanism for a target focus in an embodiment of the invention;
fig. 8 is a main structural block diagram of a driving behavior detection device according to an embodiment of the present invention.
List of reference numerals
11: a sight line range acquisition unit; 12: a target acquisition unit;
13: and a detection unit.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, a "module," "processor" may include hardware, software, or a combination of both. A module may comprise hardware circuitry, various suitable sensors, communication ports, memory, or software components, such as program code, or a combination of software and hardware. The processor may be a central processor, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functions. The processor may be implemented in software, hardware, or a combination of both. Non-transitory computer readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, random access memory, and the like. The term "a and/or B" means all possible combinations of a and B, such as a alone, B alone or a and B. The term "at least one A or B" or "at least one of A and B" has a meaning similar to "A and/or B" and may include A alone, B alone or A and B. The singular forms "a", "an" and "the" include plural referents.
Current driving behavior detection can only detect whether the driver is distracted or driving while fatigued; in particular, distraction is judged by detecting whether the driver's line of sight dwells in the cabin area for a long time. Meanwhile, as assisted driving functions become more complex, they increasingly need to be linked with the driver's intention, and the driver's attention to specific targets inside and outside the cabin needs to be judged at a fine granularity: for example, which area of the cabin the driver's line of sight falls in, which targets the driver is continuously focusing on, and which targets the driver is not focusing on. Only with such specific information can the assisted driving function provide targeted interactive reminders, for example by alerting the driver to an object or region that needs attention, by reducing unnecessary disturbance when the driver is already paying enough attention to the corresponding information, or by predicting the driver's intention from his attention to targets so as to provide more humanized driving action recommendations. With this design, the assisted driving function can be realized to a greater extent, fulfilling the wish that the system 'understands me'.
To achieve this, the invention refines the driver's attention to the vehicle's surroundings by linking in-cabin and out-of-cabin information. On the one hand, the 'driver in the loop' judgment can be refined from a binary distracted/not-distracted decision into a judgment of which individual targets around the vehicle the driver has noticed, which optimizes the precision and recall of the reminder and avoidance functions of assisted driving; on the other hand, recognizing the driver's attention area helps to better judge the driver's intention, serving the ultimate goal of an assisted driving function that 'understands me'.
Referring to fig. 1, fig. 1 is a flowchart illustrating main steps of a driving behavior detection method according to an embodiment of the present invention, and as shown in fig. 1, the driving behavior detection method according to an embodiment of the present invention mainly includes the following steps S101 to S103.
Step S101, acquiring a current sight line attention range of a driver based on a collected facial image of the driver.
In this embodiment, the driver's facial image can be collected in real time by a camera installed in the cab. In one embodiment, the camera is mounted facing the driver's face to obtain an optimal facial image.
To obtain the driver's current sight line attention range accurately and effectively, acquiring it from the collected facial image in this embodiment includes: obtaining the driver's current three-dimensional sight vector based on the facial image; and obtaining the current sight line attention range based on the current three-dimensional sight vector and a preset sight angle.
Specifically, an eye image of the driver is extracted from the facial image, and a sight detection algorithm is applied to it to obtain the driver's current three-dimensional sight vector, which reflects the direction of the driver's current line of sight. A cone is then constructed with this direction as the angular bisector of the preset sight angle; the region enclosed by the two sides of the preset sight angle is the driver's current sight line attention range.
Further, to obtain the current sight line attention range more accurately, the method of this embodiment also includes acquiring the driver's current focus angle. On this premise, acquiring the driver's current sight line attention range from the collected facial image includes: obtaining the driver's current three-dimensional sight vector based on the facial image; and obtaining the current sight line attention range based on the current three-dimensional sight vector and the current focus angle.
Further, because in practical application the driver's spatial field of view (i.e. the size of the current focus angle) is affected by the current vehicle speed, obtaining the current focus angle accurately, and thus the current sight line attention range more accurately, includes: acquiring the current speed of the vehicle; and obtaining the current focus angle based on the current vehicle speed.
In this embodiment, the driver's current sight line attention range is no longer determined from a preset view angle. Instead, the current focus angle is obtained in real time from the current vehicle speed, and a cone is constructed with the direction of the driver's current line of sight as the angular bisector of that angle; the region enclosed by the two sides of the current focus angle is the driver's current sight line attention range.
Further, in one embodiment, to obtain the current focus angle quickly and accurately, obtaining it based on the current vehicle speed includes: inputting the current vehicle speed into a pre-established linear interpolation function, which outputs the current focus angle. The linear interpolation function is established based on a preset maximum focus angle and a preset minimum focus angle, and the current vehicle speed is inversely related to the current focus angle, i.e., the higher the current vehicle speed, the smaller the current focus angle.
Further, in one embodiment, to avoid the calculated sight line attention range exceeding the normal range of practical application, obtaining the current focus angle based on the current vehicle speed further includes: judging whether the current focus angle is larger than the preset maximum focus angle, and if so, taking the preset maximum focus angle as the current focus angle; judging whether the current focus angle is smaller than the preset minimum focus angle, and if so, taking the preset minimum focus angle as the current focus angle.
Specifically, a three-dimensional vector of the human eye's line of sight can be recognized from the eye image extracted from the driver's facial image, and on this basis the present embodiment introduces the concept of a "focus angle". The focus angle refers to a region within a certain range around the line of sight: the eye's attention can concentrate on targets within this region, and sensitivity to changes in their position and motion state is highest there. In other words, the driver can keenly attend to this field-of-view region, quickly perceive the motion state of targets appearing in this three-dimensional region, quickly notice their sudden behavior, and take quick countermeasures against sudden movements of the corresponding targets (such as sudden acceleration, sudden deceleration, and cut-in or cut-out maneuvers).
As described above, in practical applications the driver's spatial field of view is affected by the current vehicle speed, i.e., the focus angle is related to the current speed of the host vehicle. To obtain the focus angle more accurately, the present embodiment adjusts it in real time based on the current vehicle speed for each frame of the facial image, i.e., performs real-time adaptive adjustment of the range defined by the focus angle.
It will be appreciated that the driver's sensitive field-of-view area shrinks as the speed of the host vehicle increases: the higher the current speed, the narrower the area the driver can attentively perceive, i.e., the smaller the focus angle. The present embodiment therefore uses a linear relation for the vehicle speed-to-focus angle adjustment logic, so that the focus angle can be adjusted in real time, the concentration judged by the algorithm stays consistent with the driver's actual perception, and real-time computation is saved. Meanwhile, a minimum and a maximum focus angle are set for low-speed creep and high-speed scenes, preventing the calculated sight line attention range from leaving the normal interval.
Fig. 5 shows the focus angle adaptation flow. First, the focus angle of the current frame is calculated from the current vehicle speed through the pre-established linear interpolation function. It is then checked whether the calculated focus angle is larger than the preset maximum focus angle; if so, the current focus angle is set to the preset maximum focus angle. If not, it is checked whether the calculated focus angle is smaller than the preset minimum focus angle; if so, the current focus angle is set to the preset minimum focus angle. If the calculated focus angle lies between the preset minimum and maximum focus angles, its value is kept unchanged.
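The speed-to-focus-angle mapping with clamping can be sketched as follows; all numeric bounds (angles in degrees, speeds in km/h) are illustrative assumptions, as the patent does not give concrete values:

```python
def current_focus_angle(speed_kmh, min_angle_deg=20.0, max_angle_deg=60.0,
                        min_speed_kmh=10.0, max_speed_kmh=120.0):
    # Linear interpolation of the focus angle from vehicle speed, clamped to
    # [min_angle, max_angle]: higher speed gives a narrower focus angle.
    t = (speed_kmh - min_speed_kmh) / (max_speed_kmh - min_speed_kmh)
    angle = max_angle_deg + t * (min_angle_deg - max_angle_deg)
    # Clamp for low-speed creep and very-high-speed scenes.
    return max(min_angle_deg, min(max_angle_deg, angle))
```

With these assumed bounds, creeping below 10 km/h yields the full 60-degree angle and any speed above 120 km/h saturates at 20 degrees, matching the flow of Fig. 5.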
Step S102, based on the current sight line attention range, a current attention target of the driver and a current non-attention target located in a preset range around the vehicle are obtained.
In order to quickly and accurately obtain the current attention target of the driver, the method for obtaining the current attention target of the driver based on the current sight line attention range according to the embodiment includes: acquiring a current three-dimensional position area of each target in a preset range; for each of the targets, the following operations are performed: judging whether the current three-dimensional position area of the target and the current sight line attention range have an overlapping area or not; and when the current three-dimensional position area of the target and the current sight line attention range have overlapping areas, determining that the target is the current attention target of the driver.
In this embodiment, the system automatically detects not only the target that the driver pays attention to in real time, but also the target that the driver does not pay attention to, so that more detailed driving behavior data can be obtained to better optimize the auxiliary driving function.
In this embodiment, the preset range is a region centered on the driver and bounded by positions at a certain distance from the driver; it may also be a region centered on the host vehicle and bounded by positions at a certain distance from the host vehicle. Targets within the preset range can be understood as targets around the host vehicle. A target may be any object that requires the driver's attention while driving, including moving targets (such as vehicles travelling around the host vehicle) and fixed targets (such as roadside signs, warning signs, and stationary vehicles).
In this embodiment, since each target within the preset range has a current three-dimensional position area, all targets within the preset range can be traversed. For each target, if its current three-dimensional position area overlaps the current sight line attention range described above, the target is considered to be in a "focused" state in the current frame image; conversely, if there is no overlap, the target is considered to be in an "unfocused" state in the current frame image.
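The overlap test between a target's position area and the attention cone can be sketched as below, approximating the target's three-dimensional position area by its corner points; the function names and this corner approximation are assumptions for illustration:

```python
import math

def target_is_focused(gaze_origin, gaze_dir, focus_angle_deg, target_corners):
    # A target overlaps the sight attention range if at least one of its
    # corners lies inside the gaze cone whose axis is the current 3-D sight
    # vector and whose full apex angle is the current focus angle.
    half_angle = math.radians(focus_angle_deg) / 2.0
    norm = math.sqrt(sum(d * d for d in gaze_dir))
    axis = tuple(d / norm for d in gaze_dir)
    for corner in target_corners:
        v = tuple(c - o for c, o in zip(corner, gaze_origin))
        vlen = math.sqrt(sum(x * x for x in v))
        if vlen == 0:
            continue  # corner coincides with the gaze origin; skip it
        cos_a = sum(a * b for a, b in zip(axis, v)) / vlen
        cos_a = max(-1.0, min(1.0, cos_a))
        if math.acos(cos_a) <= half_angle:
            return True  # this corner is inside the attention cone
    return False
```

For example, with a 40-degree focus angle and the gaze along the x axis, a corner 5.7 degrees off-axis is "focused" while one 45 degrees off-axis is not.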
Step S103, obtaining a concentration condition detection result of the driver in a preset time period based on the current target of interest and the current target of no interest.
In this embodiment, in order to accurately obtain the detection result of the concentration of the driver over a period of time, the obtaining the detection result of the concentration of the driver in a preset time period based on the current target of interest and the current target of no interest includes: for each frame of the face image within the preset time period, performing the following operations to obtain an attention score of each target within the preset range: acquiring the focused target of the driver in the frame face image as the current focused target, and increasing the attention score of the current focused target by a first predetermined value on the basis of the attention score of that target in the previous frame face image; acquiring the unfocused target of the driver within the preset range in the frame face image as the current unfocused target, and reducing the attention score of the current unfocused target by a second predetermined value on the basis of the attention score of that target in the previous frame face image; and obtaining the concentration condition detection result of the driver in the preset time period based on the attention score of each target.
In the above-described score accumulation process, in order to simplify the calculation, the initial attention scores of all targets are equal; preferably, the initial attention score of each target is 0.
In this embodiment, the judgment processing for each frame is performed for the face image data of the driver acquired within a preset period of time (e.g., 2 s), that is, the attention situation of the driver is judged for each frame image. Under the current frame, if a driver pays attention to a certain target, the attention score of the target is increased by a certain value; if a target is not focused by the driver, the focus score of the target is reduced by a certain value. In the above determination, the targets in each frame of image are accumulated in time sequence, that is, the target attention score in the current frame is increased/decreased based on the previous frame score, so that the attention score of each target in a period of time can be obtained, and the attention of the driver in the period of time can be inferred based on the score.
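The frame-by-frame accumulation described above can be sketched as follows; the increment/decrement values and the clamp bounds (discussed further below as the upper and lower score limits) are illustrative assumptions:

```python
def update_scores(scores, focused_ids, all_ids,
                  inc=1.0, dec=0.3, lo=-10.0, hi=20.0):
    """One frame of accumulation: focused targets gain `inc`, unfocused
    targets lose `dec`; scores are clamped to the interval [lo, hi]."""
    for tid in all_ids:
        delta = inc if tid in focused_ids else -dec
        scores[tid] = min(hi, max(lo, scores.get(tid, 0.0) + delta))
    return scores

scores = {}
frames = [{"a"}, {"a"}, {"b"}]  # per-frame sets of focused target ids
for focused in frames:
    update_scores(scores, focused, {"a", "b"})
```

After the three frames, target "a" sits at 1 + 1 − 0.3 and target "b" at −0.3 − 0.3 + 1, so the score history reflects both how often and how recently each target was focused.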
For example, the judgment processing time of one frame of image is 0.03s, and according to the final attention score of a certain target in a period of time, the attention accumulated duration of the target in the period of time and the attention times of the target in the period of time can be calculated.
For the above-mentioned per-frame judgment processing, this embodiment designs a target pool and maintains it internally. As shown in fig. 2, the system updates the attention status of all targets in the target pool for each frame of image, and provides a query interface for other modules to obtain the "attention status"/"attention duration" of a specified target. For example, if a certain target is focused, its attention score is increased by 1 point; if the target is not focused, the system further judges whether the target is occluded: if occluded, the attention score is reduced by 0.5 points, and if not occluded, it is reduced by 0.3 points.
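A minimal sketch of such a target pool, using the example values from fig. 2 (+1 focused, −0.5 occluded, −0.3 visible but unfocused) and the 0.03 s per-frame time mentioned above; the class name, method names, and state labels are assumptions:

```python
class TargetPool:
    """Maintains per-target attention state across frames and exposes a
    small query interface, following the example values of fig. 2."""
    FRAME_DT = 0.03  # assumed per-frame processing time, seconds

    def __init__(self):
        self.scores = {}
        self.focus_time = {}

    def update(self, frame_states):
        # frame_states: target id -> "focused" | "occluded" | "unfocused"
        for tid, state in frame_states.items():
            delta = {"focused": 1.0, "occluded": -0.5,
                     "unfocused": -0.3}[state]
            self.scores[tid] = self.scores.get(tid, 0.0) + delta
            if state == "focused":
                self.focus_time[tid] = (self.focus_time.get(tid, 0.0)
                                        + self.FRAME_DT)

    def attention_score(self, tid):
        """Query interface: accumulated attention score of a target."""
        return self.scores.get(tid, 0.0)

    def focus_duration(self, tid):
        """Query interface: accumulated focused duration of a target."""
        return self.focus_time.get(tid, 0.0)

pool = TargetPool()
pool.update({"car": "focused", "sign": "occluded"})
pool.update({"car": "focused", "sign": "unfocused"})
```

Other modules would call only the two query methods, which mirrors the query interface the embodiment exposes.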
It should be noted that fig. 2 is only an example and considers only the case in which the non-focused target is an occluded target. In practical application, the non-focused target may also be a target that is not seen or a target for which the line of sight is invalid, and the corresponding flowchart may be designed according to the specific case, which is not limited in this embodiment.
The driver's attention to targets can be readily understood through the first scenario shown in figs. 3a-3c and the second scenario shown in fig. 4.
The scenario shown in figs. 3a-3c is that a vehicle in the left lane is cutting into the host lane while the driver is in a concentrated driving state. The driver's line of sight switches back and forth between the vehicle ahead and the left-side target, ensuring that the host vehicle keeps a safe distance from both targets, and both targets are focused by the driver. The angle in the figures with the host vehicle as its vertex is the concentration angle.
For this scenario, the characteristics of a driver's normal driving are considered: the line of sight either stays on a certain target for a long time or moves among multiple targets, and in both cases the corresponding targets are continuously focused. An "attention state scoring" mechanism is therefore designed: the score increases by a certain amount for each frame in which a target is focused and decreases by a certain amount for each frame in which it is not, and when a target's accumulated score exceeds a certain threshold, the target is considered to be focused. In addition, upper and lower score limits may be set for each target so that the actual score stays within a reasonable interval, avoiding errors in the accumulated score.
The scenario shown in fig. 4 is that there is a truck ahead and to the right of the host vehicle, the driver is in the state of focusing on that target, and there is a car in view blocked by the truck. The car is in the "unfocused-occluded" state because it is completely blocked by the truck and cannot be focused by the driver.
Further, in an embodiment, in order to obtain the concentration detection result conveniently, the obtaining the concentration detection result of the driver in the preset time period based on the current target of interest and the current target of no interest further includes: outputting the concentration condition detection result when receiving preset inquiry information; wherein the concentration detection result comprises at least one of the following items: the target with the highest attention score in each target, the attention accumulated duration of each target, and the attention times of each target.
That is, in the present embodiment, a specific query interface is also provided for other modules to query the attention condition/attention duration of each target or specified target, so as to learn the attention condition of the driver in a certain period of time.
Further, in one embodiment, in order to score the attention degree of the target more accurately to obtain a more accurate attention situation detection result, the obtaining the attention situation detection result of the driver in the preset time period based on the current attention target and the current non-attention target further includes: acquiring motion state information of each target; and adjusting the first preset numerical value and/or the second preset numerical value in real time based on the motion state information of each target.
Further, as described above, the "attention degree" of each target may be accumulated frame by frame: for a target focused in the current frame, an increment is accumulated; for targets not focused, a decrement is accumulated. The decrement simulates a human's anticipation of a target's "expected motion": even after the driver's gaze moves away, the target's position and motion remain within the range of the driver's anticipation for some time. How long this anticipation can last is affected by the motion of the corresponding target: if the target's motion state is stable and changes little, the driver retains the ability to anticipate it for a period of time; if the target's motion changes suddenly (e.g., sudden acceleration or deceleration), the driver quickly loses the ability to anticipate it (if the driver continues not to notice the target). Thus, the score decay mechanism should be adjusted in real time according to the degree of mutation in each target's motion.
Based on the above-mentioned idea, in order to more accurately adjust the first predetermined value to obtain a more accurate concentration condition detection result, the motion state information of each target includes: the distance between the current attention target and the vehicle. On this premise, the adjusting the first predetermined value and/or the second predetermined value in real time based on the motion state information of each target according to the present embodiment includes: based on the distance between the current attention target and the vehicle, the first preset numerical value is adjusted in real time; wherein the first predetermined value is inversely proportional to the distance.
In this embodiment, the real-time distance between the focused target and the driver (or the host vehicle) affects the target's attention score: the farther the focused target is from the driver, the smaller the increment to its attention score; correspondingly, the closer the focused target is to the driver, the larger the increment. Adjusting the first predetermined value in real time during driving better matches the actual driving situation, so that the final score of each target is more accurate.
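One possible way to make the first predetermined value inversely proportional to distance; the base value, reference distance, and clamp bounds below are illustrative assumptions rather than values from the embodiment:

```python
def focus_increment(distance_m, base=1.0, ref_distance_m=10.0,
                    min_inc=0.2, max_inc=2.0):
    """Per-frame score increment for a focused target, scaled inversely
    with its distance from the host vehicle and clamped to a range."""
    inc = base * ref_distance_m / max(distance_m, 1e-6)  # avoid /0
    return min(max_inc, max(min_inc, inc))
```

A target at the reference distance earns the base increment; nearer targets earn more, farther targets less, with the clamp keeping the value in a reasonable interval.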
In order to more accurately adjust the second predetermined value to obtain a more accurate concentration detection result, the motion state information of each target includes: the speed information of the current non-focused object. On this premise, the adjusting the first predetermined value and/or the second predetermined value in real time based on the motion state information of each target according to the present embodiment includes: and adjusting the second preset numerical value in real time based on the speed information of the current non-focused target.
Further, in one embodiment, to more accurately adjust the second predetermined value to obtain a more accurate concentration detection result, as shown in fig. 7, the adjusting the second predetermined value in real time based on the speed information of the current non-focused target includes: judging whether the current non-focused target was focused by the driver in the previous frame face image; when the current non-focused target was focused by the driver in the previous frame face image, acquiring the speed at which the current non-focused target was focused in the previous frame face image as a first speed; acquiring the current speed of the current non-focused target as a second speed; calculating a change ratio between the second speed and the first speed; and adjusting the second predetermined value in real time based on the change ratio; wherein the magnitude of the second predetermined value is proportional to the change ratio.
Further, in one embodiment, in order to accurately calculate the change ratio between the second speed and the first speed, the change ratio is calculated using the following expression:

r = |v₂ − v₁| / v₁

where r is the change ratio, v₂ is the second speed, and v₁ is the first speed.
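A small sketch of this adjustment, assuming the change ratio is read as the relative speed difference between the last-focused speed and the current speed, and that the decay grows linearly with it (both assumptions, including the `gain` factor):

```python
def change_ratio(v1, v2):
    """Relative speed change between the speed at which the target was
    last focused (v1) and its current speed (v2), guarding v1 == 0."""
    return abs(v2 - v1) / max(abs(v1), 1e-6)

def decay_value(base_decay, v1, v2, gain=1.0):
    """Second predetermined value grows with the change ratio: a sudden
    speed change makes the attention score decay faster."""
    return base_decay * (1.0 + gain * change_ratio(v1, v2))
```

A target cruising at constant speed keeps the base decay, while one that suddenly accelerates or brakes decays faster, matching the loss of the driver's anticipation.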
That is, in the present embodiment, the decay of the attention score of a target not focused in the current frame is affected by the degree of change in its motion state: for a target that was once focused, if its subsequent motion state changes significantly, the decay of its "focused state" score should be increased.
In this embodiment, the motion state information of a certain target includes the change in the target's speed and the distance between the target and the host vehicle. The system tracks and records the motion state of each target and adjusts the decay amplitude of each target's attention score in real time.
Further, in one embodiment, in order to refine the target attention score under different conditions and obtain a more accurate concentration detection result, the present embodiment further subdivides the types of the current non-focused targets. The obtaining the concentration detection result of the driver in the preset time period based on the current target of interest and the current target of no interest further includes: acquiring the type of the current non-focused target. On this premise, the reducing the attention score of the current non-focused target by a second predetermined value on the basis of the attention score of that target in the previous frame face image includes: reducing, based on the type of the current non-focused target, the attention score of the current non-focused target by the second predetermined value corresponding to that type on the basis of the attention score of that target in the previous frame face image.
Specifically, the current non-focused targets described in this embodiment include: a current unseen target, a current occluded target, and a current line-of-sight-invalid target of the driver; the second predetermined value in this embodiment includes: a third value, a fourth value, and a fifth value. On this premise, the reducing, based on the type of the current non-focused target, the attention score of the current non-focused target by the second predetermined value corresponding to that type on the basis of the attention score of that target in the previous frame face image includes: when the type of the current non-focused target is the current unseen target, reducing its attention score by the third value on the basis of its attention score in the previous frame face image; when the type is the current occluded target, reducing its attention score by the fourth value on the same basis; when the type is the current line-of-sight-invalid target, reducing its attention score by the fifth value on the same basis; wherein the third value is less than the fourth value, and the fourth value is less than the fifth value.
That is, in the present embodiment, the score decay for the current unseen target should be smaller than that for the current occluded target, and the decay for the current occluded target should be smaller than that for the current line-of-sight-invalid target. This design better matches practical application, so that the obtained concentration detection result is more accurate.
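The three-tier decay can be captured in a small lookup table; the concrete values below are illustrative assumptions, required only to satisfy third value < fourth value < fifth value:

```python
# Illustrative per-type decay values (third < fourth < fifth).
DECAY_BY_TYPE = {
    "unseen": 0.3,        # third value: in view range but not looked at
    "occluded": 0.5,      # fourth value: fully blocked by a nearer target
    "gaze_invalid": 0.8,  # fifth value: the face image itself is invalid
}

def apply_decay(score, target_type):
    """Reduce a non-focused target's score by the value for its type."""
    return score - DECAY_BY_TYPE[target_type]
```

The ordering encodes the intuition that a merely unseen target is still partly anticipated, an occluded one less so, and a line-of-sight-invalid frame gives no information at all.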
Further, in one embodiment, because whether a target is "focused" is determined by whether the driver's current sight line attention range overlaps the three-dimensional position area of the corresponding target, the mutual occlusion relationships between targets need to be considered.
In this embodiment, the system determines the occlusion relationships of the targets around the host vehicle according to three points. First, occlusion is judged in the three-dimensional vehicle-body coordinate system, and for each target the occlusion condition is judged against all nearer targets, since several vehicles may occlude one vehicle, or one vehicle may occlude several vehicles. Second, the logic of the occlusion relationship is to determine whether the driver can see the corresponding target, so the origin of the line-of-sight vector should be the driver's eye position. Third, the three-dimensional cone-shaped region formed by extending outward from the driver's sight origin through all the corner points of a target is that target's "occlusion area"; corner points of other targets that fall within this occlusion area are judged to be occluded. After traversing all targets in this way, if all the corner points of a certain target are occluded, that target's state is updated to occluded.
Based on the above idea, in order to accurately and effectively determine the current occluded target, the current occluded target is determined as follows: modeling each target within the preset range to obtain a model corresponding to each target, wherein each model comprises a plurality of corner points; acquiring the model of a target that can be focused by the driver as the occluding model; connecting the driver's sight origin with all corner points of the occluding model to form an occlusion area; for each model other than the occluding model, detecting whether each corner point of the model is located in the occlusion area; and when all the corner points of a model are located in the occlusion area, determining that the target corresponding to that model is the current occluded target.
That is, the present embodiment further adds a determination of the occluded state of a target. Intuitively, an object may lie within the driver's concentration angle, but if it is completely blocked by a nearer object, the driver cannot notice it, and it should be in the "unfocused" state.
In this embodiment, the current occluded target refers to a target occluded in the current frame image. To simplify the model structure and the corresponding algorithm, each target is abstracted as a cuboid model with 8 corner points, as shown in fig. 6. The occlusion area is obtained from the lines connecting the driver's sight origin with the corner points of the occluding model. For example, in fig. 6, the 4 left corner points of object 3 are located in the occlusion area formed by object 1, and the 4 right corner points of object 3 are located in the occlusion area formed by object 2; that is, all corner points of object 3 are occluded, so object 3 is the current occluded target.
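A simplified two-dimensional (top-view) sketch of the corner-point occlusion test: each target's corners subtend an angular interval from the driver's sight origin, and a target counts as fully occluded when its interval lies inside a nearer occluder's interval. The real check works on the 8 corner points of the cuboid models in three dimensions; the interval simplification below also ignores azimuth wraparound at ±π:

```python
import math

def angular_interval(corners, eye):
    """Azimuth interval (radians) subtended by a target's corner points,
    viewed from the driver's sight origin (2D top-view simplification)."""
    az = [math.atan2(y - eye[1], x - eye[0]) for x, y in corners]
    return min(az), max(az)

def is_fully_occluded(target_corners, occluder_corners, eye):
    """All corner points of the target fall inside the occlusion region
    of the occluder, and the occluder is nearer to the sight origin."""
    t_lo, t_hi = angular_interval(target_corners, eye)
    o_lo, o_hi = angular_interval(occluder_corners, eye)
    t_dist = min(math.hypot(x - eye[0], y - eye[1])
                 for x, y in target_corners)
    o_dist = max(math.hypot(x - eye[0], y - eye[1])
                 for x, y in occluder_corners)
    return o_lo <= t_lo and t_hi <= o_hi and o_dist <= t_dist
```

A box directly behind a nearer, wider box tests as occluded; a box off to the side, outside the occluder's angular interval, does not.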
Further, in one embodiment, to accurately and efficiently determine the current gaze invalidation target, the current gaze invalidation target is determined in the following manner: judging whether the face image under the current frame is valid or not; when the face image under the current frame is invalid, determining that all targets within the preset range under the current frame are the current sight-line invalid targets.
For example, if a camera for capturing a face image of a driver at a certain time is blocked, the face image at that time is invalid, and accordingly, all the targets in the invalid face image are current line-of-sight invalid targets.
Based on the steps S101-S103, the method can solve the technical problem that the existing driving behavior detection method is not fine enough and cannot better meet the auxiliary driving requirement.
According to the technical scheme provided by the embodiment of the invention, the current sight line attention range of the driver is obtained based on the collected face image of the driver; the current focused target of the driver and the current non-focused target within the preset range around the host vehicle are obtained based on the current sight line attention range; and the concentration detection result of the driver in the preset time period is obtained based on the current focused target and the current non-focused target. The system can thus automatically detect the driver's current focused and non-focused targets at each moment, and on that basis can capture the driver's driving behavior more accurately and finely and further infer the driver's current action intention, thereby further optimizing the driving assistance function.
The embodiment of the invention provides a method for judging driver concentration based on line of sight, which, from single-frame line-of-sight information of the driver, realizes the function of judging the driver's degree of attention to all objects outside the cabin over a continuous period. The strategy can refine the attention state of each object, rather than merely distinguishing whether the driver is distracted. It can be provided as an atomic capability to multiple driving-assistance applications for judging whether a certain object is in a state of continuous driver attention, and multiple driving-assistance functions can combine this state to make differentiated strategies and optimize overall precision and recall performance.
It should be noted that, although the foregoing embodiments describe the steps in a specific order, it will be understood by those skilled in the art that, in order to achieve the effects of the present invention, the steps are not necessarily performed in such an order, and may be performed simultaneously (in parallel) or in other orders, and these variations are within the scope of the present invention.
The user information (including but not limited to user equipment information, user personal information, object information corresponding to vehicle usage data, etc.) and data (including but not limited to data for analysis, stored data, displayed data, vehicle usage data, etc.) according to the present embodiment are both information and data authorized by the user or sufficiently authorized by each party. The data acquisition, collection and other actions involved in the embodiment are all executed after the authorization of the user and the object or after the full authorization of all the parties.
Further, the invention also provides a driving behavior detection device.
Referring to fig. 8, fig. 8 is a main block diagram of a driving behavior detection device according to an embodiment of the present invention. As shown in fig. 8, the driving behavior detection device in the embodiment of the invention mainly includes a line-of-sight range acquisition unit 11, a target acquisition unit 12, and a detection unit 13. Wherein,
a sight line range acquisition unit 11 for acquiring a current sight line attention range of the driver based on the acquired face image of the driver;
a target acquiring unit 12 for acquiring a current target of interest of the driver and a current target of no interest located within a preset range around the host vehicle based on the current line-of-sight attention range;
a detection unit 13, configured to obtain a concentration condition detection result of the driver in a preset time period based on the current target of interest and the current target of no interest.
In this embodiment, the detecting unit 13 includes:
an attention score acquisition unit configured to perform, for each frame of the face image within the preset period, the following operations to obtain the attention score of each target within the preset range: acquiring the focused target of the driver in the frame face image as the current focused target, and increasing the attention score of the current focused target by a first predetermined value on the basis of the attention score of that target in the previous frame face image; acquiring the unfocused target of the driver within the preset range in the frame face image as the current unfocused target, and reducing the attention score of the current unfocused target by a second predetermined value on the basis of the attention score of that target in the previous frame face image;
And the concentration result acquisition unit is used for acquiring a concentration condition detection result of the driver in the preset time period based on the attention score of each target.
Further, in one embodiment, the detecting unit 13 further includes:
a motion information acquisition unit for acquiring motion state information of each target;
and the adjusting unit is used for adjusting the first preset numerical value and/or the second preset numerical value in real time based on the motion state information of each target.
In this embodiment, the motion state information of each target includes: the distance between the current attention target and the vehicle; the adjusting unit includes:
the first numerical value adjusting unit is used for adjusting the first preset numerical value in real time based on the distance between the current attention target and the vehicle; wherein the first predetermined value is inversely proportional to the distance.
In this embodiment, the motion state information of each target includes: speed information of the current non-focused target; the adjusting unit includes:
and the second numerical value adjusting unit is used for adjusting the second preset numerical value in real time based on the speed information of the current non-focused target.
In this embodiment, the second value adjusting unit adjusts the second predetermined value in real time by:
judging whether the current non-focused target was focused by the driver in the previous frame face image;
when the current non-focused object is focused by the driver in the last frame face image, acquiring the speed of the current non-focused object focused in the last frame face image as a first speed;
acquiring the current speed of the current non-focused target as a second speed;
calculating a ratio of change between the second speed and the first speed;
adjusting the second predetermined value in real time based on the change ratio; wherein the second predetermined value is proportional to the ratio of change.
In this embodiment, the detecting unit 13 further includes:
a target type acquisition unit, configured to acquire a type of the current non-focused target;
the attention score obtaining unit is further configured to reduce, based on the type of the current non-attention target, the attention score of the current non-attention target by a second predetermined value corresponding to the type on the basis of the attention score of the non-attention target in the previous frame image.
In this embodiment, the currently non-focused target includes: a current unseen target, a current blocked target, and a current line-of-sight invalid target; the second predetermined value comprises: third, fourth and fifth values; the attention score acquisition unit reduces a second predetermined value corresponding to the target type in the following manner:
when the type of the current non-focused target is the current unseen target, reducing the attention score of the current non-focused target by the third value on the basis of the attention score of that target in the previous frame face image;
when the type of the current non-focused target is the current blocked target, reducing the focus score of the current non-focused target by the fourth value on the basis of the focus score of the non-focused target in the previous frame of facial image;
when the type of the current non-focused target is the current sight-line invalid target, reducing the focused degree score of the current non-focused target by the fifth numerical value on the basis of the focused degree score of the non-focused target in the previous frame image; wherein the third value is less than the fourth value, and the fourth value is less than the fifth value.
In this embodiment, the current occluded target is determined in the following manner:
modeling each target to obtain a model corresponding to each target; wherein each of the models comprises a plurality of corner points;
acquiring a model of a target which can be focused by the driver as an occlusion model;
acquiring connecting lines of the sight origin of the driver and all corner points of the shielding model to form a shielding area;
for each of the models other than the occlusion model, detecting whether each of the corner points of the model is located in the occlusion region; when all the corner points of the model are located in the shielding area, determining that the target corresponding to the model is the current shielded target.
In the present embodiment, the sight line range-obtaining unit 11 obtains the current sight line attention range of the driver in the following manner:
based on the facial image, obtaining a current sight line three-dimensional vector of the driver;
and obtaining the current sight line attention range of the driver based on the current sight line three-dimensional vector and the preset sight angle.
Further, the apparatus described in this embodiment further includes:
the concentration angle acquisition unit is used for acquiring the current concentration angle of the driver;
The sight line range obtaining unit 11 also obtains the current sight line attention range of the driver in the following manner:
based on the facial image, obtaining a current sight line three-dimensional vector of the driver;
and obtaining the current sight line attention range of the driver based on the current sight line three-dimensional vector and the current attention angle.
In this embodiment, the concentration angle acquisition unit includes:
the vehicle speed acquisition unit is used for acquiring the current vehicle speed of the vehicle;
and the concentration angle acquisition subunit is used for acquiring the current concentration angle based on the current vehicle speed.
In this embodiment, the focus angle obtaining subunit obtains the current focus angle by using the following manner:
inputting the current vehicle speed into a pre-established linear interpolation function so that the linear interpolation function outputs the current concentration angle; the linear interpolation function is established based on a preset maximum concentration angle and a preset minimum concentration angle, and the current vehicle speed is inversely proportional to the current concentration angle.
Further, the focus angle obtaining subunit further obtains the current focus angle by adopting the following method:
judging whether the current focus angle is larger than the preset maximum focus angle or not;
When the current focus angle is larger than the preset maximum focus angle, acquiring the preset maximum focus angle as the current focus angle;
judging whether the current concentration angle is smaller than the preset minimum concentration angle or not;
and when the current focus angle is smaller than the preset minimum focus angle, acquiring the preset minimum focus angle as the current focus angle.
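The interpolation-plus-clamping steps above can be sketched as one function; the speed and angle constants below are illustrative assumptions, with only the inverse relationship and the clamping to the preset maximum/minimum angles taken from the text:

```python
def concentration_angle(speed_kph, min_speed=30.0, max_speed=120.0,
                        max_angle_deg=60.0, min_angle_deg=20.0):
    """Linearly interpolate the concentration angle from vehicle speed:
    the faster the vehicle, the narrower the angle, clamped to the
    preset [min_angle_deg, max_angle_deg] range."""
    if speed_kph <= min_speed:
        return max_angle_deg  # clamp: slow driving, widest angle
    if speed_kph >= max_speed:
        return min_angle_deg  # clamp: fast driving, narrowest angle
    t = (speed_kph - min_speed) / (max_speed - min_speed)
    return max_angle_deg + t * (min_angle_deg - max_angle_deg)
```

This reflects the idea that at higher speeds a driver's effective field of attention narrows, so the gaze cone used for the overlap check should narrow with it.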
In the present embodiment, the target acquiring unit 12 obtains the current attention target of the driver in the following manner:
acquiring a current three-dimensional position area of each target in the preset range;
for each of the targets, the following operations are performed: judging whether the current three-dimensional position area of the target and the current sight line attention range have an overlapping area or not; and when the current three-dimensional position area of the target and the current sight line attention range have overlapping areas, determining that the target is the current attention target of the driver.
In some embodiments, one or more of the sight line range acquisition unit 11, the target acquiring unit 12, and the detection unit 13 may be combined into one module. In one embodiment, the description of their specific functions may refer to steps S101-S103.
The technical principles of this device embodiment and the embodiment of the driving behavior detection method shown in fig. 1, the technical problems solved, and the technical effects produced are similar. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working process and related description of the driving behavior detection device may refer to the description of the embodiments of the driving behavior detection method and will not be repeated herein.
The device in the embodiment of the invention may be a control device formed of various electronic equipment. In some possible implementations, the device may include multiple storage devices and multiple processors. The program for executing the driving behavior detection method of the above-described method embodiment may be divided into a plurality of sub-programs, each of which may be loaded and executed by a processor to perform a different step of the driving behavior detection method. Specifically, the sub-programs may be stored in different memories, and each processor may be configured to execute the programs in one or more memories, so that the processors jointly implement the driving behavior detection method of the above-described method embodiment, that is, each processor executes a different step of the method so as to implement the method collectively.
The plurality of processors may be processors disposed on the same device; for example, the computer device may be a high-performance device, and the plurality of processors may be processors configured on that high-performance device. Alternatively, the plurality of processors may be processors disposed on different devices; for example, the computer device may be a server cluster, and the plurality of processors may be processors on different servers in the cluster.
It will be appreciated by those skilled in the art that the present invention may implement all or part of the methods of the above-described embodiments, and may also do so by means of a computer program instructing relevant hardware. The computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of the above-described method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content included in the computer readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable storage medium does not include electrical carrier signals and telecommunications signals.
Further, the invention also provides a control device. In one control device embodiment according to the present invention, the control device comprises a processor and a storage device. The storage device may be configured to store a program for executing the driving behavior detection method of the above-described method embodiment, and the processor may be configured to execute the program in the storage device, including, but not limited to, the program for executing the driving behavior detection method of the above-described method embodiment. For convenience of explanation, only those portions relevant to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the method portions of the embodiments of the present invention. The control device may be a control device formed of various electronic devices.
Further, the invention also provides a computer readable storage medium. In one embodiment of the computer-readable storage medium according to the present invention, the computer-readable storage medium may be configured to store a program for executing the driving behavior detection method of the above-described method embodiment, which may be loaded and executed by a processor to implement the driving behavior detection method described above. For convenience of explanation, only those portions relevant to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the method portions of the embodiments of the present invention. The computer readable storage medium may be a storage device formed of various electronic devices; optionally, the computer readable storage medium in the embodiments of the present invention is a non-transitory computer readable storage medium.
Further, it should be understood that, since the modules are merely set up to illustrate the functional units of the apparatus of the present invention, the physical devices corresponding to the modules may be the processor itself, or part of the software, part of the hardware, or part of a combination of software and hardware in the processor. Accordingly, the number of individual modules in the figures is merely illustrative.
Those skilled in the art will appreciate that the various modules in the apparatus may be adaptively split or combined. Such splitting or combining of specific modules does not cause the technical solution to deviate from the principle of the present invention, and therefore, the technical solution after splitting or combining falls within the protection scope of the present invention.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will fall within the scope of the present invention.

Claims (16)

1. A driving behavior detection method, characterized in that the method comprises:
acquiring a current sight line attention range of a driver based on the acquired face image of the driver;
obtaining, based on the current sight line attention range, a current attention target of the driver and a current non-attention target located within a preset range around the vehicle;
for each frame of the facial image within a preset period of time, performing the following operations to obtain an attention score for each target within the preset range: acquiring an attention target of the driver in the frame of the facial image as the current attention target, and increasing the attention score of the current attention target by a first preset value on the basis of the attention score of that target in the previous frame of the facial image; and acquiring a non-attention target of the driver within the preset range in the frame of the facial image as the current non-attention target, and reducing the attention score of the current non-attention target by a second preset value on the basis of the attention score of that target in the previous frame of the facial image;
and obtaining the concentration condition detection result of the driver in the preset time period based on the attention score of each target.
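The per-frame bookkeeping in claim 1 can be sketched as a running score table; the concrete first and second preset values below are illustrative assumptions:

```python
def update_attention_scores(scores, focused_ids, all_ids,
                            first_value=1.0, second_value=0.5):
    """One frame of the update: each focused target gains `first_value`,
    every other target within the preset range loses `second_value`,
    starting from its score in the previous frame (default 0.0)."""
    for tid in all_ids:
        prev = scores.get(tid, 0.0)
        if tid in focused_ids:
            scores[tid] = prev + first_value
        else:
            scores[tid] = prev - second_value
    return scores
```

Over the preset time period the scores accumulate frame by frame, so a persistently ignored target drifts toward a low score that can feed the concentration condition detection result.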
2. The driving behavior detection method according to claim 1, wherein the obtaining the concentration condition detection result of the driver in the preset time period based on the current attention target and the current non-attention target further comprises:
acquiring motion state information of each target;
and adjusting the first preset value and/or the second preset value in real time based on the motion state information of each target.
3. The driving behavior detection method according to claim 2, wherein the motion state information of each target includes: the distance between the current attention target and the vehicle; and the adjusting the first preset value and/or the second preset value in real time based on the motion state information of each target includes:
adjusting the first preset value in real time based on the distance between the current attention target and the vehicle; wherein the first preset value is inversely proportional to the distance.
4. The driving behavior detection method according to claim 2, wherein the motion state information of each target includes: speed information of the current non-attention target; and the adjusting the first preset value and/or the second preset value in real time based on the motion state information of each target includes:
adjusting the second preset value in real time based on the speed information of the current non-attention target.
5. The driving behavior detection method according to claim 4, wherein the adjusting the second preset value in real time based on the speed information of the current non-attention target includes:
determining whether the current non-attention target was focused on by the driver in the previous frame of the facial image;
when the current non-attention target was focused on by the driver in the previous frame of the facial image, acquiring the speed of that target at the previous frame of the facial image as a first speed;
acquiring the current speed of the current non-attention target as a second speed;
calculating a change ratio between the second speed and the first speed;
and adjusting the second preset value in real time based on the change ratio; wherein the second preset value is proportional to the change ratio.
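One plausible reading of claim 5, assuming the change ratio is the relative speed change |second speed − first speed| / first speed and the deduction scales linearly with it; the claim only requires proportionality, so both choices are illustrative assumptions:

```python
def adjust_second_value(base_second_value, first_speed, second_speed):
    """Scale the per-frame deduction by the speed change ratio of a target
    that was focused on in the previous frame of the facial image."""
    if first_speed == 0.0:
        return base_second_value  # avoid division by zero; keep the base value
    change_ratio = abs(second_speed - first_speed) / first_speed
    # proportional: a target whose speed changed more loses score faster,
    # so a suddenly accelerating but ignored target is flagged sooner
    return base_second_value * (1.0 + change_ratio)
```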
6. The driving behavior detection method according to claim 1, wherein the obtaining the concentration condition detection result of the driver in a preset period of time based on the current attention target and the current non-attention target further comprises:
acquiring the type of the current non-attention target;
and the reducing the attention score of the current non-attention target by a second preset value on the basis of the attention score of that target in the previous frame of the facial image comprises:
reducing, based on the type of the current non-attention target, the attention score of the current non-attention target by a second preset value corresponding to that type, on the basis of the attention score of that target in the previous frame of the facial image.
7. The driving behavior detection method according to claim 6, wherein the type of the current non-attention target includes: a current unseen target, a current blocked target, and a current line-of-sight invalid target; the second preset value includes a third value, a fourth value and a fifth value; and the reducing, based on the type of the current non-attention target, the attention score of the current non-attention target by a second preset value corresponding to that type, on the basis of the attention score of that target in the previous frame of the facial image, comprises:
when the type of the current non-attention target is the current unseen target, reducing the attention score of the current non-attention target by the third value on the basis of the attention score of that target in the previous frame of the facial image;
when the type of the current non-attention target is the current blocked target, reducing the attention score of the current non-attention target by the fourth value on the basis of the attention score of that target in the previous frame of the facial image;
when the type of the current non-attention target is the current line-of-sight invalid target, reducing the attention score of the current non-attention target by the fifth value on the basis of the attention score of that target in the previous frame of the facial image; wherein the third value is less than the fourth value, and the fourth value is less than the fifth value.
8. The driving behavior detection method according to claim 7, wherein the current blocked target is determined by:
modeling each target to obtain a model corresponding to each target; wherein each of the models comprises a plurality of corner points;
acquiring a model of a target that the driver is focusing on as an occlusion model;
acquiring connecting lines between the driver's sight origin and all corner points of the occlusion model to form an occlusion region;
for each of the models other than the occlusion model, detecting whether each corner point of the model is located in the occlusion region; and when all the corner points of the model are located in the occlusion region, determining that the target corresponding to the model is the current blocked target.
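A simplified two-dimensional (top-down) sketch of the occlusion test. The patent forms a 3D occlusion region from the gaze origin through every corner point of the occlusion model; this illustration replaces it with the angular sector spanned by the occluder's corners plus a distance comparison, and it ignores angle wrap-around at ±π:

```python
import math

def _angle(origin, p):
    return math.atan2(p[1] - origin[1], p[0] - origin[0])

def _dist(origin, p):
    return math.hypot(p[0] - origin[0], p[1] - origin[1])

def is_occluded(origin, occluder_corners, target_corners):
    """A target is treated as the current blocked target when every one of
    its corners lies inside the occluder's angular sector and behind the
    occluder's nearest corner, as seen from the gaze origin."""
    angles = [_angle(origin, c) for c in occluder_corners]
    lo, hi = min(angles), max(angles)
    occluder_near = min(_dist(origin, c) for c in occluder_corners)
    for c in target_corners:
        a = _angle(origin, c)
        if not (lo <= a <= hi and _dist(origin, c) > occluder_near):
            return False  # one visible corner is enough to escape occlusion
    return True
```

A full 3D implementation would test corner containment in the pyramid-like frustum bounded by the gaze origin and the occluder's corner rays, but the all-corners criterion is the same.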
9. The driving behavior detection method according to claim 1, wherein the obtaining the current line-of-sight attention range of the driver based on the collected face image of the driver includes:
based on the facial image, obtaining a current sight line three-dimensional vector of the driver;
and obtaining the current sight line attention range of the driver based on the current sight line three-dimensional vector and the preset sight angle.
10. The driving behavior detection method according to claim 1, characterized in that the method further comprises:
acquiring a current focus angle of the driver;
and the obtaining the current sight line attention range of the driver based on the collected face image of the driver comprises:
obtaining a current sight line three-dimensional vector of the driver based on the facial image;
and obtaining the current sight line attention range of the driver based on the current sight line three-dimensional vector and the current focus angle.
11. The driving behavior detection method according to claim 10, wherein the acquiring the current focus angle of the driver includes:
acquiring the current speed of the vehicle;
and obtaining the current focus angle based on the current vehicle speed.
12. The driving behavior detection method according to claim 11, wherein the obtaining the current focus angle based on the current vehicle speed includes:
inputting the current vehicle speed into a pre-established linear interpolation function so that the linear interpolation function outputs the current focus angle; wherein the linear interpolation function is established based on a preset maximum focus angle and a preset minimum focus angle, and the current vehicle speed is inversely proportional to the current focus angle.
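The interpolation of claim 12 can be sketched as follows; the speed and angle endpoints are illustrative assumptions, chosen so that a higher vehicle speed yields a smaller focus angle, matching the inverse relationship in the claim:

```python
def focus_angle_from_speed(speed, v_min=30.0, v_max=120.0,
                           angle_max=90.0, angle_min=30.0):
    """Linearly interpolate from vehicle speed (km/h) to focus angle (deg):
    at or below v_min the angle is widest, at or above v_max it is narrowest."""
    if speed <= v_min:
        return angle_max
    if speed >= v_max:
        return angle_min
    t = (speed - v_min) / (v_max - v_min)
    return angle_max + t * (angle_min - angle_max)
```

Claim 13 then clamps the output to the preset maximum and minimum focus angles, which this sketch already guarantees by construction.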
13. The driving behavior detection method according to claim 12, wherein the obtaining the current focus angle based on the current vehicle speed further includes:
determining whether the current focus angle is larger than the preset maximum focus angle;
when the current focus angle is larger than the preset maximum focus angle, taking the preset maximum focus angle as the current focus angle;
determining whether the current focus angle is smaller than the preset minimum focus angle;
and when the current focus angle is smaller than the preset minimum focus angle, taking the preset minimum focus angle as the current focus angle.
14. The driving behavior detection method according to claim 1, wherein the obtaining the current attention target of the driver based on the current line-of-sight attention range includes:
acquiring a current three-dimensional position area of each target in the preset range;
for each of the targets, performing the following operations: determining whether the current three-dimensional position area of the target and the current sight line attention range have an overlapping area; and when they have an overlapping area, determining that the target is the current attention target of the driver.
15. A control device comprising a processor and a storage device, the storage device being adapted to store a plurality of program codes, characterized in that the program codes are adapted to be loaded and executed by the processor to perform the driving behavior detection method of any one of claims 1 to 14.
16. A computer readable storage medium, in which a plurality of program codes are stored, characterized in that the program codes are adapted to be loaded and executed by a processor to perform the driving behavior detection method according to any one of claims 1 to 14.
CN202311445561.8A 2023-11-02 2023-11-02 Driving behavior detection method, control device and storage medium Active CN117197786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311445561.8A CN117197786B (en) 2023-11-02 2023-11-02 Driving behavior detection method, control device and storage medium

Publications (2)

Publication Number Publication Date
CN117197786A CN117197786A (en) 2023-12-08
CN117197786B true CN117197786B (en) 2024-02-02

Family

ID=89000130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311445561.8A Active CN117197786B (en) 2023-11-02 2023-11-02 Driving behavior detection method, control device and storage medium

Country Status (1)

Country Link
CN (1) CN117197786B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017204015A (en) * 2016-05-09 2017-11-16 株式会社東海理化電機製作所 Driving assistance device
JP2019087143A (en) * 2017-11-09 2019-06-06 トヨタ自動車株式会社 Driver state detection apparatus
CN110638474A (en) * 2019-09-25 2020-01-03 中控智慧科技股份有限公司 Method, system and equipment for detecting driving state and readable storage medium
CN110928620A (en) * 2019-11-01 2020-03-27 天津卡达克数据有限公司 Method and system for evaluating distraction of automobile HMI design to attract driving attention
CN111709264A (en) * 2019-03-18 2020-09-25 北京市商汤科技开发有限公司 Driver attention monitoring method and device and electronic equipment
CN111976598A (en) * 2020-08-31 2020-11-24 北京经纬恒润科技有限公司 Vehicle blind area monitoring method and system
CN114162130A (en) * 2021-10-26 2022-03-11 东风柳州汽车有限公司 Driving assistance mode switching method, device, equipment and storage medium
CN114831639A (en) * 2022-03-22 2022-08-02 湖北文理学院 Method, device and equipment for detecting driver distraction and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Na et al., "Influence of the installation height and angle of single-column no-parking signs on drivers' gaze characteristics", 《公路交通科技》 (Journal of Highway and Transportation Research and Development), Vol. 40, No. 4, pp. 179-186 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant