CN113313019A - Distracted driving detection method, system and related equipment - Google Patents
- Publication number
- CN113313019A CN113313019A CN202110585285.XA CN202110585285A CN113313019A CN 113313019 A CN113313019 A CN 113313019A CN 202110585285 A CN202110585285 A CN 202110585285A CN 113313019 A CN113313019 A CN 113313019A
- Authority
- CN
- China
- Prior art keywords
- target user
- eye
- area
- information
- safe
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/146—Display means
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Ophthalmology & Optometry (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to the technical field of safe driving, and in particular to a distracted driving detection method, system and related equipment. The method comprises the following steps: acquiring a face image of a target user; determining the target user's head posture information, eye open/closed information and eye gaze movement information from the face image; determining, from the head posture information, whether the target user's head is within a set safe driving area; determining the target user's continuous eye-closure duration from the eye open/closed information; determining, from the eye gaze movement information, whether the target user's gaze area is within a set safe gaze area; and determining that the target user is in a distracted driving state if the target user satisfies one or more of the following conditions: the duration for which the head is outside the safe driving area exceeds a first threshold, the continuous eye-closure duration exceeds a second threshold, or the duration for which the gaze area is outside the safe gaze area exceeds a third threshold.
Description
[ technical field ]
The invention relates to the technical field of safe driving, and in particular to a distracted driving detection method, system and related equipment.
[ background of the invention ]
China has a large population and a high number of vehicles on the road, so the country sees a large number of traffic accidents every year. Accidents caused by fatigued driving and distracted driving account for a considerable share of that total. Detection methods already exist for fatigued driving, but no practical detection method currently exists for distracted driving.
[ summary of the invention ]
In order to solve the above problem, an embodiment of the present invention provides a distracted driving detection method that extracts head posture information, eye open/closed information and eye gaze movement information from a face image of a target user and determines from this information whether the target user is in a distracted driving state.
In a first aspect, an embodiment of the present invention provides a distracted driving detection method, including:
acquiring a face image of a target user;
determining the target user's head posture information, eye open/closed information and eye gaze movement information from the face image;
determining, from the head posture information, whether the target user's head is within a set safe driving area;
determining the target user's continuous eye-closure duration from the eye open/closed information;
determining, from the eye gaze movement information, whether the target user's gaze area is within a set safe gaze area;
determining that the target user is in a distracted driving state if the target user satisfies one or more of the following conditions:
the duration for which the head is outside the safe driving area exceeds a first threshold, the continuous eye-closure duration exceeds a second threshold, or the duration for which the gaze area is outside the safe gaze area exceeds a third threshold.
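As an illustrative sketch only (not the claimed implementation), the one-or-more-conditions decision above can be written as a simple predicate over the three measured durations; the default thresholds mirror the 10 s / 5 s / 10 s example values given later in the description:

```python
def is_distracted(head_outside_s, eyes_closed_s, gaze_outside_s,
                  t1=10.0, t2=5.0, t3=10.0):
    """Return True if any of the three durations exceeds its threshold.

    head_outside_s: seconds the head has been outside the safe driving area
    eyes_closed_s:  seconds of continuous eye closure
    gaze_outside_s: seconds the gaze area has been outside the safe gaze area
    t1, t2, t3:     the first/second/third thresholds (example values)
    """
    return (head_outside_s > t1
            or eyes_closed_s > t2
            or gaze_outside_s > t3)
```

Any single condition exceeding its threshold is sufficient, which matches the "one or more of the following conditions" wording of the claim.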
In this embodiment of the invention, the target user's head posture information, eye open/closed information and eye gaze movement information are determined from the user's face image and used as the basis for the decision. The target user is determined to be in a distracted driving state when the head stays outside the safe driving area for longer than a first threshold, the eyes stay continuously closed for longer than a second threshold, or the gaze area stays outside the safe gaze area for longer than a third threshold. This improves the reliability and accuracy of the distracted-driving decision.
In one possible implementation, determining from the eye gaze movement information whether the target user's gaze area is within the safe gaze area includes:
determining the target user's eye open/closed attribute from the eye open/closed information;
if the eye open/closed attribute is open, treating the target user's eye gaze movement information as valid gaze movement information;
and determining from the valid gaze movement information whether the target user's gaze area is within the safe gaze area.
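A minimal sketch of this open-eye gating, assuming per-frame samples of the form `(eye_open, gaze_vector)` (the sample format is a hypothetical choice for illustration):

```python
def valid_gaze_samples(samples):
    """Keep only gaze vectors captured while the eye is open.

    samples: iterable of (eye_open: bool, gaze_vector) pairs.
    Closed-eye frames carry no usable line-of-sight information,
    so they are discarded before any gaze-area decision.
    """
    return [gaze for eye_open, gaze in samples if eye_open]
```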
In one possible implementation, before the face image of the target user is acquired, the method further includes:
determining whether the target user has set the safe driving area and the safe gaze area;
and if not, sending prompt information to the target user so that the target user sets the safe driving area and the safe gaze area according to the prompt.
In one possible implementation, the method further includes:
acquiring a system sensitivity threshold;
adjusting the sizes of the safe driving area and the safe gaze area according to the system sensitivity threshold;
adjusting the first threshold and the third threshold according to the system sensitivity threshold.
In one possible implementation, the method further includes:
determining, from the face image, whether only the target user's valid face is present in the image;
and if the target user's valid face is absent from the image, or an interfering face is present, determining that the target user is in a distracted driving state.
In one possible implementation, determining the target user's head posture information, eye open/closed information and eye gaze movement information from the face image includes:
recognizing the head posture information, eye open/closed information and eye gaze movement information from the face image using a deep learning model.
In one possible implementation, the face image is an infrared face image captured by an infrared device.
In a second aspect, an embodiment of the present invention provides a distracted driving detection system, including:
an acquisition module, used for acquiring a face image of a target user;
a determining module, used for determining the target user's head posture information, eye open/closed information and eye gaze movement information from the face image;
the determining module is further configured to determine, from the head posture information, whether the target user's head is within a set safe driving area;
the determining module is further configured to determine the target user's continuous eye-closure duration from the eye open/closed information;
the determining module is further configured to determine, from the eye gaze movement information, whether the target user's gaze area is within a safe gaze area;
the determining module is further configured to determine that the target user is in a distracted driving state if the target user satisfies one or more of the following conditions:
the duration for which the head is outside the safe driving area exceeds a first threshold, the continuous eye-closure duration exceeds a second threshold, or the duration for which the gaze area is outside the safe gaze area exceeds a third threshold.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor calls the program instructions to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect.
It should be understood that the technical solutions of the second to fourth aspects of the embodiments of the present invention are consistent with that of the first aspect; the beneficial effects obtained by these aspects and their corresponding possible implementations are similar and are not described again.
[ description of the drawings ]
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are merely some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a distracted driving detection method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another distracted driving detection method according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a distracted driving detection system according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
[ detailed description ]
For better understanding of the technical solutions in the present specification, the following detailed description of the embodiments of the present invention is provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some of the embodiments of the present specification, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the specification. As used in the embodiments of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In the embodiments of the invention, physiological reaction characteristics of the target user, such as head posture information and eye open/closed information, are determined from the target user's face image. The determined head posture information and eye open/closed information are then compared with the set safe driving area to decide whether the target user is in a distracted driving state.
Fig. 1 is a flowchart of a distracted driving detection method according to an embodiment of the present invention. The method can be applied to a mobile phone terminal or a vehicle-mounted terminal. As shown in fig. 1, the method includes the following steps.

Step 101, acquiring a face image of the target user. In some embodiments, to avoid the influence of varying light conditions on the decision and to keep the system usable at night, an infrared device may be used to capture the face image of the target user, yielding an infrared face image. Optionally, the infrared device may be an image-acquisition device such as an infrared camera, for example a mobile phone terminal or vehicle-mounted terminal equipped with an infrared camera.
Step 102, determining the target user's head posture information, eye open/closed information and eye gaze movement information from the face image. The head posture information may include parameters such as head orientation, pitch angle, yaw angle and roll angle. The eye gaze movement information may be any form of information that identifies the direction of the target user's line of sight, such as a gaze vector, or vertical- and horizontal-direction gaze angles.
Step 103, determining from the head posture information whether the target user's head is within the set safe driving area. The safe driving area is preset by the user: during setup, the user holds a normal driving posture for a period of time, the camera of the mobile phone terminal or vehicle-mounted terminal records the user's head posture information over that period, and the safe driving area is determined from the average of those readings. When the target user's head is within the set safe driving area, the head posture meets the requirement for safe driving; when it is outside the area, the user may be driving distracted. Whether the head posture amounts to distracted driving is then decided from how long the head stays outside the safe driving area.
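The safe-driving-area test can be sketched as a per-axis tolerance check around the calibrated mean head pose; modelling the area as a box in (yaw, pitch, roll) space and the tolerance values themselves are assumptions for illustration, since the patent does not fix a geometry:

```python
def head_in_safe_area(pose, center, tol=(15.0, 10.0, 10.0)):
    """Check whether a head pose lies inside the safe driving area.

    pose, center: (yaw, pitch, roll) in degrees; `center` is the mean
    pose recorded during calibration. `tol` gives an illustrative
    per-axis half-width of the area.
    """
    return all(abs(p - c) <= t for p, c, t in zip(pose, center, tol))
```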
Step 104, determining the target user's continuous eye-closure duration from the eye open/closed information. The open/closed information of only one eye may be used here, for example only the right eye or only the left eye, to reduce the amount of information to be processed and speed up the decision. In some embodiments, the open/closed information of both eyes may be used instead, to improve the accuracy of the result.
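The continuous eye-closure duration of step 104 can be tracked with a small per-frame state machine; the timestamped-frame interface is an assumption for illustration:

```python
class EyeClosureTimer:
    """Track continuous eyes-closed duration from per-frame labels.

    update() takes a frame timestamp (seconds) and whether the eye is
    open in that frame, and returns how long the eye has been
    continuously closed; any open frame resets the timer.
    """

    def __init__(self):
        self.closed_since = None  # timestamp when the current closure began

    def update(self, timestamp, eyes_open):
        if eyes_open:
            self.closed_since = None
            return 0.0
        if self.closed_since is None:
            self.closed_since = timestamp
        return timestamp - self.closed_since
```

The returned duration is what gets compared against the second threshold.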
Step 105, determining from the eye gaze movement information whether the target user's gaze area is within the set safe gaze area.
Step 106, determining that the target user is in a distracted driving state if the user satisfies one or more of the following conditions: the duration for which the head is outside the safe driving area exceeds the first threshold, the continuous eye-closure duration exceeds the second threshold, or the duration for which the gaze area is outside the safe gaze area exceeds the third threshold. When the target user's head stays outside the safe driving area for a long time, the current driving posture can be considered abnormal, for example the user looking down at a mobile phone for a long period; it can then be determined that the user is in a distracted driving state. Similarly, when the user's eyes stay closed for a long time, the user can be considered distracted, for example having fallen asleep at the wheel from excessive fatigue.
The safe gaze area may include a dashboard area, a center console area, a gear-shift area, left and right side-mirror areas and a rear-view mirror area. When setting these areas, the user gazes at each area in turn as prompted and holds that gaze for a period of time; the camera of the mobile phone terminal or vehicle-mounted terminal records the user's eye gaze movement information over that period, and the gaze range for each safe gaze area is determined from the average of those readings. For example, when setting the side-mirror areas, the center of the user's line of sight may be determined from the eye gaze movement information, and a region with a 20 cm radius around that center taken as the side-mirror area. Preferably, the first threshold may be 10 seconds, the second threshold 5 seconds, and the third threshold 10 seconds.
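Following the 20 cm side-mirror example, each safe gaze area can be sketched as a disc around its calibrated gaze centre; the 2-D coordinates and region names are illustrative assumptions:

```python
import math

def gaze_region(point, regions):
    """Return the name of the safe gaze region containing `point`, else None.

    point:   (x, y) gaze intersection in some planar coordinate frame (cm)
    regions: {name: (cx, cy, radius)} — centre calibrated per region,
             radius mirroring the 20 cm side-mirror example
    """
    for name, (cx, cy, radius) in regions.items():
        if math.hypot(point[0] - cx, point[1] - cy) <= radius:
            return name
    return None
```

A return value of `None` means the gaze is outside every safe gaze area, which starts (or continues) the third-threshold timer.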
In some embodiments, upon determining that the target user is in a distracted driving state, an alert may be issued to prompt the target user to return from the distracted driving state to a normal driving state.
In some embodiments, whether the eye gaze movement information is valid may be determined from the eye open/closed information, following the flow shown in fig. 2.
In some embodiments, whether the target user's gaze area is within the safe gaze area may also be determined from both the head posture information and the eye gaze movement information. For example, a three-dimensional coordinate system is first constructed, the user's current gaze vector is determined from the head posture information and the eye gaze movement information, and whether the gaze area falls within a set safe gaze area is then determined from the coordinate relationship between the gaze vector and each set safe area.
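One hedged sketch of combining head posture and eye gaze information into a single gaze vector simply adds the head and in-head eye angles before projecting to a unit direction; a production system would compose full rotation matrices, so this additive model is an assumption for illustration:

```python
import math

def gaze_vector(head_yaw, head_pitch, eye_yaw, eye_pitch):
    """Combine head pose and in-head eye angles (degrees) into a unit
    gaze direction (x right, y up, z forward) in the camera frame.

    Treating the total yaw/pitch as the simple sum of head and eye
    angles is a small-angle approximation used only for illustration.
    """
    yaw = math.radians(head_yaw + eye_yaw)
    pitch = math.radians(head_pitch + eye_pitch)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))
```

The resulting vector can then be intersected with the calibrated safe-area geometry to test gaze-area membership.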
In some embodiments, the target user may also be determined to be in a distracted driving state when the user gazes continuously at an area marked as safe, such as the center console area or the dashboard area. For example, when the target user gazes at the dashboard continuously for more than 30 seconds, the user may be determined to be distracted.
In some embodiments, the target user is required to set the safe driving area and safe gaze area before face images are acquired and distracted-driving decisions are made. Because users differ in height and driving habits, the safe driving area and safe gaze area need to be set after the mobile phone terminal or vehicle-mounted terminal starts. Optionally, different users may be identified by information such as an account number; if a user has already set the safe driving area and safe gaze area, the related information can be stored, and the user need not set them again the next time the same vehicle is driven under the same account.
Therefore, after the mobile phone terminal or vehicle-mounted terminal starts, it can be determined whether the target user has set the safe driving area and safe gaze area. If not, prompt information is sent to the user so that the user sets both areas according to the prompt. The prompt may be spoken or textual.
In one specific example, the process of setting the safe driving area and safe gaze areas may be as follows. The system announces by voice, "the safe driving area will be calibrated in 10 seconds, please get ready", and waits 10 seconds for the target user to prepare. The setup itself may last 5 seconds, during which the user holds a normal driving posture; the user's head posture information over those 5 seconds is collected, and its average is taken as the safe driving area. Before the safe driving area is set, the shooting position can first be calibrated to ensure that exactly one face, the target user's, appears in the image-acquisition area of the mobile phone terminal or vehicle-mounted terminal, and that it lies near the center of that area. If the number of captured faces is not 1, or the face is off-center, the user can be prompted to adjust the shooting angle of the terminal. After the safe driving area is set, the user continues with the safe gaze areas: following voice prompts, the user completes the collection of eye gaze movement information for each safe gaze area in turn, and the averages are computed to finish setting each area.
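The averaging step of this calibration example can be sketched as follows; the per-axis tolerances attached to the centre are hypothetical values, since the description only specifies taking the mean of the collected samples:

```python
def calibrate_safe_area(pose_samples, tol=(15.0, 10.0, 10.0)):
    """Turn (yaw, pitch, roll) samples from the 5-second calibration
    window into a safe-area centre plus per-axis tolerances.

    The centre is the per-axis mean of the samples; `tol` is an
    illustrative half-width, not specified by the patent.
    """
    n = len(pose_samples)
    center = tuple(sum(axis) / n for axis in zip(*pose_samples))
    return center, tol
```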
In some embodiments, during driving the user may also manually trigger the setting of the safe driving area or a safe gaze area to update its decision parameters.
In some embodiments, a system sensitivity may also be set; by adjusting it, the user changes the sizes of the safe driving area and safe gaze area as well as the first and third thresholds. Specifically: acquire a system sensitivity threshold, adjust the sizes of the safe driving area and safe gaze area according to it, and adjust the first and third thresholds according to it. The higher the system sensitivity threshold, the smaller the corresponding areas and the smaller the first and third thresholds; the lower the sensitivity threshold, the larger the areas and thresholds. Users may select different sensitivity thresholds to match their driving habits.
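A possible sketch of this sensitivity mapping, assuming a linear scale; the patent only states the direction of the relationship, so the 0-to-1 sensitivity range and the scale factors are illustrative:

```python
def apply_sensitivity(sensitivity, base_radius, base_t1, base_t3):
    """Scale the safe-area size and the first/third thresholds.

    sensitivity in [0, 1]: higher sensitivity -> smaller areas and
    shorter allowed durations (a stricter system). The linear mapping
    below (0 -> 1.5x lenient, 1 -> 0.5x strict) is an assumption.
    """
    scale = 1.5 - sensitivity
    return base_radius * scale, base_t1 * scale, base_t3 * scale
```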
In some embodiments, whether only the target user's valid face is present in the face image may also be determined from the image. If the target user's valid face is absent, or an interfering face is present, the target user is determined to be in a distracted driving state. When the valid face is absent, the user can be understood to have moved their head out of the terminal's image-acquisition area; the driving posture then differs greatly from normal driving and the danger is high, so the user can be determined to be distracted and a warning issued. When an interfering face is present, other users can be considered to be interfering with the target user's (the driver's) driving, for example a front-seat passenger leaning their head in front of the driver's. The passenger then blocks the driver's view, so the driver can be determined to be distracted and a corresponding warning issued.
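This valid-face check can be sketched as follows, with each detected face reduced to an `(identity, is_valid)` pair; a real system would use a face-verification model, so this representation is a stand-in for illustration:

```python
def face_check_distracted(faces, target_id):
    """Return True (distracted) if the target's valid face is missing
    or any interfering face is present.

    faces:     list of (identity, is_valid) pairs from the detector
    target_id: identity of the driver being monitored
    """
    target = [f for f in faces if f[0] == target_id and f[1]]
    others = [f for f in faces if f[0] != target_id]
    return len(target) != 1 or bool(others)
```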
In some embodiments, a deep learning model may be used to recognize the head posture information, eye open/closed information and eye gaze movement information from the face image. Deep learning technology is now mature, so techniques such as convolutional neural networks can be used to extract, for example, the head posture information from the face image.
Corresponding to the distracted driving detection method above, an embodiment of the present invention provides a distracted driving detection system. As shown in fig. 3, the system includes an acquisition module 301 and a determining module 302.
An acquisition module 301, configured to acquire a face image of a target user.
A determining module 302, configured to determine the target user's head posture information, eye open/closed information and eye gaze movement information from the face image.
The determining module 302 is further configured to determine, from the head posture information, whether the target user's head is within the set safe driving area.
The determining module 302 is further configured to determine the target user's continuous eye-closure duration from the eye open/closed information.
The determining module 302 is further configured to determine, from the eye gaze movement information, whether the target user's gaze area is within the safe gaze area.
The determining module 302 is further configured to determine that the target user is in a distracted driving state if the target user satisfies one or more of the following conditions:
the duration for which the head is outside the safe driving area exceeds a first threshold, the continuous eye-closure duration exceeds a second threshold, or the duration for which the gaze area is outside the safe gaze area exceeds a third threshold.
The distracted driving detection system of the embodiment shown in fig. 3 may be used to implement the technical solutions of the method embodiments shown in figs. 1 and 2 of this specification; for its working principle and technical effects, refer to the descriptions in those method embodiments.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 4, the electronic device may include at least one processor and at least one memory communicatively coupled to the processor. The memory stores program instructions executable by the processor, and the processor calls these instructions to perform the distracted driving detection method of the embodiments shown in figs. 1 and 2 of this specification.
As shown in fig. 4, the electronic device is in the form of a general-purpose computing device. Its components may include, but are not limited to: one or more processors 410, a communication interface 420, a memory 430, and a communication bus 440 connecting the system components (including the memory 430, the communication interface 420 and the processor 410).
Electronic devices typically include a variety of computer system readable media. Such media may be any available media that is accessible by the electronic device and includes both volatile and nonvolatile media, removable and non-removable media.
A program/utility having a set (at least one) of program modules may be stored in the memory 430. Such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a networking environment. The program modules generally carry out the functions and/or methodologies of the embodiments described herein.
The processor 410 executes various functional applications and data processing by executing programs stored in the memory 430, for example, implementing the distracted driving detection method provided by the embodiments shown in fig. 1 to 2 of the present specification.
The embodiment of the present specification provides a computer-readable storage medium, which stores computer instructions, and the computer instructions enable the computer to execute the distracted driving detection method provided by the embodiment shown in fig. 1 to 2 of the present specification.
The computer-readable storage medium described above may take any combination of one or more computer-readable media. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the description of the specification, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present specification, "a plurality" means at least two, e.g., two, three, etc., unless explicitly defined otherwise.
Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process. Alternate implementations are included within the scope of the preferred embodiments of the present specification, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present specification.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "upon determining", "in response to determining", "upon detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)", depending on the context.
It should be noted that the apparatuses referred to in the embodiments of the present disclosure may include, but are not limited to, a Personal Computer (PC), a Personal Digital Assistant (PDA), a wireless handheld apparatus, a Tablet Computer, a mobile phone, an MP3 player, an MP4 player, and the like.
In the several embodiments provided in this specification, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present description may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to perform some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
Claims (10)
1. A distracted driving detection method, comprising:
acquiring a face image of a target user;
determining the head posture information, the eye opening and closing information and the eye sight line movement information of the target user according to the face image;
determining whether the head of the target user is in a set safe driving area or not according to the head posture information;
determining the continuous eye-closing duration of the target user according to the eye opening and closing information;
determining whether the gazing area of the target user is in a set safe gazing area or not according to the eye sight movement information;
determining that the target user is in a distracted driving state if the target user satisfies one or more of the following conditions:
the duration for which the head is outside the safe driving area is greater than a first threshold, the continuous eye-closing duration is greater than a second threshold, and the duration for which the gazing area is outside the safe gazing area is greater than a third threshold.
2. The method of claim 1, wherein determining whether the gazing area of the target user is within the set safe gazing area according to the eye gaze movement information comprises:
determining an eye opening and closing attribute of the target user according to the eye opening and closing information;
if the eye opening and closing attribute of the target user is eye-open, determining that the eye gaze movement information of the target user is valid eye gaze movement information;
and determining, according to the valid eye gaze movement information, whether the gazing area of the target user is within the safe gazing area.
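The gating step of claim 2 could be sketched as follows — gaze samples are treated as valid only while the eyes are classified as open, so eyelid-occluded gaze estimates never trigger a false "outside safe area" judgment. The function names, the 2D point representation, and the rectangular safe-area model are assumptions made for illustration; the patent does not specify them.

```python
from typing import Optional, Tuple

Point = Tuple[float, float]
Rect = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def valid_gaze_point(eyes_open: bool, gaze_point: Point) -> Optional[Point]:
    """Return the gaze point only if the open/closed-eye attribute is 'open';
    otherwise discard the sample as invalid (per claim 2)."""
    return gaze_point if eyes_open else None


def gaze_in_safe_area(gaze_point: Point, safe_area: Rect) -> bool:
    """Check a 2D gaze point against a rectangular safe gazing area."""
    x, y = gaze_point
    x1, y1, x2, y2 = safe_area
    return x1 <= x <= x2 and y1 <= y <= y2
```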
3. The method of claim 1, wherein prior to obtaining the facial image of the target user, the method further comprises:
determining whether the target user sets the safe driving area and the safe watching area;
and if the target user does not set the safe driving area and the safe watching area, sending prompt information to the target user so that the target user sets the safe driving area and the safe watching area according to the prompt information.
4. The method of claim 1, further comprising:
acquiring a system sensitivity threshold;
adjusting the sizes of the safe driving area and the safe watching area according to the system sensitivity threshold;
adjusting the size of the first threshold and the third threshold according to the system sensitivity threshold.
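Claim 4's sensitivity adjustment could work along these lines — a hypothetical linear scaling in which higher sensitivity shrinks the safe areas and lowers the first and third thresholds, making the detector trigger sooner. The scaling rule and all constants are assumptions for illustration; the patent gives no formula.

```python
from typing import Tuple


def adjust_for_sensitivity(sensitivity: float,
                           base_area_scale: float = 1.0,
                           base_first_threshold: float = 2.0,
                           base_third_threshold: float = 2.0) -> Tuple[float, float, float]:
    """Map a system sensitivity in [0, 1] to an adjusted area scale and
    adjusted first/third time thresholds.

    sensitivity = 0.0 -> original sizes and thresholds;
    sensitivity = 1.0 -> areas and thresholds halved (an illustrative choice).
    """
    if not 0.0 <= sensitivity <= 1.0:
        raise ValueError("sensitivity must be in [0, 1]")
    factor = 1.0 - 0.5 * sensitivity
    return (base_area_scale * factor,
            base_first_threshold * factor,
            base_third_threshold * factor)
```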
5. The method of claim 1, further comprising:
determining, according to the face image, whether only a valid face of the target user is present in the face image;
and if no valid face of the target user is present in the face image, or an interfering face is present in the face image, determining that the target user is in a distracted driving state.
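The decision in claim 5 reduces to a simple predicate over face-detector output. The counts below are hypothetical inputs assumed to come from an upstream face detector; the patent does not describe that interface.

```python
def face_check_distracted(valid_driver_faces: int, interfering_faces: int) -> bool:
    """Per claim 5: report the distracted driving state if no valid driver
    face is found, or if any interfering face is present in the image."""
    return valid_driver_faces == 0 or interfering_faces > 0
```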
6. The method of claim 1, wherein determining the head pose information, the open/closed eye information, and the eye gaze movement information of the target user from the facial image comprises:
and recognizing the head posture information, the open and closed eye information and the eye sight line movement information from the human face image by adopting a deep learning model.
7. The method according to any one of claims 1 to 6, wherein the face image is an infrared face image acquired by an infrared device.
8. A distracted driving detection system, comprising:
the acquisition module is used for acquiring a face image of a target user;
the determining module is used for determining the head posture information, the eye opening and closing information and the eye sight movement information of the target user according to the face image;
the determining module is further configured to determine whether the head of the target user is in a set safe driving area according to the head posture information;
the determining module is further configured to determine the continuous eye-closing duration of the target user according to the eye-opening and closing information;
the determining module is further configured to determine whether the gazing area of the target user is within a safe gazing area according to the eye sight movement information;
the determination module is further configured to determine that the target user is in a distracted driving state if the target user satisfies one or more of the following conditions:
the duration for which the head is outside the safe driving area is greater than a first threshold, the continuous eye-closing duration is greater than a second threshold, and the duration for which the gazing area is outside the safe gazing area is greater than a third threshold.
9. An electronic device, comprising:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110585285.XA CN113313019A (en) | 2021-05-27 | 2021-05-27 | Distracted driving detection method, system and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113313019A true CN113313019A (en) | 2021-08-27 |
Family
ID=77375644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110585285.XA Pending CN113313019A (en) | 2021-05-27 | 2021-05-27 | Distracted driving detection method, system and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113313019A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115097933A (en) * | 2022-06-13 | 2022-09-23 | 华能核能技术研究院有限公司 | Concentration determination method and device, computer equipment and storage medium |
WO2023178714A1 (en) * | 2022-03-25 | 2023-09-28 | 北京魔门塔科技有限公司 | Distraction determination method and apparatus, and storage medium, electronic device and vehicle |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019028798A1 (en) * | 2017-08-10 | 2019-02-14 | 北京市商汤科技开发有限公司 | Method and device for monitoring driving condition, and electronic device |
CN109409259A (en) * | 2018-10-11 | 2019-03-01 | 百度在线网络技术(北京)有限公司 | Drive monitoring method, device, equipment and computer-readable medium |
CN109501807A (en) * | 2018-08-15 | 2019-03-22 | 初速度(苏州)科技有限公司 | Automatic Pilot pays attention to force detection system and method |
CN110390285A (en) * | 2019-07-16 | 2019-10-29 | 广州小鹏汽车科技有限公司 | System for distraction of driver detection method, system and vehicle |
CN111079476A (en) * | 2018-10-19 | 2020-04-28 | 上海商汤智能科技有限公司 | Driving state analysis method and device, driver monitoring system and vehicle |
CN112270283A (en) * | 2020-11-04 | 2021-01-26 | 北京百度网讯科技有限公司 | Abnormal driving behavior determination method, device, equipment, vehicle and medium |
CN112289003A (en) * | 2020-10-23 | 2021-01-29 | 江铃汽车股份有限公司 | Method for monitoring end-of-life driving behavior of fatigue driving and active safe driving monitoring system |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019028798A1 (en) * | 2017-08-10 | 2019-02-14 | 北京市商汤科技开发有限公司 | Method and device for monitoring driving condition, and electronic device |
CN109803583A (en) * | 2017-08-10 | 2019-05-24 | 北京市商汤科技开发有限公司 | Driver monitoring method, apparatus and electronic equipment |
CN109501807A (en) * | 2018-08-15 | 2019-03-22 | 初速度(苏州)科技有限公司 | Automatic Pilot pays attention to force detection system and method |
CN109409259A (en) * | 2018-10-11 | 2019-03-01 | 百度在线网络技术(北京)有限公司 | Drive monitoring method, device, equipment and computer-readable medium |
CN111079476A (en) * | 2018-10-19 | 2020-04-28 | 上海商汤智能科技有限公司 | Driving state analysis method and device, driver monitoring system and vehicle |
CN110390285A (en) * | 2019-07-16 | 2019-10-29 | 广州小鹏汽车科技有限公司 | System for distraction of driver detection method, system and vehicle |
CN112289003A (en) * | 2020-10-23 | 2021-01-29 | 江铃汽车股份有限公司 | Method for monitoring end-of-life driving behavior of fatigue driving and active safe driving monitoring system |
CN112270283A (en) * | 2020-11-04 | 2021-01-26 | 北京百度网讯科技有限公司 | Abnormal driving behavior determination method, device, equipment, vehicle and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210009150A1 (en) | Method for recognizing dangerous action of personnel in vehicle, electronic device and storage medium | |
WO2020078465A1 (en) | Method and device for driving state analysis, driver monitoring system and vehicle | |
US20200334477A1 (en) | State estimation apparatus, state estimation method, and state estimation program | |
KR101868597B1 (en) | Apparatus and method for assisting in positioning user`s posture | |
TW202036465A (en) | Method, device and electronic equipment for monitoring driver's attention | |
CN113313019A (en) | Distracted driving detection method, system and related equipment | |
BRPI0712837A2 (en) | Method and apparatus for determining and analyzing a location of visual interest. | |
US11112602B2 (en) | Method, apparatus and system for determining line of sight, and wearable eye movement device | |
US20180229654A1 (en) | Sensing application use while driving | |
CN110341617B (en) | Eyeball tracking method, device, vehicle and storage medium | |
CN110155072B (en) | Carsickness prevention method and carsickness prevention device | |
Varma et al. | Accident prevention using eye blinking and head movement | |
CN113491519A (en) | Digital assistant based on emotion-cognitive load | |
CN109291794A (en) | Driver status monitoring method, automobile and storage medium | |
EP3440592B1 (en) | Method and system of distinguishing between a glance event and an eye closure event | |
US20220036101A1 (en) | Methods, systems and computer program products for driver monitoring | |
EP3857442A1 (en) | Driver attention state estimation | |
CN112083795A (en) | Object control method and device, storage medium and electronic equipment | |
JP2017091013A (en) | Driving support device | |
CN113744499B (en) | Fatigue early warning method, glasses, system and computer readable storage medium | |
CN116883977A (en) | Passenger state monitoring method and device, terminal equipment and vehicle | |
JP2019006183A (en) | Steering support device, steering support method, and program | |
JP7046748B2 (en) | Driver status determination device and driver status determination method | |
CN115643483A (en) | Terminal equipment control method and device, readable storage medium and terminal equipment | |
CN115830579A (en) | Driving state monitoring method and system and vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210827 |