WO2023103708A1 - Automatic calibration method and apparatus for a distraction area, road vehicle, and electronic device - Google Patents

Automatic calibration method and apparatus for a distraction area, road vehicle, and electronic device - Download PDF

Info

Publication number
WO2023103708A1
Authority
WO
WIPO (PCT)
Prior art keywords
distraction
area
driver
angle
normal
Prior art date
Application number
PCT/CN2022/131200
Other languages
English (en)
French (fr)
Inventor
戴海能
王进
石屿
Original Assignee
虹软科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 虹软科技股份有限公司
Publication of WO2023103708A1 publication Critical patent/WO2023103708A1/zh

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions

Definitions

  • The present disclosure relates to the technical field of information processing, and in particular to an automatic calibration method and apparatus for a distraction area, a road vehicle, and an electronic device.
  • During driving, the driver is often disturbed by fatigue, external events, and the like, and falls into a distracted state that can easily cause traffic accidents. It is therefore necessary to monitor whether the driver is distracted, and before such monitoring can be performed, the distraction area of each vehicle must be located quickly. At present, calibration of the distraction area is easily affected by differences between drivers (changes in driving posture or a change of driver), driving habits, and the model of each vehicle, which leads to large calibration errors.
  • Current distraction-area calibration methods usually demarcate the distraction area with a fixed area threshold, so the calibrated distraction area is prone to errors when the driving posture changes or the driver is replaced.
  • One current calibration method acquires the driver's eye-movement behavior, detects the driver's distraction state in real time, and issues early warnings for distracted behavior during driving, effectively improving road traffic safety.
  • However, this method relies on traditional machine learning; it is affected by lighting and by individual driver differences, so its accuracy is poor, and because it does not handle the special scenarios of a real vehicle, it produces many false detections.
  • The present disclosure provides an automatic calibration method and apparatus for a distraction area, a road vehicle, and an electronic device, so as to at least solve the technical problem in the related art that the distraction area changes with driving posture or a change of driver while distraction detection still uses a fixed distraction area, which leads to false detections.
  • According to one aspect, an automatic calibration method for a distraction area is provided, including: collecting multiple face images of the driver in the current vehicle within a preset time period; determining, in combination with the multiple face images, the normal driving angle of the driver in the current vehicle; and calibrating the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and a predetermined critical distraction deflection angle of the current vehicle.
  • Optionally, determining the normal driving angle of the driver in the current vehicle in combination with the multiple face images includes: determining the abnormal driving state of the driver in combination with abnormal driving information, and removing the images corresponding to the abnormal driving state from all of the face images to obtain a normal driving image set; and computing statistics over the normal driving image set to update the normal driving angle.
  • Optionally, before determining the normal driving angle of the driver in the current vehicle in combination with the multiple face images, the method further includes initializing the normal driving angle, either with a factory preset value or with the first line-of-sight angle measured when the driver gazes at a first marked point.
  • Optionally, the abnormal driving information includes at least one of the following: low vehicle speed, turn-signal triggering, distraction deflection, and grip strength.
  • Optionally, determining the abnormal driving state of the driver in combination with low vehicle speed includes: collecting the speed of the current vehicle; and if the speed is lower than a preset speed threshold, determining that the driver is in an abnormal driving state.
  • Optionally, determining the abnormal driving state of the driver in combination with turn-signal triggering includes: collecting the trigger state of the turn signal of the current vehicle; if the trigger state indicates that the turn signal has not been triggered, determining that the driver is in a normal driving state; and if the trigger state indicates that the turn signal has been triggered, determining that the current vehicle is turning and that the driver is in an abnormal driving state.
  • Optionally, determining the abnormal driving state of the driver in combination with distraction deflection includes: collecting the driver's face and line-of-sight angles; counting the duration for which the driver's face and line-of-sight angles stay within a preset abnormal driving area; and if that duration reaches a first duration threshold, determining that the driver is in an abnormal driving state.
  • Optionally, determining the abnormal driving state of the driver in combination with grip strength includes: collecting the driver's grip strength on the steering wheel; and if the grip strength is lower than a preset grip-strength threshold, determining that the driver is in an abnormal driving state.
  • Optionally, computing statistics over the normal driving image set to update the normal driving angle includes: passing each image in the normal driving image set through face-angle and line-of-sight-angle models to output the normal face and line-of-sight angle values corresponding to each image; and aggregating all of the normal face and line-of-sight angle values to update the normal driving angle.
  • Optionally, predetermining the critical distraction deflection angle of the current vehicle includes: marking a non-distraction labeling area and a distraction labeling area inside the current vehicle according to a preset region of interest, where the non-distraction labeling area includes a normal-gaze marker point and the distraction area includes at least boundary marker points; collecting images of multiple drivers gazing toward the normal-gaze marker point and the boundary marker points, respectively, to obtain normal-gaze images and distracted-gaze images; and analyzing the normal-gaze images and the distracted-gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle.
  • Optionally, marking the non-distraction labeling area and the distraction labeling area inside the current vehicle according to the preset region of interest includes: characterizing the preset region of interest as the non-distraction labeling area inside the current vehicle; determining the center point of the non-distraction labeling area to obtain the normal-gaze marker point; and determining the boundaries of the non-distraction labeling area and, taking each boundary as an edge, determining a distraction labeling area outside the non-distraction labeling area, where any point on a boundary serves as a boundary marker point.
  • Optionally, analyzing the normal-gaze images and the distracted-gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle includes: analyzing the normal-gaze images to determine the line-of-sight angles when gazing at the normal-gaze marker point, and obtaining a first normal driving angle from the distribution of those angles; analyzing all of the distracted-gaze images to obtain the distribution of critical distracted-driving angles for each distraction area, and computing the mean critical distracted-driving angle of each distraction area from that distribution; and computing the difference between the first normal driving angle and each mean critical distracted-driving angle to obtain all of the critical distraction deflection angles of the current vehicle.
  • Optionally, calibrating the driver's non-distraction area and distraction area in the current vehicle includes: adding the critical distraction deflection angles of the current vehicle to the normal driving angle to obtain the critical positions of the distraction-area boundary; and marking the area enclosed by the critical positions of the boundary as the non-distraction area and the area outside it as the distraction area.
  • According to another aspect, an automatic calibration apparatus for a distraction area is provided, including: an acquisition unit configured to collect multiple face images of the driver in the current vehicle within a preset time period; a determination unit configured to determine, in combination with the multiple face images, the normal driving angle of the driver in the current vehicle; and a calibration unit configured to calibrate the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
  • Optionally, the determination unit includes: a first determination module configured to determine the abnormal driving state of the driver in combination with the abnormal driving information and to remove the images corresponding to the abnormal driving state from all of the face images to obtain a normal driving image set; and an update module configured to compute statistics over the normal driving image set and update the normal driving angle.
  • Optionally, the automatic calibration apparatus further includes an initialization unit configured to initialize the normal driving angle before the normal driving angle of the driver in the current vehicle is determined in combination with the multiple face images; the initialization unit includes: a first initialization module configured to initialize the normal driving angle with a factory preset value; or a second initialization module configured to initialize the normal driving angle with the first line-of-sight angle measured when the driver gazes at the first marked point.
  • Optionally, the abnormal driving information includes at least one of the following: low vehicle speed, turn-signal triggering, distraction deflection, and grip strength.
  • Optionally, the first determination module includes: a first collection submodule configured to collect the speed of the current vehicle; and a first determination submodule configured to determine that the driver is in an abnormal driving state when the speed is lower than a preset speed threshold.
  • Optionally, the first determination module includes: a second collection submodule configured to collect the trigger state of the turn signal of the current vehicle; a second determination submodule configured to determine that the driver is in a normal driving state when the trigger state indicates that the turn signal has not been triggered; and a third determination submodule configured to determine that the current vehicle is turning and that the driver is in an abnormal driving state when the trigger state indicates that the turn signal has been triggered.
  • Optionally, the first determination module includes: a third collection submodule configured to collect the driver's face and line-of-sight angles; a first statistics submodule configured to count the duration for which the driver's face and line-of-sight angles stay within the preset abnormal driving area; and a fourth determination submodule configured to determine that the driver is in an abnormal driving state when that duration reaches the first duration threshold.
  • Optionally, the first determination module includes: a fourth collection submodule configured to collect the driver's grip strength on the steering wheel; and a fifth determination submodule configured to determine that the driver is in an abnormal driving state when the grip strength is lower than a preset grip-strength threshold.
  • Optionally, the update module includes: an output submodule configured to pass each image in the normal driving image set through the face-angle and line-of-sight-angle models and output the normal face and line-of-sight angle values corresponding to each image; and a statistics submodule configured to aggregate all of the normal face and line-of-sight angle values and update the normal driving angle.
  • Optionally, the automatic calibration apparatus for a distraction area further includes: an area labeling module configured to mark the non-distraction labeling area and the distraction labeling area inside the current vehicle according to a preset region of interest, where the non-distraction labeling area includes a normal-gaze marker point and the distraction area includes at least boundary marker points; an image acquisition module configured to collect images of multiple drivers gazing toward the normal-gaze marker point and the boundary marker points, respectively, to obtain normal-gaze images and distracted-gaze images; and an image analysis module configured to analyze the normal-gaze images and the distracted-gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle.
  • Optionally, the area labeling module includes: a sixth determination submodule configured to characterize the preset region of interest as the non-distraction labeling area inside the current vehicle; a seventh determination submodule configured to determine the center point of the non-distraction labeling area to obtain the normal-gaze marker point; and an eighth determination submodule configured to determine the boundaries of the non-distraction labeling area and, taking each boundary as an edge, determine a distraction labeling area outside the non-distraction labeling area, where any point on a boundary serves as a boundary marker point.
  • Optionally, the image analysis module includes: an analysis submodule configured to analyze the normal-gaze images, determine the line-of-sight angles when gazing at the normal-gaze marker point, and obtain the first normal driving angle from the distribution of those angles; a first calculation submodule configured to analyze all of the distracted-gaze images, obtain the distribution of critical distracted-driving angles for each distraction area, and compute the mean critical distracted-driving angle of each distraction area from that distribution; and a second calculation submodule configured to compute the difference between the first normal driving angle and each mean critical distracted-driving angle to obtain all of the critical distraction deflection angles of the current vehicle.
  • Optionally, the calibration unit includes: a first calibration module configured to add the critical distraction deflection angles of the current vehicle to the normal driving angle to obtain the critical positions of the distraction-area boundary; and a second calibration module configured to mark the area enclosed by the critical positions of the boundary as the non-distraction area and the area outside it as the distraction area.
  • According to another aspect, a road vehicle is provided, including: a vehicle-mounted camera, installed at the front windshield of the vehicle and configured to collect road images of the road ahead; and a vehicle-mounted control unit, connected to the vehicle-mounted camera and configured to execute the automatic calibration method for a distraction area described in any one of the above.
  • According to another aspect, a vehicle-mounted electronic device is provided, including: a processor; and a memory configured to store instructions executable by the processor, wherein the processor is configured to execute the automatic calibration method for a distraction area described in any one of the above by executing the executable instructions.
  • According to another aspect, a computer-readable storage medium is provided, which includes a stored computer program, wherein when the computer program runs, the device on which the computer-readable storage medium resides is controlled to execute the automatic calibration method for a distraction area described in any one of the above.
  • In the present disclosure, automatic calibration can be used to track different individual drivers and driving states; compared with a fixed-threshold method, this improves the accuracy of distraction detection and effectively reduces false detections.
  • In the present disclosure, a multi-information fusion scheme is adopted to filter out abnormal driving states during driving and thereby avoid false detections.
  • In the present disclosure, multiple face images of the driver in the current vehicle are collected within a preset time period, the normal driving angle of the driver in the current vehicle is determined in combination with the multiple face images, and the driver's non-distraction area and distraction area in the current vehicle are calibrated based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
  • FIG. 1 is a flowchart of an optional automatic calibration method for a distraction area according to an embodiment of the present disclosure.
  • FIG. 2 is a schematic diagram of an optional preset abnormal driving area according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of an optional calibrated distraction area according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of an optional automatic calibration apparatus for a distraction area according to an embodiment of the present disclosure.
  • The present disclosure can be applied to various types of vehicles (cars, buses, motorcycles, airplanes, trains, and so on); this embodiment uses a road vehicle as an example to illustrate detection of the vehicle's distraction area and of the driver's distraction state.
  • Vehicle types include, but are not limited to, cars, trucks, sports cars, SUVs, and MINI cars.
  • The present disclosure adapts to various vehicle positions and areas and calibrates the distraction area automatically; at the same time, it analyzes whether the driver is distracted based on the calibrated distraction area and tracks different individual drivers and driving states, which, compared with a fixed-threshold method, improves the accuracy of distraction detection and effectively reduces false detections.
  • The present disclosure can adaptively adjust the distraction area to avoid false distraction detections caused by changes in driving posture or a change of driver. The present disclosure is described in detail below in conjunction with the embodiments.
  • According to an embodiment of the present disclosure, an embodiment of an automatic calibration method for a distraction area is provided. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system such as one running a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one given here.
  • FIG. 1 is a flowchart of an optional automatic calibration method for a distraction area according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes the following steps:
  • Step S102: collect multiple face images of the driver in the current vehicle within a preset time period;
  • Step S104: determine the normal driving angle of the driver in the current vehicle in combination with the multiple face images;
  • Step S106: calibrate the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
  • Through the above steps, multiple face images of the driver in the current vehicle can be collected within the preset time period, and the normal driving angle of the driver in the current vehicle can be determined from the multiple face images.
  • The predetermined critical distraction deflection angle of the current vehicle is then used, together with the normal driving angle, to calibrate the driver's non-distraction area and distraction area in the current vehicle.
  • In this embodiment, multiple face images of the driver in the current vehicle can be analyzed to automatically calibrate the driver's non-distraction area and distraction area in the current vehicle, with follow-up processing for different individual drivers and driving states. This improves the calibration accuracy of the distraction area and, in turn, the detection accuracy of the distraction state, thereby solving the technical problem in the related art that the distraction area changes with driving posture or a change of driver while distraction detection still uses a fixed distraction area, which leads to false detections.
  • Step S102: collect multiple face images of the driver in the current vehicle within a preset time period.
  • Because the shape and size of the preset deflection area, and the driver's driving attributes (body shape and driving angle), differ between vehicles and drivers, the camera module installed in the vehicle collects multiple face images of the driver within a preset time period and recovers, in real time, the information needed to accurately calibrate the distraction area from these face images.
  • The preset time period in this embodiment can be set in advance; its specific length depends on the actual detection-accuracy requirements, for example 1 minute or 30 seconds.
  • The camera module collects a video stream, and the application processes overlapping frames within a sliding window of the preset time length: after the current frame is processed, the next frame is added and the earliest frame is deleted to update the image set, which is then processed again, so that calibration is performed automatically and in real time.
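  • As an illustration of the sliding-window update described above, the following is a minimal sketch; the class name, the 60-second window, and the 30 fps frame rate are assumptions for illustration, not values taken from the patent.

```python
from collections import deque

class SlidingFrameWindow:
    """Keeps the most recent face images within a preset time window (assumed 60 s at 30 fps)."""

    def __init__(self, window_seconds: float = 60.0, fps: float = 30.0):
        self.max_frames = int(window_seconds * fps)
        self.frames = deque()  # each entry: (timestamp, face_image)

    def push(self, timestamp: float, face_image) -> None:
        # Add the newest frame and drop the earliest one once the window is full,
        # so the image set always covers roughly the preset time period.
        self.frames.append((timestamp, face_image))
        while len(self.frames) > self.max_frames:
            self.frames.popleft()

    def image_set(self):
        # The current image set that is re-processed after every update.
        return [img for _, img in self.frames]
```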
  • At least one camera module may be installed in the vehicle, close to the driving area of the vehicle.
  • The specific installation location of the camera module is not limited, as long as images including the driver's face can be collected.
  • The types of camera module include, but are not limited to, an ordinary camera, a depth camera, an infrared camera, and the like; the types of face image collected include, but are not limited to, ordinary RGB images, depth images, thermal images, and the like. The description here is only illustrative.
  • Step S104: determine the normal driving angle of the driver in the current vehicle in combination with the multiple face images.
  • Determining the driver's normal driving angle in the current vehicle in combination with the multiple face images includes: determining the driver's abnormal driving state in combination with abnormal driving information, removing the images corresponding to the abnormal driving state from all of the face images to obtain a normal driving image set, and then computing statistics over the normal driving image set to update the normal driving angle.
  • The normal driving angle refers to the driver's line-of-sight angle while maintaining focused driving in the current vehicle, where focused driving means that the driver's gaze stays straight ahead, with no behavior such as making a phone call, turning the head, or looking at the rear-view mirror. Furthermore, the normal driving angle changes as the scene changes: when driving in clear weather it stays straight ahead with a wide field of view, whereas when entering a tunnel or driving in fog the driver unconsciously lowers the head to focus on the immediate surroundings, the field of view shrinks, and the normal driving angle is lower than the straight-ahead position used in clear weather. This application therefore collects the video stream in real time and processes it with a sliding window so that the normal driving angle information is updated in real time.
  • This application obtains the normal driving image set by deleting the images corresponding to abnormal driving states from all of the face images, and then computes and updates the long-term normal driving angle from that set.
  • The abnormal driving information includes at least one of the following: low vehicle speed, turn-signal triggering, distraction deflection, and grip strength. The way each piece of abnormal driving information is used to determine the driver's abnormal driving state is described below; a combined sketch of these checks is given after they have all been described.
  • Determining the abnormal driving state of the driver in combination with low vehicle speed includes: collecting the speed of the current vehicle; and if the speed is lower than a preset speed threshold, determining that the driver is in an abnormal driving state.
  • This application judges whether the vehicle is running at low speed by comparing the vehicle speed with the preset speed threshold.
  • The preset speed threshold can be set in advance, and its specific value is determined by actual user needs. For example, with the preset speed threshold set to 30 km/h, a vehicle speed above 30 km/h is considered a normal-speed running state, while a lower speed is considered an abnormal driving state, and the frames corresponding to the abnormal driving stage are further removed from the image set.
  • Determining the abnormal driving state of the driver in combination with turn-signal triggering includes: collecting the trigger state of the turn signal of the current vehicle; if the trigger state indicates that the turn signal has not been triggered, determining that the driver is in a normal driving state; and if the trigger state indicates that the turn signal has been triggered, determining that the current vehicle is turning and that the driver is in an abnormal driving state.
  • When turning or making a U-turn, the turn signal needs to be on, and the driver needs to look around at the surrounding environment in order to make the right decision.
  • This application judges the driving state by detecting the trigger state of the turn signal: if the turn signal is not triggered, the driver is in a normal driving state; if the turn signal is triggered, the vehicle is in a turning state, and the frames corresponding to this abnormal driving stage are removed from the image set.
  • Determining the abnormal driving state of the driver in combination with distraction deflection includes: collecting the driver's face and line-of-sight angles; counting the time during which the driver's face and line-of-sight angles stay within a preset abnormal driving area; and if that duration reaches the first duration threshold, determining that the driver is in an abnormal driving state.
  • The range of the distraction area is greater than or equal to the preset abnormal driving area, which is set by the user and indicates an area in which the driver is obviously driving while distracted.
  • FIG. 2 is a schematic diagram of an optional preset abnormal driving area according to an embodiment of the disclosure.
  • As shown in FIG. 2, within the area outside the non-distraction area, the region beyond the preset normal driving area is the preset abnormal driving area.
  • The preset abnormal driving area is only a roughly positioned area: when the line of sight falls in this area, the driver is obviously driving while distracted.
  • For example, if the driver's line of sight is drawn to a roadside billboard for a long time while driving, so that the gaze stays in the preset abnormal driving area for an extended period, the driver is determined to be in an abnormal driving state, and the corresponding frames of the abnormal driving state are further removed from the image set.
  • Optionally, determining the abnormal driving state of the driver in combination with grip strength includes: collecting the driver's grip strength on the steering wheel; and if the grip strength is lower than a preset grip-strength threshold, determining that the driver is in an abnormal driving state.
  • In the embodiment of the present disclosure, a sensor is used to obtain the grip-force value, and the grip strength is then used to determine whether the driver is in an abnormal driving state.
  • The grip-strength threshold in this embodiment is preset adaptively according to the model of each vehicle; when the measured grip strength falls below this threshold, the driver is determined to be in an abnormal driving state, and the corresponding frames of the abnormal driving state are further removed from the image set.
  • The embodiment of the present application does not limit the way the above types of abnormal driving information are combined, nor the order of priority among them when filtering the image set.
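  • The four checks above could be fused into a single frame filter along the lines of the sketch below. This is an interpretation rather than the patent's implementation; the field names and thresholds (30 km/h, a 2-second dwell, a nominal grip threshold) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class FrameSignals:
    speed_kmh: float                 # current vehicle speed
    turn_signal_on: bool             # turn-signal trigger state
    gaze_in_abnormal_area_s: float   # time the face/gaze angle has stayed in the preset abnormal driving area
    grip_strength: float             # steering-wheel grip reading

def is_abnormal_frame(sig: FrameSignals,
                      speed_threshold_kmh: float = 30.0,
                      dwell_threshold_s: float = 2.0,
                      grip_threshold: float = 5.0) -> bool:
    """Return True if the frame belongs to an abnormal driving state and should be removed."""
    if sig.speed_kmh < speed_threshold_kmh:                 # low vehicle speed
        return True
    if sig.turn_signal_on:                                  # turning or making a U-turn
        return True
    if sig.gaze_in_abnormal_area_s >= dwell_threshold_s:    # long dwell in the abnormal driving area
        return True
    if sig.grip_strength < grip_threshold:                  # abnormally weak grip on the wheel
        return True
    return False

# Frames flagged as abnormal are excluded; the remaining frames form the normal driving image set.
```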
  • Computing statistics over the normal driving image set to update the normal driving angle includes: passing each image in the normal driving image set through the face-angle and line-of-sight-angle models to output the normal face and line-of-sight angle values for each image; and aggregating all of the normal face and line-of-sight angle values to update the normal driving angle.
  • The face and line-of-sight angle values are obtained by feeding each image into the face-angle and line-of-sight-angle models.
  • The present disclosure does not limit the form or type of the face-angle and line-of-sight-angle models; traditional geometric models or neural-network models may be used.
  • This embodiment can compute and update the normal driving angle over a long period, for example by averaging the driver's normal-driving face and line-of-sight angles over the previous 60 s and updating the normal driving angle with that mean.
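  • A minimal sketch of this statistic follows: it simply averages the yaw and pitch of the frames that survived the abnormal-state filtering. The 60-second horizon and the (yaw, pitch) representation are assumptions taken from the example above.

```python
import numpy as np

def update_normal_driving_angle(normal_frames):
    """normal_frames: list of (yaw_deg, pitch_deg) tuples output by the face/gaze angle models
    for the frames in the normal driving image set (e.g. the last 60 s).
    Returns the updated normal driving angle as a (yaw, pitch) mean, or None if the set is empty."""
    if not normal_frames:
        return None  # keep the previous (or initial) value if nothing survived the filter
    angles = np.asarray(normal_frames, dtype=float)  # shape (N, 2)
    yaw_mean, pitch_mean = angles.mean(axis=0)
    return float(yaw_mean), float(pitch_mean)
```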
  • Before determining the normal driving angle of the driver in the current vehicle in combination with the multiple face images, the method further includes initializing the normal driving angle, either with a factory preset value or with the first line-of-sight angle measured when the driver gazes at a first marked point.
  • The first marked point in this embodiment may be a single point, or the average of multiple points.
  • For example, the front windshield of the current vehicle may be used as the first marked point, or the average of any number of points within a specific area of the front windshield may be used as the first marked point.
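  • The two initialization options could be sketched as follows; the function name and the factory preset value shown are hypothetical placeholders, not values from the patent.

```python
def init_normal_driving_angle(first_gaze_angles=None,
                              factory_preset=(0.0, -5.0)):
    """Initialize the normal driving angle before enough driving data has been gathered.
    first_gaze_angles: optional list of (yaw, pitch) samples taken while the driver
    gazes at the first marked point (a single point or several points to be averaged)."""
    if first_gaze_angles:
        yaws = [a[0] for a in first_gaze_angles]
        pitches = [a[1] for a in first_gaze_angles]
        return sum(yaws) / len(yaws), sum(pitches) / len(pitches)
    return factory_preset  # fall back to the factory preset value
```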
  • Step S106: calibrate the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
  • Optionally, predetermining the critical distraction deflection angle of the current vehicle includes: marking a non-distraction labeling area and a distraction labeling area inside the current vehicle according to a preset region of interest, where the non-distraction labeling area includes the normal-gaze marker point and the distraction area includes at least the boundary marker points; collecting images of multiple drivers gazing toward the normal-gaze marker point and the boundary marker points, respectively, to obtain normal-gaze images and distracted-gaze images; and analyzing the normal-gaze images and the distracted-gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle.
  • Marking the non-distraction labeling area and the distraction labeling area inside the current vehicle according to the preset region of interest includes: characterizing the preset region of interest as the non-distraction labeling area inside the current vehicle; determining the center point of the non-distraction labeling area to obtain the normal-gaze marker point; and determining the boundaries of the non-distraction labeling area and, taking each boundary as an edge, determining a distraction labeling area outside the non-distraction labeling area, where any point on a boundary serves as a boundary marker point.
  • FIG. 3 is a schematic diagram of an optional calibrated distraction area according to an embodiment of the present disclosure.
  • As shown in FIG. 3, a region of interest is delimited in the image of the in-vehicle driving area (the labeled box above the steering wheel in FIG. 3); this region is the non-distraction labeling area.
  • The shape of the region of interest is not limited; it may be a circle, a triangle, a square, and so on, and is generally set at the factory according to the attributes of the vehicle. In FIG. 3, a square is taken as an example.
  • Marker 1 is set in the non-distraction labeling area; it is the center point of the area, that is, the normal-gaze marker point.
  • In FIG. 3, markers 2, 3, 4, and 5 are set on the four boundaries of the non-distraction labeling area and respectively give the top, right, bottom, and left boundary positions of the distraction area; the area outside these boundary positions is the distraction labeling area. In the example of FIG. 3, the markers correspond to the following in-cabin positions:
  • Marker 1: the middle of the front windshield;
  • Marker 2: the upper edge of the front windshield;
  • Marker 3: the right edge of the rear-view mirror;
  • Marker 4: the middle of the steering wheel;
  • Marker 5: the left rear-view mirror.
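  • As a purely illustrative aid (not part of the patent), the marker layout above could be recorded as a small configuration table used when collecting the per-marker image sets; the dictionary name and its fields are hypothetical.

```python
# Hypothetical layout of the five gaze markers used for offline data collection
# (positions from the FIG. 3 example).
GAZE_MARKERS = {
    1: {"position": "middle of the front windshield", "role": "normal-gaze marker point"},
    2: {"position": "upper edge of the front windshield", "role": "top boundary marker"},
    3: {"position": "right edge of the rear-view mirror", "role": "right boundary marker"},
    4: {"position": "middle of the steering wheel", "role": "bottom boundary marker"},
    5: {"position": "left rear-view mirror", "role": "left boundary marker"},
}

# During calibration, images of several drivers gazing at each marker are collected,
# giving one image set per marker id (1 = normal gaze, 2-5 = distracted gaze at a boundary).
```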
  • Analyzing the normal-gaze images and each set of distracted-gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle includes: analyzing the normal-gaze images to determine the line-of-sight angles when gazing at the normal-gaze marker point, and obtaining the first normal driving angle from the distribution of those angles; analyzing all of the distracted-gaze images to obtain the distribution of critical distracted-driving angles for each distraction area, and computing the mean critical distracted-driving angle of each distraction area from that distribution; and computing the difference between the first normal driving angle and each mean critical distracted-driving angle to obtain all of the critical distraction deflection angles of the current vehicle.
  • This embodiment collects multiple images of drivers gazing toward marker positions 1 to 5, feeds the images into the pre-trained face-angle and line-of-sight-angle models, and obtains the face and line-of-sight angle outputs. Specifically, from the image set collected for marker 1, the distribution of line-of-sight angles when gazing at the normal-gaze marker point is obtained and its mean Mean_A1 is computed; Mean_A1 is the first normal driving angle.
  • Similarly, the image sets collected for markers 2 to 5 give the mean critical distracted-driving angles Mean_A2, Mean_A3, ..., Mean_A5. The differences between the first normal driving angle and each mean critical distracted-driving angle are then computed in the yaw and pitch directions, yielding all of the critical distraction deflection angles of the current vehicle.
  • For example, the critical distraction deflection angle at marker 2 is calculated as:
  • Yaw_A2_1 = Yaw(Mean_A2) - Yaw(Mean_A1);
  • Pitch_A2_1 = Pitch(Mean_A2) - Pitch(Mean_A1).
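  • Extending the marker-2 example to all four boundary markers, a minimal sketch of this computation might look as follows; the function name, the data structure, and the use of numpy are assumptions, with the angle samples taken to be (yaw, pitch) outputs of the face-angle and line-of-sight-angle models.

```python
import numpy as np

def critical_distraction_deflections(gaze_samples_by_marker):
    """gaze_samples_by_marker: dict mapping marker id (1..5) to a list of (yaw, pitch)
    samples collected while drivers gaze at that marker.
    Returns a dict mapping each boundary marker id (2..5) to its critical distraction
    deflection angle (delta_yaw, delta_pitch) relative to the first normal driving angle."""
    mean_a1 = np.mean(np.asarray(gaze_samples_by_marker[1], float), axis=0)  # first normal driving angle
    deflections = {}
    for marker_id in (2, 3, 4, 5):
        mean_ak = np.mean(np.asarray(gaze_samples_by_marker[marker_id], float), axis=0)
        # Yaw_Ak_1 = Yaw(Mean_Ak) - Yaw(Mean_A1); Pitch_Ak_1 = Pitch(Mean_Ak) - Pitch(Mean_A1)
        deflections[marker_id] = (float(mean_ak[0] - mean_a1[0]),
                                  float(mean_ak[1] - mean_a1[1]))
    return deflections
```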
  • Because distraction is characterized by the deflection angles of the face and line of sight, there is no need to re-collect data and retrain the models for each vehicle, which gives better applicability.
  • Calibrating the driver's non-distraction area and distraction area in the current vehicle includes: adding the critical distraction deflection angles of the current vehicle to the normal driving angle to obtain the critical positions of the distraction-area boundary; and marking the area enclosed by the critical positions of the boundary as the non-distraction area and the area outside it as the distraction area.
  • After calibration, the driver's sustained distraction state is output: for example, if the gaze stays in the distraction area for 5 consecutive seconds, the driver is in a distracted driving state, otherwise in a normal driving state; when the low-vehicle-speed condition is met or the turn signal is turned on, the driver is considered to be in a non-distracted state.
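  • Putting the calibrated boundary to use at run time could look roughly like the sketch below. It is an interpretation, not the patent's implementation: the 5-second dwell threshold comes from the example above, the suppression during low speed or turning follows the filtering rules described earlier, and the sign conventions (yaw increasing to the right, pitch increasing upward) are assumptions.

```python
def classify_distraction(gaze_yaw, gaze_pitch, normal_angle, deflections,
                         dwell_s, speed_kmh, turn_signal_on,
                         speed_threshold_kmh=30.0, dwell_threshold_s=5.0):
    """Return 'distracted', 'normal', or 'suppressed' for the current frame.
    normal_angle: (yaw, pitch) normal driving angle.
    deflections: calibration output mapping boundary ids 2..5 (top, right, bottom, left)
    to (delta_yaw, delta_pitch).
    dwell_s: how long the gaze has already stayed outside the non-distraction area."""
    if speed_kmh < speed_threshold_kmh or turn_signal_on:
        return "suppressed"            # low speed or turning: treated as non-distracted

    yaw0, pitch0 = normal_angle
    top, right = deflections[2], deflections[3]
    bottom, left = deflections[4], deflections[5]

    # Gaze is inside the calibrated non-distraction area if it stays within the
    # boundary positions obtained by adding the deflections to the normal angle.
    inside = (yaw0 + left[0] <= gaze_yaw <= yaw0 + right[0] and
              pitch0 + bottom[1] <= gaze_pitch <= pitch0 + top[1])
    if inside:
        return "normal"
    return "distracted" if dwell_s >= dwell_threshold_s else "normal"
```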
  • When a distracted driving state is detected, reminder information (for example, a voice reminder or an alarm sound) may be output to the driver.
  • In the embodiment of the present disclosure, a multi-information fusion scheme is adopted to filter out abnormal driving states during driving and avoid false detections; at the same time, automatic calibration is adopted to track different individual drivers and driving states, so that, compared with a fixed-threshold calibration method, this application improves the accuracy of distraction detection and effectively reduces false detections.
  • This embodiment provides an automatic calibration apparatus for a distraction area; each unit included in the apparatus corresponds to an implementation step in the first embodiment above.
  • FIG. 4 is a schematic diagram of an optional automatic calibration apparatus for a distraction area according to an embodiment of the present disclosure.
  • As shown in FIG. 4, the automatic calibration apparatus may include an acquisition unit 41, a determination unit 43, and a calibration unit 45, wherein:
  • the acquisition unit 41 is configured to collect multiple face images of the driver in the current vehicle within a preset time period;
  • the determination unit 43 is configured to determine the normal driving angle of the driver in the current vehicle in combination with the multiple face images;
  • the calibration unit 45 is configured to calibrate the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
  • Through the above automatic calibration apparatus for a distraction area, the acquisition unit 41 can collect multiple face images of the driver in the current vehicle within a preset time period, and the determination unit 43 can determine the driver's normal driving angle in the current vehicle in combination with the multiple face images.
  • The calibration unit 45 then calibrates the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
  • In this embodiment, multiple face images of the driver in the current vehicle can be analyzed to automatically calibrate the driver's non-distraction area and distraction area in the current vehicle, with follow-up processing for different individual drivers and driving states. This improves the calibration accuracy of the distraction area and, in turn, the detection accuracy of the distraction state, thereby solving the technical problem in the related art that the distraction area changes with driving posture or a change of driver while distraction detection still uses a fixed distraction area, which leads to false detections.
  • Optionally, the determination unit includes: a first determination module configured to determine the abnormal driving state of the driver in combination with the abnormal driving information and to remove the images corresponding to the abnormal driving state from all of the face images to obtain a normal driving image set; and an update module configured to compute statistics over the normal driving image set and update the normal driving angle.
  • Optionally, the apparatus further includes an initialization unit configured to initialize the normal driving angle before the driver's normal driving angle in the current vehicle is determined in combination with the multiple face images; the initialization unit includes: a first initialization module configured to initialize the normal driving angle with a factory preset value; or a second initialization module configured to initialize the normal driving angle with the first line-of-sight angle measured when the driver gazes at the first marked point.
  • Optionally, the abnormal driving information includes at least one of the following: low vehicle speed, turn-signal triggering, distraction deflection, and grip strength.
  • Optionally, the first determination module includes: a first collection submodule configured to collect the speed of the current vehicle; and a first determination submodule configured to determine that the driver is in an abnormal driving state when the speed is lower than the preset speed threshold.
  • Optionally, the first determination module includes: a second collection submodule configured to collect the trigger state of the turn signal of the current vehicle; a second determination submodule configured to determine that the driver is in a normal driving state when the trigger state indicates that the turn signal has not been triggered; and a third determination submodule configured to determine that the current vehicle is turning and that the driver is in an abnormal driving state when the trigger state indicates that the turn signal has been triggered.
  • Optionally, the first determination module includes: a third collection submodule configured to collect the driver's face and line-of-sight angles; a first statistics submodule configured to count the duration for which the driver's face and line-of-sight angles stay within the preset abnormal driving area; and a fourth determination submodule configured to determine that the driver is in an abnormal driving state when that duration reaches the first duration threshold.
  • Optionally, the first determination module includes: a fourth collection submodule configured to collect the driver's grip strength on the steering wheel; and a fifth determination submodule configured to determine that the driver is in an abnormal driving state when the grip strength is lower than the preset grip-strength threshold.
  • Optionally, the update module includes: an output submodule configured to pass each image in the normal driving image set through the face-angle and line-of-sight-angle models and output the normal face and line-of-sight angle values corresponding to each image; and a second statistics submodule configured to aggregate all of the normal face and line-of-sight angle values and update the normal driving angle.
  • Optionally, the automatic calibration apparatus for a distraction area further includes: an area labeling module configured to mark the non-distraction labeling area and the distraction labeling area inside the current vehicle according to the preset region of interest, where the non-distraction labeling area includes the normal-gaze marker point and the distraction area includes at least the boundary marker points; an image acquisition module configured to collect images of multiple drivers gazing toward the normal-gaze marker point and the boundary marker points, respectively, to obtain normal-gaze images and distracted-gaze images; and an image analysis module configured to analyze the normal-gaze images and the distracted-gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle.
  • Optionally, the area labeling module includes: a sixth determination submodule configured to characterize the preset region of interest as the non-distraction labeling area inside the current vehicle; a seventh determination submodule configured to determine the center point of the non-distraction labeling area to obtain the normal-gaze marker point; and an eighth determination submodule configured to determine the boundaries of the non-distraction labeling area and, taking each boundary as an edge, determine a distraction labeling area outside the non-distraction labeling area, where any point on a boundary serves as a boundary marker point.
  • Optionally, the image analysis module includes: an analysis submodule configured to analyze the normal-gaze images, determine the line-of-sight angles when gazing at the normal-gaze marker point, and obtain the first normal driving angle from the distribution of those angles; a first calculation submodule configured to analyze all of the distracted-gaze images, obtain the distribution of critical distracted-driving angles for each distraction area, and compute the mean critical distracted-driving angle of each distraction area from that distribution; and a second calculation submodule configured to compute the difference between the first normal driving angle and each mean critical distracted-driving angle to obtain all of the critical distraction deflection angles of the current vehicle.
  • Optionally, the calibration unit includes: a first calibration module configured to add the critical distraction deflection angles of the current vehicle to the normal driving angle to obtain the critical positions of the distraction-area boundary; and a second calibration module configured to mark the area enclosed by the critical positions of the boundary as the non-distraction area and the area outside it as the distraction area.
  • The above automatic calibration apparatus for a distraction area may further include a processor and a memory; the acquisition unit 41, the determination unit 43, the calibration unit 45, and so on are all stored in the memory as program units, and the processor executes the program units stored in the memory to implement the corresponding functions.
  • The above processor includes a kernel, and the kernel retrieves the corresponding program units from the memory.
  • One or more kernels may be provided, and by adjusting the kernel parameters, the driver's non-distraction area and distraction area in the current vehicle are calibrated based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
  • The above memory may include forms of memory in computer-readable media such as non-persistent memory, random access memory (RAM), and/or non-volatile memory, for example read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
  • According to another aspect, a road vehicle is provided, including: a vehicle-mounted camera, installed at the front windshield of the vehicle and configured to collect road images of the road ahead; and a vehicle-mounted control unit, connected to the vehicle-mounted camera and configured to execute any one of the above automatic calibration methods for a distraction area.
  • According to another aspect, a vehicle-mounted electronic device is provided, including: a processor; and a memory configured to store instructions executable by the processor, wherein the processor is configured to execute any one of the above automatic calibration methods for a distraction area by executing the executable instructions.
  • According to another aspect, a computer-readable storage medium is provided, which includes a stored computer program, wherein when the computer program runs, the device on which the computer-readable storage medium resides is controlled to execute any one of the above automatic calibration methods for a distraction area.
  • The present application also provides a computer program product which, when executed on a data-processing device, is adapted to execute a program initialized with the following method steps: collecting multiple face images of the driver in the current vehicle within a preset time period; determining the driver's normal driving angle in the current vehicle in combination with the multiple face images; and calibrating the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
  • In the above embodiments of the present disclosure, the disclosed technical content may be implemented in other ways.
  • The apparatus embodiments described above are only illustrative.
  • For example, the division into units may be a division by logical function, and there may be other ways of dividing them in actual implementation.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between units or modules may be electrical or take other forms.
  • The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present disclosure.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides an automatic calibration method and apparatus for a distraction area, a road vehicle, and an electronic device, relating to the field of information processing. The automatic calibration method includes: collecting multiple face images of the driver in the current vehicle within a preset time period; determining the driver's normal driving angle in the current vehicle in combination with the multiple face images; and calibrating the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and a predetermined critical distraction deflection angle of the current vehicle. The present disclosure solves the technical problem in the related art that the distraction area changes with driving posture or a change of driver while distraction detection still uses a fixed distraction area, which leads to false detections.

Description

Automatic calibration method and apparatus for a distraction area, road vehicle, and electronic device
This application claims priority to the Chinese patent application No. 202111489217.X, entitled "Automatic calibration method and apparatus for a distraction area, road vehicle, and electronic device", filed with the Chinese Patent Office on December 7, 2021, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the technical field of information processing, and in particular to an automatic calibration method and apparatus for a distraction area, a road vehicle, and an electronic device.
Background
In the related art, a driver is often disturbed during driving by fatigue, external events, and the like, and falls into a distracted state that can easily cause traffic accidents. It is therefore necessary to monitor whether the driver is distracted, and before such monitoring can be performed, the distraction area of each vehicle must be located quickly. At present, calibration of the distraction area is easily affected by differences between drivers (changes in driving posture or a change of driver), driving habits, and the model of each vehicle, which leads to large calibration errors. In addition, current distraction-area calibration methods usually demarcate the distraction area with a fixed area threshold, so the calibrated distraction area is prone to errors when the driving posture changes or the driver is replaced.
One current calibration method acquires the driver's eye-movement behavior, detects the driver's distraction state in real time, and issues early warnings for distracted behavior during driving, effectively improving road traffic safety. However, this method relies on traditional machine learning; it is affected by lighting and by individual driver differences, so its accuracy is poor, and because it does not handle the special scenarios of a real vehicle, it produces many false detections.
No effective solution to the above problems has yet been proposed.
Summary
The present disclosure provides an automatic calibration method and apparatus for a distraction area, a road vehicle, and an electronic device, so as to at least solve the technical problem in the related art that the distraction area changes with driving posture or a change of driver while distraction detection still uses a fixed distraction area, which leads to false detections.
According to one aspect of the present disclosure, an automatic calibration method for a distraction area is provided, including: collecting multiple face images of the driver in the current vehicle within a preset time period; determining, in combination with the multiple face images, the normal driving angle of the driver in the current vehicle; and calibrating the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and a predetermined critical distraction deflection angle of the current vehicle.
Optionally, determining the normal driving angle of the driver in the current vehicle in combination with the multiple face images includes: determining the abnormal driving state of the driver in combination with abnormal driving information, and removing the images corresponding to the abnormal driving state from all of the face images to obtain a normal driving image set; and computing statistics over the normal driving image set to update the normal driving angle.
Optionally, before determining the normal driving angle of the driver in the current vehicle in combination with the multiple face images, the method further includes initializing the normal driving angle, including: initializing the normal driving angle with a factory preset value; or initializing the normal driving angle with the first line-of-sight angle measured when the driver gazes at a first marked point.
Optionally, the abnormal driving information includes at least one of the following: low vehicle speed, turn-signal triggering, distraction deflection, and grip strength.
Optionally, determining the abnormal driving state of the driver in combination with low vehicle speed includes: collecting the speed of the current vehicle; and if the speed is lower than a preset speed threshold, determining that the driver is in an abnormal driving state.
Optionally, determining the abnormal driving state of the driver in combination with turn-signal triggering includes: collecting the trigger state of the turn signal of the current vehicle; if the trigger state indicates that the turn signal has not been triggered, determining that the driver is in a normal driving state; and if the trigger state indicates that the turn signal has been triggered, determining that the current vehicle is turning and that the driver is in an abnormal driving state.
Optionally, determining the abnormal driving state of the driver in combination with distraction deflection includes: collecting the driver's face and line-of-sight angles; counting the duration for which the driver's face and line-of-sight angles stay within a preset abnormal driving area; and if that duration reaches a first duration threshold, determining that the driver is in an abnormal driving state.
Optionally, determining the abnormal driving state of the driver in combination with grip strength includes: collecting the driver's grip strength on the steering wheel; and if the grip strength is lower than a preset grip-strength threshold, determining that the driver is in an abnormal driving state.
Optionally, computing statistics over the normal driving image set to update the normal driving angle includes: passing each image in the normal driving image set through face-angle and line-of-sight-angle models to output the normal face and line-of-sight angle values corresponding to each image; and aggregating all of the normal face and line-of-sight angle values to update the normal driving angle.
Optionally, predetermining the critical distraction deflection angle of the current vehicle includes: marking a non-distraction labeling area and a distraction labeling area inside the current vehicle according to a preset region of interest, where the non-distraction labeling area includes a normal-gaze marker point and the distraction area includes at least boundary marker points; collecting images of multiple drivers gazing toward the normal-gaze marker point and the boundary marker points, respectively, to obtain normal-gaze images and distracted-gaze images; and analyzing the normal-gaze images and the distracted-gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle.
Optionally, marking the non-distraction labeling area and the distraction labeling area inside the current vehicle according to the preset region of interest includes: characterizing the preset region of interest as the non-distraction labeling area inside the current vehicle; determining the center point of the non-distraction labeling area to obtain the normal-gaze marker point; and determining the boundaries of the non-distraction labeling area and, taking each boundary as an edge, determining a distraction labeling area outside the non-distraction labeling area, where any point on a boundary serves as a boundary marker point.
Optionally, analyzing the normal-gaze images and the distracted-gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle includes: analyzing the normal-gaze images to determine the line-of-sight angles when gazing at the normal-gaze marker point, and obtaining a first normal driving angle from the distribution of those angles; analyzing all of the distracted-gaze images to obtain the distribution of critical distracted-driving angles for each distraction area, and computing the mean critical distracted-driving angle of each distraction area from that distribution; and computing the difference between the first normal driving angle and each mean critical distracted-driving angle to obtain all of the critical distraction deflection angles of the current vehicle.
Optionally, calibrating the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle includes: adding the critical distraction deflection angle of the current vehicle to the normal driving angle to obtain the critical positions of the distraction-area boundary; and marking the area enclosed by the critical positions of the boundary as the non-distraction area and the area outside it as the distraction area.
According to another aspect of the present disclosure, an automatic calibration apparatus for a distraction area is further provided, including: an acquisition unit configured to collect multiple face images of the driver in the current vehicle within a preset time period; a determination unit configured to determine, in combination with the multiple face images, the normal driving angle of the driver in the current vehicle; and a calibration unit configured to calibrate the driver's non-distraction area and distraction area in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
Optionally, the determination unit includes: a first determination module configured to determine the abnormal driving state of the driver in combination with the abnormal driving information and to remove the images corresponding to the abnormal driving state from all of the face images to obtain a normal driving image set; and an update module configured to compute statistics over the normal driving image set and update the normal driving angle.
Optionally, the automatic calibration apparatus further includes an initialization unit configured to initialize the normal driving angle before the normal driving angle of the driver in the current vehicle is determined in combination with the multiple face images; the initialization unit includes: a first initialization module configured to initialize the normal driving angle with a factory preset value; or a second initialization module configured to initialize the normal driving angle with the first line-of-sight angle measured when the driver gazes at the first marked point.
Optionally, the abnormal driving information includes at least one of the following: low vehicle speed, turn-signal triggering, distraction deflection, and grip strength.
Optionally, the first determination module includes: a first collection submodule configured to collect the speed of the current vehicle; and a first determination submodule configured to determine that the driver is in an abnormal driving state when the speed is lower than a preset speed threshold.
Optionally, the first determination module includes: a second collection submodule configured to collect the trigger state of the turn signal of the current vehicle; a second determination submodule configured to determine that the driver is in a normal driving state when the trigger state indicates that the turn signal has not been triggered; and a third determination submodule configured to determine that the current vehicle is turning and that the driver is in an abnormal driving state when the trigger state indicates that the turn signal has been triggered.
Optionally, the first determination module includes: a third collection submodule configured to collect the driver's face and line-of-sight angles; a first statistics submodule configured to count the duration for which the driver's face and line-of-sight angles stay within the preset abnormal driving area; and a fourth determination submodule configured to determine that the driver is in an abnormal driving state when that duration reaches the first duration threshold.
Optionally, the first determination module includes: a fourth collection submodule configured to collect the driver's grip strength on the steering wheel; and a fifth determination submodule configured to determine that the driver is in an abnormal driving state when the grip strength is lower than the preset grip-strength threshold.
Optionally, the update module includes: an output submodule configured to pass each image in the normal driving image set through the face-angle and line-of-sight-angle models and output the normal face and line-of-sight angle values corresponding to each image; and a second statistics submodule configured to aggregate all of the normal face and line-of-sight angle values and update the normal driving angle.
Optionally, the automatic calibration apparatus for a distraction area further includes: an area labeling module configured to mark the non-distraction labeling area and the distraction labeling area inside the current vehicle according to the preset region of interest, where the non-distraction labeling area includes the normal-gaze marker point and the distraction area includes at least the boundary marker points; an image acquisition module configured to collect images of multiple drivers gazing toward the normal-gaze marker point and the boundary marker points, respectively, to obtain normal-gaze images and distracted-gaze images; and an image analysis module configured to analyze the normal-gaze images and the distracted-gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle.
Optionally, the area labeling module includes: a sixth determination submodule configured to characterize the preset region of interest as the non-distraction labeling area inside the current vehicle; a seventh determination submodule configured to determine the center point of the non-distraction labeling area to obtain the normal-gaze marker point; and an eighth determination submodule configured to determine the boundaries of the non-distraction labeling area and, taking each boundary as an edge, determine a distraction labeling area outside the non-distraction labeling area, where any point on a boundary serves as a boundary marker point.
Optionally, the image analysis module includes: an analysis submodule configured to analyze the normal-gaze images, determine the line-of-sight angles when gazing at the normal-gaze marker point, and obtain the first normal driving angle from the distribution of those angles; a first calculation submodule configured to analyze all of the distracted-gaze images, obtain the distribution of critical distracted-driving angles for each distraction area, and compute the mean critical distracted-driving angle of each distraction area from that distribution; and a second calculation submodule configured to compute the difference between the first normal driving angle and each mean critical distracted-driving angle to obtain all of the critical distraction deflection angles of the current vehicle.
Optionally, the calibration unit includes: a first calibration module configured to add the critical distraction deflection angle of the current vehicle to the normal driving angle to obtain the critical positions of the distraction-area boundary; and a second calibration module configured to mark the area enclosed by the critical positions of the boundary as the non-distraction area and the area outside it as the distraction area.
According to another aspect of the present disclosure, a road vehicle is further provided, including: an on-board camera mounted at the windshield at the front of the vehicle and configured to capture road images of the road ahead; and an on-board control unit connected to the on-board camera and executing the automatic calibration method for a distraction area described in any one of the above.
According to another aspect of the present disclosure, an on-board electronic device is further provided, including: a processor; and a memory configured to store executable instructions of the processor, where the processor is configured to execute, via the executable instructions, the automatic calibration method for a distraction area described in any one of the above.
According to another aspect of the present disclosure, a computer-readable storage medium is further provided, the computer-readable storage medium including a stored computer program, where when the computer program runs, a device on which the computer-readable storage medium resides is controlled to execute the automatic calibration method for a distraction area described in any one of the above.
In the present disclosure, automatic calibration can be used to track different driver individuals and driving states; compared with a fixed-threshold method, this improves the accuracy of distraction detection and effectively reduces false detections.
In the present disclosure, a multi-information fusion scheme is used to filter out abnormal driving states during driving, so as to avoid false detections.
In the present disclosure, multiple face images of the driver in the current vehicle collected within a preset time period are used; the normal driving angle of the driver in the current vehicle is determined from the multiple face images; and the non-distraction area and the distraction area for the driver in the current vehicle are calibrated based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle. In the present disclosure, the multiple face images of the driver in the current vehicle can be analyzed to automatically calibrate the non-distraction area and the distraction area for the driver in the current vehicle, tracking different driver individuals and driving states, improving the calibration accuracy of the distraction area and, in turn, the detection accuracy of the distraction state, thereby solving the technical problem in the related art that the distraction area changes due to driving posture or driver replacement while distraction detection still uses a fixed distraction area, resulting in false detections.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present disclosure and constitute a part of this application. The schematic embodiments of the present disclosure and their descriptions are used to explain the present disclosure and do not constitute an improper limitation of the present disclosure. In the drawings:
FIG. 1 is a flowchart of an optional automatic calibration method for a distraction area according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an optional preset abnormal driving region according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of optionally calibrating a distraction area according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an optional automatic calibration apparatus for a distraction area according to an embodiment of the present disclosure.
Detailed Description of the Embodiments
To enable those skilled in the art to better understand the solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort shall fall within the scope of protection of the present disclosure.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present disclosure described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to such a process, method, product, or device.
The present disclosure can be applied to various types of vehicles (cars, buses, motorcycles, aircraft, trains, etc.). This embodiment takes a road vehicle as an example to illustrate the detection of the vehicle's distraction area and the driver's distraction state. Vehicle types include, but are not limited to, sedans, trucks, sports cars, SUVs, and compact cars. The present disclosure adapts to various seating positions and regions and automatically calibrates the distraction area; at the same time, based on the calibrated distraction area, it analyzes whether the driver is distracted and tracks different driver individuals and driving states. Compared with a fixed-threshold method, it improves the accuracy of distraction detection and effectively reduces false detections. The present disclosure can adaptively adjust the distraction area to avoid distraction false detections caused by driving posture or a change of driver. The present disclosure is described in detail below with reference to the embodiments.
Embodiment 1
According to an embodiment of the present disclosure, an embodiment of an automatic calibration method for a distraction area is provided. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one described here.
FIG. 1 is a flowchart of an optional automatic calibration method for a distraction area according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes the following steps:
Step S102: collect multiple face images of a driver in a current vehicle within a preset time period;
Step S104: determine, from the multiple face images, a normal driving angle of the driver in the current vehicle;
Step S106: calibrate a non-distraction area and a distraction area for the driver in the current vehicle based on the normal driving angle and a predetermined critical distraction deflection angle of the current vehicle.
Through the above steps, multiple face images of the driver in the current vehicle can be collected within a preset time period, the normal driving angle of the driver in the current vehicle can be determined from the multiple face images, and the non-distraction area and the distraction area for the driver in the current vehicle can be calibrated based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle. In this embodiment, the multiple face images of the driver in the current vehicle can be analyzed to automatically calibrate the driver's non-distraction and distraction areas, tracking different driver individuals and driving states, improving the calibration accuracy of the distraction area and thus the detection accuracy of the distraction state, thereby solving the technical problem in the related art that the distraction area changes due to driving posture or driver replacement while distraction detection still uses a fixed distraction area, resulting in false detections.
The embodiment of the present disclosure is described in detail below with reference to the above implementation steps.
Step S102: collect multiple face images of the driver in the current vehicle within a preset time period.
To calibrate the distraction area accurately, it is necessary to combine the preset deflection region of the current vehicle (its shape and size) with the driver's driving attributes considered in real time (the driver's build and driving angle); for example, differences in drivers' build and height will change the calibrated distraction area. A camera module installed in the vehicle collects multiple face images of the driver within a preset time period, and the real-time information needed to calibrate the distraction area accurately is recovered from these face images.
The preset time period in this embodiment can be set in advance, and its length depends on the actual detection accuracy requirements, for example, within 1 minute or within 30 s. In addition, the camera module collects a video stream, and this application processes the frames covered by the preset time length in a sliding manner: after the current frame is processed, the next frame is input, the earliest frame is removed to update the image set, and the image set is processed again, so that calibration is automatic and performed in real time.
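The sliding-window processing described above can be sketched as a fixed-length frame buffer. This is a minimal illustration rather than the patent's implementation; the `recalibrate` callback, the window length, and the frame type are assumptions.
```python
from collections import deque

class SlidingFrameWindow:
    """Keeps the most recent frames covering the preset time period."""

    def __init__(self, max_frames, recalibrate):
        self.frames = deque(maxlen=max_frames)  # oldest frame drops out automatically
        self.recalibrate = recalibrate          # callback that re-runs calibration

    def push(self, frame):
        # Input the next frame; once the window is full the earliest frame is
        # discarded, then the updated image set is processed again.
        self.frames.append(frame)
        if len(self.frames) == self.frames.maxlen:
            self.recalibrate(list(self.frames))

# Usage sketch: a 30 s window at an assumed 10 fps.
window = SlidingFrameWindow(max_frames=300, recalibrate=lambda frames: None)
```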
In this embodiment, at least one camera module can be installed in the vehicle near the driving area; the specific installation position of the camera module is not limited, as long as it captures images that include the driver's face. The types of camera module include, but are not limited to, cameras, depth cameras, and infrared cameras; the image types of the collected face images include, but are not limited to, ordinary RGB images, depth images, and thermal images. This embodiment uses ordinary RGB images for schematic description.
Step S104: determine, from the multiple face images, the normal driving angle of the driver in the current vehicle.
Optionally, determining the normal driving angle of the driver in the current vehicle from the multiple face images includes: determining the driver's abnormal driving state based on abnormal driving information, removing images corresponding to the abnormal driving state from all the face images to obtain a normal driving image set; and computing statistics over the normal driving image set and updating the normal driving angle.
Specifically, the normal driving angle refers to the gaze angle of the current driver while driving attentively in the current vehicle; attentive driving means the driver keeps looking straight ahead, without making phone calls, turning the head, looking at the rear-view mirror, and the like. Further, the normal driving angle varies with the scene: when driving in clear weather, the normal driving angle stays straight ahead and the field of view is wide; when entering a tunnel or driving in fog, the driver unconsciously lowers the head to focus on the environment immediately ahead, the field of view narrows, and the normal driving angle shifts downward compared with its straight-ahead position in clear weather. This application collects the video stream in real time and processes it with a sliding window in real time to ensure that the normal driving angle is kept up to date. In addition, in practice not all images collected within the preset time period correspond to attentive driving; this application removes the images corresponding to abnormal driving states from all face images to obtain a normal driving image set, and computes and updates the long-term normal driving angle on the basis of this normal driving image set.
In another option, the abnormal driving information includes at least one of the following: low vehicle speed, turn-signal triggering, distraction deflection, and grip force. The way of determining the driver's abnormal driving state based on each kind of abnormal driving information is described below.
Optionally, determining the driver's abnormal driving state based on low vehicle speed includes: collecting the travel speed of the current vehicle; and if the travel speed is lower than a preset speed threshold, determining that the driver is in an abnormal driving state.
In this embodiment, in scenarios such as starting off and pulling over, the vehicle usually travels at low speed while the driver looks around to make correct decisions; this stage is not a normal driving stage. This application compares the vehicle speed with a preset speed threshold to determine whether the vehicle is in a low-speed state; the preset speed threshold can be set in advance, and its specific value is determined by actual user requirements. For example, if the preset speed threshold is set to 30 km/h, a speed above 30 km/h is regarded as normal-speed operation, while a speed below 30 km/h is regarded as low-speed operation, in which case the driver is in an abnormal driving state, and the corresponding frames of the abnormal driving stage are removed from the image set.
Optionally, determining the driver's abnormal driving state based on turn-signal triggering includes: collecting the trigger state of the turn signal of the current vehicle; if the trigger state indicates that the turn signal has not been triggered, determining that the driver is in a normal driving state; and if the trigger state indicates that the turn signal has been triggered, determining that the current vehicle is turning and that the driver is in an abnormal driving state.
When turning or making a U-turn, the turn signal must be on, and the driver needs to look around the surroundings to make correct decisions. This application determines the driving state by detecting the trigger state of the turn signal: if the turn signal is not triggered, the driver is in a normal driving state; if the turn signal is triggered, the vehicle is turning, and the corresponding frames of the abnormal driving stage are removed from the image set.
Optionally, determining the driver's abnormal driving state based on distraction deflection includes: collecting the driver's face and gaze angles; counting the duration for which the driver's face and gaze angles stay within a preset abnormal driving region; and if the duration within the abnormal driving region reaches a first duration threshold, determining that the driver is in an abnormal driving state.
In this embodiment, the range of the distraction area is greater than or equal to that of the preset abnormal driving region; the preset abnormal driving region is set by the user and indicates the region where the driver is obviously driving while distracted.
FIG. 2 is a schematic diagram of an optional preset abnormal driving region according to an embodiment of the present disclosure. As shown in FIG. 2, the non-distraction area is the area other than the distraction area, and the area other than the preset normal driving region is the preset abnormal driving region; compared with the distraction area that needs to be calibrated accurately and automatically, the preset abnormal driving region is only a rough, approximate region. A gaze angle falling within this region indicates that the driver is obviously driving while distracted, for example when the driver's gaze is attracted for a long time by a roadside billboard. If the gaze stays in the preset abnormal driving region for a long time, the driver is determined to be in an abnormal driving state, and the corresponding frames of the abnormal driving state are removed from the image set.
In another option, determining the driver's abnormal driving state based on grip force includes: collecting the driver's grip force on the steering wheel; and if the grip force is lower than a preset grip-force threshold, determining that the driver is in an abnormal driving state.
In practice, the grip force on the steering wheel decreases when the driver is fatigued or otherwise distracted. Based on this, an embodiment of the present disclosure obtains a grip-force value by means of a sensor and determines the driver's abnormal driving state from the grip force. The grip-force threshold in this embodiment is preset adaptively according to each vehicle's type; if the grip force on the steering wheel is lower than the preset grip-force threshold, the driver is determined to be in an abnormal driving state, and the corresponding frames of the abnormal driving state are removed from the image set.
The embodiments of this application do not restrict any combination of the above kinds of abnormal driving information, nor do they restrict the priority order of these kinds when filtering the image set. A combined sketch of such filtering is given below.
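As a hedged sketch of how the four signals above might be fused to filter frames, the following assumes per-frame metadata fields, thresholds, and sensor units that are not taken from the patent; the 30 km/h speed threshold is the example given in the text.
```python
from dataclasses import dataclass

@dataclass
class FrameMeta:
    speed_kmh: float                # vehicle speed
    turn_signal_on: bool            # turn-signal trigger state
    grip_force: float               # steering-wheel grip-force sensor reading
    gaze_in_abnormal_region: bool   # face/gaze angle falls in the preset abnormal region
    timestamp: float                # seconds

class AbnormalDrivingFilter:
    """Flags frames captured during abnormal driving states so they can be dropped."""

    def __init__(self, speed_thresh=30.0, grip_thresh=5.0, dwell_thresh=2.0):
        self.speed_thresh = speed_thresh   # km/h
        self.grip_thresh = grip_thresh     # assumed sensor units
        self.dwell_thresh = dwell_thresh   # assumed first duration threshold, seconds
        self._abnormal_since = None

    def is_abnormal(self, m: FrameMeta) -> bool:
        # Low speed, active turn signal, or weak grip flag the frame directly.
        if m.speed_kmh < self.speed_thresh or m.turn_signal_on or m.grip_force < self.grip_thresh:
            return True
        # Distraction deflection: gaze must dwell in the abnormal region long enough.
        if m.gaze_in_abnormal_region:
            if self._abnormal_since is None:
                self._abnormal_since = m.timestamp
            return (m.timestamp - self._abnormal_since) >= self.dwell_thresh
        self._abnormal_since = None
        return False

# Frames for which is_abnormal(...) returns True are removed from the image set.
```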
Optionally, computing statistics over the normal driving image set and updating the normal driving angle includes: passing each image in the normal driving image set through a face-angle and gaze-angle model to output a normal face and gaze angle value for each image; and aggregating all the normal face and gaze angle values to update the normal driving angle.
After the normal driving image set in the vehicle is collected, each image is analyzed to determine its corresponding normal face and gaze angle value, and the normal driving angle is updated from all the face and gaze angle values obtained during normal driving. In this embodiment, the face and gaze angle output values can be obtained by feeding the images into a face-angle and gaze-angle model. The present disclosure does not restrict the form or type of this model; a traditional geometric model or a neural network model may be used. This embodiment can compute and update the long-term normal driving angle, for example by collecting the driver's normal-driving face and gaze angles over the previous 60 s, taking their mean, and updating accordingly.
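A minimal sketch of this statistics step follows, assuming each retained frame already carries a (yaw, pitch) gaze value from the face/gaze model; the 60 s window, the simple mean, and the seed value are only the examples mentioned above.
```python
import numpy as np

def update_normal_driving_angle(normal_frames, prev_angle=None):
    """normal_frames: list of (yaw, pitch) gaze angles, in degrees, taken from
    the normal driving image set of the most recent window (e.g. 60 s)."""
    if not normal_frames:
        # Nothing usable in this window: keep the previous (or factory preset) angle.
        return prev_angle
    angles = np.asarray(normal_frames, dtype=float)
    yaw_mean, pitch_mean = angles.mean(axis=0)
    return (yaw_mean, pitch_mean)

# Example: seed with a hypothetical factory preset, then update from a window of samples.
normal_angle = (0.0, -5.0)
samples = [(1.2, -4.8), (0.8, -5.1), (1.0, -5.0)]
normal_angle = update_normal_driving_angle(samples, prev_angle=normal_angle)
```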
As an optional implementation of this embodiment, before determining the normal driving angle of the driver in the current vehicle from the multiple face images, the method further includes initializing the normal driving angle, which includes: initializing the normal driving angle with a factory preset value; or initializing the normal driving angle with a first gaze angle obtained when the driver looks at a first marker point.
The first marker point in this embodiment may be a single point or the mean of multiple points. For example, the point directly ahead on the current vehicle's front windshield may be used as the first marker point, or the mean of any number of points contained in a specific region of the front windshield may be used as the first marker point. Initializing the normal driving angle helps the automatic distraction-area calibration enter the normal operating mode of updating the normal driving angle quickly and smoothly, rather than having the initial estimate of the normal driving angle thrown far off by extreme outliers in the initial data, after which returning to normal operation would consume a great deal of time and resources.
Step S106: calibrate the non-distraction area and the distraction area for the driver in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
In another option, predetermining the critical distraction deflection angle of the current vehicle includes: calibrating a non-distraction labeled region and a distraction labeled region inside the current vehicle according to a preset region of interest, where the non-distraction labeled region includes a normal gaze marker point and the distraction region includes at least a boundary marker point; collecting images of multiple drivers looking toward the normal gaze marker point and toward the boundary marker point respectively, to obtain normal gaze images and distracted gaze images; and analyzing the normal gaze images and the distracted gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle.
In this embodiment, calibrating the non-distraction labeled region and the distraction labeled region inside the current vehicle according to the preset region of interest includes: representing the preset region of interest as the non-distraction labeled region inside the current vehicle; determining the center point of the non-distraction labeled region to obtain the normal gaze marker point; and determining multiple boundaries of the non-distraction labeled region and, taking each boundary as an edge, determining a distraction labeled region outside the non-distraction labeled region, where any point on a boundary is represented as a boundary marker point.
FIG. 3 is a schematic diagram of optionally calibrating a distraction area according to an embodiment of the present disclosure. As shown in FIG. 3, a region of interest is delineated in the image of the in-vehicle driving area (the labeled box above the steering wheel in FIG. 3); this region is the non-distraction labeled region. This application does not restrict the shape of the region of interest, which may be circular, triangular, square, and so on, and is generally set at the factory according to the vehicle's attributes; FIG. 3 takes a square as an example. Marker 1 is set inside the non-distraction labeled region; marker 1 is the center point of that region, that is, the normal gaze marker point. Multiple boundaries of the non-distraction labeled region are determined, and any point on a boundary is represented as a boundary marker point. In FIG. 3, markers 2, 3, 4, and 5 are placed on the upper, right, lower, and left boundaries of the non-distraction labeled region respectively, marking the upper, right, lower, and left boundary positions of the distraction area, and the area outside these boundary positions is the distraction labeled region. Specifically, position 1 is the middle of the front windshield, position 2 is the upper edge of the front windshield, position 3 is the right edge of the rear-view mirror, position 4 is the middle of the steering wheel, and position 5 is the left side mirror.
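The marker layout of FIG. 3 can be captured in a small configuration table; this is purely illustrative, and only the semantic positions come from the description above.
```python
# Marker 1 is the normal gaze point; markers 2-5 lie on the upper, right,
# lower, and left boundaries of the non-distraction labeled region.
MARKER_LAYOUT = {
    1: {"location": "middle of the front windshield", "role": "normal_gaze"},
    2: {"location": "upper edge of the front windshield", "role": "upper_boundary"},
    3: {"location": "right edge of the rear-view mirror", "role": "right_boundary"},
    4: {"location": "middle of the steering wheel", "role": "lower_boundary"},
    5: {"location": "left side mirror", "role": "left_boundary"},
}
```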
Optionally, analyzing the normal gaze images and each distracted gaze image to obtain the critical distraction deflection angle of the distraction area of the current vehicle includes: analyzing the normal gaze images to determine the gaze angles of looking at the normal gaze marker point, and obtaining a first normal driving angle based on the distribution of these gaze angles; analyzing all the distracted gaze images to obtain the distribution of critical distracted driving angles of each distraction area, and calculating the mean critical distracted driving angle of each distraction area based on that distribution; and calculating the difference between the first normal driving angle and each mean critical distracted driving angle to obtain all critical distraction deflection angles of the current vehicle.
Taking FIG. 3 as an example, this embodiment collects multiple images of drivers looking toward markers 1-5 and feeds them into a pre-trained face-angle and gaze-angle model, which outputs face and gaze angle values. Specifically, the distribution of gaze angles when looking at the normal gaze marker point is determined from the image set collected for marker 1, and its mean Mean_A1, that is, the first normal angle, can be computed; similarly, the mean critical distracted driving angles Mean_A2, Mean_A3, ..., Mean_A5 are determined from the image sets collected for markers 2-5. The differences between the first normal driving angle and each mean critical distracted driving angle are then computed separately in the yaw and pitch directions to obtain all critical distraction deflection angles of the current vehicle. For example, the critical distraction deflection angle at marker 2 is computed as follows:
Yaw_A2_1=Yaw(Mean_A2)-Yaw(Mean_A1);
Pitch_A2_1=Pitch(Mean_A2)-Pitch(Mean_A1)。
The face and gaze distraction deflection angles are set and obtained per vehicle or per distraction area, which avoids re-collecting data to retrain the model and achieves good applicability.
In this embodiment, the mean angle of each driver looking at the normal gaze marker point and at the boundary marker points can be computed, and then the difference between the first normal driving angle and each mean critical distracted driving angle can be computed to obtain all critical distraction deflection angles of the current vehicle; with these distraction deflection angles and the driver's normal driving angle, the non-distraction area and the distraction area for the driver in the current vehicle can be calibrated.
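A sketch of this offline angle-statistics step, following the Yaw/Pitch formulas above, might look as follows; the per-marker sample lists are assumed to come from the pre-trained face/gaze model, and the data layout is an assumption.
```python
import numpy as np

def critical_deflection_angles(samples_by_marker):
    """samples_by_marker: {marker_id: [(yaw, pitch), ...]} collected while drivers
    look at markers 1-5; marker 1 is the normal gaze marker point."""
    means = {m: np.asarray(v, dtype=float).mean(axis=0) for m, v in samples_by_marker.items()}
    yaw1, pitch1 = means[1]  # Mean_A1, the first normal driving angle
    offsets = {}
    for marker, (yaw_m, pitch_m) in means.items():
        if marker == 1:
            continue
        # e.g. Yaw_A2_1 = Yaw(Mean_A2) - Yaw(Mean_A1), and likewise for pitch
        offsets[marker] = (yaw_m - yaw1, pitch_m - pitch1)
    return offsets
```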
Optionally, calibrating the non-distraction area and the distraction area for the driver in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle includes: adding the critical distraction deflection angle of the current vehicle to the normal driving angle to obtain the critical positions of the distraction-area boundary; and calibrating the area enclosed by the critical positions of the distraction-area boundary as the non-distraction area, and the area outside it as the distraction area.
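Building on the two sketches above, the online calibration step can be written as adding each critical deflection offset to the current normal driving angle to obtain the boundary angles, then testing a gaze sample against them. The marker-to-side mapping matches FIG. 3, while the sign convention (yaw increasing to the right, pitch increasing upward) is an assumption.
```python
def calibrate_regions(normal_angle, offsets):
    """normal_angle: (yaw, pitch) of the driver's current normal driving angle.
    offsets: {marker_id: (d_yaw, d_pitch)} critical deflection angles for markers 2-5."""
    yaw0, pitch0 = normal_angle
    return {
        "pitch_up":   pitch0 + offsets[2][1],  # upper boundary (marker 2)
        "yaw_right":  yaw0   + offsets[3][0],  # right boundary (marker 3)
        "pitch_down": pitch0 + offsets[4][1],  # lower boundary (marker 4)
        "yaw_left":   yaw0   + offsets[5][0],  # left boundary (marker 5)
    }

def in_non_distraction_area(gaze, bounds):
    # A gaze sample inside all four boundary angles lies in the non-distraction area;
    # anything outside is treated as the distraction area.
    yaw, pitch = gaze
    return (bounds["yaw_left"] <= yaw <= bounds["yaw_right"]
            and bounds["pitch_down"] <= pitch <= bounds["pitch_up"])
```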
After the non-distraction area and the distraction area for the driver in the current vehicle are calibrated, the driver's long-term distraction state is output; for example, if the gaze stays in the distraction area for 5 consecutive seconds, the state is distracted driving, otherwise it is normal driving, and when the vehicle speed has not reached the threshold or the turn signal is on, the state is regarded as non-distracted in either case. By detecting whether the driver is distracted, reminder information (for example, a voice reminder or an alarm sound) can be sent in time once the driver is confirmed to be distracted, reducing the probability of accidents and improving driving safety.
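Finally, a sketch of the temporal rule just described (5 s of continuous gaze in the distraction area triggers a distracted-driving state, while low speed or an active turn signal resets it); the 5 s and 30 km/h values are the examples from the text, and everything else is an assumption.
```python
class DistractionStateMonitor:
    def __init__(self, dwell_seconds=5.0, speed_thresh=30.0):
        self.dwell_seconds = dwell_seconds
        self.speed_thresh = speed_thresh
        self._distracted_since = None

    def update(self, t, gaze_in_distraction_area, speed_kmh, turn_signal_on):
        # Low speed or turning: treated as non-distracted, timer reset.
        if speed_kmh < self.speed_thresh or turn_signal_on:
            self._distracted_since = None
            return "normal"
        if gaze_in_distraction_area:
            if self._distracted_since is None:
                self._distracted_since = t
            if t - self._distracted_since >= self.dwell_seconds:
                return "distracted"  # e.g. trigger a voice reminder or alarm here
        else:
            self._distracted_since = None
        return "normal"
```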
The embodiments of the present disclosure use a multi-information fusion scheme to filter out abnormal driving states during driving so as to avoid false detections; at the same time, they use automatic calibration to track different driver individuals and driving states. Compared with a fixed-threshold calibration method, this application can improve the accuracy of distraction detection and effectively reduce false detections.
The present disclosure is described below with reference to another optional embodiment.
Embodiment 2
This embodiment provides an automatic calibration apparatus for a distraction area; the units included in the automatic calibration apparatus correspond to the implementation steps in Embodiment 1 above.
FIG. 4 is a schematic diagram of an optional automatic calibration apparatus for a distraction area according to an embodiment of the present disclosure. As shown in FIG. 4, the automatic calibration apparatus may include a collecting unit 41, a determining unit 43, and a calibrating unit 45, where:
the collecting unit 41 is configured to collect multiple face images of a driver in a current vehicle within a preset time period;
the determining unit 43 is configured to determine, from the multiple face images, a normal driving angle of the driver in the current vehicle;
the calibrating unit 45 is configured to calibrate a non-distraction area and a distraction area for the driver in the current vehicle based on the normal driving angle and a predetermined critical distraction deflection angle of the current vehicle.
The above automatic calibration apparatus for a distraction area can collect, via the collecting unit 41, multiple face images of the driver in the current vehicle within a preset time period; determine, via the determining unit 43 and from the multiple face images, the normal driving angle of the driver in the current vehicle; and calibrate, via the calibrating unit 45, the non-distraction area and the distraction area for the driver in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle. In this embodiment, the multiple face images of the driver in the current vehicle can be analyzed to automatically calibrate the driver's non-distraction and distraction areas, tracking different driver individuals and driving states, improving the calibration accuracy of the distraction area and, in turn, the detection accuracy of the distraction state, thereby solving the technical problem in the related art that the distraction area changes due to driving posture or driver replacement while distraction detection still uses a fixed distraction area, resulting in false detections.
Optionally, the determining unit includes: a first determining module configured to determine the driver's abnormal driving state based on abnormal driving information and remove images corresponding to the abnormal driving state from all the face images to obtain a normal driving image set; and an updating module configured to compute statistics over the normal driving image set and update the normal driving angle.
Optionally, the apparatus further includes: an initializing unit configured to initialize the normal driving angle before the normal driving angle of the driver in the current vehicle is determined from the multiple face images, the initializing unit including: a first initializing module configured to initialize the normal driving angle with a factory preset value; or a second initializing module configured to initialize the normal driving angle with a first gaze angle obtained when the driver looks at a first marker point.
Optionally, the abnormal driving information includes at least one of the following: low vehicle speed, turn-signal triggering, distraction deflection, and grip force.
Optionally, the first determining module includes: a first collecting sub-module configured to collect the travel speed of the current vehicle; and a first determining sub-module configured to determine that the driver is in an abnormal driving state when the travel speed is lower than a preset speed threshold.
Optionally, the first determining module includes: a second collecting sub-module configured to collect the trigger state of the turn signal of the current vehicle; a second determining sub-module configured to determine that the driver is in a normal driving state when the trigger state indicates that the turn signal has not been triggered; and a third determining sub-module configured to determine that the current vehicle is turning and that the driver is in an abnormal driving state when the trigger state indicates that the turn signal has been triggered.
Optionally, the first determining module includes: a third collecting sub-module configured to collect the driver's face and gaze angles; a first statistics sub-module configured to count the duration for which the driver's face and gaze angles stay within a preset abnormal driving region; and a fourth determining sub-module configured to determine that the driver is in an abnormal driving state when the duration within the abnormal driving region reaches a first duration threshold.
Optionally, the first determining module includes: a fourth collecting sub-module configured to collect the driver's grip force on the steering wheel; and a fifth determining sub-module configured to determine that the driver is in an abnormal driving state when the grip force is lower than a preset grip-force threshold.
Optionally, the updating module includes: an output sub-module configured to pass each image in the normal driving image set through a face-angle and gaze-angle model and output a normal face and gaze angle value for each image; and a second statistics sub-module configured to aggregate all the normal face and gaze angle values and update the normal driving angle.
Optionally, the automatic calibration apparatus for a distraction area further includes: a region calibration module configured to calibrate a non-distraction labeled region and a distraction labeled region inside the current vehicle according to a preset region of interest, where the non-distraction labeled region includes a normal gaze marker point and the distraction region includes at least a boundary marker point; an image collection module configured to collect images of multiple drivers looking toward the normal gaze marker point and the boundary marker point respectively, to obtain normal gaze images and distracted gaze images; and an image analysis module configured to analyze the normal gaze images and the distracted gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle.
Optionally, the region calibration module includes: a sixth determining sub-module configured to represent the preset region of interest as the non-distraction labeled region inside the current vehicle; a seventh determining sub-module configured to determine the center point of the non-distraction labeled region to obtain the normal gaze marker point; and an eighth determining sub-module configured to determine multiple boundaries of the non-distraction labeled region and, taking each boundary as an edge, determine a distraction labeled region outside the non-distraction labeled region, where any point on a boundary is represented as the boundary marker point.
Optionally, the image analysis module includes: an analysis sub-module configured to analyze the normal gaze images, determine the gaze angles of looking at the normal gaze marker point, and obtain a first normal driving angle based on the distribution of those gaze angles; a first calculation sub-module configured to analyze all the distracted gaze images, obtain the distribution of critical distracted driving angles of the distraction area, and calculate the mean critical distracted driving angle of the distraction area based on that distribution; and a second calculation sub-module configured to calculate the difference between the first normal driving angle and the mean critical distracted driving angle to obtain all critical distraction deflection angles of the current vehicle.
Optionally, the calibrating unit includes: a first calibrating module configured to add the critical distraction deflection angle of the current vehicle to the normal driving angle to obtain the critical positions of the distraction-area boundary; and a second calibrating module configured to calibrate the area enclosed by the critical positions of the distraction-area boundary as the non-distraction area and the area outside it as the distraction area.
The above automatic calibration apparatus for a distraction area may further include a processor and a memory; the collecting unit 41, determining unit 43, calibrating unit 45, and the like are all stored in the memory as program units, and the processor executes the program units stored in the memory to implement the corresponding functions.
The processor contains a kernel, and the kernel fetches the corresponding program units from the memory. One or more kernels may be provided, and by adjusting kernel parameters, the non-distraction area and the distraction area for the driver in the current vehicle are calibrated based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle.
The memory may include forms of computer-readable media such as non-persistent memory, random access memory (RAM), and/or non-volatile memory, for example read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
According to another aspect of the present disclosure, a road vehicle is further provided, including: an on-board camera mounted at the windshield at the front of the vehicle and configured to capture road images of the road ahead; and an on-board control unit connected to the on-board camera and executing any one of the above automatic calibration methods for a distraction area.
According to another aspect of the present disclosure, an on-board electronic device is further provided, including: a processor; and a memory configured to store executable instructions of the processor, where the processor is configured to execute, via the executable instructions, any one of the above automatic calibration methods for a distraction area.
According to another aspect of the present disclosure, a computer-readable storage medium is further provided, the computer-readable storage medium including a stored computer program, where when the computer program runs, a device on which the computer-readable storage medium resides is controlled to execute any one of the above automatic calibration methods for a distraction area.
This application further provides a computer program product that, when executed on a data processing device, is adapted to execute a program initialized with the following method steps: collecting multiple face images of a driver in a current vehicle within a preset time period; determining, from the multiple face images, a normal driving angle of the driver in the current vehicle; and calibrating a non-distraction area and a distraction area for the driver in the current vehicle based on the normal driving angle and a predetermined critical distraction deflection angle of the current vehicle.
The serial numbers of the above embodiments of the present disclosure are for description only and do not indicate that any embodiment is better or worse than another.
In the above embodiments of the present disclosure, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely schematic; for example, the division of units may be a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connections shown or discussed may be indirect coupling or communication connections through some interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present disclosure, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage media include various media that can store program code, such as a USB flash drive, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), removable hard disk, magnetic disk, or optical disc.
The above are only preferred implementations of the present disclosure. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present disclosure, and these improvements and refinements should also be regarded as falling within the scope of protection of the present disclosure.

Claims (20)

  1. An automatic calibration method for a distraction area, comprising:
    collecting multiple face images of a driver in a current vehicle within a preset time period;
    determining, from the multiple face images, a normal driving angle of the driver in the current vehicle; and
    calibrating a non-distraction area and a distraction area for the driver in the current vehicle based on the normal driving angle and a predetermined critical distraction deflection angle of the current vehicle.
  2. The method according to claim 1, wherein determining the normal driving angle of the driver in the current vehicle from the multiple face images comprises:
    determining an abnormal driving state of the driver based on abnormal driving information, and removing images corresponding to the abnormal driving state from all the face images to obtain a normal driving image set; and
    computing statistics over the normal driving image set and updating the normal driving angle.
  3. The method according to claim 1, wherein before determining the normal driving angle of the driver in the current vehicle from the multiple face images, the method further comprises initializing the normal driving angle, comprising:
    initializing the normal driving angle with a factory preset value; or
    initializing the normal driving angle with a first gaze angle obtained when the driver looks at a first marker point.
  4. The method according to claim 2, wherein the abnormal driving information comprises at least one of the following: low vehicle speed, turn-signal triggering, distraction deflection, and grip force.
  5. The method according to claim 4, wherein determining the abnormal driving state of the driver based on low vehicle speed comprises:
    collecting a travel speed of the current vehicle; and
    if the travel speed is lower than a preset speed threshold, determining that the driver is in an abnormal driving state.
  6. The method according to claim 4, wherein determining the abnormal driving state of the driver based on turn-signal triggering comprises:
    collecting a trigger state of a turn signal of the current vehicle;
    if the trigger state indicates that the turn signal has not been triggered, determining that the driver is in a normal driving state; and
    if the trigger state indicates that the turn signal has been triggered, determining that the current vehicle is turning and determining that the driver is in an abnormal driving state.
  7. The method according to claim 4, wherein determining the abnormal driving state of the driver based on distraction deflection comprises:
    collecting face and gaze angles of the driver;
    counting a duration for which the driver's face and gaze angles stay within a preset abnormal driving region; and
    if the duration within the abnormal driving region reaches a first duration threshold, determining that the driver is in an abnormal driving state.
  8. The method according to claim 4, wherein determining the abnormal driving state of the driver based on grip force comprises:
    collecting the driver's grip force on a steering wheel; and
    if the grip force is lower than a preset grip-force threshold, determining that the driver is in an abnormal driving state.
  9. The method according to claim 4, wherein computing statistics over the normal driving image set and updating the normal driving angle comprises:
    passing each image in the normal driving image set through a face-angle and gaze-angle model to output a normal face and gaze angle value for each image; and
    aggregating all the normal face and gaze angle values to update the normal driving angle.
  10. The method according to claim 1, wherein predetermining the critical distraction deflection angle of the current vehicle comprises:
    calibrating a non-distraction labeled region and a distraction labeled region inside the current vehicle according to a preset region of interest, wherein the non-distraction labeled region comprises a normal gaze marker point and the distraction region comprises at least a boundary marker point;
    collecting images of multiple drivers looking toward the normal gaze marker point and the boundary marker point respectively, to obtain normal gaze images and distracted gaze images; and
    analyzing the normal gaze images and the distracted gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle.
  11. The method according to claim 10, wherein calibrating the non-distraction labeled region and the distraction labeled region inside the current vehicle according to the preset region of interest comprises:
    representing the preset region of interest as the non-distraction labeled region inside the current vehicle;
    determining a center point of the non-distraction labeled region to obtain the normal gaze marker point; and
    determining multiple boundaries of the non-distraction labeled region and, taking each boundary as an edge, determining a distraction labeled region outside the non-distraction labeled region, wherein any point on the boundary is represented as the boundary marker point.
  12. The method according to claim 10, wherein analyzing the normal gaze images and the distracted gaze images to obtain the critical distraction deflection angle of the distraction area of the current vehicle comprises:
    analyzing the normal gaze images, determining gaze angles of looking at the normal gaze marker point, and obtaining a first normal driving angle based on the distribution of those gaze angles;
    analyzing all the distracted gaze images, obtaining a distribution of critical distracted driving angles of the distraction area, and calculating a mean critical distracted driving angle of the distraction area based on that distribution; and
    calculating a difference between the first normal driving angle and the mean critical distracted driving angle to obtain all critical distraction deflection angles of the current vehicle.
  13. The method according to claim 1, wherein calibrating the non-distraction area and the distraction area for the driver in the current vehicle based on the normal driving angle and the predetermined critical distraction deflection angle of the current vehicle comprises:
    adding the critical distraction deflection angle of the current vehicle to the normal driving angle to obtain critical positions of a distraction-area boundary; and
    calibrating the area enclosed by the critical positions of the distraction-area boundary as the non-distraction area, and the area outside it as the distraction area.
  14. An automatic calibration apparatus for a distraction area, comprising:
    a collecting unit configured to collect multiple face images of a driver in a current vehicle within a preset time period;
    a determining unit configured to determine, from the multiple face images, a normal driving angle of the driver in the current vehicle; and
    a calibrating unit configured to calibrate a non-distraction area and a distraction area for the driver in the current vehicle based on the normal driving angle and a predetermined critical distraction deflection angle of the current vehicle.
  15. The apparatus according to claim 14, wherein the determining unit comprises:
    a first determining module configured to determine an abnormal driving state of the driver based on abnormal driving information and remove images corresponding to the abnormal driving state from all the face images to obtain a normal driving image set; and
    an updating module configured to compute statistics over the normal driving image set and update the normal driving angle.
  16. The apparatus according to claim 14, further comprising an initializing unit configured to initialize the normal driving angle before the normal driving angle of the driver in the current vehicle is determined from the multiple face images, the initializing unit comprising:
    a first initializing module configured to initialize the normal driving angle with a factory preset value; or
    a second initializing module configured to initialize the normal driving angle with a first gaze angle obtained when the driver looks at a first marker point.
  17. The apparatus according to claim 15, wherein the abnormal driving information comprises at least one of the following: low vehicle speed, turn-signal triggering, distraction deflection, and grip force.
  18. A road vehicle, comprising:
    an on-board camera, mounted at the windshield at the front of the vehicle and configured to capture road images of the road ahead; and
    an on-board control unit, connected to the on-board camera, which executes the automatic calibration method for a distraction area according to any one of claims 1 to 13.
  19. An on-board electronic device, comprising:
    a processor; and
    a memory configured to store executable instructions of the processor;
    wherein the processor is configured to execute, via the executable instructions, the automatic calibration method for a distraction area according to any one of claims 1 to 13.
  20. A computer-readable storage medium, the computer-readable storage medium comprising a stored computer program, wherein when the computer program runs, a device on which the computer-readable storage medium resides is controlled to execute the automatic calibration method for a distraction area according to any one of claims 1 to 13.
PCT/CN2022/131200 2021-12-07 2022-11-10 分心区域的自动标定方法及装置、道路车辆、电子设备 WO2023103708A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111489217.X 2021-12-07
CN202111489217.XA CN114332451A (zh) 2021-12-07 2021-12-07 分心区域的自动标定方法及装置、道路车辆、电子设备

Publications (1)

Publication Number Publication Date
WO2023103708A1 true WO2023103708A1 (zh) 2023-06-15

Family

ID=81051636

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/131200 WO2023103708A1 (zh) 2021-12-07 2022-11-10 分心区域的自动标定方法及装置、道路车辆、电子设备

Country Status (2)

Country Link
CN (1) CN114332451A (zh)
WO (1) WO2023103708A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332451A (zh) * 2021-12-07 2022-04-12 虹软科技股份有限公司 分心区域的自动标定方法及装置、道路车辆、电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020029444A1 (zh) * 2018-08-10 2020-02-13 初速度(苏州)科技有限公司 一种驾驶员驾驶时注意力检测方法和系统
CN111709264A (zh) * 2019-03-18 2020-09-25 北京市商汤科技开发有限公司 驾驶员注意力监测方法和装置及电子设备
CN113378771A (zh) * 2021-06-28 2021-09-10 济南大学 驾驶员状态确定方法、装置、驾驶员监控系统、车辆
CN114332451A (zh) * 2021-12-07 2022-04-12 虹软科技股份有限公司 分心区域的自动标定方法及装置、道路车辆、电子设备


Also Published As

Publication number Publication date
CN114332451A (zh) 2022-04-12

Similar Documents

Publication Publication Date Title
US20210357670A1 (en) Driver Attention Detection Method
JP6307629B2 (ja) Method and apparatus for detecting a driver's safe driving state
JP5171629B2 (ja) Travel information providing device
EP2564766B1 (en) Visual input of vehicle operator
CN102510480B (zh) Automatic calibration and tracking system for a driver's line of sight
US9041789B2 (en) System and method for determining driver alertness
CN113378771B (zh) Driver state determination method and apparatus, driver monitoring system, and vehicle
CN107757479A (zh) Driving assistance system and method based on augmented reality display technology
CN105835880A (zh) Lane tracking system
WO2023103708A1 (zh) Automatic calibration method and apparatus for distraction area, road vehicle, and electronic device
CN105599765A (zh) Lane departure judgment and early-warning method
CN110826369A (zh) Driver attention detection method and system during driving
CN110706282A (zh) Automatic calibration method and apparatus for a surround-view system, readable storage medium, and electronic device
US20150124097A1 (en) Optical reproduction and detection system in a vehicle
CN111179552A (zh) Driver state monitoring method and system based on multi-sensor fusion
KR101986734B1 (ko) Vehicle driving assistance device and safe driving guidance method thereof
CN110909718B (zh) Driving state recognition method and apparatus, and vehicle
US20200064912A1 (en) Eye gaze tracking of a vehicle passenger
CN108256487B (zh) Driving state detection device and method based on reverse binocular vision
CN116012822B (zh) Fatigue driving recognition method and apparatus, and electronic device
CN113942503A (zh) Lane keeping method and device
JP2022012829A (ja) Driver monitoring device and driver monitoring method
CN113212451A (zh) Rear-view assistance system for an intelligent driving vehicle
CN116052136B (zh) Distraction detection method, on-board controller, and computer storage medium
CN117698757A (zh) Dangerous driving recognition method compensating for shortcomings of an L2-level driver assistance system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22903131

Country of ref document: EP

Kind code of ref document: A1