WO2021249239A1 - Fatigue driving detection method and system, and computer device - Google Patents

Fatigue driving detection method and system, and computer device

Info

Publication number
WO2021249239A1
WO2021249239A1 (PCT/CN2021/097618)
Authority
WO
WIPO (PCT)
Prior art keywords
driver
side face
eye movement
parameter
instruction
Prior art date
Application number
PCT/CN2021/097618
Other languages
English (en)
French (fr)
Inventor
张莹
吴天来
蔡吉晨
何东健
沙锦周
陈超
Original Assignee
广州汽车集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 广州汽车集团股份有限公司
Priority to US 17/924,555 (published as US20230230397A1)
Publication of WO2021249239A1

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
        • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
        • B60W40/08 · related to drivers or passengers
        • B60W40/09 · · Driving style or behaviour
        • B60W2540/00 Input parameters relating to occupants
        • B60W2540/043 · Identity of occupants
        • B60W2540/229 · Attention level, e.g. attentive to driving, reading or sleeping
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
        • G06V10/00 Arrangements for image or video recognition or understanding
        • G06V10/10 · Image acquisition
        • G06V10/12 · · Details of acquisition arrangements; Constructional details thereof
        • G06V10/14 · · · Optical characteristics of the device performing the acquisition or on the illumination arrangements
        • G06V10/141 · · · · Control of illumination
        • G06V10/147 · · · · Details of sensors, e.g. sensor lenses
        • G06V10/70 · Arrangements using pattern recognition or machine learning
        • G06V10/74 · · Image or video pattern matching; Proximity measures in feature spaces
        • G06V10/82 · · using neural networks
        • G06V20/00 Scenes; Scene-specific elements
        • G06V20/50 · Context or environment of the image
        • G06V20/59 · · inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
        • G06V20/597 · · · Recognising the driver's state or behaviour, e.g. attention or drowsiness
        • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
        • G06V40/10 · Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
        • G06V40/16 · · Human faces, e.g. facial parts, sketches or expressions
        • G06V40/161 · · · Detection; Localisation; Normalisation
        • G06V40/166 · · · · using acquisition arrangements
        • G06V40/168 · · · Feature extraction; Face representation
        • G06V40/18 · · Eye characteristics, e.g. of the iris
        • G06V40/193 · · · Preprocessing; Feature extraction
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
        • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
        • H04N23/56 · provided with illuminating means

Definitions

  • The present invention relates to the technical field of driver fatigue detection, and in particular to a fatigue driving detection method and system and a computer device.
  • Some drivers wear myopia glasses. When oncoming light shines on such a driver, the glasses lenses reflect the incident light and form light spots, so the eye features cannot be extracted correctly; this interference makes the fatigue recognition result inaccurate.
  • Other drivers present special situations: for example, a driver whose degree of myopia is not high but who habitually squints, or drivers with eye defects. The same detection model cannot be applied to all of these drivers, yet they are detected by a unified standard recognition algorithm.
  • When handling fatigue driving detection for these drivers with special conditions, the system consumes a large amount of computing resources, detection and recognition are slow, and the recognition accuracy does not reach the ideal standard.
  • In view of this, the present invention aims to provide a fatigue driving detection method and system and a computer device, so as to solve the technical problem of low fatigue-driving detection accuracy caused by individual differences among drivers.
  • In a first aspect, an embodiment of the present invention proposes a fatigue driving detection method, including:
  • obtaining a side face image of the currently seated driver collected by a camera module, performing face recognition on the side face image to obtain side face feature parameters, and determining, according to the side face feature parameters, whether an ID file corresponding to the currently seated driver exists in a driver ID database, wherein an ID file includes a driver's side face feature parameters and the driver's normal eye movement feature parameters in the awake state;
  • if an ID file corresponding to the currently seated driver exists in the driver ID database, periodically obtaining, during driving, the driver's current-cycle side face image collected by the camera module, obtaining the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and determining whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
  • In some embodiments, the method further includes:
  • if no ID file corresponding to the currently seated driver exists in the driver ID database, obtaining multiple frames of side face images collected by the camera module during one cycle in which the driver is in a normal awake state while driving, performing image recognition on the multiple frames of side face images to obtain multiple eye movement feature parameters, obtaining the currently seated driver's normal eye movement feature parameters from the multiple eye movement feature parameters, and establishing the ID file corresponding to the currently seated driver from the driver's side face feature parameters and normal eye movement feature parameters;
  • after the ID file corresponding to the currently seated driver is established, periodically obtaining, during driving, the driver's current-cycle side face image collected by the camera module, obtaining the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and determining whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
  • In some embodiments, the current-cycle eye movement feature parameters include at least one of a current-cycle distance parameter between the upper and lower eyelids, a blink time parameter, and an eye closure time ratio parameter, and the driver's normal eye movement feature parameters include at least one of a normal distance parameter between the upper and lower eyelids, a normal blink time parameter, and a normal eye closure time ratio parameter.
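  • For illustration, the distance parameter between the upper and lower eyelids can be derived from per-frame eye landmarks. The sketch below is a minimal, hypothetical example; the landmark representation, the helper names, and the closure-ratio convention are assumptions for illustration, not part of the disclosed method.

```python
# Sketch: per-frame eye-movement measurements from eye landmarks.
# Landmark format (x, y pixel tuples) is an illustrative assumption.

def eyelid_distance(upper, lower):
    """Vertical distance between upper and lower eyelid points, in pixels."""
    return abs(upper[1] - lower[1])

def eye_closure_ratio(distance, open_distance):
    """0.0 = fully open, 1.0 = fully closed, relative to the awake baseline."""
    if open_distance <= 0:
        return 1.0
    return max(0.0, min(1.0, 1.0 - distance / open_distance))

# Example: eyelids 12 px apart, with a 12 px awake-open baseline.
d = eyelid_distance((100, 40), (100, 52))
r = eye_closure_ratio(d, open_distance=12.0)
```

Per-frame values like these would then be aggregated over a cycle into the blink-time and closure-time-ratio parameters named above.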
  • In some embodiments, the method further includes:
  • determining, according to the comparison result between the current intensity parameter of the ambient light in the vehicle and a preset intensity threshold, whether soft fill light should be applied to the driver's side face, and if so, generating a fill light control instruction and sending the fill light control instruction to the fill light actuator to control the fill light actuator to execute it.
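  • A minimal sketch of this fill-light decision, assuming a lux-valued intensity reading; the threshold value and the instruction encoding are hypothetical, as the embodiment does not specify them:

```python
# Sketch: decide whether to issue a soft fill-light instruction based on the
# ambient-light intensity. Threshold and instruction format are assumptions.

FILL_LIGHT_THRESHOLD_LUX = 50.0  # hypothetical preset intensity threshold

def fill_light_instruction(ambient_lux):
    """Return a fill-light control instruction, or None if light is sufficient."""
    if ambient_lux < FILL_LIGHT_THRESHOLD_LUX:
        return {"actuator": "fill_light", "command": "soft_fill_on"}
    return None
```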
  • In some embodiments, the method further includes:
  • determining, according to the comparison result between the current intensity parameter of the ambient light in the vehicle and a preset intensity threshold, whether the CMOS camera component or the infrared CCD camera component is used to collect the driver's side face image; the camera module includes a CMOS camera component and an infrared CCD camera component;
  • when the CMOS camera component is to be used, generating a first wake-up instruction and a first sleep instruction, sending the first wake-up instruction to the CMOS camera component to control it to execute the first wake-up instruction, and sending the first sleep instruction to the infrared CCD camera component to control it to execute the first sleep instruction;
  • when the infrared CCD camera component is to be used, generating a second wake-up instruction and a second sleep instruction, sending the second wake-up instruction to the infrared CCD camera component to control it to execute the second wake-up instruction, and sending the second sleep instruction to the CMOS camera component to control it to execute the second sleep instruction.
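  • The wake/sleep switching between the two camera components reduces to a threshold rule; the sketch below uses a hypothetical threshold and instruction encoding, which the embodiment does not specify:

```python
# Sketch: wake the camera component suited to the current ambient light and
# put the other component to sleep. Threshold value is an assumption.

CAMERA_SWITCH_THRESHOLD_LUX = 30.0  # hypothetical preset intensity threshold

def camera_instructions(ambient_lux):
    """Return wake/sleep instructions for the CMOS and infrared CCD components."""
    if ambient_lux >= CAMERA_SWITCH_THRESHOLD_LUX:
        # Bright cabin: the CMOS component collects the side face image.
        return {"cmos": "wake", "ir_ccd": "sleep"}
    # Low light: the infrared CCD component collects the side face image.
    return {"cmos": "sleep", "ir_ccd": "wake"}
```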
  • In some embodiments, the method further includes:
  • obtaining parameter information of the first plane where the camera optical axis of the camera module lies, performing image recognition on the side face image to obtain parameter information of the second plane where the driver's side face contour lies, and, if the two planes are not perpendicular, generating a lens adjustment control instruction and sending it to the lens adjustment drive mechanism to control the mechanism to drive the camera module to move, so that the first plane where the camera optical axis lies is perpendicular to the second plane where the driver's side face contour lies.
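  • The perpendicularity condition can be checked from the two planes' normal vectors, since two planes are perpendicular exactly when their normals are. A sketch, with an assumed angular tolerance:

```python
# Sketch: check whether the optical-axis plane and the side-face-contour plane
# are perpendicular via their normal vectors. The 2-degree tolerance is an
# illustrative assumption.
import math

def is_perpendicular(n1, n2, tol_deg=2.0):
    """True if planes with normals n1 and n2 are perpendicular within tol_deg."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(b * b for b in n2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return abs(angle - 90.0) <= tol_deg
```

When the check fails, the lens adjustment drive mechanism would be commanded to move the camera module until the condition holds.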
  • In a second aspect, an embodiment of the present invention proposes a fatigue driving detection system, including:
  • an image acquisition unit, used to acquire the side face image of the currently seated driver collected by the camera module;
  • a driver determination unit, configured to perform face recognition on the side face image to obtain side face feature parameters and to determine, according to the side face feature parameters, whether an ID file corresponding to the currently seated driver exists in the driver ID database, wherein an ID file includes a driver's side face feature parameters and the driver's normal eye movement feature parameters in the awake state; and
  • a fatigue driving determination unit, used, when an ID file corresponding to the currently seated driver exists in the driver ID database, to periodically obtain the driver's current-cycle side face image collected by the camera module during driving, to obtain the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and to determine whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
  • In some embodiments, the system further includes an ID file establishing unit, used, when no ID file corresponding to the currently seated driver exists in the driver ID database, to obtain multiple frames of side face images collected by the camera module during one cycle in which the driver is in a normal awake state while driving, to perform image recognition on the multiple frames of side face images to obtain multiple eye movement feature parameters, to obtain the currently seated driver's normal eye movement feature parameters from the multiple eye movement feature parameters, and to establish the ID file corresponding to the currently seated driver from the driver's side face feature parameters and normal eye movement feature parameters;
  • the fatigue driving determination unit is also used, after the ID file corresponding to the currently seated driver is established, to periodically obtain the driver's current-cycle side face image collected by the camera module during driving, to obtain the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and to determine whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
  • In some embodiments, the current-cycle eye movement feature parameters include at least one of a current-cycle distance parameter between the upper and lower eyelids, a blink time parameter, and an eye closure time ratio parameter, and the driver's normal eye movement feature parameters include at least one of a normal distance parameter between the upper and lower eyelids, a normal blink time parameter, and a normal eye closure time ratio parameter.
  • In some embodiments, the system further includes:
  • an ambient light intensity acquisition unit, used to acquire the current intensity parameter of the ambient light in the vehicle collected by the ambient light sensor;
  • a fill light judging unit, configured to determine, according to the comparison result between the current intensity parameter of the ambient light in the vehicle and a preset intensity threshold, whether soft fill light should be applied to the driver's side face;
  • a fill light control unit, used, if it is determined that soft fill light should be applied to the driver's side face, to generate a fill light control instruction and send it to the fill light actuator to control the actuator to execute the fill light control instruction.
  • In some embodiments, the system further includes:
  • an ambient light intensity acquisition unit, used to acquire the current intensity parameter of the ambient light in the vehicle collected by the ambient light sensor;
  • a camera mode determination unit, configured to determine, according to the comparison result between the current intensity parameter of the ambient light in the vehicle and a preset intensity threshold, whether the CMOS camera component or the infrared CCD camera component is used to collect the driver's side face image; the camera module includes a CMOS camera component and an infrared CCD camera component;
  • a first camera control unit, used, when the CMOS camera component is to be used, to generate a first wake-up instruction and a first sleep instruction, send the first wake-up instruction to the CMOS camera component to control it to execute the first wake-up instruction, and send the first sleep instruction to the infrared CCD camera component to control it to execute the first sleep instruction;
  • a second camera control unit, used, when the infrared CCD camera component is to be used, to generate a second wake-up instruction and a second sleep instruction, send the second wake-up instruction to the infrared CCD camera component to control it to execute the second wake-up instruction, and send the second sleep instruction to the CMOS camera component to control it to execute the second sleep instruction.
  • In some embodiments, the system further includes:
  • a first plane information acquiring unit, configured to acquire parameter information of the first plane where the camera optical axis of the camera module lies;
  • a second plane information acquiring unit, configured to perform image recognition on the side face image to obtain parameter information of the second plane where the driver's side face contour lies;
  • a lens adjustment control unit, configured to determine, from the parameter information of the first plane and the parameter information of the second plane, whether the two planes are perpendicular to each other, and, if they are not, to send a lens adjustment control instruction to the lens adjustment drive mechanism to control the mechanism to drive the camera module to move, so that the first plane where the camera optical axis lies is perpendicular to the second plane where the driver's side face contour lies.
  • In a third aspect, an embodiment of the present invention proposes a computer device, including: the fatigue driving detection system according to the embodiment of the second aspect; or a memory and a processor, wherein computer-readable instructions are stored in the memory, and when the computer-readable instructions are executed by the processor, the processor is caused to execute the steps of the fatigue driving detection method according to the embodiment of the first aspect.
  • The above fatigue driving detection method, system, and computer device have at least the following beneficial effects:
  • the driver's side face image (left face or right face) is used as the basis for detection; face recognition is performed on the side face image to obtain side face feature parameters, and the comparison result of the eye movement feature parameters determines whether the driver is driving fatigued;
  • the ID files include each driver's side face feature parameters and normal eye movement feature parameters in the awake state, so that for different drivers the normal eye movement feature parameters used as the comparison reference differ, further solving the technical problem of low fatigue-driving detection accuracy caused by individual differences.
  • FIG. 1 is a schematic flowchart of a method for detecting fatigue driving in an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method for detecting fatigue driving in another embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a process of performing soft light supplement light in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a flow of controlling the camera component in an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a framework of a fatigue driving detection system in an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a framework of a fatigue driving detection system in another embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a frame of a soft light supplement light control module in an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a framework of a camera assembly control module in an embodiment of the present invention.
  • FIG. 1 is a flowchart of the fatigue driving detection method of this embodiment. Referring to FIG. 1, the method of this embodiment includes steps S11 to S13.
  • Step S11: obtain a side face image of the currently seated driver collected by the camera module.
  • The side face image refers to the driver's left side face image or right side face image.
  • Different countries/regions adopt different driving rules: some drive on the right and others on the left. Under the rule of driving on the right, the driver sits in the left front seat; to facilitate installation of the camera module and side face image acquisition, the camera module can be installed close to the left front seat, and the camera module collects the driver's left side face image. Under the rule of driving on the left, the driver sits in the right front seat; the camera module can be installed close to the right front seat, and the camera module collects the driver's right side face image.
  • Step S12: perform face recognition on the side face image to obtain side face feature parameters, and determine, according to the side face feature parameters, whether an ID file corresponding to the currently seated driver exists in the driver ID database; an ID file includes a driver's side face feature parameters and the driver's normal eye movement feature parameters in the awake state.
  • Different ID files are established for different drivers, and multiple driver ID files are stored in the driver ID database. Since each ID file includes the driver's side face feature parameters and normal eye movement feature parameters in the awake state, the normal eye movement feature parameters used as the comparison reference differ between drivers.
  • Specifically, in step S12 the side face feature parameters of the currently seated driver obtained by face recognition are compared one by one with the side face feature parameters of the ID files in the driver ID database, so as to determine whether an ID file corresponding to the currently seated driver exists in the driver ID database.
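  • The one-by-one comparison can be sketched as a similarity lookup; the feature-vector representation, the cosine similarity measure, and the match threshold below are illustrative assumptions, not this embodiment's specification.

```python
# Sketch: match the current driver's side-face feature vector against stored
# ID files one by one. Similarity measure and threshold are assumptions.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def find_id_file(side_face_features, id_database, threshold=0.9):
    """Return the matching ID file, or None if the driver is unknown."""
    for id_file in id_database:
        if cosine_similarity(side_face_features, id_file["side_face"]) >= threshold:
            return id_file
    return None
```

A `None` result corresponds to the branch in which a new ID file is established for the currently seated driver.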
  • The side face feature parameter refers to the feature parameters of the side face contour as a whole.
  • The normal eye movement feature parameters preferably include, but are not limited to, at least one of the distance parameter between the upper and lower eyelids when the eyes are normally open, the normal blink time parameter, and the normal eye closure time ratio parameter (PERCLOS).
  • The eye movement features may also be other eye-related features, which are not specifically limited in this embodiment.
  • Step S13: if an ID file corresponding to the currently seated driver exists in the driver ID database, periodically obtain, during driving, the driver's current-cycle side face image collected by the camera module, obtain the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and determine whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
  • When step S12 determines that an ID file corresponding to the currently seated driver exists in the driver ID database, the driver's normal eye movement feature parameters in that ID file are obtained and used as the reference parameters for determining whether the driver is driving fatigued.
  • If fatigue driving is determined, the vehicle assistant responds with a wake-up warning and automatic driving intervention.
  • Image feature extraction is a technical method widely used in the field of image recognition; its principle is essentially to use the differing pixel values on the image to locate the detection target.
  • In this embodiment, the detection target is the eye. Based on the side face image collected in this embodiment, those skilled in the art know how to extract eye movement features; therefore, the specific steps of eye movement feature extraction are not limited in this embodiment, and this embodiment may be combined with any technical means of eye movement feature extraction, all of which should be understood to fall within its protection scope.
  • The main point of this embodiment is to use the driver's side face image as the basis for fatigue driving detection. Whether a side face image or a front face image is used, the extracted feature data are eye movement features.
  • Fatigue driving detection based on eye movement features is widely used in the field; based on the eye movement feature parameters extracted in this embodiment, those skilled in the art are familiar with how to perform fatigue driving detection. This embodiment therefore does not limit the specific steps of the fatigue driving detection.
  • This embodiment can be combined with any technical means for fatigue driving detection based on eye movement features, such as the PERCLOS algorithm, which also includes feature extraction; all such means should be understood to fall within the protection scope of this embodiment.
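  • As an example of such a technique, a PERCLOS-style eye-closure-time ratio over one cycle can be sketched as follows; the closure criterion (eyelid distance below 20% of the awake-open distance, in the spirit of the common P80 convention) is an assumption here, not this embodiment's specification.

```python
# Sketch: fraction of frames in one detection cycle in which the eye is
# judged closed, relative to the driver's awake-open eyelid distance.

def perclos(eyelid_distances, open_distance, closed_fraction=0.2):
    """Eye-closure-time ratio: share of frames below closed_fraction * open."""
    if not eyelid_distances:
        return 0.0
    closed = sum(1 for d in eyelid_distances if d < closed_fraction * open_distance)
    return closed / len(eyelid_distances)
```

A high ratio relative to the driver's normal eye closure time ratio parameter would indicate fatigue.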
  • This embodiment uses the driver's side face image as the judgment data for detecting fatigue driving, so the driver's eye movement feature parameters can be extracted accurately and fatigue detection performed on their basis, avoiding the low detection accuracy caused by the driver wearing glasses.
  • Moreover, since the ID files include each driver's side face feature parameters and normal eye movement feature parameters in the awake state, the normal eye movement feature parameters used as the comparison reference differ between drivers, further solving the technical problem of low fatigue-driving detection accuracy caused by individual differences.
  • In some embodiments, the method further includes steps S14 and S15.
  • Step S14: if no ID file corresponding to the currently seated driver exists in the driver ID database, obtain multiple frames of side face images collected by the camera module during one cycle in which the driver is in a normal awake state while driving, perform image recognition on the multi-frame side face images to obtain multiple eye movement feature parameters, obtain the currently seated driver's normal eye movement feature parameters from the multiple eye movement feature parameters, and establish an ID file corresponding to the currently seated driver from the currently seated driver's side face feature parameters and normal eye movement feature parameters.
  • In other words, a new driver ID file is established to facilitate subsequent fatigue driving detection for that driver. The new ID file likewise includes side face feature parameters and the driver's normal eye movement feature parameters: the side face feature parameters are those obtained in step S12, and the normal eye movement feature parameters are obtained from the multiple frames of side face images collected during one cycle in the normally awake state.
  • The multiple eye movement feature parameter samples are samples of normal eye movement when the driver is awake; therefore, a preset model can be trained on these multiple normal samples to obtain the driver's normal eye movement feature parameters.
  • The principle of training with the preset model is that statistical analysis is performed on the multiple normal eye movement feature parameter samples to obtain the normal eye movement feature parameter range.
  • Deep learning neural networks are widely used in model training for fatigue driving detection, including training on multiple samples to obtain the distance parameter between the upper and lower eyelids when the eyes are normally open, the normal blink time parameter, and the normal eye closure time ratio parameter; the preset model in this embodiment may specifically be a deep learning neural network.
  • The sample parameters obtained are the distance parameter between the upper and lower eyelids, the blink time parameter, and the eye closure time ratio parameter. These are numerical parameters and are independent of whether the image is a side face or a front face; therefore, training models from front-face fatigue detection techniques can also be used for training here. Since training the eye movement feature parameters in the normal awake state is not the main point of this embodiment, the training model and training process are not described in detail.
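  • As a minimal stand-in for the training described above, the statistical analysis of awake samples could be as simple as deriving a mean-and-spread range per parameter; the mean ± k·σ rule and the choice of k are illustrative assumptions, not this embodiment's training procedure.

```python
# Sketch: derive a driver's normal range for one eye-movement parameter from
# awake-state samples by simple statistics. k = 2 is an illustrative choice.
import statistics

def normal_range(samples, k=2.0):
    """Return (low, high) bounds for a normal eye-movement parameter."""
    mean = statistics.mean(samples)
    std = statistics.pstdev(samples)
    return mean - k * std, mean + k * std
```

The resulting range would be stored in the driver's ID file as the normal eye movement feature parameter.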
  • The driver should be in a normally awake state at the start of driving; therefore, the cycle during which the driver is in a normally awake state may be taken after the vehicle has been driven for a preset time, for example 5 minutes.
  • When the driver is normally awake, the vehicle's driving should be relatively stable; therefore, by obtaining the vehicle's state information, it can be determined whether the vehicle is driving stably and hence whether the driver is normally awake.
  • it can also be combined with other technical means for determining whether the driver is in a normal awake state while driving. This embodiment does not specifically limit it, and it should be understood that all of them fall within the protection scope of the present invention.
  • the step length of one cycle can be set according to specific technical requirements (for example, data processing time), which is not specifically limited in this embodiment.
  • Step S15: After the ID file corresponding to the currently seated driver is established, during driving, periodically acquire the driver's current-cycle side face image collected by the camera module, obtain the driver's current-cycle eye movement characteristic parameters from that image, and determine whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement characteristic parameters and the driver's normal eye movement characteristic parameters.
  • The eye movement characteristic parameters of the current cycle include at least one of a distance parameter between the upper and lower eyelids of the current cycle, a blink time parameter, and an eye closure time ratio parameter; the driver's normal eye movement characteristic parameters include at least one of a normal distance parameter between the upper and lower eyelids, a normal blink time parameter, and a normal eye closure time ratio parameter.
  • The eye movement feature parameters of each cycle are the multiple eye movement feature parameters extracted from the multiple frames of side face images of that cycle.
  • Taking the distance parameter between the upper and lower eyelids as an example: when the driver is fatigued, the opening of the eyes becomes smaller, commonly a squinting state. Therefore, the upper-lower eyelid distance parameters of the current cycle can be statistically analyzed to obtain a single distance value; statistical analysis may be, for example, cluster analysis or averaging. When the distance value obtained for the driver is not within the range of the driver's normal eye movement characteristic parameters, it is determined that the driver is driving fatigued.
  • Taking the blink time parameter as an example: when the driver is fatigued, the blink frequency may increase, i.e. the blink interval becomes shorter. Therefore, the upper-lower eyelid distance parameters of the current cycle can be statistically analyzed to obtain a blink frequency parameter. When the blink frequency parameter obtained for the driver is not within the range of the driver's normal blink frequency parameter, or the driver's current blink interval parameter is not within the range of the driver's normal blink interval parameter, it is determined that the driver is driving fatigued.
  • Taking the eye closure time ratio parameter as an example: when the driver is fatigued, the driver may close his eyes for a long time, i.e. enter a dozing state. Therefore, the eyes-closed time and eyes-open time parameters of the current cycle can be statistically analyzed to obtain the current cycle's eye closure time ratio; when the driver's current-cycle eye closure time ratio parameter is not within the range of the driver's normal eye closure time ratio parameter, it is determined that the driver is driving fatigued.
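The eye closure time ratio described above is a PERCLOS-style measure. A minimal sketch, assuming each side-face frame of the cycle has already been classified upstream as eyes-open or eyes-closed (that per-frame classification is not shown); the function name and flag representation are hypothetical.

```python
def eye_closure_time_ratio(frame_closed_flags):
    """Fraction of the cycle during which the eyes are closed.

    frame_closed_flags: list of booleans, one per side-face frame
    (True = eyes closed in that frame)."""
    if not frame_closed_flags:
        return 0.0
    return sum(frame_closed_flags) / len(frame_closed_flags)

# 10-frame cycle in which the eyes are closed for 3 frames
ratio = eye_closure_time_ratio([False] * 7 + [True] * 3)  # 0.3
```

Fatigue would then be flagged when the ratio falls outside the normal range recorded in the driver's ID file.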
  • One or more of the distance parameter between the upper and lower eyelids, the blink time parameter, and the eye closure time ratio parameter can be selected as the basis for determining that the driver is fatigued; this embodiment does not specifically limit the choice. It should be noted that these are only examples: the determination of fatigue driving may be further specified in combination with the eye-state characteristics a driver exhibits in a fatigued state. All such settings, readily made from the inventive concept of this embodiment combined with face image recognition technology, should be understood to fall within the protection scope of the present invention and are not repeated here.
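The comparison step described above can be sketched as checking each monitored current-cycle parameter against the normal range recorded in the driver's ID file. The any-parameter-out-of-range rule, the parameter names, and the range values below are illustrative assumptions, not values fixed by this embodiment.

```python
def is_fatigued(current_params, normal_ranges):
    """current_params: {name: value}; normal_ranges: {name: (low, high)}.

    Flags fatigue when any monitored parameter leaves its normal range
    (an illustrative decision rule)."""
    for name, value in current_params.items():
        low, high = normal_ranges[name]
        if not (low <= value <= high):
            return True
    return False

# Hypothetical normal ranges from one driver's ID file
normal = {"eyelid_distance": (7.0, 9.0),
          "blink_interval_s": (2.0, 6.0),
          "closure_ratio": (0.0, 0.15)}

# Squinting (eyelid distance below the normal range) triggers the flag
alert = is_fatigued({"eyelid_distance": 5.5,
                     "blink_interval_s": 4.0,
                     "closure_ratio": 0.1}, normal)
```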
  • the method further includes steps S21-S23:
  • Step S21 Obtain the current intensity parameter of the ambient light in the car collected by the ambient light recognition sensor;
  • the ambient light recognition sensor may be arranged in the vehicle interior space.
  • Step S22 According to the comparison result of the current intensity parameter of the ambient light in the vehicle and the preset intensity threshold, it is determined whether to perform soft light supplementation on the side face of the driver;
  • In this embodiment, a CMOS camera component is used to collect the driver's left face image. When the current intensity of the ambient light in the car does not allow the CMOS camera component to achieve good imaging quality, it is determined that soft light supplementation is performed on the driver's side face, providing light gain for the CMOS camera component so as to obtain images of better quality.
  • Step S23 If it is determined that the driver's side face is filled with soft light, then generate a fill light control instruction, and send the fill light control instruction to the fill light actuator to control the fill light actuator to execute the fill light control instruction .
  • The supplementary light actuator is a soft fill light. It can be designed in a strip shape and assembled on the car body on the left side of the driving position, corresponding to the left side of the driver's head (inside the cabin).
  • The light of the soft fill light is diffusely reflected soft light, which is not dazzling; it supplements the light on the driver's left face under poor lighting conditions, and the diffuse reflection prevents glare interference with the driver while providing the supplementary light.
  • More specifically, for example, when the current intensity of the ambient light in the car is less than a first preset intensity threshold but greater than a second preset intensity threshold, the soft fill light is controlled to light up at a first brightness; when the intensity is less than the second preset intensity threshold, the soft fill light is controlled to light up at a second brightness. The above is only illustrative; based on this embodiment, those skilled in the art can adjust it according to actual technical requirements, which is not specifically limited here and should be understood to fall within the protection scope of this embodiment.
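The two-threshold brightness behaviour above can be sketched as follows; the threshold values, the lux unit, and the brightness encoding are illustrative assumptions, not values given by this embodiment.

```python
def fill_light_command(ambient_lux, thresh1=300.0, thresh2=50.0):
    """Map cabin ambient light intensity to a soft fill-light level.

    thresh1 > thresh2 (both hypothetical). Returns 0 (off),
    1 (first brightness) or 2 (second, stronger brightness)."""
    if ambient_lux >= thresh1:
        return 0   # enough ambient light: no supplementation needed
    if ambient_lux > thresh2:
        return 1   # moderately dim cabin: first brightness level
    return 2       # dark cabin: second brightness level

level = fill_light_command(100.0)  # between the two thresholds -> level 1
```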
  • the method further includes steps S31-S34:
  • Step S31 Obtain the current intensity parameter of the ambient light in the car collected by the ambient light recognition sensor;
  • the ambient light recognition sensor may be arranged in the vehicle interior space.
  • Step S32 According to the comparison result of the current intensity parameter of the ambient light in the vehicle and the preset intensity threshold, it is determined that a CMOS camera component or an infrared CCD camera component is used to collect the driver's side face image; the camera module includes a CMOS camera component And infrared CCD camera components;
  • In this embodiment, a CMOS camera component or an infrared CCD camera component is used to collect the driver's left face image.
  • When the current intensity of the ambient light in the car does not allow the CMOS (complementary metal-oxide-semiconductor) camera component to achieve good imaging quality, it is determined that the infrared CCD camera component is used to collect the driver's side face image so as to obtain images of better quality, and the CMOS camera component goes to sleep.
  • When the current intensity of the ambient light in the car does allow the CMOS camera component to achieve good imaging quality, it is determined that the CMOS camera component is used to collect the driver's side face image so as to obtain images of better quality, and the infrared CCD camera component goes to sleep.
  • Step S33 If it is determined that the CMOS camera component is used to collect the driver’s side face image, a first wake-up instruction and a first sleep instruction are generated, and the first wake-up instruction is sent to the CMOS camera component to control the CMOS camera component to execute the first A wake-up instruction, and sending the first sleep instruction to the infrared CCD camera component to control the infrared CCD camera component to execute the first sleep instruction;
  • Step S34 If it is determined that the infrared CCD camera component is used to collect the driver’s side face image, a second wake-up instruction and a second sleep command are generated, and the second wake-up instruction is sent to the infrared CCD camera component to control the infrared CCD camera component to execute The second wake-up command and the second sleep command are sent to the CMOS camera component to control the CMOS camera component to execute the second sleep command.
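Steps S33–S34 above can be sketched as a selector that wakes one camera component and puts the other to sleep, depending on whether the ambient intensity satisfies the CMOS component's imaging-quality threshold. The threshold value and the naming are assumptions for illustration.

```python
def select_camera(ambient_lux, cmos_min_lux=80.0):
    """Return (wake_target, sleep_target) per steps S33/S34.

    cmos_min_lux is a hypothetical threshold below which the CMOS
    component no longer images well."""
    if ambient_lux >= cmos_min_lux:
        # first wake-up instruction to CMOS, first sleep instruction to IR CCD
        return ("CMOS", "IR_CCD")
    # second wake-up instruction to IR CCD, second sleep instruction to CMOS
    return ("IR_CCD", "CMOS")

wake, sleep = select_camera(20.0)  # dark cabin -> infrared CCD collects
```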
  • The preset intensity threshold range corresponding to good imaging quality of the CMOS camera component can be set according to specific technical requirements; it is related to the image quality required by the selected feature extraction method. This embodiment does not specifically limit it, and such settings should be understood to fall within the protection scope of this embodiment.
  • the method further includes steps S41-S43:
  • Step S41 Acquire parameter information of the first plane where the optical axis of the camera of the camera module is located;
  • Step S42 Perform image recognition on the side face image to obtain parameter information of the second plane where the side face contour of the driver is located;
  • Step S43: Determine whether the first plane is perpendicular to the second plane according to the parameter information of the first plane and the parameter information of the second plane. If the first plane and the second plane are not perpendicular, generate a lens adjustment control instruction and send it to the lens adjustment drive mechanism, so as to control the lens adjustment drive mechanism to drive the camera module to move until the first plane, where the optical axis of the camera of the camera module is located, is perpendicular to the second plane, where the driver's side face contour is located.
  • Specifically, since different drivers differ in height and sitting posture, a lens adjustment mechanism may be provided to adjust the position of the camera module so that it can capture the driver's side face image completely. Illustratively, a drive motor is used as the drive element of the lens adjustment mechanism.
  • The angular displacement output by the drive motor can be controlled according to the relationship between the first plane, where the optical axis of the camera of the camera module is located, and the second plane, where the driver's side face contour is located, so that during driving the plane of the camera's optical axis always remains perpendicular to the plane of the driver's side face contour.
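The perpendicularity condition above can be checked from the two planes' normal vectors: two planes are perpendicular exactly when their normals are orthogonal, i.e. their dot product is zero. A minimal sketch with an angular tolerance, assuming each plane is represented by a unit normal vector (the representation and the tolerance value are assumptions):

```python
import math

def planes_perpendicular(n1, n2, tol_deg=2.0):
    """n1, n2: unit normal vectors (x, y, z) of the two planes.

    The planes are perpendicular when the angle between the normals
    is 90 degrees, within a hypothetical tolerance tol_deg."""
    dot = sum(a * b for a, b in zip(n1, n2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return abs(angle - 90.0) <= tol_deg

# optical-axis plane normal vs. side-face contour plane normal
ok = planes_perpendicular((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

When the check fails, the controller would command the drive motor to rotate the camera module until the check passes.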
  • FIG. 5 is a schematic diagram of the framework of the fatigue driving detection system of this embodiment. Referring to FIG. 5, the system of this embodiment includes:
  • the image acquisition unit 11 is configured to acquire the side face image of the driver currently seated and collected by the camera module 100;
  • the driver determination unit 12 is configured to perform face recognition on the side face image to obtain side face feature parameters, and determine whether there is an ID file corresponding to the currently seated driver in the driver ID database according to the side face feature parameters ; Wherein the ID file includes the driver's side face feature parameters and the driver's normal eye movement feature parameters in an awake state; and
  • The fatigue driving determination unit 13 is used to, when an ID file corresponding to the currently seated driver exists in the driver ID database, periodically acquire during driving the driver's current-cycle side face image collected by the camera module 100, obtain the driver's current-cycle eye movement feature parameters from that image, and determine whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
  • The system further includes an ID file establishing unit 14, which is used to, when no ID file corresponding to the currently seated driver exists in the driver ID database, acquire multiple frames of side face images collected by the camera module 100 during a period in which the driver is in a normal awake state while driving, perform image recognition on the multiple frames to obtain multiple eye movement feature parameters, obtain the currently seated driver's normal eye movement feature parameters from them, and establish the ID file corresponding to the currently seated driver from the currently seated driver's side face feature parameters and the driver's normal eye movement feature parameters.
  • The fatigue driving determination unit 13 is also used to, after the ID file corresponding to the currently seated driver is established, periodically acquire during driving the driver's current-cycle side face image collected by the camera module 100, obtain the driver's current-cycle eye movement characteristic parameters from that image, and determine whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement characteristic parameters and the driver's normal eye movement characteristic parameters.
  • the ID file creation unit 14 specifically includes:
  • the driver feature parameter acquisition unit is used to acquire multiple frames of side face images of the currently seated driver collected by the camera module 100 in one cycle, and perform image recognition on the multiple frames of side face images to obtain multiple eye movement feature parameters ;
  • The sample matching unit is used to match the multiple eye movement feature parameters one by one with multiple individual fatigue detection samples in an individual fatigue detection sample library, and to determine the individual fatigue detection sample in the library with the highest matching degree to the multiple eye movement feature parameters.
  • the normal parameter acquisition unit is configured to acquire the normal eye movement characteristic parameters of the driver corresponding to the individual fatigue test sample with the highest matching degree, and use it as the normal eye movement characteristic parameters of the currently seated driver;
  • the file generating unit is used to generate an ID file corresponding to the currently seated driver according to the side face feature parameters of the currently seated driver and the normal eye movement feature parameters of the driver.
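The "highest matching degree" of the sample matching unit above could be realized, for instance, as nearest-neighbour matching over the eye movement parameter vectors. The Euclidean metric and all names here are illustrative assumptions, since this embodiment does not fix a particular matching measure.

```python
import math

def best_matching_sample(params, sample_library):
    """params: tuple of eye movement features; sample_library: {sample_id: tuple}.

    Returns the id of the sample closest to params (hypothetical
    Euclidean 'matching degree')."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(sample_library, key=lambda sid: dist(params, sample_library[sid]))

# (eyelid distance, blink time s, closure ratio) -- illustrative values
library = {"s1": (8.0, 0.25, 0.10), "s2": (6.0, 0.40, 0.20)}
best = best_matching_sample((7.9, 0.26, 0.11), library)
```

The normal eye movement feature parameters associated with `best` would then seed the new driver's ID file.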
  • the system further includes a soft light supplement light control module.
  • the soft light supplement light control module includes:
  • the ambient light intensity acquiring unit 21 is configured to acquire the current intensity parameter of the ambient light in the vehicle collected by the ambient light recognition sensor;
  • the fill light judging unit 22 is configured to determine whether to perform soft light fill light on the driver's side face according to the comparison result of the current intensity parameter of the ambient light in the vehicle and a preset intensity threshold;
  • the fill light control unit 23 is used to generate a fill light control instruction if it is determined to perform soft light fill light on the driver's side face, and send the fill light control instruction to the fill light actuator 200 to control the fill light actuator 200 to execute The supplementary light control instruction.
  • the system further includes a camera component control module.
  • the camera component control module includes:
  • the ambient light intensity acquiring unit 31 is configured to acquire the current intensity parameter of the ambient light in the car collected by the ambient light recognition sensor;
  • the imaging mode determination unit 32 is configured to determine, according to the comparison result of the current intensity parameter of the ambient light in the vehicle and the preset intensity threshold, to adopt the CMOS camera component 101 or the infrared CCD camera component 102 to collect the driver's side face image;
  • the camera module 100 includes a CMOS camera component 101 and an infrared CCD camera component 102;
  • the first camera control unit 33 is configured to generate a first wake-up instruction and a first sleep instruction when it is determined that the CMOS camera component 101 is used to capture the driver’s side face image, and send the first wake-up instruction to the CMOS camera component 101 to Controlling the CMOS camera component 101 to execute the first wake-up instruction, and send the first sleep instruction to the infrared CCD camera component 102, so as to control the infrared CCD camera component 102 to execute the first sleep instruction;
  • the second camera control unit 34 is configured to generate a second wake-up instruction and a second sleep instruction when it is determined that the infrared CCD camera component 102 is used to collect the driver’s side face image, and send the second wake-up command to the infrared CCD camera component 102 , To control the infrared CCD camera component 102 to execute the second wake-up instruction, and send the second sleep instruction to the CMOS camera component 101 to control the CMOS camera component 101 to execute the second sleep instruction.
  • system further includes:
  • the first plane information acquiring unit is configured to acquire parameter information of the first plane where the optical axis of the camera of the camera module is located;
  • the second plane information acquiring unit is configured to perform image recognition on the side face image to obtain parameter information of the second plane where the side face contour of the driver is located;
  • The lens adjustment control unit is configured to determine whether the first plane and the second plane are perpendicular according to the parameter information of the first plane and the parameter information of the second plane; if the two planes are not perpendicular, it sends a lens adjustment control instruction to the lens adjustment drive mechanism to control the lens adjustment drive mechanism to drive the camera module to move, so that the first plane, where the optical axis of the camera of the camera module is located, is perpendicular to the second plane, where the driver's side face contour is located.
  • If the fatigue driving detection system described in the foregoing embodiment is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Another embodiment of the present invention also provides a computer device, including: the fatigue driving detection system according to the above-mentioned embodiment; or, a memory and a processor, wherein computer-readable instructions are stored in the memory, and the computer-readable When the instructions are executed by the processor, the processor is caused to execute the steps of the fatigue driving detection method according to the foregoing embodiment.
  • the computer device may also have components such as a wired or wireless network interface, a keyboard, an input and output interface for input and output, and the computer device may also include other components for implementing device functions, which will not be repeated here.
  • the computer program may be divided into one or more units, and the one or more units are stored in the memory and executed by the processor to complete the present invention.
  • the one or more units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the computer device.
  • The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • The general-purpose processor may be a microprocessor or any conventional processor.
  • The processor is the control center of the computer device, and various interfaces and lines connect the various parts of the computer device.
  • The memory may be used to store the computer program and/or units; the processor implements the various functions of the computer device by running or executing the computer program and/or units stored in the memory and calling the data stored in the memory.
  • The memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
  • Another embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the fatigue driving detection method described in the foregoing embodiment are implemented.
  • the computer-readable storage medium may include: any entity or device capable of carrying the computer program code, recording medium, U disk, mobile hard disk, magnetic disk, optical disk, computer memory, read-only memory (ROM, Read-Only Memory), Random Access Memory (RAM, Random Access Memory), electrical carrier signals, telecommunications signals, and software distribution media, etc.

Abstract

A fatigue driving detection method and system, and a computer device. The method includes: acquiring a side face image of the currently seated driver collected by a camera module (S11); performing face recognition on the side face image to obtain side face feature parameters, and determining, according to the side face feature parameters, whether an ID file corresponding to the currently seated driver exists in a driver ID database, wherein the ID file includes the driver's side face feature parameters and the driver's normal eye movement feature parameters in an awake state (S12); if an ID file corresponding to the currently seated driver exists in the driver ID database, then during driving, periodically acquiring the driver's current-cycle side face image collected by the camera module, obtaining the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and determining whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters (S13). The method can solve the technical problem that individual differences among drivers lead to low accuracy of fatigue driving detection.

Description

Fatigue driving detection method and system, and computer device
Related Application
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on June 12, 2020, with application number CN202010535115.6 and invention title "Fatigue driving detection method and system, computer device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of driver fatigue detection, and in particular to a fatigue driving detection method and system, and a computer device.
Background
With the development of society, the advance of technology and the rise in living standards, vehicles are used ever more frequently in modern society, and traffic safety has become a serious social problem that cannot be ignored in daily life: a considerable number of people are killed or injured in traffic accidents every year. Statistical analysis of traffic accidents shows that the vast majority are caused by human factors, most of which in turn are caused by fatigue driving. Specifically, fatigue while driving may cause reduced vision, lack of concentration, impaired thinking and other phenomena, leading to slow reactions, failure to judge in time and sluggish movements. These problems can be analyzed and judged from the state of the driver's brain or eyes, so that the driver can be reminded of the current fatigue state in time, reducing the potential harm of fatigue driving.
Existing fatigue driving detection techniques at home and abroad mostly judge whether the driver is driving fatigued by detecting the opening and closing of the driver's eyes, for example by acquiring a frontal face image of the driver, performing face recognition on it, and extracting behavioral features such as blink frequency and facial expression to obtain the opening and closing state of the driver's eyes.
In the course of realizing the present invention, the inventors found that the above fatigue driving detection techniques have at least the following technical problems:
Drivers differ individually. For example, some drivers wear myopia glasses; when oncoming light strikes the lenses, light spots and flares form on them, so that the eye features cannot be correctly extracted from the collected facial image, and the resulting interference makes the fatigue recognition result inaccurate. Other drivers present special cases, such as drivers with mild myopia who habitually squint, or drivers with eye defects. A single detection model cannot suit all drivers, and detection with a uniform, standardized recognition algorithm consumes considerable computing resources when handling these special cases, with slow detection efficiency and recognition accuracy below the ideal standard.
Summary of the Invention
The present invention aims to provide a fatigue driving detection method and system, and a computer device, to solve the technical problem that individual differences among drivers lead to low accuracy of fatigue driving detection.
To solve the above technical problem, in a first aspect, an embodiment of the present invention provides a fatigue driving detection method, including:
acquiring a side face image of the currently seated driver collected by a camera module;
performing face recognition on the side face image to obtain side face feature parameters, and determining, according to the side face feature parameters, whether an ID file corresponding to the currently seated driver exists in a driver ID database, wherein the ID file includes the driver's side face feature parameters and the driver's normal eye movement feature parameters in an awake state;
if an ID file corresponding to the currently seated driver exists in the driver ID database, then during driving, periodically acquiring the driver's current-cycle side face image collected by the camera module, obtaining the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and determining whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
In an optional embodiment, the method further includes:
if no ID file corresponding to the currently seated driver exists in the driver ID database, acquiring multiple frames of side face images collected by the camera module during one cycle in which the driver is in a normal awake state while driving, performing image recognition on the multiple frames of side face images to obtain multiple eye movement feature parameters, obtaining the currently seated driver's normal eye movement feature parameters from the multiple eye movement feature parameters, and establishing the ID file corresponding to the currently seated driver from the currently seated driver's side face feature parameters and the driver's normal eye movement feature parameters;
after the ID file corresponding to the currently seated driver is established, during driving, periodically acquiring the driver's current-cycle side face image collected by the camera module, obtaining the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and determining whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
In an optional embodiment, the current-cycle eye movement feature parameters include at least one of a distance parameter between the upper and lower eyelids of the current cycle, a blink time parameter and an eye closure time ratio parameter, and the driver's normal eye movement feature parameters include at least one of a normal distance parameter between the upper and lower eyelids, a normal blink time parameter and a normal eye closure time ratio parameter.
In an optional embodiment, the method further includes:
acquiring the current intensity parameter of the ambient light in the car collected by an ambient light recognition sensor;
determining, according to the comparison result between the current intensity parameter of the ambient light in the car and a preset intensity threshold, whether to perform soft light supplementation on the driver's side face;
if it is determined to perform soft light supplementation on the driver's side face, generating a supplementary light control instruction and sending it to a supplementary light actuator, so as to control the supplementary light actuator to execute the supplementary light control instruction.
In an optional embodiment, the method further includes:
acquiring the current intensity parameter of the ambient light in the car collected by an ambient light recognition sensor;
determining, according to the comparison result between the current intensity parameter of the ambient light in the car and a preset intensity threshold, whether a CMOS camera component or an infrared CCD camera component is used to collect the driver's side face image, the camera module including a CMOS camera component and an infrared CCD camera component;
if it is determined that the CMOS camera component is used to collect the driver's side face image, generating a first wake-up instruction and a first sleep instruction, sending the first wake-up instruction to the CMOS camera component to control it to execute the first wake-up instruction, and sending the first sleep instruction to the infrared CCD camera component to control it to execute the first sleep instruction;
if it is determined that the infrared CCD camera component is used to collect the driver's side face image, generating a second wake-up instruction and a second sleep instruction, sending the second wake-up instruction to the infrared CCD camera component to control it to execute the second wake-up instruction, and sending the second sleep instruction to the CMOS camera component to control it to execute the second sleep instruction.
In an optional embodiment, the method further includes:
acquiring parameter information of the first plane where the optical axis of the camera of the camera module is located;
performing image recognition on the side face image to obtain parameter information of the second plane where the driver's side face contour is located;
determining whether the first plane is perpendicular to the second plane according to the parameter information of the first plane and the parameter information of the second plane; if the first plane and the second plane are not perpendicular, generating a lens adjustment control instruction and sending it to a lens adjustment drive mechanism, so as to control the lens adjustment drive mechanism to drive the camera module to move until the first plane, where the optical axis of the camera of the camera module is located, is perpendicular to the second plane, where the driver's side face contour is located.
In a second aspect, an embodiment of the present invention provides a fatigue driving detection system, including:
an image acquisition unit, used to acquire a side face image of the currently seated driver collected by a camera module;
a driver determination unit, used to perform face recognition on the side face image to obtain side face feature parameters, and to determine, according to the side face feature parameters, whether an ID file corresponding to the currently seated driver exists in a driver ID database, wherein the ID file includes the driver's side face feature parameters and the driver's normal eye movement feature parameters in an awake state; and
a fatigue driving determination unit, used to, when an ID file corresponding to the currently seated driver exists in the driver ID database, periodically acquire during driving the driver's current-cycle side face image collected by the camera module, obtain the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and determine whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
In an optional embodiment, the system further includes an ID file establishing unit, used to, when no ID file corresponding to the currently seated driver exists in the driver ID database, acquire multiple frames of side face images collected by the camera module during one cycle in which the driver is in a normal awake state while driving, perform image recognition on the multiple frames of side face images to obtain multiple eye movement feature parameters, obtain the currently seated driver's normal eye movement feature parameters from the multiple eye movement feature parameters, and establish the ID file corresponding to the currently seated driver from the currently seated driver's side face feature parameters and the driver's normal eye movement feature parameters;
wherein the fatigue driving determination unit is also used to, after the ID file corresponding to the currently seated driver is established, periodically acquire during driving the driver's current-cycle side face image collected by the camera module, obtain the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and determine whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
In an optional embodiment, the current-cycle eye movement feature parameters include at least one of a distance parameter between the upper and lower eyelids of the current cycle, a blink time parameter and an eye closure time ratio parameter, and the driver's normal eye movement feature parameters include at least one of a normal distance parameter between the upper and lower eyelids, a normal blink time parameter and a normal eye closure time ratio parameter.
In an optional embodiment, the system further includes:
an ambient light intensity acquisition unit, used to acquire the current intensity parameter of the ambient light in the car collected by an ambient light recognition sensor;
a supplementary light determination unit, used to determine, according to the comparison result between the current intensity parameter of the ambient light in the car and a preset intensity threshold, whether to perform soft light supplementation on the driver's side face; and
a supplementary light control unit, used to generate a supplementary light control instruction if it is determined to perform soft light supplementation on the driver's side face, and to send the instruction to a supplementary light actuator so as to control the supplementary light actuator to execute it.
In an optional embodiment, the system further includes:
an ambient light intensity acquisition unit, used to acquire the current intensity parameter of the ambient light in the car collected by an ambient light recognition sensor;
an imaging mode determination unit, used to determine, according to the comparison result between the current intensity parameter of the ambient light in the car and a preset intensity threshold, whether a CMOS camera component or an infrared CCD camera component is used to collect the driver's side face image, the camera module including a CMOS camera component and an infrared CCD camera component;
a first camera control unit, used to, when it is determined that the CMOS camera component is used to collect the driver's side face image, generate a first wake-up instruction and a first sleep instruction, send the first wake-up instruction to the CMOS camera component to control it to execute the first wake-up instruction, and send the first sleep instruction to the infrared CCD camera component to control it to execute the first sleep instruction; and
a second camera control unit, used to, when it is determined that the infrared CCD camera component is used to collect the driver's side face image, generate a second wake-up instruction and a second sleep instruction, send the second wake-up instruction to the infrared CCD camera component to control it to execute the second wake-up instruction, and send the second sleep instruction to the CMOS camera component to control it to execute the second sleep instruction.
In an optional embodiment, the system further includes:
a first plane information acquisition unit, used to acquire parameter information of the first plane where the optical axis of the camera of the camera module is located;
a second plane information acquisition unit, used to perform image recognition on the side face image to obtain parameter information of the second plane where the driver's side face contour is located; and
a lens adjustment control unit, used to determine whether the first plane is perpendicular to the second plane according to the parameter information of the first plane and the parameter information of the second plane, and, if the first plane and the second plane are not perpendicular, to generate a lens adjustment control instruction and send it to a lens adjustment drive mechanism, so as to control the lens adjustment drive mechanism to drive the camera module to move until the first plane, where the optical axis of the camera of the camera module is located, is perpendicular to the second plane, where the driver's side face contour is located.
In a third aspect, an embodiment of the present invention provides a computer device, including: the fatigue driving detection system according to the embodiment of the second aspect; or a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to execute the steps of the fatigue driving detection method according to the embodiment of the first aspect.
The above fatigue driving detection method and system and computer device have at least the following beneficial effects:
After the driver is seated, a side face image of the driver (left or right side face) is acquired; face recognition is performed on the side face image to obtain side face feature parameters, and it is determined from them whether an ID file corresponding to the currently seated driver exists in the driver ID database. If such an ID file exists, then during driving the driver's current-cycle side face image is periodically acquired, the driver's current-cycle eye movement feature parameters are obtained from it, and finally whether the driver is driving fatigued is determined by comparing the current-cycle eye movement feature parameters with the driver's normal eye movement feature parameters recorded in the ID file. It can be understood that, when the driver's side face image is used as the judgment data for fatigue detection, the side face image collected by the camera module is not affected by light spots and flares formed on the driver's spectacle lenses by oncoming light; therefore the driver's eye movement feature parameters can be extracted precisely and fatigue detection performed on them, avoiding the low detection accuracy caused by the driver wearing glasses. In addition, different ID files are established for different drivers, each including the driver's side face feature parameters and the driver's normal eye movement feature parameters in an awake state; that is, for different drivers, the normal eye movement feature parameters used as the reference differ, further solving the technical problem that individual differences lead to low accuracy of fatigue driving detection.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the invention can be realized and obtained through the structures particularly pointed out in the description, the claims and the drawings.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a fatigue driving detection method in an embodiment of the present invention.
FIG. 2 is a schematic flowchart of a fatigue driving detection method in another embodiment of the present invention.
FIG. 3 is a schematic flowchart of soft light supplementation in an embodiment of the present invention.
FIG. 4 is a schematic flowchart of camera component control in an embodiment of the present invention.
FIG. 5 is a schematic framework diagram of a fatigue driving detection system in an embodiment of the present invention.
FIG. 6 is a schematic framework diagram of a fatigue driving detection system in another embodiment of the present invention.
FIG. 7 is a schematic framework diagram of a soft light supplementation control module in an embodiment of the present invention.
FIG. 8 is a schematic framework diagram of a camera component control module in an embodiment of the present invention.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure are described in detail below with reference to the drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
In addition, numerous specific details are given in the following specific embodiments to better explain the present invention. Those skilled in the art should understand that the present invention can likewise be practiced without certain specific details. In some instances, means well known to those skilled in the art are not described in detail, in order to highlight the gist of the present invention.
An embodiment of the present invention provides a fatigue driving detection method. FIG. 1 is a flowchart of the method of this embodiment. Referring to FIG. 1, the method includes steps S11-S13:
Step S11: Acquire a side face image of the currently seated driver collected by the camera module.
Specifically, the side face image refers to an image of the driver's left or right side face. Different countries/regions may adopt different traffic rules; for example, China drives on the right, while the United Kingdom drives on the left. Under right-hand traffic the driver sits in the left front seat, so, to facilitate installation of the camera module and collection of the side face image, the camera module can be installed near the left front seat, and it collects the driver's left side face image. Under left-hand traffic the driver sits in the right front seat, so the camera module can be installed near the right front seat, and it collects the driver's right side face image.
It can be understood that, compared with other techniques that use a camera arranged directly in front of the driver to collect a frontal face image, in this embodiment, when the driver wears glasses, the side face image collected by the camera is not affected by light spots and flares formed on the spectacle lenses by oncoming light.
Step S12: Perform face recognition on the side face image to obtain side face feature parameters, and determine, according to the side face feature parameters, whether an ID file corresponding to the currently seated driver exists in the driver ID database, wherein the ID file includes the driver's side face feature parameters and the driver's normal eye movement feature parameters in an awake state.
Specifically, in this embodiment different ID files are established for different drivers, and the driver ID database stores multiple driver ID files, each including the driver's side face feature parameters and the driver's normal eye movement feature parameters in an awake state; that is, for different drivers, the normal eye movement feature parameters used as the comparison reference differ.
In step S12, the side face feature parameters of the currently seated driver obtained by face recognition are compared one by one with the driver side face feature parameters of the multiple ID files in the driver ID database, so as to determine whether an ID file corresponding to the currently seated driver exists. Here, the side face feature parameters refer to the feature parameters of the side face as a whole.
The normal eye movement feature parameters are preferably, but not limited to, at least one of the distance parameter between the upper and lower eyelids when the eyes are normally open, the normal blink time parameter, and the normal eye closure time ratio parameter (PERCLOS); of course, eye movement features may also be other features of the eyes, which this embodiment does not specifically limit.
Step S13: If an ID file corresponding to the currently seated driver exists in the driver ID database, then during driving, periodically acquire the driver's current-cycle side face image collected by the camera module, obtain the driver's current-cycle eye movement feature parameters from the current-cycle side face image, and determine whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
Specifically, when step S12 determines that an ID file corresponding to the currently seated driver exists in the driver ID database, the driver's normal eye movement feature parameters in that ID file are obtained and used as the reference parameters for determining whether the driver is driving fatigued.
Specifically, during driving, the driver's current-cycle side face image collected by the camera module is periodically acquired and the corresponding current-cycle eye movement feature parameters are obtained (the cycle time can be set freely). Finally, the current-cycle eye movement feature parameters are compared with the driver's normal eye movement feature parameters: if the current-cycle eye movement feature parameters are not within the range of the driver's normal eye movement feature parameters, it is determined that the driver is driving fatigued; otherwise, it is determined that the driver is not driving fatigued. When fatigue driving is detected, a wake-up warning is triggered toward the in-vehicle assistant and automated driving intervention is invoked.
It should be noted that image feature extraction is a technique widely applied in the field of image recognition; its principle is essentially to use differences in the pixel values of an image to locate the detection target, which in this embodiment is the eyes. Based on the side face images collected in this embodiment, those skilled in the art know how to extract eye movement features; therefore, this embodiment does not limit the specific steps of eye movement feature extraction and can be combined with any eye movement feature extraction technique, all of which should be understood to fall within the protection scope of this embodiment.
It should also be noted that the gist of this embodiment is to use the driver's side face image as the basis for fatigue driving detection. Whether from a side face image or a frontal face image, the extracted feature data are eye movement features, and fatigue detection based on eye movement features is widely applied in this field. Based on the eye movement feature parameters extracted in this embodiment, those skilled in the art know how to perform fatigue detection; therefore, this embodiment does not limit the specific detection steps and can be combined with any eye-movement-based fatigue detection technique, for example the PERCLOS algorithm (which itself includes feature extraction), all of which should be understood to fall within the protection scope of this embodiment.
From the above description, this embodiment uses the driver's side face image as the judgment data for detecting driver fatigue. Since the side face image collected by the camera module is not affected by light spots and flares formed on the driver's spectacle lenses by oncoming light, the driver's eye movement feature parameters can be extracted precisely and fatigue detection performed on them, avoiding the low detection accuracy caused by the driver wearing glasses.
In addition, different ID files are established for different drivers, each including the driver's side face feature parameters and the driver's normal eye movement feature parameters in an awake state; that is, for different drivers, the normal eye movement feature parameters used as the reference differ, further solving the technical problem that individual differences lead to low accuracy of fatigue driving detection.
In some embodiments, referring to FIG. 2, the method further includes steps S14-S15:
Step S14: If no ID file corresponding to the currently seated driver exists in the driver ID database, acquire multiple frames of side face images collected by the camera module during one cycle in which the driver is in a normal awake state while driving, perform image recognition on the multiple frames to obtain multiple eye movement feature parameters, obtain the currently seated driver's normal eye movement feature parameters from them, and establish the ID file corresponding to the currently seated driver from the currently seated driver's side face feature parameters and the driver's normal eye movement feature parameters.
Specifically, in this step, when no ID file corresponding to the currently seated driver exists in the driver ID database, a new driver ID file is created so that fatigue detection can be performed on the driver during subsequent driving. The new driver ID file likewise includes side face feature parameters and the driver's normal eye movement feature parameters; the side face feature parameters are obtained in step S12. From the multiple frames of side face images of one cycle in the normal awake state, multiple eye movement feature parameter samples can be obtained, i.e. normal eye movement feature parameter samples taken while the driver is awake. Based on these normal samples, a preset model is trained to obtain the corresponding driver's normal eye movement feature parameters. The principle of training with the preset model is that statistical analysis of the multiple normal eye movement feature parameter samples yields the normal eye movement feature parameter ranges.
Deep learning neural networks are widely applied to model training and detection for fatigue driving, including training on multiple samples to obtain the distance parameter between the upper and lower eyelids when the eyes are normally open, the normal blink time parameter, and the normal eye closure time ratio parameter; the preset model in this embodiment may specifically be a deep learning neural network.
It can be understood that the obtained sample parameters are the eyelid distance, blink time, and eye closure time ratio parameters; these are numerical values independent of whether the image shows a side face or a front face, so training may also reuse the training models of related frontal-image fatigue detection techniques. It should be noted that training the awake-state eye movement feature parameters is not the gist of the method of this embodiment, so the training model and training process are not detailed here.
Illustratively, the driver should generally be in a normal awake state at the start of driving, so the cycle in which the driver is in a normal awake state while driving may be taken after the vehicle has been driven for a preset time, for example 5 minutes. Also illustratively, when the driver is normally awake while driving, the vehicle's movement should be relatively stable, so the vehicle's state information can be obtained and used to judge whether the vehicle is driving stably and thus whether the driver is normally awake. Of course, other technical means for determining whether the driver is normally awake while driving may also be combined; this embodiment does not specifically limit them, and they should all be understood to fall within the protection scope of the present invention.
The step length of one cycle can be set according to specific technical requirements (for example, data processing time), which this embodiment does not specifically limit.
Step S15: After the ID file corresponding to the currently seated driver is established, during driving, periodically acquire the driver's current-cycle side face image collected by the camera module, obtain the driver's current-cycle eye movement feature parameters from it, and determine whether the driver is driving fatigued according to the comparison result between the current-cycle eye movement feature parameters and the driver's normal eye movement feature parameters.
In some embodiments, the current-cycle eye movement feature parameters include at least one of a distance parameter between the upper and lower eyelids of the current cycle, a blink time parameter and an eye closure time ratio parameter, and the driver's normal eye movement feature parameters include at least one of a normal distance parameter between the upper and lower eyelids, a normal blink time parameter and a normal eye closure time ratio parameter.
It can be understood that the eye movement feature parameters of each cycle are the multiple eye movement feature parameters extracted from the multiple frames of side face images of that cycle.
Taking the distance parameter between the upper and lower eyelids as an example: when the driver is fatigued, the opening of the eyes becomes smaller, commonly a squinting state. Therefore, the current cycle's eyelid distance parameters can be statistically analyzed (for example by cluster analysis or averaging) to obtain a single eyelid distance value; when the value obtained for the driver is not within the range of the driver's normal eye movement feature parameters, the driver is determined to be driving fatigued.
Taking the blink time parameter as an example: when the driver is fatigued, the blink frequency may increase, i.e. the blink interval shortens. Therefore, the current cycle's eyelid distance parameters can be statistically analyzed to obtain a blink frequency parameter; when the blink frequency obtained for the driver is not within the range of the driver's normal blink frequency parameter, or the driver's current blink interval is not within the range of the driver's normal blink interval parameter, the driver is determined to be driving fatigued.
Taking the eye closure time ratio parameter as an example: when the driver is fatigued, the driver may close his eyes for a long time, i.e. enter a dozing state. Therefore, the current cycle's eyes-closed time and eyes-open time parameters can be statistically analyzed to obtain the current cycle's eye closure time ratio; when the driver's current-cycle eye closure time ratio is not within the range of the driver's normal eye closure time ratio parameter, the driver is determined to be driving fatigued.
It can be understood that one or more of the eyelid distance parameter, blink time parameter and eye closure time ratio parameter may be selected as the basis for determining fatigue, which this embodiment does not specifically limit. It should be noted that these are only examples; the determination of fatigue driving may be further specified in combination with the eye-state characteristics the driver exhibits in a fatigued state, and all such settings, readily made from the inventive concept of this embodiment combined with face image recognition technology, should be understood to fall within the protection scope of the present invention and are not repeated here.
In some embodiments, referring to FIG. 3, the method further includes steps S21-S23:
Step S21: acquire the current in-vehicle ambient light intensity parameter collected by the ambient light sensor;
Specifically, the ambient light sensor may be arranged in the vehicle interior.
Step S22: determine whether to apply soft fill light to the driver's side face according to a comparison between the current in-vehicle ambient light intensity parameter and a preset intensity threshold;
Specifically, this embodiment uses a CMOS camera component to capture the driver's left-face image. When the current in-vehicle ambient light intensity is insufficient for the CMOS camera component to produce good image quality, it is determined that soft fill light should be applied to the driver's side face, providing a light gain for the CMOS camera component so that images of better quality are captured.
Step S23: if it is determined that soft fill light should be applied to the driver's side face, generate a fill-light control instruction and send the fill-light control instruction to the fill-light actuator, so as to control the fill-light actuator to execute the fill-light control instruction.
Specifically, the fill-light actuator is a soft fill-light lamp, which may be strip-shaped and mounted on the vehicle body on the left of the driver's seat, at the side corresponding to the left face of the occupant's head (inside the vehicle). The lamp emits diffusely reflected soft light that is not glaring, supplementing light on the driver's left face under poor lighting conditions; the diffuse soft light provides fill light while preventing glare that would disturb the driver.
More specifically, for example, when the current in-vehicle ambient light intensity is below a first preset intensity threshold but above a second preset intensity threshold, the soft fill-light lamp is lit at a first brightness; when the intensity is below the second preset intensity threshold, the lamp is lit at a second brightness. These are only examples; based on this embodiment, those skilled in the art can adjust them according to actual technical requirements. This embodiment does not limit them specifically, and they should be understood to fall within the protection scope of this embodiment.
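The two-threshold brightness rule above can be sketched as a simple mapping from measured intensity to a lamp brightness command; the brightness levels and threshold values here are placeholders, not values from the patent.

```python
def fill_light_command(intensity, first_threshold, second_threshold,
                       first_brightness=0.5, second_brightness=1.0):
    """Map ambient light intensity to a fill-lamp brightness: off when light
    suffices, dim between the two thresholds, bright below the second
    (darker) threshold. Assumes first_threshold > second_threshold."""
    if intensity >= first_threshold:
        return 0.0                 # enough ambient light, lamp off
    if intensity >= second_threshold:
        return first_brightness    # moderately dark: first brightness
    return second_brightness       # very dark: second brightness
```

For instance, with thresholds of 500 and 100 lux, a reading of 300 lux would light the lamp at the first brightness.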
In some embodiments, referring to FIG. 4, the method further includes steps S31-S34:
Step S31: acquire the current in-vehicle ambient light intensity parameter collected by the ambient light sensor;
Specifically, the ambient light sensor may be arranged in the vehicle interior.
Step S32: according to a comparison between the current in-vehicle ambient light intensity parameter and a preset intensity threshold, determine whether the CMOS camera component or the infrared CCD camera component is used to capture the driver's side face image; the camera module includes a CMOS camera component and an infrared CCD camera component;
Specifically, in this embodiment the CMOS camera component or the infrared CCD camera component captures the driver's left-face image.
When the current in-vehicle ambient light intensity is insufficient for the CMOS camera component to produce good image quality, the infrared CCD camera component is selected to capture the driver's side face image so that images of better quality are obtained, and the CMOS camera component enters sleep mode.
When the current in-vehicle ambient light intensity is sufficient for the CMOS camera component to produce good image quality, the CMOS camera component is selected to capture the driver's side face image, and the infrared CCD camera component enters sleep mode.
Step S33: if the CMOS camera component is selected to capture the driver's side face image, generate a first wake-up instruction and a first sleep instruction, send the first wake-up instruction to the CMOS camera component to control the CMOS camera component to execute the first wake-up instruction, and send the first sleep instruction to the infrared CCD camera component to control the infrared CCD camera component to execute the first sleep instruction;
Step S34: if the infrared CCD camera component is selected to capture the driver's side face image, generate a second wake-up instruction and a second sleep instruction, send the second wake-up instruction to the infrared CCD camera component to control the infrared CCD camera component to execute the second wake-up instruction, and send the second sleep instruction to the CMOS camera component to control the CMOS camera component to execute the second sleep instruction.
It can be understood that the preset intensity threshold range within which the CMOS camera component produces good image quality may be set according to specific technical requirements; it depends on the image quality required by the feature-extraction method actually selected. This embodiment does not limit it specifically, and all such settings should be understood to fall within the protection scope of this embodiment.
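Steps S31-S34 can be sketched as a single selection function that, given the measured intensity and a threshold, decides which sensor receives the wake-up instruction and which receives the sleep instruction. The dictionary keys and sensor labels are invented for illustration.

```python
def camera_instructions(intensity, cmos_ok_threshold):
    """Mirror steps S33/S34: when ambient light suffices for good CMOS image
    quality, wake the CMOS component and sleep the IR CCD; otherwise the
    reverse. Returns which component to wake and which to sleep."""
    if intensity >= cmos_ok_threshold:
        return {"wake": "CMOS", "sleep": "IR_CCD"}    # first wake-up / first sleep
    return {"wake": "IR_CCD", "sleep": "CMOS"}        # second wake-up / second sleep

print(camera_instructions(900, cmos_ok_threshold=500))  # → {'wake': 'CMOS', 'sleep': 'IR_CCD'}
```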
In some embodiments, the method further includes steps S41-S43:
Step S41: acquire parameter information of the first plane in which the optical axis of the camera of the camera module lies;
Step S42: perform image recognition on the side face image to obtain parameter information of the second plane in which the driver's side face contour lies;
Step S43: judge, from the parameter information of the first plane and the parameter information of the second plane, whether the first plane is perpendicular to the second plane; if the first plane is not perpendicular to the second plane, generate a lens adjustment control instruction and send the lens adjustment control instruction to the lens adjustment drive mechanism, so as to control the lens adjustment drive mechanism to drive the camera module to move until the first plane containing the camera's optical axis is perpendicular to the second plane containing the driver's side face contour.
Specifically, since drivers differ in height and sitting posture, a lens adjustment mechanism for adjusting the position of the camera module may be provided so that the camera module can capture the driver's side face image completely. Illustratively, with a drive motor as the drive element of the lens adjustment mechanism, this embodiment can control the angular displacement output by the drive motor according to the relationship between the first plane (containing the camera's optical axis) and the second plane (containing the driver's side face contour), so that during driving the plane of the camera's optical axis always remains perpendicular to the plane of the driver's side face contour.
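Assuming the "parameter information" of each plane includes its normal vector, checking perpendicularity reduces to checking that the normals are orthogonal, and the motor's corrective angular displacement can be derived from the angle between them. This is a geometric sketch only; the patent does not specify the plane representation.

```python
import math

def _angle_between_deg(n1, n2):
    """Angle in degrees between two plane normal vectors."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm = math.hypot(*n1) * math.hypot(*n2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def planes_perpendicular(n1, n2, tol_deg=1.0):
    """Two planes are perpendicular when their normals are orthogonal;
    compare the angle between normals against 90 degrees within a tolerance."""
    return abs(_angle_between_deg(n1, n2) - 90.0) <= tol_deg

def correction_angle(n1, n2):
    """Angular displacement (degrees) the lens-adjustment motor would need
    to restore perpendicularity (sign/axis handling omitted for brevity)."""
    return _angle_between_deg(n1, n2) - 90.0
```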
Another embodiment of the present invention provides a drowsy driving detection system. FIG. 5 is a schematic framework diagram of the drowsy driving detection system of this embodiment. Referring to FIG. 5, the system of this embodiment includes:
an image acquisition unit 11, configured to acquire a side face image of the currently seated driver captured by the camera module 100;
a driver determination unit 12, configured to perform face recognition on the side face image to obtain side face feature parameters, and determine, according to the side face feature parameters, whether the driver ID library contains an ID file corresponding to the currently seated driver, wherein the ID file includes the driver's side face feature parameters and the driver's normal eye movement feature parameters in the awake state; and
a drowsy driving determination unit 13, configured to, when the driver ID library contains an ID file corresponding to the currently seated driver, periodically acquire, during driving, side face images of the driver for the current period captured by the camera module 100, obtain the driver's eye movement feature parameters for the current period from the side face images of the current period, and determine whether the driver is driving drowsily according to a comparison between the current period's eye movement feature parameters and the driver's normal eye movement feature parameters.
In some embodiments, referring to FIG. 6, the system further includes an ID file creation unit 14, configured to, when the driver ID library contains no ID file corresponding to the currently seated driver, acquire multiple frames of side face images captured by the camera module 100 within one period while the driver is driving in a normal awake state, perform image recognition on the multiple frames of side face images to obtain multiple eye movement feature parameters, obtain the currently seated driver's normal eye movement feature parameters according to the multiple eye movement feature parameters, and create the ID file corresponding to the currently seated driver according to the currently seated driver's side face feature parameters and the driver's normal eye movement feature parameters;
wherein the drowsy driving determination unit 13 is further configured to, after the ID file corresponding to the currently seated driver is created, periodically acquire, during driving, side face images of the driver for the current period captured by the camera module 100, obtain the driver's eye movement feature parameters for the current period from the side face images of the current period, and determine whether the driver is driving drowsily according to a comparison between the current period's eye movement feature parameters and the driver's normal eye movement feature parameters.
In some embodiments, the ID file creation unit 14 specifically includes:
a driver feature parameter acquisition unit, configured to acquire multiple frames of side face images of the currently seated driver within one period captured by the camera module 100, and perform image recognition on the multiple frames of side face images to obtain multiple eye movement feature parameters;
a sample matching unit, configured to match the multiple eye movement feature parameters one by one against the multiple individual fatigue-detection samples in the individual fatigue-detection sample library, and determine the individual fatigue-detection sample with the highest matching degree to the multiple eye movement feature parameters;
a normal parameter acquisition unit, configured to acquire the driver's normal eye movement feature parameters corresponding to the best-matching individual fatigue-detection sample and use them as the currently seated driver's normal eye movement feature parameters; and
a file generation unit, configured to generate the ID file corresponding to the currently seated driver according to the currently seated driver's side face feature parameters and the driver's normal eye movement feature parameters.
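The sample matching unit described above is essentially a nearest-neighbor lookup over the individual fatigue-detection sample library. A minimal sketch, assuming each sample is a record holding a parameter vector and its associated normal eye movement feature parameters, and using Euclidean distance (smaller = higher matching degree) as one possible matching measure:

```python
def best_matching_sample(eye_params, sample_library):
    """Match the new driver's eye movement parameter vector against each
    individual fatigue-detection sample and return the closest one."""
    def distance(sample):
        return sum((a - b) ** 2 for a, b in zip(eye_params, sample["params"])) ** 0.5
    return min(sample_library, key=distance)

# Hypothetical library: (eyelid distance, blink duration, closure ratio) per sample
library = [
    {"id": "A", "params": (7.5, 0.25, 0.10), "normal_eye_params": "..."},
    {"id": "B", "params": (6.0, 0.35, 0.18), "normal_eye_params": "..."},
]
print(best_matching_sample((6.1, 0.33, 0.17), library)["id"])  # → B
```

The `normal_eye_params` of the winning sample would then be adopted as the new driver's baseline, as the normal parameter acquisition unit describes.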
In some embodiments, the system further includes a soft fill-light control module. Referring to FIG. 7, the soft fill-light control module includes:
an ambient light intensity acquisition unit 21, configured to acquire the current in-vehicle ambient light intensity parameter collected by the ambient light sensor;
a fill-light determination unit 22, configured to determine whether to apply soft fill light to the driver's side face according to a comparison between the current in-vehicle ambient light intensity parameter and a preset intensity threshold; and
a fill-light control unit 23, configured to, if it is determined that soft fill light should be applied to the driver's side face, generate a fill-light control instruction and send the fill-light control instruction to the fill-light actuator 200, so as to control the fill-light actuator 200 to execute the fill-light control instruction.
In some embodiments, the system further includes a camera component control module. Referring to FIG. 8, the camera component control module includes:
an ambient light intensity acquisition unit 31, configured to acquire the current in-vehicle ambient light intensity parameter collected by the ambient light sensor;
a camera mode determination unit 32, configured to determine, according to a comparison between the current in-vehicle ambient light intensity parameter and a preset intensity threshold, whether the CMOS camera component 101 or the infrared CCD camera component 102 is used to capture the driver's side face image; the camera module 100 includes the CMOS camera component 101 and the infrared CCD camera component 102;
a first camera control unit 33, configured to, when the CMOS camera component 101 is selected to capture the driver's side face image, generate a first wake-up instruction and a first sleep instruction, send the first wake-up instruction to the CMOS camera component 101 to control the CMOS camera component 101 to execute the first wake-up instruction, and send the first sleep instruction to the infrared CCD camera component 102 to control the infrared CCD camera component 102 to execute the first sleep instruction;
a second camera control unit 34, configured to, when the infrared CCD camera component 102 is selected to capture the driver's side face image, generate a second wake-up instruction and a second sleep instruction, send the second wake-up instruction to the infrared CCD camera component 102 to control the infrared CCD camera component 102 to execute the second wake-up instruction, and send the second sleep instruction to the CMOS camera component 101 to control the CMOS camera component 101 to execute the second sleep instruction.
In some embodiments, the system further includes:
a first plane information acquisition unit, configured to acquire parameter information of the first plane in which the optical axis of the camera of the camera module lies;
a second plane information acquisition unit, configured to perform image recognition on the side face image to obtain parameter information of the second plane in which the driver's side face contour lies; and
a lens adjustment control unit, configured to judge, according to the parameter information of the first plane and the parameter information of the second plane, whether the first plane is perpendicular to the second plane, and, if the first plane is not perpendicular to the second plane, generate a lens adjustment control instruction and send the lens adjustment control instruction to the lens adjustment drive mechanism, so as to control the lens adjustment drive mechanism to drive the camera module to move until the first plane containing the camera's optical axis is perpendicular to the second plane containing the driver's side face contour.
The system embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units: they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Note that the system of the above embodiment corresponds to the method of the above embodiment; parts of the system not detailed here can therefore be understood from the description of the method and are not repeated.
Moreover, if the drowsy driving detection system of the above embodiment is implemented in the form of software functional units and sold or used as an independent product, it may be stored in a computer-readable storage medium.
Another embodiment of the present invention further provides a computer device, including: the drowsy driving detection system according to the above embodiment; or a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to execute the steps of the drowsy driving detection method according to the above embodiment.
The computer device may of course also have components such as a wired or wireless network interface, a keyboard, and input/output interfaces for performing input and output, and may further include other components for implementing device functions, which are not described here.
Illustratively, the computer program may be divided into one or more units that are stored in the memory and executed by the processor to carry out the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions; the instruction segments describe the execution of the computer program in the computer device.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor; the processor is the control center of the computer device and connects all parts of the entire computer device via various interfaces and lines.
The memory may be used to store the computer program and/or units; the processor implements the various functions of the computer device by running or executing the computer program and/or units stored in the memory and by calling data stored in the memory. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
Another embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the drowsy driving detection method of the above embodiment are implemented.
Specifically, the computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on.
The embodiments of the present invention have been described above. The foregoing description is illustrative, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

  1. A drowsy driving detection method, characterized by comprising:
    acquiring a side face image of a currently seated driver captured by a camera module;
    performing face recognition on the side face image to obtain side face feature parameters, and determining, according to the side face feature parameters, whether a driver ID library contains an ID file corresponding to the currently seated driver, wherein the ID file comprises the driver's side face feature parameters and the driver's normal eye movement feature parameters in the awake state;
    if the driver ID library contains an ID file corresponding to the currently seated driver, periodically acquiring, during driving, side face images of the driver for the current period captured by the camera module, obtaining the driver's eye movement feature parameters for the current period according to the side face images of the current period, and determining whether the driver is driving drowsily according to a comparison between the current period's eye movement feature parameters and the driver's normal eye movement feature parameters.
  2. The drowsy driving detection method according to claim 1, characterized in that the method further comprises:
    if the driver ID library contains no ID file corresponding to the currently seated driver, acquiring multiple frames of side face images captured by the camera module within one period while the driver is driving in a normal awake state, performing image recognition on the multiple frames of side face images to obtain multiple eye movement feature parameters, obtaining the currently seated driver's normal eye movement feature parameters according to the multiple eye movement feature parameters, and creating the ID file corresponding to the currently seated driver according to the currently seated driver's side face feature parameters and the driver's normal eye movement feature parameters;
    after the ID file corresponding to the currently seated driver is created, periodically acquiring, during driving, side face images of the driver for the current period captured by the camera module, obtaining the driver's eye movement feature parameters for the current period according to the side face images of the current period, and determining whether the driver is driving drowsily according to a comparison between the current period's eye movement feature parameters and the driver's normal eye movement feature parameters.
  3. The drowsy driving detection method according to claim 2, characterized in that the eye movement feature parameters of the current period comprise at least one of a distance parameter between the upper and lower eyelids, a blink-duration parameter, and an eye-closure time-ratio parameter of the current period, and the driver's normal eye movement feature parameters comprise at least one of a normal distance parameter between the upper and lower eyelids, a normal blink-duration parameter, and a normal eye-closure time-ratio parameter.
  4. The drowsy driving detection method according to claim 1, characterized in that the method further comprises:
    acquiring a current in-vehicle ambient light intensity parameter collected by an ambient light sensor;
    determining whether to apply soft fill light to the driver's side face according to a comparison between the current in-vehicle ambient light intensity parameter and a preset intensity threshold;
    if it is determined that soft fill light is to be applied to the driver's side face, generating a fill-light control instruction and sending the fill-light control instruction to a fill-light actuator, so as to control the fill-light actuator to execute the fill-light control instruction.
  5. The drowsy driving detection method according to claim 1, characterized in that the method further comprises:
    acquiring a current in-vehicle ambient light intensity parameter collected by an ambient light sensor;
    determining, according to a comparison between the current in-vehicle ambient light intensity parameter and a preset intensity threshold, whether a CMOS camera component or an infrared CCD camera component is used to capture the driver's side face image, the camera module comprising the CMOS camera component and the infrared CCD camera component;
    if it is determined that the CMOS camera component captures the driver's side face image, generating a first wake-up instruction and a first sleep instruction, sending the first wake-up instruction to the CMOS camera component to control the CMOS camera component to execute the first wake-up instruction, and sending the first sleep instruction to the infrared CCD camera component to control the infrared CCD camera component to execute the first sleep instruction;
    if it is determined that the infrared CCD camera component captures the driver's side face image, generating a second wake-up instruction and a second sleep instruction, sending the second wake-up instruction to the infrared CCD camera component to control the infrared CCD camera component to execute the second wake-up instruction, and sending the second sleep instruction to the CMOS camera component to control the CMOS camera component to execute the second sleep instruction.
  6. The drowsy driving detection method according to claim 1, characterized in that the method further comprises:
    acquiring parameter information of a first plane in which the optical axis of the camera of the camera module lies;
    performing image recognition on the side face image to obtain parameter information of a second plane in which the driver's side face contour lies;
    judging, according to the parameter information of the first plane and the parameter information of the second plane, whether the first plane is perpendicular to the second plane, and, if the first plane is not perpendicular to the second plane, generating a lens adjustment control instruction and sending the lens adjustment control instruction to a lens adjustment drive mechanism, so as to control the lens adjustment drive mechanism to drive the camera module to move until the first plane in which the optical axis of the camera of the camera module lies is perpendicular to the second plane in which the driver's side face contour lies.
  7. A drowsy driving detection system, characterized by comprising:
    an image acquisition unit, configured to acquire a side face image of a currently seated driver captured by a camera module;
    a driver determination unit, configured to perform face recognition on the side face image to obtain side face feature parameters, and determine, according to the side face feature parameters, whether a driver ID library contains an ID file corresponding to the currently seated driver, wherein the ID file comprises the driver's side face feature parameters and the driver's normal eye movement feature parameters in the awake state; and
    a drowsy driving determination unit, configured to, when the driver ID library contains an ID file corresponding to the currently seated driver, periodically acquire, during driving, side face images of the driver for the current period captured by the camera module, obtain the driver's eye movement feature parameters for the current period according to the side face images of the current period, and determine whether the driver is driving drowsily according to a comparison between the current period's eye movement feature parameters and the driver's normal eye movement feature parameters.
  8. The drowsy driving detection system according to claim 7, characterized in that the system further comprises an ID file creation unit, configured to, when the driver ID library contains no ID file corresponding to the currently seated driver, acquire multiple frames of side face images captured by the camera module within one period while the driver is driving in a normal awake state, perform image recognition on the multiple frames of side face images to obtain multiple eye movement feature parameters, obtain the currently seated driver's normal eye movement feature parameters according to the multiple eye movement feature parameters, and create the ID file corresponding to the currently seated driver according to the currently seated driver's side face feature parameters and the driver's normal eye movement feature parameters;
    wherein the drowsy driving determination unit is further configured to, after the ID file corresponding to the currently seated driver is created, periodically acquire, during driving, side face images of the driver for the current period captured by the camera module, obtain the driver's eye movement feature parameters for the current period according to the side face images of the current period, and determine whether the driver is driving drowsily according to a comparison between the current period's eye movement feature parameters and the driver's normal eye movement feature parameters.
  9. The drowsy driving detection system according to claim 8, characterized in that the eye movement feature parameters of the current period comprise at least one of a distance parameter between the upper and lower eyelids, a blink-duration parameter, and an eye-closure time-ratio parameter of the current period, and the driver's normal eye movement feature parameters comprise at least one of a normal distance parameter between the upper and lower eyelids, a normal blink-duration parameter, and a normal eye-closure time-ratio parameter.
  10. The drowsy driving detection system according to claim 7, characterized in that the system further comprises:
    an ambient light intensity acquisition unit, configured to acquire a current in-vehicle ambient light intensity parameter collected by an ambient light sensor;
    a fill-light determination unit, configured to determine whether to apply soft fill light to the driver's side face according to a comparison between the current in-vehicle ambient light intensity parameter and a preset intensity threshold; and
    a fill-light control unit, configured to, if it is determined that soft fill light is to be applied to the driver's side face, generate a fill-light control instruction and send the fill-light control instruction to a fill-light actuator, so as to control the fill-light actuator to execute the fill-light control instruction.
  11. The drowsy driving detection system according to claim 7, characterized in that the system further comprises:
    an ambient light intensity acquisition unit, configured to acquire a current in-vehicle ambient light intensity parameter collected by an ambient light sensor;
    a camera mode determination unit, configured to determine, according to a comparison between the current in-vehicle ambient light intensity parameter and a preset intensity threshold, whether a CMOS camera component or an infrared CCD camera component is used to capture the driver's side face image, the camera module comprising the CMOS camera component and the infrared CCD camera component;
    a first camera control unit, configured to, when the CMOS camera component is selected to capture the driver's side face image, generate a first wake-up instruction and a first sleep instruction, send the first wake-up instruction to the CMOS camera component to control the CMOS camera component to execute the first wake-up instruction, and send the first sleep instruction to the infrared CCD camera component to control the infrared CCD camera component to execute the first sleep instruction; and
    a second camera control unit, configured to, when the infrared CCD camera component is selected to capture the driver's side face image, generate a second wake-up instruction and a second sleep instruction, send the second wake-up instruction to the infrared CCD camera component to control the infrared CCD camera component to execute the second wake-up instruction, and send the second sleep instruction to the CMOS camera component to control the CMOS camera component to execute the second sleep instruction.
  12. The drowsy driving detection system according to claim 7, characterized in that the system further comprises:
    a first plane information acquisition unit, configured to acquire parameter information of a first plane in which the optical axis of the camera of the camera module lies;
    a second plane information acquisition unit, configured to perform image recognition on the side face image to obtain parameter information of a second plane in which the driver's side face contour lies; and
    a lens adjustment control unit, configured to judge, according to the parameter information of the first plane and the parameter information of the second plane, whether the first plane is perpendicular to the second plane, and, if the first plane is not perpendicular to the second plane, generate a lens adjustment control instruction and send the lens adjustment control instruction to a lens adjustment drive mechanism, so as to control the lens adjustment drive mechanism to drive the camera module to move until the first plane in which the optical axis of the camera of the camera module lies is perpendicular to the second plane in which the driver's side face contour lies.
  13. A computer device, comprising: the drowsy driving detection system according to claim 7; or a memory and a processor, wherein the memory stores computer-readable instructions that, when executed by the processor, cause the processor to execute the steps of the drowsy driving detection method according to claim 1.
PCT/CN2021/097618 2020-06-12 2021-06-01 Drowsy driving detection method and system thereof, and computer device WO2021249239A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/924,555 US20230230397A1 (en) 2020-06-12 2021-06-01 Drowsy driving detection method and system thereof, and computer device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010535115.6 2020-06-12
CN202010535115.6A CN113807126A (zh) 2020-06-12 2020-06-12 Drowsy driving detection method and system thereof, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021249239A1 true WO2021249239A1 (zh) 2021-12-16

Family

ID=78845263

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/097618 WO2021249239A1 (zh) 2020-06-12 2021-06-01 一种疲劳驾驶检测方法及其系统、计算机设备

Country Status (3)

Country Link
US (1) US20230230397A1 (zh)
CN (1) CN113807126A (zh)
WO (1) WO2021249239A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198616A * 2013-03-20 2013-07-10 重庆大学 Fatigue driving detection method and system based on recognition of driver head and neck movement features
CN104298963A * 2014-09-11 2015-01-21 浙江捷尚视觉科技股份有限公司 Robust multi-pose fatigue monitoring method based on a face shape regression model
CN108791299A * 2018-05-16 2018-11-13 浙江零跑科技有限公司 Vision-based driving fatigue detection and early-warning system and method
CN109367479A * 2018-08-31 2019-02-22 南京理工大学 Fatigue driving monitoring method and device
US20190202352A1 (en) * 2018-01-02 2019-07-04 Getac Technology Corporation Vehicle camera device and method for setting the same
CN110115796A * 2019-05-06 2019-08-13 苏州国科视清医疗科技有限公司 Fatigue detection and wake-promoting system based on eye movement parameter monitoring with an N-range image processing algorithm
CN212569804U * 2020-06-12 2021-02-19 广州汽车集团股份有限公司 Fatigue driving detection device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2507742A2 (en) * 2009-12-02 2012-10-10 Tata Consultancy Services Limited A cost effective and robust system and method for eye tracking and driver drowsiness identification
US9956963B2 (en) * 2016-06-08 2018-05-01 GM Global Technology Operations LLC Apparatus for assessing, predicting, and responding to driver fatigue and drowsiness levels
CN107241535B * 2017-05-26 2020-10-23 北京小米移动软件有限公司 Flash lamp adjustment device and terminal device
CN108446600A * 2018-02-27 2018-08-24 上海汽车集团股份有限公司 Vehicle driver fatigue monitoring and early-warning system and method


Also Published As

Publication number Publication date
CN113807126A (zh) 2021-12-17
US20230230397A1 (en) 2023-07-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21822860; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21822860; Country of ref document: EP; Kind code of ref document: A1)