WO2020237939A1 - Method and device for constructing a human eyelid curve - Google Patents

Method and device for constructing a human eyelid curve

Info

Publication number
WO2020237939A1
WO2020237939A1 (PCT/CN2019/108072)
Authority
WO
WIPO (PCT)
Prior art keywords
eyelid
position information
eye
point
human eye
Prior art date
Application number
PCT/CN2019/108072
Other languages
English (en)
French (fr)
Inventor
李源
王晋玮
Original Assignee
初速度(苏州)科技有限公司
Priority date
Filing date
Publication date
Application filed by 初速度(苏州)科技有限公司
Publication of WO2020237939A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects

Definitions

  • the present invention relates to the technical field of intelligent transportation, in particular to a method and device for constructing a human eyelid curve.
  • Fatigue driving is one of the causes of car accidents. In order to avoid, to a certain extent, car accidents caused by fatigue driving, related fatigue driving detection technologies have emerged.
  • The process of fatigue driving detection with the related technology is generally: monitor the opening and closing state of the driver's eyes, determine from that state whether the driver is driving while fatigued, and issue an alert if fatigue driving is detected.
  • Detecting the opening and closing state of the driver's eyes is therefore very important, and this detection mostly depends on the distance between the upper and lower eyelids of the driver's eyes.
  • Calculating the distance between the upper and lower eyelids in turn depends on reconstructing the upper and lower eyelid curves of the driver's eyes, so how to reconstruct the driver's eyelid curves becomes an urgent problem to be solved.
  • the invention provides a method and device for constructing a human eyelid curve, so as to realize the construction of the human eyelid curve and obtain the spatial information of the human eyelid.
  • the specific technical solutions are as follows:
  • An embodiment of the present invention provides a method for constructing a human eyelid curve, including:
  • identifying, from each human eye image collected by each image acquisition device at the same time, the first position information of the first eye corner point of the human eye, the second position information of the second eye corner point, and the third position information of the eyelid points, wherein the eyelid points include: a first number of upper eyelid points and/or a second number of lower eyelid points;
  • determining, based on the pose information and internal parameter information of each image acquisition device, the first position information, and the second position information, the first three-dimensional position information corresponding to the first eye corner point and the second three-dimensional position information corresponding to the second eye corner point;
  • constructing a first eye corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation, and a second eye corner constraint based on a first value, a second value, and the first eye corner constraint;
  • constructing, based on the curve equation, the third position information of each eyelid point, and the pose information and internal parameter information of each image acquisition device, a reprojection error constraint corresponding to the eyelid points;
  • constructing, based on the first eye corner constraint, the second eye corner constraint, and the reprojection error constraint, an eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.
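As a concrete illustration of how the two corner constraints and the bounded independent variable can work together, here is a minimal sketch (our own formulation, not the patent's actual equations): the eyelid is modeled as a parametric cubic space curve on t in [0, 1], with the two eye corner points pinned exactly at t = 0 and t = 1, and the remaining eyelid points fitted by least squares.

```python
import numpy as np

def fit_eyelid_curve(corner_a, corner_b, eyelid_pts):
    """Cubic space curve pinned to both eye corners (hard corner constraints),
    least-squares fitted to the detected eyelid points.

    The parameter t is restricted to [0, 1] (the two values bounding the
    independent variable): t=0 is the first corner, t=1 the second.
    """
    a = np.asarray(corner_a, float)
    b = np.asarray(corner_b, float)
    P = np.asarray(eyelid_pts, float)
    # parametrize each eyelid point by its normalized projection on the corner chord
    chord = b - a
    t = (P - a) @ chord / (chord @ chord)
    # endpoint interpolation plus two cubic basis functions that vanish at t=0
    # and t=1, so the corner constraints hold exactly for any fitted coefficients
    base = np.outer(1.0 - t, a) + np.outer(t, b)
    B = np.column_stack([t * (1.0 - t), t ** 2 * (1.0 - t)])
    C, *_ = np.linalg.lstsq(B, P - base, rcond=None)
    return a, b, C

def eval_curve(fit, t):
    """Evaluate the fitted eyelid curve at parameter values t in [0, 1]."""
    a, b, C = fit
    t = np.atleast_1d(np.asarray(t, float))[:, None]
    return (1.0 - t) * a + t * b + t * (1.0 - t) * C[0] + t ** 2 * (1.0 - t) * C[1]
```

Because the corner constraints are built into the basis, they hold exactly no matter what the least-squares step produces; the reprojection error constraint of the actual method would replace the simple 3D residual used here.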
  • Optionally, before the step of constructing an eyelid space curve equation characterizing the upper eyelid and/or lower eyelid of the human eye, the method further includes:
  • the step of constructing an eyelid space curve equation representing the upper eyelid and/or lower eyelid of the human eye based on the first eye corner constraint, the second eye corner constraint, and the reprojection error constraint includes:
  • the step of constructing a reprojection error constraint corresponding to the eyelid point based on the curve equation, the third position information of each eyelid point, and the pose information and internal reference information of each image acquisition device includes:
  • a reprojection error constraint corresponding to the eyelid point is constructed.
  • the step of constructing the reprojection error constraint corresponding to the eyelid points, based on the third three-dimensional position information corresponding to each eyelid point, the third position information of each eyelid point, and the pose information and internal parameter information of each image acquisition device, includes:
  • the fourth position information of the projection point, in each human eye image, of the spatial point corresponding to each eyelid point is determined; and based on the third position information and the fourth position information, the reprojection error constraint corresponding to the eyelid points is determined.
  • the method further includes:
  • the face feature point detection model is: a model trained based on sample images of calibrated human face feature points.
  • Optionally, after the step of constructing an eyelid space curve equation characterizing the upper eyelid and/or lower eyelid of the human eye, the method further includes:
  • the fatigue degree of the person corresponding to the human eye is determined.
  • the step of determining the current opening and closing length of the human eye based on the eyelid space curve equation characterizing the upper eyelid and the eyelid space curve equation characterizing the lower eyelid includes:
  • the maximum distance is used as the current opening and closing length of the human eye.
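The opening-and-closing length described above (the maximum distance between the upper- and lower-eyelid space curves) can be sketched as follows, assuming hypothetical callables `upper` and `lower` that evaluate the two fitted curves at an array of parameter values:

```python
import numpy as np

def opening_length(upper, lower, n=101):
    """Current opening and closing length of the eye: the maximum distance
    between corresponding sampled points on the upper- and lower-eyelid
    space curves. `upper` and `lower` map parameters t in [0, 1] to (n, 3)
    arrays of curve points (an assumed interface, for illustration)."""
    t = np.linspace(0.0, 1.0, n)
    return float(np.linalg.norm(upper(t) - lower(t), axis=1).max())
```

In practice a finer sampling, or a proper maximization over both parameters, could be used; sampling at matching t is the simplest version.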
  • the step of determining the fatigue degree of the person corresponding to the human eye based on the current opening and closing length and the historical opening and closing length includes:
  • the opening and closing length includes the current opening and closing length and the historical opening and closing length
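A minimal sketch of the fatigue determination described above, under our assumption that the fatigue degree is the fraction of opening and closing lengths (current plus historical) that fall below the preset length threshold:

```python
def fatigue_degree(current_length, historical_lengths, length_threshold):
    """Ratio of opening and closing lengths below the preset length threshold
    among the current and historical lengths; a higher ratio means the eye
    was closed (or nearly closed) more often, suggesting greater fatigue
    (a PERCLOS-style heuristic, our interpretation of the counting step)."""
    lengths = list(historical_lengths) + [current_length]
    # number of "first results": lengths less than the preset threshold
    first_results = sum(1 for length in lengths if length < length_threshold)
    return first_results / len(lengths)
```

A downstream check could then compare this ratio against a preset ratio threshold to decide whether to issue an alert.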
  • an embodiment of the present invention provides a device for constructing a human eyelid curve, including:
  • the recognition module is configured to recognize, from each human eye image collected by each image acquisition device at the same time, the first position information of the first eye corner point of the human eye, the second position information of the second eye corner point, and the third position information of the eyelid points, where the eyelid points include: a first number of upper eyelid points and/or a second number of lower eyelid points;
  • the first determining module is configured to determine, based on the pose information and internal parameter information of each image acquisition device, the first position information, and the second position information, the first three-dimensional position information corresponding to the first eye corner point and the second three-dimensional position information corresponding to the second eye corner point;
  • the first construction module is configured to construct a first eye corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation;
  • the second construction module is configured to construct a second eye corner constraint based on the first value, the second value, and the first eye corner constraint, wherein the first value and the second value are used to constrain the value range of the independent variable in the first eye corner constraint;
  • the third building module is configured to construct a reprojection error constraint corresponding to the eyelid point based on the curve equation, the third position information of each eyelid point, and the pose information and internal parameter information of each image acquisition device;
  • the fourth construction module is configured to construct, based on the first eye corner constraint, the second eye corner constraint, and the reprojection error constraint, an eyelid space curve equation characterizing the upper eyelid and/or lower eyelid of the human eye.
  • the device further includes:
  • the fifth construction module is configured to construct an order constraint on the eyelid points before the eyelid space curve equation characterizing the upper eyelid and/or lower eyelid of the human eye is constructed based on the first eye corner constraint, the second eye corner constraint, and the reprojection error constraint.
  • the fourth construction module is specifically configured to: construct, based on the first eye corner constraint, the second eye corner constraint, the reprojection error constraint, and the order constraint, the eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.
  • the third construction module includes: a first construction unit configured to construct, using the curve equation, the third three-dimensional position information corresponding to each eyelid point; and a second construction unit configured to construct the reprojection error constraint corresponding to the eyelid points based on the third three-dimensional position information corresponding to each eyelid point, the third position information of each eyelid point, and the pose information and internal parameter information of each image acquisition device.
  • the second construction unit is specifically configured to: determine the conversion relationship between the device coordinate systems of every two image acquisition devices based on the pose information and internal parameter information of each image acquisition device; for each human eye image, determine, based on the third three-dimensional position information corresponding to each eyelid point and the conversion relationships between device coordinate systems, the fourth position information of the projection point, in that human eye image, of the spatial point corresponding to the eyelid point; and determine the reprojection error constraint corresponding to the eyelid points based on the third position information of each eyelid point in each human eye image and the fourth position information of the corresponding projection points.
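The reprojection error constraint can be illustrated with a small pinhole-camera sketch (an assumed formulation, since the patent gives no explicit formulas): project the spatial point corresponding to an eyelid point into each image using that camera's pose and intrinsics, then compare the projection (the "fourth position information") against the detected eyelid point (the "third position information").

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection: map a 3D point X, expressed in the reference
    device's coordinate system, into pixel coordinates of a camera with
    intrinsics K and pose (R, t) relative to that reference."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def reprojection_error(cameras, detections, X):
    """Sum of squared pixel distances between the projections of the
    spatial point X and the detected eyelid point in every human eye image.
    cameras: list of (K, R, t); detections: list of detected 2D points."""
    err = 0.0
    for (K, R, t), uv in zip(cameras, detections):
        err += float(np.sum((project(K, R, t, X) - np.asarray(uv)) ** 2))
    return err
```

Minimizing this error over the curve parameters, jointly with the corner constraints, is what ties the fitted space curve to the multi-view observations.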
  • the apparatus further includes: a first obtaining module, configured to obtain face images collected by multiple image acquisition devices at the same time, before the first position information of the first eye corner point, the second position information of the second eye corner point, and the third position information of the eyelid points are identified from each human eye image; and a detection module, configured to detect, based on a pre-established face feature point detection model, the area where the human eye is located in each face image collected by each image acquisition device, to obtain the human eye images, wherein the pre-established face feature point detection model is: a model trained based on sample images with calibrated facial feature points.
  • the device further includes: a second determining module, configured to determine the current opening and closing length of the human eye after the eyelid space curve equation characterizing the upper eyelid and/or lower eyelid is constructed based on the first eye corner constraint, the second eye corner constraint, and the reprojection error constraint; and a third determining module, configured to determine the fatigue degree of the person corresponding to the human eye based on the current opening and closing length and the historical opening and closing length.
  • the second determining module is specifically configured to: calculate the maximum distance between the eyelid space curve equation characterizing the upper eyelid and the eyelid space curve equation characterizing the lower eyelid, and use the maximum distance as the current opening and closing length of the human eye.
  • the third determining module is specifically configured to: compare each opening and closing length with a preset length threshold to obtain comparison results, wherein the opening and closing lengths include the current opening and closing length and the historical opening and closing lengths; count the number of first results indicating that an opening and closing length is less than the preset length threshold; and determine the fatigue degree of the person corresponding to the human eye based on the total number of current and historical opening and closing lengths and the number of first results.
  • The method and device for constructing a human eyelid curve provided by the embodiments of the present invention can identify, from each human eye image collected by each image acquisition device at the same time, the position information of the eye corner points and eyelid points of the human eye.
  • Based on the pose information and internal parameter information of the multiple image acquisition devices, the first three-dimensional position information corresponding to the first eye corner point and the second three-dimensional position information corresponding to the second eye corner point can be determined.
  • That is, the eyes of the same person are monitored by multiple image acquisition devices to obtain human eye images, and based on the human eye images collected by the multiple devices, the three-dimensional position information of the eye corners, which have obvious semantic features, can be accurately obtained. Further, the first eye corner constraint is constructed based on the first three-dimensional position information, the second three-dimensional position information, and the preset curve equation; the second eye corner constraint is constructed based on the first preset value, the second preset value, and the first eye corner constraint; and the reprojection error constraint corresponding to the eyelid points is determined using the curve equation, the third position information of each eyelid point, and the pose information and internal parameter information of each image acquisition device.
  • In this way, the current opening and closing length of the human eye can be determined conveniently and accurately; combined with the time dimension, that is, the historical opening and closing lengths of the human eye determined within a preset time period, whether the person corresponding to the human eye is driving while fatigued can be monitored more flexibly and accurately.
  • FIG. 1A is a schematic flowchart of a method for constructing a human eyelid curve according to an embodiment of the present invention
  • FIG. 1B is a schematic diagram of the obtained human eye image
  • FIG. 2 is a schematic diagram of another flow chart of a method for constructing a human eyelid curve according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a method for constructing a human eyelid curve according to an embodiment of the present invention
  • Fig. 4 is a schematic structural diagram of a device for constructing a human eyelid curve provided by an embodiment of the present invention.
  • the invention provides a method and device for constructing a human eyelid curve, so as to realize the construction of the human eyelid curve and obtain the spatial information of the human eyelid.
  • the embodiments of the present invention will be described in detail below.
  • Fig. 1A is a schematic flowchart of a method for constructing a human eyelid curve provided by an embodiment of the present invention. The method can include the following steps:
  • S101: Identify, from each human eye image collected by each image acquisition device at the same time, the first position information of the first eye corner point of the human eye, the second position information of the second eye corner point, and the third position information of the eyelid points.
  • the eyelid points include: a first number of upper eyelid points and/or a second number of lower eyelid points.
  • the method for constructing a human eyelid curve provided by the embodiment of the present invention can be applied to any type of electronic device, and the electronic device can be a server or a terminal device.
  • the electronic device is connected with multiple image acquisition devices to obtain images collected by the multiple image acquisition devices and/or image recognition results recognized by the image acquisition device from the images collected by the image acquisition device.
  • the image acquisition areas of the multiple image acquisition devices have overlapping areas, that is, the multiple image acquisition devices can simultaneously monitor the same target.
  • the image acquisition device may be a camera, a video camera, etc.
  • Multiple image capture devices can simultaneously monitor the same target, that is, capture images of the same target, where the target is a human face. Each image acquisition device can then directly recognize, from the image it acquires, the area containing the human eye, and crop it out; accordingly, each image acquisition device obtains a human eye image and sends the human eye image collected at the same time to the electronic device. The electronic device obtains the human eye images collected by each image acquisition device at the same time, and, based on a human eyelid recognition model, identifies from each human eye image the first position information of the first eye corner point, the second position information of the second eye corner point, and the third position information of the eyelid points.
  • the above-mentioned human eye images collected at the same time may refer to images collected by multiple image collection devices in the same collection period.
  • the human eye images collected by the multiple image collection devices are images collected for the same human eye.
  • the electronic device can identify the position of the human eyelid from each human eye image based on the human eyelid recognition model.
  • the eyelid points with obvious semantic features on the human eyelid are the left and right eye corner points of the human eye; the electronic device can directly recognize, based on the human eyelid recognition model, the first position information of the first eye corner point and the second position information of the second eye corner point contained in each human eye image.
  • the electronic device can obtain the preset first number of upper eyelid points and/or second number of lower eyelid points that need to be recognized; then, based on the first number, it can take equally or unequally spaced points on the recognized upper eyelid to obtain the first number of upper eyelid points and their third position information, and/or, based on the second number, take equally or unequally spaced points on the recognized lower eyelid to obtain the second number of lower eyelid points and their third position information.
  • the upper eyelid point is a characteristic point on the upper eyelid of the human eye in the human eye image
  • the lower eyelid point is a characteristic point on the lower eyelid of the human eye in the human eye image.
  • the first quantity and the second quantity are preset quantities, and the two may be equal or different.
  • Each image acquisition device can correspond to its own first number and second number; the first numbers corresponding to different image acquisition devices can be equal or unequal, and likewise the second numbers corresponding to different image acquisition devices can be equal or unequal.
  • the larger the values of the first number and the second number, the higher the accuracy of the determined eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.
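The equally spaced point selection described above can be sketched as follows (a hypothetical helper, since the patent does not prescribe an algorithm): resample the recognized eyelid boundary, given as a polyline of detected pixels, at equal arc-length intervals.

```python
import numpy as np

def sample_eyelid_points(polyline, n):
    """Take n equally spaced (by arc length) points along a recognized
    eyelid, given as a polyline of detected boundary pixel coordinates."""
    P = np.asarray(polyline, dtype=float)
    seg = np.linalg.norm(np.diff(P, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)          # equally spaced stations
    return np.column_stack(
        [np.interp(targets, s, P[:, k]) for k in range(P.shape[1])]
    )
```

Unequally spaced points (also allowed by the text) would simply use a different choice of target stations.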
  • the human eyelid recognition model can be a neural network model obtained by training on first sample images with calibrated human eyelids, or a related existing algorithm for identifying the areas where the parts of a human face are located, such as a geometric-feature-based algorithm or a local feature analysis algorithm; both are possible.
  • the neural network model may be a convolutional neural network model or a Discriminative Locality Alignment (DLA) model.
  • the embodiment of the present invention may also include a process of training to obtain the human eyelid recognition model, specifically: first obtain the initial human eyelid recognition model, which includes a feature extraction layer and a feature classification layer; obtain first sample images, each containing human eyes; and obtain the calibration information corresponding to each first sample image, wherein the calibration information includes the calibration position information of the areas where the upper and lower eyelids of the human eye are located in the first sample image, and may also include the calibration position information of the area where the eye corner points are located.
  • the calibration information can be manual calibration or calibration through a specific calibration procedure;
  • Each first sample image is input to the feature extraction layer of the initial human eyelid recognition model to obtain the image features of the areas where the upper eyelid and the lower eyelid of the human eye are located; these image features are input into the feature classification layer of the initial human eyelid recognition model to obtain the current position information of the area where the upper eyelid is located and the current position information of the area where the lower eyelid is located in each first sample image; the current position information of the area where the upper eyelid is located is then matched with the calibration position information of the corresponding upper-eyelid area, and the current position information of the area where the lower eyelid is located is matched with the calibration position information of the corresponding lower-eyelid area.
  • The matching process can be: use a preset loss function to calculate the first loss value between the current position information of each upper-eyelid area and its corresponding calibration position information, and the second loss value between the current position information of each lower-eyelid area and its corresponding calibration position information; then determine whether the first loss value is less than a first preset loss threshold and whether the second loss value is less than a second preset loss threshold. If both conditions hold, the matching succeeds, the initial human eyelid recognition model is judged to have converged, and training is complete, yielding the human eyelid recognition model; if the first loss value is not less than the first preset loss threshold and/or the second loss value is not less than the second preset loss threshold, the model parameters are adjusted and training continues.
  • Each first sample image has a corresponding relationship with the current position information of the area where the upper eyelid is located, and with the calibration position information of that area in the calibration information; hence the current position information and the calibration position information of the upper-eyelid area correspond to each other, and likewise for the lower eyelid.
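The two-threshold convergence test described above can be sketched as follows; the mean-squared-error loss here is a stand-in of ours, since the patent leaves the preset loss function unspecified:

```python
import numpy as np

def loss(pred, calibrated):
    """Mean squared error between predicted and calibrated position
    information (a stand-in for the patent's unspecified loss function)."""
    return float(np.mean((np.asarray(pred) - np.asarray(calibrated)) ** 2))

def training_converged(upper_pred, upper_calib, lower_pred, lower_calib,
                       first_threshold, second_threshold):
    """Matching succeeds (model converged) only if the upper-eyelid loss is
    below the first preset threshold AND the lower-eyelid loss is below the
    second preset threshold; otherwise training continues."""
    return (loss(upper_pred, upper_calib) < first_threshold
            and loss(lower_pred, lower_calib) < second_threshold)
```

A training loop would call this after each evaluation pass and adjust the model parameters whenever it returns False.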
  • Through the above process, the electronic device can recognize, based on the human eyelid recognition model, the first position information of the first eye corner point, the second position information of the second eye corner point, and the third position information of the eyelid points in each human eye image.
  • the calibration information corresponding to each first sample image may also include the calibration position information of the area where the corner of the eye of the human eye is located.
  • Training on the first sample images together with the calibration position information of the areas where the eye corners are located yields a human eyelid recognition model that can also accurately recognize the positions of the eye corners in an image.
  • the method may further include:
  • the pre-established face feature point detection model is: a model obtained by training based on sample images of calibrated face feature points.
  • the multiple image acquisition devices may be image acquisition devices that monitor the driving vehicles on the road.
  • the image acquisition devices can obtain face images containing the same face, and each image acquisition device directly sends the collected face image to the electronic device.
  • After the electronic device obtains the face images collected by each image collection device at the same time, it can, based on the pre-established face feature point detection model, detect the area where the human eye is located in each face image collected by each image collection device, and then crop each face image so that only the detected area where the human eye is located is cut out of it, obtaining human eye images that contain only the area where the human eye is located. In this way, useless information in the images is reduced, which lowers the computational burden of the subsequent construction process of the human eyelid curve.
  • the aforementioned pre-established face feature point detection model is: a neural network model trained based on sample images of calibrated face feature points.
  • the training process of the pre-established face feature point detection model can refer to the above-mentioned training process of the human eyelid recognition model, which will not be repeated here.
  • the sample images used to train the pre-established face feature point detection model contain human faces, and the corresponding calibration information contains the calibration position information of each facial feature point in the face.
  • the facial feature points can include the eye corner points of the human eye, and the area where the human eye is located can be determined based on the positions of the corner points.
  • the facial feature points can also include feature points of the nose, such as the wings of the nose and the bridge of the nose, and feature points of the lips, such as feature points on the edge of the lip line.
  • various parts of the face in the face image can be identified, and the various parts include the lips, nose, eyebrows, and eyes in the face.
  • the acquired images include not only the human face but also other information, such as the windows or front of the vehicle.
  • In view of this, the electronic device can first detect the area where the face is located in such an image through a preset face detection model, crop out a face image containing only that area, and then execute the subsequent construction process of the human eyelid curve provided by the embodiment of the present invention; this is also possible.
  • the multiple image acquisition devices may be at least two image acquisition devices.
  • FIG. 1B it is a schematic diagram of the obtained human eye image.
  • Suppose the multiple image acquisition devices are three image acquisition devices; the electronic device then obtains the human eye images collected by the three devices, for example, a total of three frames of human eye images, as shown in FIG. 1B.
  • S102: Determine, based on the pose information and internal parameter information of each image acquisition device, the first position information, and the second position information, the first three-dimensional position information corresponding to the first eye corner point and the second three-dimensional position information corresponding to the second eye corner point.
  • Based on the pose information and internal parameter information of each image acquisition device and the first position information, the first three-dimensional position information corresponding to the first eye corner point can be determined, i.e., the position of the spatial point corresponding to the first eye corner point.
  • Based on the pose information and internal parameter information of each image acquisition device and the second position information, the second three-dimensional position information corresponding to the second eye corner point can be determined, i.e., the position of the spatial point corresponding to the second eye corner point.
  • the designated image acquisition device o0 is any one of the multiple image acquisition devices. For example, suppose human eye images are obtained from three image acquisition devices: image acquisition device 1, image acquisition device 2, and image acquisition device 3.
  • the first three-dimensional position information and the second three-dimensional position information can be position information in the device coordinate system of image acquisition device 1, that is, image acquisition device 1 is the designated image acquisition device o0; they can also be position information in the device coordinate system of image acquisition device 2, that is, image acquisition device 2 is the designated image acquisition device o0; or they can be position information in the device coordinate system of image acquisition device 3, that is, image acquisition device 3 is the designated image acquisition device o0.
  • the first three-dimensional position information and the second three-dimensional position information may also be position information in the world coordinate system.
  • the electronic device can first obtain the pose information of each image acquisition device at the time the corresponding human eye image was collected, as well as the internal parameter information of each image acquisition device, where the internal parameter information may include the length of each pixel along the horizontal axis of the image acquisition device, the length of each pixel along the vertical axis, the focal length, the position information of the image principal point, the zoom factor, and the like.
  • the image principal point is the intersection of the optical axis and the image plane;
  • the pose information can include the position and posture of the image acquisition device. Based on the pose information and internal parameter information of each image acquisition device, the rotation-translation relationship among the multiple image acquisition devices can be determined, that is, the conversion relationship between the device coordinate systems of every two of the multiple image acquisition devices.
  • the rotation-translation relationship among the multiple image acquisition devices includes the rotation-translation relationship between each other image acquisition device oq of the multiple image acquisition devices, other than the designated image acquisition device o0, and o0, where q denotes the q-th other image acquisition device and can take values in [1, g-1], with g representing the number of image acquisition devices, which can be equal to the number n of human eye images.
  • based on the rotation-translation relationship between every two image acquisition devices, feature point matching is performed on the human eye images collected by each image acquisition device, and the three-dimensional position information corresponding to the eye corner points is determined.
  • the poses of the multiple image acquisition devices may be fixed.
  • the pose information and internal parameter information of each image acquisition device may be calibrated in advance through a calibration algorithm, and the calibration algorithm may be the Zhang Zhengyou calibration method.
  • the poses of the multiple image capture devices may be non-fixed.
  • the internal parameter information and initial pose information of each image capture device may be calibrated in advance through a calibration algorithm, which may be the Zhang Zhengyou calibration method.
  • the subsequent pose information of the image acquisition device may be determined through the initial pose information and IMU data collected by an IMU (Inertial Measurement Unit) corresponding to the image acquisition device.
  • the IMU is a device used to measure the pose change of the corresponding image acquisition device.
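The text above describes recovering the 3D eye-corner points from multiple calibrated views but does not reproduce the computation. As a hedged sketch only (the intrinsics, baseline, and point values below are invented for illustration, and the patent does not name a specific method), a standard Direct Linear Transform triangulation from two calibrated views could look like:

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Triangulate one 3D point from two views via the Direct Linear
    Transform: each 3x4 projection matrix P and pixel observation (u, v)
    contribute two linear equations in the homogeneous point X."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two cameras: identity pose and a 0.1 m baseline along x, shared intrinsics.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

corner = np.array([0.03, -0.01, 0.6])    # invented ground-truth corner point
uvw1 = P1 @ np.append(corner, 1.0)       # project into each view
uv1 = uvw1[:2] / uvw1[2]
uvw2 = P2 @ np.append(corner, 1.0)
uv2 = uvw2[:2] / uvw2[2]

recovered = triangulate_dlt(P1, P2, uv1, uv2)
```

With exact observations the recovered point matches the ground truth; in practice the matched first and second position information is noisy, which is what motivates the later constraint-based refinement.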
  • S103 Construct a first eye corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation.
  • curve equations can be preset for the upper eyelid and the lower eyelid respectively; the process of constructing the eyelid space curve equation for the upper eyelid is similar to that for the lower eyelid, so the following takes the eyelid space curve equation representing the upper eyelid of the human eye as an example.
  • a1, a2, a3, b1, b2, b3, c1, c2, and c3 are the coefficients to be solved,
  • t is the independent variable,
  • and (x, y, z) represents the spatial coordinates of a point on the curve, that is, the three-dimensional position information of the point on the curve; for the upper eyelid, these are the spatial coordinates of an upper eyelid point on the upper eyelid of the human eye.
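Formula (1) itself is not reproduced in this excerpt; assuming a common reading of the nine coefficients — a quadratic parametric polynomial per coordinate, x(t) = a1 + b1·t + c1·t², and similarly for y(t) and z(t) — a minimal evaluation sketch (coefficient values invented for illustration) is:

```python
import numpy as np

def eyelid_curve(coeffs, t):
    """Evaluate the assumed parametric eyelid curve at parameter values t.
    coeffs is a 3x3 array whose row k holds (a_k, b_k, c_k), so that
    coordinate_k(t) = a_k + b_k*t + c_k*t**2."""
    t = np.asarray(t, dtype=float)
    basis = np.stack([np.ones_like(t), t, t * t])  # shape (3, N)
    return coeffs @ basis                          # rows: x(t), y(t), z(t)

# Invented coefficients tracing an arch-like curve between t=0 and t=1:
coeffs = np.array([[0.0, 1.0, 0.0],    # x(t) = t
                   [0.0, 4.0, -4.0],   # y(t) = 4t - 4t^2, peaks at t = 0.5
                   [0.5, 0.0, 0.0]])   # z(t) = 0.5 (constant depth)
ts = np.linspace(0.0, 1.0, 5)
points = eyelid_curve(coeffs, ts)      # 3x5 array of (x, y, z) samples
```

Each sampled column is one candidate upper eyelid point; the constraints that follow determine the nine coefficients.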
  • the first eye corner constraint can be constructed, where the first eye corner constraint can be a constraint in the device coordinate system of the designated image acquisition device o0;
  • the first eye corner constraint can be expressed as formula (2):
  • (x 0 , y 0 , z 0 ) represents the first three-dimensional position information
  • (x 1 , y 1 , z 1 ) represents the second three-dimensional position information
  • the first eye corner point and the second eye corner point both lie on the upper eyelid and on the lower eyelid in the human eye image;
  • correspondingly, the corner space points corresponding to the first eye corner point and the second eye corner point exist on both the upper eyelid and the lower eyelid of the human eye.
  • the first eye corner constraint can simultaneously constrain the upper eyelid curve of the human eye and the lower eyelid curve, that is, the eyelid space curve equation used to represent the upper eyelid of the human eye and the eyelid used to represent the lower eyelid of the human eye mentioned later Space curve equation.
  • the second eyelid constraint mentioned later can also constrain the upper eyelid curve of the human eye and the lower eyelid curve at the same time.
  • S104 Construct a second eye corner constraint based on the first value, the second value, and the first eye corner constraint.
  • the first value and the second value are used to constrain the value range of the independent variable in the first corner of the eye constraint.
  • the first value and the second value are respectively substituted into the first corner of the eye constraint to construct the second corner of the eye constraint.
  • the second corner of the eye constraint can be expressed as formula (3):
  • the t 01 represents the value of the independent variable t corresponding to the first corner of the eye, which is the first value
  • the t 02 represents the value of the independent variable t corresponding to the second corner of the eye, which is the second value.
  • the t 01 and t 02 are not equal. In one case, t 01 can be less than t 02 .
  • the above t 01 may take the value of 0, and the above t 02 may take the value of 1.
  • the second eye corner constraint can be expressed as formula (4):
  • when t01 is limited to 0 and t02 to 1, the second eye corner constraint can also be called the 01 constraint.
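Continuing the assumed quadratic parametric form, substituting t01 = 0 and t02 = 1 shows why the 01 constraint is convenient: it fixes six of the nine coefficients directly from the two corner points, leaving only the quadratic terms free. A sketch (corner coordinates invented for illustration):

```python
import numpy as np

def apply_01_constraint(corner1, corner2, quad_terms):
    """Under the assumed quadratic form f_k(t) = a_k + b_k*t + c_k*t**2,
    forcing the curve through corner1 at t=0 and corner2 at t=1 gives
      f_k(0) = a_k             ->  a_k = corner1_k
      f_k(1) = a_k + b_k + c_k ->  b_k = corner2_k - corner1_k - c_k,
    so only the quadratic terms c_k remain free."""
    a = np.asarray(corner1, dtype=float)
    c = np.asarray(quad_terms, dtype=float)
    b = np.asarray(corner2, dtype=float) - a - c
    return a, b, c

# Invented corner points 3 cm apart; one free quadratic bulge in y.
corner1 = np.array([0.00, 0.0, 0.5])
corner2 = np.array([0.03, 0.0, 0.5])
a, b, c = apply_01_constraint(corner1, corner2, quad_terms=[0.0, -0.02, 0.0])
curve = lambda t: a + b * t + c * t * t   # passes through both corners
```

Whatever quadratic terms the later optimization chooses, the curve always meets both corner space points exactly.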
  • the three-dimensional position information of each eyelid point can be constructed based on the curve equation; further, a reprojection error constraint corresponding to the upper eyelid points can be constructed based on the three-dimensional position information of each upper eyelid point, the third position information of each upper eyelid point, and the pose information and internal reference information of each image acquisition device;
  • and/or a reprojection error constraint corresponding to the lower eyelid points can be constructed based on the three-dimensional position information of each lower eyelid point, the third position information of each lower eyelid point, and the pose information and internal reference information of each image acquisition device, where the upper eyelid points include the upper eyelid points identified from each human eye image, and the lower eyelid points include the lower eyelid points identified from each human eye image.
  • the S105 may include:
  • a reprojection error constraint corresponding to the eyelid point is constructed.
  • the i-th upper eyelid point in the j-th individual eye image can be represented by t ji , where i can take [1, M j ], M j represents the first number of upper eyelid points in the j-th human eye image, j can be a positive integer in [1, n], and n represents the number of human eye images.
  • the first three-dimensional position information and the second three-dimensional position information can be the position information in the device coordinate system of the designated image acquisition device o 0 among the multiple image acquisition devices
  • the step of constructing the reprojection error constraint corresponding to the eyelid points based on the third three-dimensional position information corresponding to each eyelid point, the third position information of each eyelid point, and the pose information and internal reference information of each image acquisition device may include:
  • for each human eye image, the fourth position information of the projection points in that human eye image is determined;
  • based on the third position information of each eyelid point and the fourth position information of the projection point of its corresponding space point, the reprojection error constraint corresponding to the eyelid points is determined.
  • based on the pose information and internal parameter information of each image acquisition device, the conversion relationship between the device coordinate systems of every two image acquisition devices is determined; through such a conversion relationship, the position information of a given spatial point in the device coordinate system of one image acquisition device in the relationship can be converted into its position information in the device coordinate system of the other image acquisition device in the relationship.
  • for each human eye image, based on the third three-dimensional position information corresponding to each upper eyelid point in the human eye image and the conversion relationship between the device coordinate systems of every two image acquisition devices, the fourth position information of the projection point, in that human eye image, of the space point corresponding to each upper eyelid point can be determined.
  • since the third three-dimensional position information corresponding to each upper eyelid point is position information in the device coordinate system of the designated image acquisition device o0, a mapping relationship between the device coordinate system of o0 and its image coordinate system can be constructed directly based on the internal reference information of o0, as a first mapping relationship; based on the first mapping relationship, the third three-dimensional position information corresponding to each upper eyelid point in the human eye image collected by o0 is converted into the image coordinate system of o0, obtaining the fourth position information of the projection point, in the human eye image collected by o0, of the space point corresponding to each upper eyelid point in that image.
  • for each other image acquisition device oq, the conversion relationship between the device coordinate system of oq and the device coordinate system of o0 can be determined from the conversion relationships between the device coordinate systems of every two image acquisition devices, as the conversion relationship to be used; based on the conversion relationship to be used, the third three-dimensional position information corresponding to each upper eyelid point in the human eye image collected by oq is converted from the device coordinate system of o0 into the device coordinate system of oq, obtaining the fourth three-dimensional position information corresponding to each upper eyelid point in the human eye image collected by oq; in addition, based on the internal reference information of oq, a mapping relationship between the device coordinate system of oq and its image coordinate system is constructed, as the mapping relationship corresponding to oq; based on the fourth three-dimensional position information and the mapping relationship corresponding to oq, the corresponding fourth position information is obtained.
  • the reprojection error constraint corresponding to the upper eyelid points can then be constructed based on the fourth position information of the projection point, in each human eye image, of the space point corresponding to each upper eyelid point, together with the third position information of each upper eyelid point.
  • the reprojection error constraint corresponding to the upper eyelid point can be expressed as formula (7):
  • M j represents the first number corresponding to the j-th human eye image, that is, the number of upper eyelid points in it
  • (u ji , v ji ) represents the third position information of the i-th upper eyelid point in the j-th person's eye image
  • (u′ ji , v′ ji ) represents the fourth position information of the projection point, in the j-th human eye image, of the space point corresponding to the i-th upper eyelid point in the j-th human eye image
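The general shape of a reprojection error like formula (7) — squared pixel distance between each observed eyelid point (u, v) and the projection (u′, v′) of its candidate space point, summed over views — can be sketched as follows. This is a hedged illustration: a simple pinhole model without distortion is assumed, and all intrinsics, poses, and point values are invented.

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X (in the designated device's coordinate system)
    into a camera with intrinsics K and pose (R, t) relative to it."""
    Xc = R @ X + t            # transform into this camera's frame
    uvw = K @ Xc              # apply intrinsics
    return uvw[:2] / uvw[2]   # perspective division -> pixel (u', v')

def reprojection_error(K_list, R_list, t_list, X, observations):
    """Sum of squared pixel distances between each observed eyelid point
    (u, v) and the projection (u', v') of the candidate space point X,
    accumulated over all cameras."""
    err = 0.0
    for K, R, t, uv in zip(K_list, R_list, t_list, observations):
        err += float(np.sum((project(K, R, t, X) - np.asarray(uv)) ** 2))
    return err

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
X = np.array([0.01, -0.005, 0.6])
# Camera 1 at the origin, camera 2 translated 0.1 m along x.
Rs = [np.eye(3), np.eye(3)]
ts = [np.zeros(3), np.array([-0.1, 0.0, 0.0])]
obs = [project(K, Rs[0], ts[0], X), project(K, Rs[1], ts[1], X)]

perfect = reprojection_error([K, K], Rs, ts, X, obs)            # ~0
perturbed = reprojection_error([K, K], Rs, ts, X + 0.001, obs)  # > 0
```

Minimizing this error over the curve coefficients (which generate X for each eyelid point) is what drives the fitted eyelid curve toward the observed eyelid points.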
  • S106 Based on the first corner of the eye constraint, the second corner of the eye constraint, and the reprojection error constraint, construct an eyelid space curve equation for representing the upper eyelid and/or the lower eyelid of the human eye.
  • the electronic device can construct the eyelid space curve equation representing the upper eyelid of the human eye based on the first eye corner constraint, the second eye corner constraint, and the reprojection error constraint corresponding to the upper eyelid points, that is, obtain the eyelid space curve of the upper eyelid; and/or construct the eyelid space curve equation representing the lower eyelid of the human eye based on the first eye corner constraint, the second eye corner constraint, and the reprojection error constraint corresponding to the lower eyelid points, that is, obtain the eyelid space curve characterizing the lower eyelid of the human eye.
  • the upper and lower eyelids of the human eye can be drawn based on the eyelid space curve representing the upper and lower eyelids of the human eye.
  • the problem of solving the above simultaneous formulas can be transformed into a nonlinear least squares problem; solving for the coefficients in the curve equation constructs the eyelid space curve equation used to characterize the upper eyelid of the human eye.
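The full problem, with each eyelid point's curve parameter t unknown, is nonlinear. Under the simplifying assumption that the t values are held fixed, fitting the quadratic coefficients to observed 3D eyelid points reduces to ordinary linear least squares; the sketch below shows that reduced subproblem only, not the patent's full solver, with all values invented.

```python
import numpy as np

def fit_quadratic_curve(ts, points):
    """Least-squares fit of f_k(t) = a_k + b_k*t + c_k*t**2 per coordinate.
    ts: (N,) assumed curve parameters; points: (N, 3) observed 3D eyelid
    points. Returns a (3, 3) array whose rows are the constant, linear,
    and quadratic terms respectively."""
    ts = np.asarray(ts, dtype=float)
    A = np.stack([np.ones_like(ts), ts, ts * ts], axis=1)  # (N, 3) design matrix
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(points, dtype=float), rcond=None)
    return coeffs

# Synthetic check: generate points from known coefficients and recover them.
true = np.array([[0.0, 0.0, 0.5],     # constant terms  (a_1, a_2, a_3)
                 [1.0, 4.0, 0.0],     # linear terms    (b_1, b_2, b_3)
                 [0.0, -4.0, 0.0]])   # quadratic terms (c_1, c_2, c_3)
ts = np.linspace(0.0, 1.0, 8)
pts = np.stack([np.ones_like(ts), ts, ts * ts], axis=1) @ true  # (8, 3) points
est = fit_quadratic_curve(ts, pts)
```

In the joint nonlinear problem, the eye corner constraints and reprojection errors would be stacked into one residual vector and minimized over the coefficients and the t parameters together.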
  • in this embodiment, the first three-dimensional position information corresponding to the first eye corner point of the human eye and the second three-dimensional position information corresponding to the second eye corner point can be determined based on the pose information and internal reference information of multiple image acquisition devices;
  • that is, the eye of the same person is monitored by multiple image acquisition devices to obtain the human eye images, and based on the human eye images collected by the multiple devices, the three-dimensional position information of the eye corners, which have obvious semantic features, can be obtained accurately;
  • further, the first eye corner constraint is constructed based on the first three-dimensional position information, the second three-dimensional position information, and the preset curve equation; the second eye corner constraint is constructed based on the first preset value, the second preset value, and the first eye corner constraint; and the reprojection error constraint corresponding to the eyelid points is determined using the curve equation, the third position information of each eyelid point, and the pose information and internal parameter information of each image acquisition device.
  • the method may include the following steps:
  • S201: Identify, from each human eye image collected by each image acquisition device at the same time, the first position information of the first eye corner point of the human eye, the second position information of the second eye corner point, and the third position information of the eyelid points.
  • the eyelid points may include: a first number of upper eyelid points and/or a second number of lower eyelid points.
  • S202 Determine the first three-dimensional position information corresponding to the first corner point and the second three-dimensional position corresponding to the second corner point based on the pose information and internal reference information, first position information, and second position information of each image acquisition device information.
  • S203 Construct a first eye corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation.
  • S204 Construct a second eye corner constraint based on the first value, the second value, and the first eye corner constraint.
  • the first value and the second value are used to constrain the value range of the independent variable in the first corner of the eye constraint.
  • S206: Based on the order of the eyelid points in each human eye image, construct order constraints for the eyelid points in each human eye image.
  • S207 Based on the first corner of the eye constraint, the second corner of the eye constraint, the reprojection error constraint, and the order constraint, construct an eyelid space curve equation for representing the upper eyelid and/or the lower eyelid of the human eye.
  • S201 is the same as S101 shown in FIG. 1, S202 is the same as S102, S203 is the same as S103, S204 is the same as S104, and S205 is the same as S105; they are not repeated here.
  • the identified eyelid points are all ordered: there is an order among the upper eyelid points identified in a human eye image, and an order among the identified lower eyelid points.
  • when constructing the eyelid space curve equation used to characterize the upper eyelid and/or the lower eyelid of the human eye, the constructed order constraint can be combined with the other constraints to jointly construct the eyelid space curve equation.
  • formula (8) can be transformed into formula (9); subsequently, formulas (2), (5), (7), and (9) are solved simultaneously to obtain the coefficients in the eyelid space curve equation used to characterize the upper eyelid of the human eye.
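Since formulas (8) and (9) are not reproduced in this excerpt, one hedged way to express such an order constraint numerically (an assumption for illustration, not the patent's formulation) is as hinge residuals that vanish exactly when consecutive eyelid points' curve parameters are in increasing order:

```python
import numpy as np

def order_violations(t_params):
    """Measure how badly the curve parameters of consecutive eyelid points
    violate the ordering t_1 < t_2 < ... < t_M: each hinge term
    max(0, t_i - t_{i+1}) is zero exactly when the pair is in order."""
    t = np.asarray(t_params, dtype=float)
    return np.maximum(0.0, t[:-1] - t[1:])

ordered = order_violations([0.1, 0.3, 0.6, 0.9])   # all zeros
swapped = order_violations([0.1, 0.6, 0.3, 0.9])   # nonzero middle term
```

Residuals of this form can be stacked alongside the corner and reprojection residuals in the joint least-squares solve, penalizing any fit that reorders the identified eyelid points.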
  • the method may include the following steps:
  • S301: Identify, from each human eye image collected by each image acquisition device at the same time, the first position information of the first eye corner point of the human eye, the second position information of the second eye corner point, and the third position information of the eyelid points.
  • the eyelid points may include: a first number of upper eyelid points and/or a second number of lower eyelid points.
  • S302 Determine the first three-dimensional position information corresponding to the first corner point and the second three-dimensional position corresponding to the second corner point based on the pose information and internal reference information, first position information, and second position information of each image acquisition device information.
  • S303 Construct a first eye corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation.
  • S304 Construct a second eye corner constraint based on the first value, the second value, and the first eye corner constraint.
  • the first value and the second value are used to constrain the value range of the independent variable in the first corner of the eye constraint.
  • S306 Based on the first corner of the eye constraint, the second corner of the eye constraint, and the reprojection error constraint, construct an eyelid space curve equation for representing the upper eyelid and/or the lower eyelid of the human eye.
  • S307 Determine the current opening and closing length of the human eye based on the eyelid space curve equation used to characterize the upper eyelid and the eyelid space curve equation used to characterize the lower eyelid;
  • S309 Based on the current opening and closing length and the historical opening and closing length, determine the fatigue degree of the person corresponding to the human eye.
  • S301 is the same as S101 shown in FIG. 1, S302 is the same as S102, S303 is the same as S103, S304 is the same as S104, S305 is the same as S105, and S306 is the same as S106; they are not repeated here.
  • based on the eyelid space curve equation representing the upper eyelid of the human eye and the eyelid space curve equation representing the lower eyelid, the current opening and closing length of the human eye is calculated; then, combined with time-dimension information, that is, the historical opening and closing lengths of the human eye, the fatigue degree of the person corresponding to the human eye is determined.
  • the electronic device can obtain the human eye images collected by the multiple image acquisition devices at the current moment, that is, each human eye image is the image captured by the corresponding image acquisition device at the current moment.
  • the historical opening and closing length of the human eye can be stored locally or in the storage device connected to the electronic device. After calculating the current opening and closing length of the human eye, the electronic device can obtain it from the corresponding storage location.
  • the historical opening and closing length of the human eye is: the determined opening and closing length of the human eye based on the human eye image before the human eye image collected by the multiple image acquisition devices.
  • in this way, a more accurate opening and closing length of the human eye, that is, the physical length of the eye opening, can be determined; furthermore, combined with the time dimension, the fatigue degree of the person corresponding to the human eye can be monitored more flexibly and accurately.
  • the step of determining the current opening and closing length of the human eye based on the eyelid space curve equation used to characterize the upper eyelid and the eyelid space curve equation used to characterize the lower eyelid can include:
  • the electronic device may, based on the eyelid space curve equation used to characterize the upper eyelid and the eyelid space curve equation used to characterize the lower eyelid, select from the two eyelid space curves point pairs with the same horizontal-axis coordinate value and the corresponding vertical-axis coordinate values; for each point pair, the distance between the two points is calculated, and the point pair with the largest calculated distance is taken as the target point pair; the distance between the target point pair is taken as the maximum distance, which serves as the current opening and closing length of the human eye.
  • alternatively, the bisecting point, that is, the center point, of the eyelid space curve used to characterize the upper eyelid can be selected as a first center point, and the center point of the eyelid space curve used to characterize the lower eyelid as a second center point;
  • the distance between the first center point and the second center point is then calculated as the maximum distance, which serves as the current opening and closing length of the human eye; and so on.
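The first option above — evaluating matched point pairs on the two curves and taking the largest pairwise distance — can be sketched as follows, reusing the assumed quadratic parametric form with invented coefficients:

```python
import numpy as np

def opening_length(upper_coeffs, lower_coeffs, n_samples=101):
    """Sample both eyelid curves at the same parameter values and return
    the largest distance over the sampled point pairs, as one plausible
    reading of 'point pairs with the same coordinate value'."""
    t = np.linspace(0.0, 1.0, n_samples)
    basis = np.stack([np.ones_like(t), t, t * t])       # (3, n)
    upper = np.asarray(upper_coeffs, dtype=float) @ basis  # (3, n) points
    lower = np.asarray(lower_coeffs, dtype=float) @ basis
    dists = np.linalg.norm(upper - lower, axis=0)
    return float(dists.max())

# Invented symmetric eyelids: the gap is widest (0.02 m) at t = 0.5.
upper = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.04, -0.04],   # y rises to +0.01 at t = 0.5
                  [0.5, 0.0, 0.0]])
lower = np.array([[0.0, 1.0, 0.0],
                  [0.0, -0.04, 0.04],   # mirror image below
                  [0.5, 0.0, 0.0]])
length = opening_length(upper, lower)
```

The center-point variant mentioned above would instead evaluate both curves only at t = 0.5 and take the distance between the two resulting points.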
  • the step of determining the fatigue degree of the person corresponding to the human eye based on the current opening and closing length and the historical opening and closing lengths may include: comparing each opening and closing length, where the opening and closing lengths include the current opening and closing length and the historical opening and closing lengths, with a preset length threshold, and determining the fatigue degree of the person corresponding to the human eye based on the comparison results.
  • the electronic device can obtain a preset length threshold set in advance and compare each opening and closing length with the preset length threshold to obtain comparison results; further, the number of comparison results indicating an opening and closing length less than the preset length threshold is counted, as the first result quantity; subsequently, based on the total number of current and historical opening and closing lengths and the first result quantity, the fatigue degree of the person corresponding to the human eye is determined.
  • the process of determining the fatigue degree of the person corresponding to the human eye based on the total number and the first result quantity may be: calculate the ratio of the first result quantity to the total number; if the ratio is greater than a preset ratio, the fatigue degree of the person corresponding to the human eye is determined to be fatigued; if the ratio is not greater than the preset ratio, the fatigue degree is determined to be not fatigued.
  • alternatively, the first result quantity can be compared directly with a preset number: if the first result quantity is greater than the preset number, the fatigue degree of the person corresponding to the human eye is determined to be fatigued; if the first result quantity is not greater than the preset number, the fatigue degree is determined to be not fatigued.
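The ratio-based decision described above resembles a PERCLOS-style check and can be sketched as follows (the threshold values are invented for illustration):

```python
def fatigue_state(opening_lengths, length_threshold, ratio_threshold):
    """Decide fatigue from the current plus historical opening/closing
    lengths: count how many fall below the length threshold (the 'first
    result quantity'), and compare that fraction against the preset ratio."""
    below = sum(1 for length in opening_lengths if length < length_threshold)
    ratio = below / len(opening_lengths)
    return "fatigued" if ratio > ratio_threshold else "not fatigued"

# 8 of these 10 recent measurements correspond to nearly closed eyes:
history = [0.002, 0.003, 0.002, 0.010, 0.002,
           0.001, 0.002, 0.011, 0.002, 0.003]
state = fatigue_state(history, length_threshold=0.005, ratio_threshold=0.5)
```

The direct-count variant in the text would simply compare `below` against a preset number instead of forming the ratio.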
  • when the fatigue degree is fatigued, warning information can be generated to remind the user that the person corresponding to the eye is in a state of fatigue, so that corresponding measures can be taken and car accidents caused by fatigued driving can be reduced to a certain extent.
  • the embodiment of the present invention provides a device for constructing a human eyelid curve, as shown in FIG. 4, which may include:
  • the recognition module 410 is configured to identify, from each human eye image collected by each image acquisition device at the same time, the first position information of the first eye corner point of the human eye, the second position information of the second eye corner point, and the third position information of the eyelid points in the human eye image, wherein the eyelid points include: a first number of upper eyelid points and/or a second number of lower eyelid points;
  • the first determining module 420 is configured to determine the first three-dimensional corresponding to the first corner point based on the pose information and internal reference information of each image acquisition device, the first position information, and the second position information Position information and second three-dimensional position information corresponding to the second corner of the eye;
  • the first construction module 430 is configured to construct a first eye corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation;
  • the second construction module 440 is configured to construct a second eye corner constraint based on the first value, the second value, and the first eye corner constraint, wherein the first value and the second value are used to constrain the value range of the independent variable in the first eye corner constraint;
  • the third construction module 450 is configured to construct a reprojection error constraint corresponding to the eyelid point based on the curve equation, the third position information of each eyelid point, and the pose information and internal parameter information of each image acquisition device;
  • the fourth construction module 460 is configured to construct an eyelid space for characterizing the upper eyelid and/or lower eyelid of the human eye based on the first corner of the eye constraint, the second corner of the eye constraint, and the reprojection error constraint Curve equation.
  • in this embodiment, the first three-dimensional position information corresponding to the first eye corner point of the human eye and the second three-dimensional position information corresponding to the second eye corner point can be determined based on the pose information and internal reference information of multiple image acquisition devices;
  • that is, the eye of the same person is monitored by multiple image acquisition devices to obtain the human eye images, and based on the human eye images collected by the multiple devices, the three-dimensional position information of the eye corners, which have obvious semantic features, can be obtained accurately;
  • further, the first eye corner constraint is constructed based on the first three-dimensional position information, the second three-dimensional position information, and the preset curve equation; the second eye corner constraint is constructed based on the first preset value, the second preset value, and the first eye corner constraint; and the reprojection error constraint corresponding to the eyelid points is determined using the curve equation, the third position information of each eyelid point, and the pose information and internal parameter information of each image acquisition device.
  • the device may further include:
  • the fifth construction module is configured to, before the eyelid space curve equation used to characterize the upper eyelid and/or the lower eyelid of the human eye is constructed based on the first eye corner constraint, the second eye corner constraint, and the reprojection error constraint, construct order constraints for the eyelid points in each human eye image based on the order of the eyelid points in each human eye image;
  • the fourth construction module 460 is specifically configured to construct the eyelid space curve equation used to characterize the upper eyelid and/or the lower eyelid of the human eye based on the first eye corner constraint, the second eye corner constraint, the reprojection error constraint, and the order constraint.
  • the third construction module 450 includes: a first construction unit configured to use the curve equation to construct the third three-dimensional position information corresponding to each eyelid point; and a second construction unit configured to construct the reprojection error constraint corresponding to the eyelid points based on the third three-dimensional position information corresponding to each eyelid point, the third position information of each eyelid point, and the pose information and internal reference information of each image acquisition device.
  • the second construction unit is specifically configured to: determine, based on the pose information and internal parameter information of each image acquisition device, the conversion relationship between the device coordinate systems of every two image acquisition devices; for each human eye image, determine, based on the third three-dimensional position information corresponding to each eyelid point in the human eye image and the conversion relationship between the device coordinate systems of every two image acquisition devices, the fourth position information of the projection point in the human eye image of the space point corresponding to each eyelid point; and determine the reprojection error constraint corresponding to the eyelid points based on the third position information of each eyelid point in each human eye image and the fourth position information of the projection point, in the human eye image where the eyelid point is located, of the space point corresponding to the eyelid point.
  • the device may further include: a first obtaining module configured to identify the human eye in each human eye image collected at the same time from each image acquisition device Before the first position information of the first corner point of the human eye, the second position information of the second corner point, and the third position information of the eyelid point in the image, obtain face images collected by multiple image acquisition devices at the same time;
  • the module is configured to detect the area of the human eye in each face image from each face image collected by each image acquisition device based on the pre-established face feature point detection model, and obtain the human eye image.
  • the pre-established face feature point detection model is: a model trained based on sample images of calibrated face feature points.
  • the apparatus may further include: a second determining module configured to perform the following steps based on the first corner of the eye constraint, the second corner of the eye constraint, and the reprojection error constraint, After constructing the eyelid space curve equation used to characterize the upper eyelid and/or lower eyelid of the human eye, based on the eyelid space curve equation used to characterize the upper eyelid and the eyelid space curve equation used to characterize the lower eyelid, The current opening and closing length of the human eye is determined; the second obtaining module is configured to obtain the historical opening and closing length of the human eye determined within a preset time; the third determining module is configured to be based on the The current opening and closing length and the historical opening and closing length are used to determine the fatigue degree of the person corresponding to the human eye.
  • a second determining module configured to perform the following steps based on the first corner of the eye constraint, the second corner of the eye constraint, and the reprojection error constraint, After constructing the eyelid space curve equation used to characterize the upper eyelid and/or
  • the second determining module is specifically configured to obtain one of a space eyelid curve equation used to characterize the upper eyelid and a space eyelid curve equation used to characterize the lower eyelid through calculation.
  • the maximum distance between the two; the maximum distance is used as the current opening and closing length of the human eye.
  • the third determining module is specifically configured to compare each opening and closing length with a preset length threshold to obtain a comparison result, wherein the opening and closing length includes the The current opening and closing length and the historical opening and closing length; the number of first results indicating that the opening and closing length is less than the preset length threshold is obtained by statistics; based on the total of the current opening and closing length and the historical opening and closing length The number and the first result number determine the fatigue degree of the person corresponding to the human eye.
  • the foregoing device embodiments correspond to the method embodiments and have the same technical effects; for specific details, refer to the method embodiment part, which is not repeated here.
  • the modules of the device in an embodiment may be distributed in the device as described in that embodiment, or may, with corresponding changes, be located in one or more devices different from that embodiment; the modules of the above embodiments may be combined into one module or further divided into multiple sub-modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention disclose a method and apparatus for constructing a human eyelid curve. The method includes: from each human eye image collected by each image acquisition device at the same moment, identifying first position information of a first eye-corner point, second position information of a second eye-corner point, and third position information of eyelid points of the human eye; determining the three-dimensional position information corresponding to the first and second eye-corner points based on the pose information and intrinsic-parameter information of each image acquisition device together with the first and second position information; constructing a first eye-corner constraint and a second eye-corner constraint based on that three-dimensional position information, a preset curve equation, a first value, and a second value; further constructing a reprojection error constraint for the eyelid points in combination with the third position information of each eyelid point; and, based on the three constraints, constructing an eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye, thereby constructing the eyelid curve and obtaining spatial information of the eyelid.

Description

Method and Apparatus for Constructing a Human Eyelid Curve

Technical Field

The present invention relates to the technical field of intelligent transportation, and in particular to a method and apparatus for constructing a human eyelid curve.

Background Art

Fatigued driving is one of the causes of traffic accidents. To reduce, to some extent, accidents caused by fatigued driving, fatigue detection technologies have emerged. A typical fatigue detection process monitors the open/closed state of the driver's eyes and determines from that state whether the driver is driving while fatigued; if fatigued driving is detected, an alarm is raised.

In this process, detecting the open/closed state of the driver's eyes is critical. That detection mostly relies on computing the distance between the upper and lower eyelids, which in turn requires reconstructing the curves of the driver's upper and lower eyelids. How to reconstruct the driver's upper and lower eyelid curves therefore becomes an urgent problem to be solved.
Summary of the Invention

The present invention provides a method and apparatus for constructing a human eyelid curve, so as to construct the eyelid curve of a human eye and obtain spatial information of the eyelid. The specific technical solution is as follows:

In a first aspect, an embodiment of the present invention provides a method for constructing a human eyelid curve, including:

identifying, from each human eye image collected by each image acquisition device at the same moment, first position information of a first eye-corner point, second position information of a second eye-corner point, and third position information of eyelid points of the human eye in that image, where the eyelid points include a first number of upper-eyelid points and/or a second number of lower-eyelid points;

determining first three-dimensional position information corresponding to the first eye-corner point and second three-dimensional position information corresponding to the second eye-corner point, based on the pose information and intrinsic-parameter information of each image acquisition device, the first position information, and the second position information;

constructing a first eye-corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation;

constructing a second eye-corner constraint based on a first value, a second value, and the first eye-corner constraint, where the first value and the second value constrain the value range of the independent variable in the first eye-corner constraint;

constructing a reprojection error constraint corresponding to the eyelid points based on the curve equation, the third position information of each eyelid point, and the pose information and intrinsic-parameter information of each image acquisition device; and

constructing, based on the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, an eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.
Optionally, before the step of constructing the eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid based on the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, the method further includes: constructing an ordering constraint for the eyelid points in each human eye image based on the ordering of those eyelid points; the constructing step then constructs the eyelid space curve equation based on the first eye-corner constraint, the second eye-corner constraint, the reprojection error constraint, and the ordering constraint.

Optionally, the step of constructing the reprojection error constraint based on the curve equation, the third position information of each eyelid point, and the pose information and intrinsic-parameter information of each image acquisition device includes:

constructing, using the curve equation, third three-dimensional position information corresponding to each eyelid point; and

constructing the reprojection error constraint based on the third three-dimensional position information corresponding to each eyelid point, the third position information of each eyelid point, and the pose information and intrinsic-parameter information of each image acquisition device.

Optionally, the latter step includes:

determining the conversion relationship between the device coordinate systems of every two image acquisition devices based on their pose information and intrinsic-parameter information;

for each human eye image, determining fourth position information of the projection point, in that image, of the spatial point corresponding to each eyelid point, based on the third three-dimensional position information corresponding to each eyelid point in that image and the conversion relationships between device coordinate systems; and

determining the reprojection error constraint based on the third position information of each eyelid point in each human eye image and the fourth position information of the projection point of the spatial point corresponding to that eyelid point.

Optionally, before the identifying step, the method further includes:

obtaining face images collected by multiple image acquisition devices at the same moment; and

detecting, based on a pre-established face feature point detection model, the region where the human eye is located in each face image, so as to obtain the human eye images, where the pre-established face feature point detection model is a model trained on sample images with calibrated face feature points.

Optionally, after the step of constructing the eyelid space curve equation, the method further includes:

determining the current opening-closing length of the human eye based on the eyelid space curve equation characterizing the upper eyelid and the eyelid space curve equation characterizing the lower eyelid;

obtaining historical opening-closing lengths of the human eye determined within a preset time period; and

determining the fatigue degree of the person to whom the human eye belongs based on the current and historical opening-closing lengths.

Optionally, the step of determining the current opening-closing length includes: calculating the maximum distance between the eyelid space curve equation characterizing the upper eyelid and the eyelid space curve equation characterizing the lower eyelid, and using that maximum distance as the current opening-closing length of the human eye.

Optionally, the step of determining the fatigue degree includes:

comparing each opening-closing length with a preset length threshold to obtain comparison results, where the opening-closing lengths include the current and historical opening-closing lengths;

counting a first result number of comparison results indicating that an opening-closing length is less than the preset length threshold; and

determining the fatigue degree based on the total number of the current and historical opening-closing lengths and the first result number.
In a second aspect, an embodiment of the present invention provides an apparatus for constructing a human eyelid curve, including:

a recognition module configured to identify, from each human eye image collected by each image acquisition device at the same moment, first position information of a first eye-corner point, second position information of a second eye-corner point, and third position information of eyelid points of the human eye in that image, where the eyelid points include a first number of upper-eyelid points and/or a second number of lower-eyelid points;

a first determining module configured to determine first three-dimensional position information corresponding to the first eye-corner point and second three-dimensional position information corresponding to the second eye-corner point, based on the pose information and intrinsic-parameter information of each image acquisition device, the first position information, and the second position information;

a first construction module configured to construct a first eye-corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation;

a second construction module configured to construct a second eye-corner constraint based on a first value, a second value, and the first eye-corner constraint, where the first value and the second value constrain the value range of the independent variable in the first eye-corner constraint;

a third construction module configured to construct a reprojection error constraint corresponding to the eyelid points based on the curve equation, the third position information of each eyelid point, and the pose information and intrinsic-parameter information of each image acquisition device; and

a fourth construction module configured to construct, based on the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, an eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.

Optionally, the apparatus further includes a fifth construction module configured to construct, before the eyelid space curve equation is constructed, an ordering constraint for the eyelid points in each human eye image based on their ordering; the fourth construction module is then specifically configured to construct the eyelid space curve equation based on the first eye-corner constraint, the second eye-corner constraint, the reprojection error constraint, and the ordering constraint.

Optionally, the third construction module includes: a first construction unit configured to construct, using the curve equation, third three-dimensional position information corresponding to each eyelid point; and a second construction unit configured to construct the reprojection error constraint based on the third three-dimensional position information corresponding to each eyelid point, the third position information of each eyelid point, and the pose information and intrinsic-parameter information of each image acquisition device.

Optionally, the second construction unit is specifically configured to: determine the conversion relationship between the device coordinate systems of every two image acquisition devices based on their pose information and intrinsic-parameter information; for each human eye image, determine fourth position information of the projection point, in that image, of the spatial point corresponding to each eyelid point, based on the third three-dimensional position information corresponding to each eyelid point in that image and the conversion relationships between device coordinate systems; and determine the reprojection error constraint from the third position information of each eyelid point in each human eye image and the fourth position information of the projection point of the spatial point corresponding to that eyelid point.

Optionally, the apparatus further includes: a first obtaining module configured to obtain face images collected by multiple image acquisition devices at the same moment before the identification step; and a detection module configured to detect, based on a pre-established face feature point detection model, the region where the human eye is located in each face image, so as to obtain the human eye images, where the pre-established face feature point detection model is a model trained on sample images with calibrated face feature points.

Optionally, the apparatus further includes: a second determining module configured to determine the current opening-closing length of the human eye, after the eyelid space curve equations have been constructed, based on the eyelid space curve equation characterizing the upper eyelid and the eyelid space curve equation characterizing the lower eyelid; a second obtaining module configured to obtain historical opening-closing lengths of the human eye determined within a preset time period; and a third determining module configured to determine the fatigue degree of the person to whom the human eye belongs based on the current and historical opening-closing lengths.

Optionally, the second determining module is specifically configured to calculate the maximum distance between the two eyelid space curve equations and use that maximum distance as the current opening-closing length of the human eye.

Optionally, the third determining module is specifically configured to: compare each opening-closing length with a preset length threshold to obtain comparison results; count a first result number of comparison results indicating that an opening-closing length is less than the preset length threshold; and determine the fatigue degree from the total number of opening-closing lengths and the first result number.
As can be seen from the above, the method and apparatus for constructing a human eyelid curve provided by the embodiments of the present invention identify, from each human eye image collected by each image acquisition device at the same moment, the position information of the two eye-corner points and of the eyelid points; determine the three-dimensional positions of the two eye-corner points based on the pose and intrinsic-parameter information of each device; construct the first and second eye-corner constraints from those three-dimensional positions, the preset curve equation, and the first and second values; construct the reprojection error constraint from the curve equation, the eyelid-point positions, and the device pose and intrinsic-parameter information; and, based on these three constraints, construct the eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.

By applying the embodiments of the present invention, the three-dimensional positions of the eye corners — points with clear semantic features — can be obtained accurately by monitoring the same person's eyes with multiple image acquisition devices. With the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint as multiple constraint conditions, a highly accurate eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid can be constructed, yielding spatial information of the eyelid and providing a basis for subsequently determining the fatigue degree of the corresponding person. Of course, implementing any product or method of the present invention does not necessarily require achieving all of the above advantages simultaneously.

The innovations of the embodiments of the present invention include:

1. Monitoring the same person's eyes with multiple image acquisition devices yields human eye images from which the three-dimensional positions of the semantically distinctive eye corners can be obtained accurately; combining the eye-corner constraints with the reprojection error constraint then yields a highly accurate eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid, i.e., the spatial information of the eyelid, as a basis for subsequently determining the fatigue degree of the corresponding person.

2. Considering the ordering of the eyelid points in each human eye image, an ordering constraint is constructed for them; combining the first eye-corner constraint, the second eye-corner constraint, the reprojection error constraint, and the ordering constraint yields an even more accurate eyelid space curve equation.

3. From the eyelid space curve equations characterizing the upper and lower eyelids, the current opening-closing length of the human eye can be determined conveniently and precisely; combined with the time dimension — the historical opening-closing lengths determined within a preset time period — whether the corresponding person is driving while fatigued can be monitored more flexibly and accurately.
Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed for the description are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1A is a schematic flowchart of a method for constructing a human eyelid curve according to an embodiment of the present invention;

Fig. 1B is a schematic diagram of obtained human eye images;

Fig. 2 is another schematic flowchart of the method for constructing a human eyelid curve according to an embodiment of the present invention;

Fig. 3 is a further schematic flowchart of the method for constructing a human eyelid curve according to an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of an apparatus for constructing a human eyelid curve according to an embodiment of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

It should be noted that the terms "comprise" and "have" and any variations thereof in the embodiments and drawings are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes unlisted steps or units, or optionally includes other steps or units inherent to the process, method, product, or device.

The present invention provides a method and apparatus for constructing a human eyelid curve, so as to construct the eyelid curve of a human eye and obtain spatial information of the eyelid. The embodiments of the present invention are described in detail below.
Fig. 1A is a schematic flowchart of a method for constructing a human eyelid curve according to an embodiment of the present invention. The method may include the following steps:

S101: from each human eye image collected by each image acquisition device at the same moment, identify first position information of a first eye-corner point, second position information of a second eye-corner point, and third position information of eyelid points of the human eye in that image.

The eyelid points include a first number of upper-eyelid points and/or a second number of lower-eyelid points.

The method may be applied to any type of electronic device, which may be a server or a terminal device. The electronic device is connected to multiple image acquisition devices and can obtain the images they collect and/or the recognition results those devices extract from their own images. The image acquisition regions of the multiple devices overlap, i.e., they can monitor the same target simultaneously. An image acquisition device may be a camera or the like.

In one implementation, the multiple image acquisition devices simultaneously collect images containing the same target, namely a face. Each device may then directly identify and crop the region of the eye of that face from its own images, obtaining human eye images, and send the human eye images collected at the same moment to the electronic device. The electronic device obtains the human eye images collected by each device at the same moment and, based on a human eyelid recognition model, identifies from each image the first position information of the first eye-corner point, the second position information of the second eye-corner point, and the third position information of the eyelid points.

It should be understood that there may be a time offset between the image-collection cycles of different devices; "human eye images collected at the same moment" may refer to images collected by the multiple devices within the same collection cycle, all depicting the same human eye.

The electronic device can identify the eyelid location in each human eye image with the eyelid recognition model; the eyelid points with clear semantic features are the left and right eye-corner points, so the model can directly output the first position information of the first eye-corner point and the second position information of the second eye-corner point. The electronic device then obtains the preset first number of upper-eyelid points to identify and/or the preset second number of lower-eyelid points, and takes equal-division points (or unequal-division points) on the identified upper eyelid to obtain the first number of upper-eyelid points and their third position information, and/or on the identified lower eyelid to obtain the second number of lower-eyelid points and their third position information.

An upper-eyelid point is a feature point on the upper eyelid of the human eye in a human eye image; a lower-eyelid point is a feature point on the lower eyelid. The first and second numbers are preset and may or may not be equal. Each image acquisition device may correspond to its own pair of first and second numbers, and these numbers may differ between devices. The larger the first and second numbers, the more accurate the resulting eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid.
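The equal-division sampling of eyelid points described above can be sketched as follows — a minimal illustration, assuming the detected eyelid is available as a 2-D pixel polyline (the function name and input format are hypothetical, not prescribed by the patent):

```python
import numpy as np

def sample_eyelid_points(polyline, count):
    """Sample `count` points at equal arc-length intervals along an
    eyelid polyline given as an (N, 2) array of pixel coordinates."""
    polyline = np.asarray(polyline, dtype=float)
    seg = np.diff(polyline, axis=0)
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # cumulative arc length
    # Interior equal-division points; the two eye corners themselves
    # are handled separately as the first and second corner points.
    targets = np.linspace(0.0, cum[-1], count + 2)[1:-1]
    xs = np.interp(targets, cum, polyline[:, 0])
    ys = np.interp(targets, cum, polyline[:, 1])
    return np.stack([xs, ys], axis=1)
```

Unequal-division sampling would simply replace the `linspace` targets with any other monotone set of arc-length values.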
The human eyelid recognition model may be a neural network model trained on first sample images with calibrated eyelids, or one of the current algorithms for locating facial parts, such as geometry-feature-based algorithms or local feature analysis algorithms. The neural network model may be a convolutional neural network model, a Discriminative Locality Alignment (DLA) model, or the like.

In one case, this embodiment may also include the process of training the eyelid recognition model. Specifically: obtain an initial eyelid recognition model comprising a feature extraction layer and a feature classification layer; obtain first sample images, each containing a human eye, and their calibration information, which includes calibrated position information of the regions of the upper and lower eyelids (and may include that of the eye-corner regions), calibrated manually or by a dedicated calibration program. Each first sample image is fed into the feature extraction layer of the initial model to obtain image features of the upper- and lower-eyelid regions; those features are fed into the feature classification layer to obtain the current position information of the upper- and lower-eyelid regions. The current position information of each region is matched against its calibrated position information: if matching succeeds, the trained eyelid recognition model comprising both layers is obtained; if not, the parameters of both layers are adjusted and the process returns to the feature-extraction step, repeating until matching succeeds.

The matching may proceed as follows: using a preset loss function, compute a first loss value between the current and calibrated positions of each upper-eyelid region and a second loss value for each lower-eyelid region; if the first loss value is less than a first preset loss threshold and the second loss value is less than a second preset loss threshold, matching succeeds and the initial model is deemed converged, i.e., training is complete and the eyelid recognition model is obtained; otherwise, matching fails.

Each first sample image corresponds both to the current position information of its upper-eyelid region and to the calibrated position information of that region in its calibration information, so the two kinds of position information correspond to each other.

After training, the electronic device can use the eyelid recognition model to identify in each human eye image the first position information of the first eye-corner point, the second position information of the second eye-corner point, and the third position information of the eyelid points.

In one case, to locate the eye-corner regions in an image more accurately, the calibration information of each first sample image may also include calibrated position information of the eye-corner regions, so that the model trained on the first sample images and this calibration information can accurately identify the eye-corner locations.
In another implementation, before S101, the method may further include:

obtaining face images collected by multiple image acquisition devices at the same moment; and detecting, based on a pre-established face feature point detection model, the eye region in each face image collected by each device, thereby obtaining the human eye images. The pre-established face feature point detection model is a model trained on sample images with calibrated face feature points.

In one case, the multiple image acquisition devices may monitor vehicles travelling on a road and obtain face images containing the same face, each device sending its collected face images directly to the electronic device. After obtaining the face images collected at the same moment, the electronic device detects the eye region in each face image with the pre-established face feature point detection model and crops that region out of the face image, obtaining human eye images containing only the eye region. This reduces useless information in the images and lightens the computational burden of the subsequent eyelid-curve construction flow.

The pre-established face feature point detection model is a neural network model trained on sample images with calibrated face feature points; its training process can follow that of the eyelid recognition model described above and is not repeated here. Its sample images contain faces, and the corresponding calibration information contains calibrated positions of face feature points. These may include the eye-corner points (from which the eye region can be determined), feature points of the nose (e.g., nose wings and nose bridge), and feature points of the lips (e.g., along the lip-line edge). With this model, all parts of the face — lips, nose, eyebrows, eyes, and so on — can be identified in a face image.

In another case, when the devices monitor vehicles on a road, the collected images may contain other information besides the face, such as the vehicle's windows or front. The electronic device may then detect the face region in such images with a preset face detection model, crop out a face image containing only that region, and proceed with the subsequent eyelid-curve construction flow provided by the embodiments of the present invention. The multiple image acquisition devices may be at least two devices.

Fig. 1B is a schematic diagram of obtained human eye images: with three image acquisition devices, the electronic device obtains three frames of human eye images, one collected by each device, as shown in Fig. 1B.
S102: determine first three-dimensional position information corresponding to the first eye-corner point and second three-dimensional position information corresponding to the second eye-corner point, based on the pose information and intrinsic-parameter information of each image acquisition device, the first position information, and the second position information.

In this step, the first three-dimensional position information of the eye-corner spatial point corresponding to the first eye-corner point is determined from each device's pose information, intrinsic-parameter information, and the first position information; the second three-dimensional position information of the eye-corner spatial point corresponding to the second eye-corner point is determined analogously from the second position information.

In one case, the first and second three-dimensional position information may be expressed in the device coordinate system of a designated image acquisition device o_0 among the multiple devices, where o_0 may be any one of them. For example, with three devices 1, 2, and 3, any of the three may serve as the designated device o_0, and the three-dimensional positions are expressed in its device coordinate system. In another case, the first and second three-dimensional position information may be expressed in the world coordinate system.

Taking the device coordinate system of o_0 as an example, the process of determining the first three-dimensional position information is as follows. The electronic device first obtains the pose information of each device at the time its human eye image was collected, and each device's intrinsic-parameter information, which may include the length of a pixel along the horizontal and vertical axes, the focal length, the position of the principal point (the intersection of the optical axis and the image plane), and the scale factor; the pose information may include the device's position and attitude. From the pose and intrinsic-parameter information, the pairwise rotation-translation relationships between the devices — i.e., the conversion relationships between the device coordinate systems of every two devices — can be determined, including the rotation-translation relationship between each other device o_q and o_0, where q denotes the q-th other device, q ∈ [1, g−1], and g is the number of devices, which may equal the number n of human eye images. Based on these pairwise relationships, feature-point matching is performed across the human eye images collected by the devices to determine matched feature-point pairs, including the pair of first eye-corner points across the images; then, from the positions of those pairs in the images and the conversion relationship between each device coordinate system and the world coordinate system, the three-dimensional positions corresponding to the feature-point pairs are determined, including the first three-dimensional position information of the first eye-corner point.

The process of determining the second three-dimensional position information from each device's pose information, intrinsic-parameter information, and the second position information is similar to the above and is not repeated.
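The recovery of an eye corner's 3-D position from its matched image points can be illustrated with a standard linear (DLT) two-view triangulation — a common technique given known device poses and intrinsics; the patent does not prescribe this exact algorithm, so treat the sketch as an assumption:

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one matched image-point pair.

    P1, P2: 3x4 projection matrices (intrinsics @ [R | t]) of two devices.
    uv1, uv2: pixel coordinates of the same eye-corner point in each view.
    Returns the 3-D point in the reference coordinate system of P1/P2.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear equations A X = 0 on the
    # homogeneous point X; the solution is the null vector of A.
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # singular vector of smallest singular value
    return X[:3] / X[3]             # dehomogenize
```

With more than two devices, the same construction simply stacks two rows per view before the SVD.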
In one case, the poses of the multiple image acquisition devices are fixed, and each device's pose information and intrinsic-parameter information can be calibrated in advance with a calibration algorithm, e.g., Zhang Zhengyou's calibration method.

In another case, the poses are not fixed; each device's intrinsic parameters and initial pose are calibrated in advance with a calibration algorithm (e.g., Zhang Zhengyou's method), and each subsequent pose is determined from the initial pose together with the IMU (Inertial Measurement Unit) data collected by the IMU associated with the device, the IMU being a component that measures the pose change of its device.
S103: construct a first eye-corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and the preset curve equation.

It should be understood that curve equations can be preset separately for the upper and lower eyelids; the processes of constructing the eyelid space curve equation characterizing the upper eyelid and that characterizing the lower eyelid are similar, so the upper eyelid is taken as the example below.

In this step, for the upper eyelid of the human eye, the preset curve equation can be written as formula (1):

x = a_1 t^2 + b_1 t + c_1,  y = a_2 t^2 + b_2 t + c_2,  z = a_3 t^2 + b_3 t + c_3,    (1)

where a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, and c_3 are the coefficients to be solved for, t is the independent variable, and (x, y, z) is the spatial coordinate of a point on the curve — i.e., the three-dimensional position of a point on the upper eyelid of the human eye.

Substituting the first and second three-dimensional position information into the preset curve equation constructs the first eye-corner constraint, which may be a constraint in the device coordinate system of o_0. It states that both eye-corner points lie on the curve, formula (2):

(x(t), y(t), z(t)) = (x_0, y_0, z_0) for some value of t, and (x(t), y(t), z(t)) = (x_1, y_1, z_1) for some other value of t,    (2)

where (x_0, y_0, z_0) is the first three-dimensional position information and (x_1, y_1, z_1) is the second three-dimensional position information.

Both the first and the second eye-corner point belong to the upper eyelid as well as to the lower eyelid in the human eye image, and the eye-corner spatial points corresponding to them lie on both the upper and the lower eyelid of the human eye. The first eye-corner constraint therefore simultaneously constrains the upper-eyelid curve and the lower-eyelid curve — i.e., the eyelid space curve equations characterizing the upper and lower eyelids mentioned later. Likewise, the second eye-corner constraint mentioned later also constrains both curves.
S104: construct a second eye-corner constraint based on a first value, a second value, and the first eye-corner constraint.

The first value and the second value constrain the value range of the independent variable in the first eye-corner constraint.

In this step, the first and second values are substituted into the first eye-corner constraint to construct the second eye-corner constraint, which can be written as formula (3):

a_1 t_{01}^2 + b_1 t_{01} + c_1 = x_0,  a_2 t_{01}^2 + b_2 t_{01} + c_2 = y_0,  a_3 t_{01}^2 + b_3 t_{01} + c_3 = z_0,
a_1 t_{02}^2 + b_1 t_{02} + c_1 = x_1,  a_2 t_{02}^2 + b_2 t_{02} + c_2 = y_1,  a_3 t_{02}^2 + b_3 t_{02} + c_3 = z_1,    (3)

where t_{01} is the value of the independent variable t at the first eye-corner point, i.e., the first value, and t_{02} is its value at the second eye-corner point, i.e., the second value. t_{01} and t_{02} are unequal; in one case, t_{01} may be smaller than t_{02}.

To ease the computation of the subsequent eyelid-curve construction flow, in one implementation t_{01} may be set to 0 and t_{02} to 1.

Correspondingly, the second eye-corner constraint becomes formula (4):

c_1 = x_0,  c_2 = y_0,  c_3 = z_0,
a_1 + b_1 + c_1 = x_1,  a_2 + b_2 + c_2 = y_1,  a_3 + b_3 + c_3 = z_1,    (4)

which can be rearranged into formula (5):

c_1 = x_0,  c_2 = y_0,  c_3 = z_0,
b_1 = x_1 − x_0 − a_1,  b_2 = y_1 − y_0 − a_2,  b_3 = z_1 − z_0 − a_3.    (5)

Fixing t_{01} = 0 and t_{02} = 1 reduces the unknowns to be solved from nine — a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, and c_3 — to three: a_1, a_2, and a_3, which reduces the computational load of the subsequent eyelid-curve construction flow to some extent. With t_{01} = 0 and t_{02} = 1, the second eye-corner constraint may also be called the 0-1 constraint.
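A sketch of the parameterized eyelid curve under the 0-1 constraint, consistent with the coefficient count above (nine coefficients reduced to a_1, a_2, a_3 once t = 0 and t = 1 are pinned to the two corner points); the quadratic-per-axis form is an assumption inferred from the surrounding text:

```python
import numpy as np

def eyelid_curve(a, corner0, corner1, t):
    """Quadratic space curve pinned to both eye corners.

    a:        free coefficients (a_1, a_2, a_3), one per axis.
    corner0:  3-D position of the first eye corner (curve value at t = 0).
    corner1:  3-D position of the second eye corner (curve value at t = 1).
    t:        array of parameter values in [0, 1].

    With t = 0 and t = 1 pinned to the corners, c = corner0 and
    b = corner1 - corner0 - a, so only the three a-coefficients remain.
    """
    a = np.asarray(a, dtype=float)
    c0 = np.asarray(corner0, dtype=float)
    c1 = np.asarray(corner1, dtype=float)
    b = c1 - c0 - a                        # formula (5)
    t = np.atleast_1d(np.asarray(t, dtype=float))[:, None]
    return a * t**2 + b * t + c0           # (len(t), 3) points on the curve
```

Whatever value `a` takes, the curve passes through both corners, which is exactly what the corner constraints enforce.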
S105: construct a reprojection error constraint corresponding to the eyelid points based on the curve equation, the third position information of each eyelid point, and the pose information and intrinsic-parameter information of each image acquisition device.

In this embodiment, the three-dimensional position of each eyelid point can be constructed from the curve equation; then, a reprojection error constraint for the upper-eyelid points is constructed from their three-dimensional positions, their third position information, and each device's pose and intrinsic-parameter information, and/or a reprojection error constraint for the lower-eyelid points is constructed analogously. The upper-eyelid points include those identified in every human eye image, and likewise for the lower-eyelid points. In one implementation, S105 may include:

constructing, using the curve equation, third three-dimensional position information corresponding to each eyelid point; and

constructing the reprojection error constraint based on the third three-dimensional position information corresponding to each eyelid point, the third position information of each eyelid point, and each device's pose and intrinsic-parameter information.

Taking the upper eyelid as the example, let t_{ji} denote the i-th upper-eyelid point in the j-th human eye image, where i is a positive integer in [1, M_j], M_j is the first number of upper-eyelid points in the j-th image, j is a positive integer in [1, n], and n is the number of human eye images. The third three-dimensional position information constructed for each upper-eyelid point from the curve equation can be written as formula (6):

(x_{ji}, y_{ji}, z_{ji}) = (a_1 t_{ji}^2 + b_1 t_{ji} + c_1,  a_2 t_{ji}^2 + b_2 t_{ji} + c_2,  a_3 t_{ji}^2 + b_3 t_{ji} + c_3),    (6)

where (x_{ji}, y_{ji}, z_{ji}) is the third three-dimensional position information corresponding to upper-eyelid point t_{ji}.

If the first and second three-dimensional position information are expressed in the device coordinate system of the designated device o_0, the third three-dimensional position information of t_{ji} is likewise expressed in o_0's device coordinate system.

In one implementation, the step of constructing the reprojection error constraint from the above information includes:

determining the conversion relationship between the device coordinate systems of every two image acquisition devices based on their pose and intrinsic-parameter information;

for each human eye image, determining fourth position information of the projection point, in that image, of the spatial point corresponding to each eyelid point, based on the third three-dimensional position information of each eyelid point in that image and the conversion relationships between device coordinate systems; and

determining the reprojection error constraint based on the third position information of each eyelid point in each image and the fourth position information of the projection point of the spatial point corresponding to that eyelid point.

In this implementation, the conversion relationship between every two device coordinate systems is determined from each device's pose and intrinsic-parameter information; through such a relationship, the position of a preset spatial point in one device's coordinate system can be converted into the other device's coordinate system.

Again taking the upper eyelid as the example: once the conversion relationships are determined, for each human eye image, the fourth position information of the projection point of each upper-eyelid point's spatial point in that image is determined from the third three-dimensional position information and the conversion relationships.

Specifically: for the designated device o_0, the third three-dimensional positions are already in o_0's device coordinate system, so a mapping between o_0's device coordinate system and its image coordinate system can be constructed directly from o_0's intrinsic parameters (the first mapping); with it, the third three-dimensional position of each upper-eyelid point in o_0's human eye image is converted into o_0's image coordinate system, yielding the fourth position information of the projection points of those points' spatial points in o_0's image.

For each other device o_q, the conversion relationship between o_q's and o_0's device coordinate systems is first selected from the pairwise relationships (the relationship to be used); with it, the third three-dimensional position of each upper-eyelid point in o_q's human eye image is converted from o_0's device coordinate system into o_q's, yielding fourth three-dimensional position information for each such point. Then, from o_q's intrinsic parameters, a mapping between o_q's device and image coordinate systems is constructed (the mapping corresponding to o_q), with which the fourth three-dimensional positions are converted into o_q's image coordinate system, yielding the fourth position information of the projection points of those spatial points in o_q's image.

Subsequently, the reprojection error constraint for the upper-eyelid points can be constructed from the fourth position information of the projection point of each upper-eyelid point's spatial point in its image and the third position information of that eyelid point. It can be written as formula (7):

min  Σ_{j=1}^{n} Σ_{i=1}^{M_j} ‖ (u_{ji}, v_{ji}) − (u′_{ji}, v′_{ji}) ‖²,    (7)

where M_j is the first number corresponding to the j-th human eye image, (u_{ji}, v_{ji}) is the third position information of the i-th upper-eyelid point in the j-th image, and (u′_{ji}, v′_{ji}) is the fourth position information of the projection point, in the j-th image, of the spatial point corresponding to that eyelid point.
S106: construct, based on the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, an eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.

In this step, the electronic device may construct the eyelid space curve equation characterizing the upper eyelid from the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint of the upper-eyelid points, obtaining the space curve characterizing the upper eyelid; and/or construct the eyelid space curve equation characterizing the lower eyelid from the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint of the lower-eyelid points, obtaining the space curve characterizing the lower eyelid.

Subsequently, the upper and lower eyelids of the human eye can be drawn from the space curves characterizing them.

Taking the upper eyelid as the example: formulas (2), (3), and (7) are solved jointly to obtain the coefficients a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, and c_3 of the upper-eyelid space curve equation. In one case, for convenience of computation, formulas (2), (5), and (7) may be solved jointly to obtain only a_1, a_2, and a_3.

In jointly solving these formulas for the coefficients, the problem can be transformed into a nonlinear least-squares optimization problem, with the constraint condition that the reprojection error of the upper-eyelid points is not greater than a preset error threshold; solving the joint formulas under this condition yields the coefficients and thus the eyelid space curve equation characterizing the upper eyelid.
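The joint solve can be posed as a nonlinear least-squares problem over the free coefficients and the per-point parameters t_ji, minimizing the reprojection residuals of formula (7). The sketch below uses SciPy's `least_squares` with a pinhole `project` helper and the quadratic pinned-corner curve; the interfaces and the quadratic form are illustrative assumptions, not the patent's prescribed solver:

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project 3-D points X (N, 3) with a 3x4 projection matrix P -> (N, 2) pixels."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    uvw = Xh @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def fit_eyelid(corner0, corner1, observations, n_points):
    """Fit the free coefficients (a_1, a_2, a_3) and the per-point
    parameters t_i by minimizing the reprojection error over all views.

    observations: list of (P, uv) pairs, one per device, where uv is the
    (n_points, 2) array of detected eyelid-point pixels in that view.
    """
    c0, c1 = np.asarray(corner0, float), np.asarray(corner1, float)

    def curve(a, t):
        b = c1 - c0 - a                 # 0-1 constraint, formula (5)
        t = t[:, None]
        return a * t**2 + b * t + c0

    def residuals(params):
        a, t = params[:3], params[3:]
        X = curve(a, t)
        return np.concatenate([(project(P, X) - uv).ravel()
                               for P, uv in observations])

    t0 = np.linspace(0.0, 1.0, n_points + 2)[1:-1]   # ordered initial guess
    x0 = np.concatenate([np.zeros(3), t0])
    sol = least_squares(residuals, x0)
    return sol.x[:3], sol.x[3:]
```

An ordering constraint (formulas (8)-(9) of the later embodiment) could be enforced here by bounding or re-parameterizing the t values.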
The process of solving for the eyelid space curve equation characterizing the lower eyelid can follow the process for the upper eyelid described above and is not repeated here.

By applying this embodiment of the present invention, the three-dimensional positions of the semantically distinctive eye corners are obtained accurately by monitoring the same person's eyes with multiple image acquisition devices; then, the first eye-corner constraint is constructed from the first and second three-dimensional position information and the preset curve equation, the second eye-corner constraint from the first and second preset values and the first eye-corner constraint, and the reprojection error constraint from the curve equation, the eyelid points' third position information, and each device's pose and intrinsic-parameter information. With these multiple constraint conditions, a highly accurate eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid is constructed, yielding spatial information of the eyelid and providing a basis for subsequently determining the fatigue degree of the corresponding person.
In another embodiment of the present invention, as shown in Fig. 2, the method may include the following steps:

S201: from each human eye image collected by each image acquisition device at the same moment, identify first position information of a first eye-corner point, second position information of a second eye-corner point, and third position information of eyelid points of the human eye in that image, where the eyelid points may include a first number of upper-eyelid points and/or a second number of lower-eyelid points.

S202: determine the first and second three-dimensional position information corresponding to the two eye-corner points, based on each device's pose and intrinsic-parameter information and the first and second position information.

S203: construct a first eye-corner constraint based on the first and second three-dimensional position information and the preset curve equation.

S204: construct a second eye-corner constraint based on a first value, a second value, and the first eye-corner constraint, where the first and second values constrain the value range of the independent variable in the first eye-corner constraint.

S205: construct a reprojection error constraint corresponding to the eyelid points based on the curve equation, the third position information of each eyelid point, and each device's pose and intrinsic-parameter information.

S206: construct an ordering constraint for the eyelid points in each human eye image based on their ordering.

S207: construct, based on the first eye-corner constraint, the second eye-corner constraint, the reprojection error constraint, and the ordering constraint, an eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.

S201 is the same as S101 shown in Fig. 1, S202 the same as S102, S203 the same as S103, S204 the same as S104, and S205 the same as S105, and they are not repeated here.

It should be understood that the eyelid points identified in a human eye image are ordered: the identified upper-eyelid points are ordered among themselves, as are the identified lower-eyelid points. In this embodiment, to determine a more accurate eyelid space curve equation, on top of constructing the equation from the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, the constructed ordering constraint is combined as well, so that all the constraints jointly construct the eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid.

Taking the upper eyelid as the example, the ordering constraint among the upper-eyelid points of the j-th human eye image can be written as formula (8):

t_{01} < t_{j1} < t_{j2} < … < t_{jM_j} < t_{02}.    (8)

Subsequently, formulas (2), (3), (7), and (8) are solved jointly to obtain the coefficients of the eyelid space curve equation characterizing the upper eyelid.

When t_{01} = 0 and t_{02} = 1, formula (8) becomes formula (9):

0 < t_{j1} < t_{j2} < … < t_{jM_j} < 1,    (9)

and formulas (2), (5), (7), and (9) are solved jointly to obtain the coefficients of the eyelid space curve equation characterizing the upper eyelid.
In another embodiment of the present invention, as shown in Fig. 3, the method may include the following steps:

S301-S306: the same as S101-S106 shown in Fig. 1 — identifying the eye-corner and eyelid points (where the eyelid points may include a first number of upper-eyelid points and/or a second number of lower-eyelid points), determining the corner points' first and second three-dimensional position information, constructing the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, and constructing the eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid — not repeated here.

S307: determine the current opening-closing length of the human eye based on the eyelid space curve equation characterizing the upper eyelid and the eyelid space curve equation characterizing the lower eyelid.

S308: obtain historical opening-closing lengths of the human eye determined within a preset time period.

S309: determine the fatigue degree of the person to whom the human eye belongs based on the current and historical opening-closing lengths.

In this embodiment, after the eyelid space curve equations characterizing the upper and lower eyelids are constructed, the current opening-closing length of the human eye can be calculated from them; then, combined with time-dimension information — the historical opening-closing lengths of the same eye — the fatigue degree of the corresponding person is determined.

To ensure the timeliness of the determined fatigue degree, the electronic device may obtain the human eye images collected by the multiple devices at the current moment, i.e., the human eye images collected by each device at the current moment.

It should be understood that the historical opening-closing lengths may be stored locally on the electronic device or in a connected storage device; after calculating the current opening-closing length, the electronic device obtains the historical lengths from the corresponding storage location. A historical opening-closing length is an opening-closing length of the same eye determined from human eye images collected by the multiple devices before the current images.

In this embodiment of the present invention, the eyelid space curve equations characterizing the upper and lower eyelids yield a more accurate opening-closing length of the human eye — the physical length of the eye opening — and, combined with the time dimension, the fatigue degree of the corresponding person can be monitored more flexibly and accurately.
In an optional implementation of the present invention, the step of determining the current opening-closing length from the eyelid space curve equations characterizing the upper and lower eyelids may include:

calculating the maximum distance between the eyelid space curve equation characterizing the upper eyelid and the eyelid space curve equation characterizing the lower eyelid; and

using that maximum distance as the current opening-closing length of the human eye.

In this implementation, the electronic device may, based on the two eyelid space curve equations, select from the upper- and lower-eyelid space curves point pairs whose corresponding horizontal-axis coordinate values are the same and whose corresponding vertical-axis coordinate values correspond; for each pair, compute the distance between the two points; take the pair with the largest computed distance as the target pair; and use the distance between the target pair as the maximum distance, i.e., the current opening-closing length.

Alternatively, the bisection point of the upper-eyelid space curve — its center point — may be selected as a first center point, and the bisection point of the lower-eyelid space curve as a second center point; the distance between the first and second center points is then used as the maximum distance, i.e., the current opening-closing length; and so on.
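A minimal sketch of the maximum-distance computation between the two fitted curves, sampling corresponding parameter values along each; the callable-curve interface and the sample count are illustrative assumptions:

```python
import numpy as np

def opening_length(upper_curve, lower_curve, samples=101):
    """Maximum distance between the upper- and lower-eyelid space curves,
    taken over corresponding parameter values t in [0, 1].

    upper_curve, lower_curve: callables mapping an array of t values in
    [0, 1] to (N, 3) arrays of 3-D points on each eyelid curve.
    """
    t = np.linspace(0.0, 1.0, samples)
    gap = np.linalg.norm(upper_curve(t) - lower_curve(t), axis=1)
    return float(gap.max())
```

The center-point variant described above corresponds to evaluating both curves only at t = 0.5 and taking that single distance.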
In an optional implementation of the present invention, the step of determining the fatigue degree of the corresponding person from the current and historical opening-closing lengths may include:

comparing each opening-closing length with a preset length threshold to obtain comparison results, where the opening-closing lengths include the current opening-closing length and the historical opening-closing lengths;

counting a first result number of comparison results indicating that an opening-closing length is less than the preset length threshold; and

determining the fatigue degree based on the total number of the current and historical opening-closing lengths and the first result number.

In this implementation, the electronic device obtains the preset length threshold, compares each opening-closing length with it to determine which is larger, counts the number of comparison results indicating a length below the threshold as the first result number, and then determines the fatigue degree from the total number and the first result number. This determination may proceed as follows: compute the ratio of the first result number to the total number; if the ratio is greater than a preset ratio, the person is determined to be fatigued, otherwise not fatigued. Alternatively, compute the difference between the total number and the first result number; if the difference is smaller than a preset difference, the person is determined to be fatigued, otherwise not fatigued.

In another implementation, after counting the first result number, it may be compared directly with a preset number: if the first result number is greater than the preset number, the person is determined to be fatigued; otherwise, not fatigued.
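The counting rule above amounts to a PERCLOS-style closure ratio; a minimal sketch, with the ratio threshold as a hypothetical default:

```python
def fatigue_degree(current_length, history_lengths, length_threshold,
                   ratio_threshold=0.4):
    """Classify fatigue from the fraction of eye-opening measurements
    that fall below a preset length threshold.

    The measurement set is the current opening-closing length plus the
    historical lengths determined within the preset time period.
    """
    lengths = [current_length] + list(history_lengths)
    closed = sum(1 for d in lengths if d < length_threshold)  # first result number
    return "fatigued" if closed / len(lengths) > ratio_threshold else "not fatigued"
```

The difference-based and fixed-count variants described above differ only in the final comparison, not in the counting step.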
In the embodiments of the present invention, to reduce to some extent accidents caused by fatigued driving, alarm information may be generated when the person corresponding to the human eye is determined to be fatigued, prompting the user that the person is in a fatigued state, so that corresponding measures can be taken against this situation.
Corresponding to the above method embodiments, an embodiment of the present invention provides an apparatus for constructing a human eyelid curve, which, as shown in Fig. 4, may include:

a recognition module 410 configured to identify, from each human eye image collected by each image acquisition device at the same moment, first position information of a first eye-corner point, second position information of a second eye-corner point, and third position information of eyelid points of the human eye in that image, where the eyelid points include a first number of upper-eyelid points and/or a second number of lower-eyelid points;

a first determining module 420 configured to determine the first and second three-dimensional position information corresponding to the two eye-corner points, based on each device's pose and intrinsic-parameter information and the first and second position information;

a first construction module 430 configured to construct a first eye-corner constraint based on the first and second three-dimensional position information and a preset curve equation;

a second construction module 440 configured to construct a second eye-corner constraint based on a first value, a second value, and the first eye-corner constraint, where the first and second values constrain the value range of the independent variable in the first eye-corner constraint;

a third construction module 450 configured to construct a reprojection error constraint corresponding to the eyelid points based on the curve equation, the third position information of each eyelid point, and each device's pose and intrinsic-parameter information; and

a fourth construction module 460 configured to construct, based on the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, an eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.
By applying this embodiment of the present invention, the three-dimensional positions of the semantically distinctive eye corners are obtained accurately by monitoring the same person's eyes with multiple image acquisition devices; the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint are then constructed as described above, and with these multiple constraint conditions a highly accurate eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid is constructed, yielding spatial information of the eyelid and providing a basis for subsequently determining the fatigue degree of the corresponding person.
In another embodiment of the present invention, the apparatus may further include a fifth construction module configured to construct, before the eyelid space curve equation is constructed from the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, an ordering constraint for the eyelid points in each human eye image based on their ordering; the fourth construction module 460 is then specifically configured to construct the eyelid space curve equation based on the first eye-corner constraint, the second eye-corner constraint, the reprojection error constraint, and the ordering constraint.

In another embodiment, the third construction module 450 includes: a first construction unit configured to construct, using the curve equation, the third three-dimensional position information corresponding to each eyelid point; and a second construction unit configured to construct the reprojection error constraint based on the third three-dimensional position information corresponding to each eyelid point, the third position information of each eyelid point, and each device's pose and intrinsic-parameter information.

In another embodiment, the second construction unit is specifically configured to: determine the conversion relationship between the device coordinate systems of every two image acquisition devices based on their pose and intrinsic-parameter information; for each human eye image, determine the fourth position information of the projection point, in that image, of the spatial point corresponding to each eyelid point, based on the third three-dimensional position information of each eyelid point in that image and the conversion relationships; and determine the reprojection error constraint from the third position information of each eyelid point in each image and the fourth position information of the corresponding projection point.

In another embodiment, the apparatus may further include: a first obtaining module configured to obtain face images collected by multiple image acquisition devices at the same moment before the identification is performed; and a detection module configured to detect, based on a pre-established face feature point detection model, the eye region in each face image collected by each device, so as to obtain the human eye images, where the pre-established face feature point detection model is a model trained on sample images with calibrated face feature points.

In another embodiment, the apparatus may further include: a second determining module configured to determine the current opening-closing length of the human eye, after the eyelid space curve equations have been constructed, based on the eyelid space curve equation characterizing the upper eyelid and the one characterizing the lower eyelid; a second obtaining module configured to obtain the historical opening-closing lengths determined within a preset time period; and a third determining module configured to determine the fatigue degree of the corresponding person based on the current and historical opening-closing lengths.

In another embodiment, the second determining module is specifically configured to calculate the maximum distance between the two eyelid space curve equations and use it as the current opening-closing length of the human eye.

In another embodiment, the third determining module is specifically configured to: compare each opening-closing length with a preset length threshold to obtain comparison results, where the opening-closing lengths include the current and historical opening-closing lengths; count a first result number of results indicating a length below the threshold; and determine the fatigue degree from the total number of opening-closing lengths and the first result number.
The above apparatus embodiments correspond to the method embodiments and have the same technical effects; the apparatus embodiments are derived from the method embodiments, and specific details can be found in the method embodiment part, which are not repeated here.

Those of ordinary skill in the art will understand that the drawings are only schematic diagrams of one embodiment, and that the modules or flows in the drawings are not necessarily required for implementing the present invention.

Those of ordinary skill in the art will also understand that the modules of the apparatus in an embodiment may be distributed in the apparatus as described in that embodiment, or may, with corresponding changes, be located in one or more apparatuses different from that embodiment. The modules of the above embodiments may be combined into one module or further divided into multiple sub-modules.

Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A method for constructing a human eyelid curve, comprising:
    identifying, from each human eye image collected by each image acquisition device at the same moment, first position information of a first eye-corner point, second position information of a second eye-corner point, and third position information of eyelid points of the human eye in that image, wherein the eyelid points comprise a first number of upper-eyelid points and/or a second number of lower-eyelid points;
    determining first three-dimensional position information corresponding to the first eye-corner point and second three-dimensional position information corresponding to the second eye-corner point, based on pose information and intrinsic-parameter information of each image acquisition device, the first position information, and the second position information;
    constructing a first eye-corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation;
    constructing a second eye-corner constraint based on a first value, a second value, and the first eye-corner constraint, wherein the first value and the second value constrain the value range of the independent variable in the first eye-corner constraint;
    constructing a reprojection error constraint corresponding to the eyelid points based on the curve equation, the third position information of each eyelid point, and the pose information and intrinsic-parameter information of each image acquisition device; and
    constructing, based on the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, an eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.
  2. The method of claim 1, wherein before the step of constructing the eyelid space curve equation based on the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, the method further comprises:
    constructing an ordering constraint for the eyelid points in each human eye image based on the ordering of those eyelid points;
    and wherein the step of constructing the eyelid space curve equation comprises:
    constructing, based on the first eye-corner constraint, the second eye-corner constraint, the reprojection error constraint, and the ordering constraint, the eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.
  3. The method of any one of claims 1-2, wherein the step of constructing the reprojection error constraint based on the curve equation, the third position information of each eyelid point, and the pose information and intrinsic-parameter information of each image acquisition device comprises:
    constructing, using the curve equation, third three-dimensional position information corresponding to each eyelid point; and
    constructing the reprojection error constraint based on the third three-dimensional position information corresponding to each eyelid point, the third position information of each eyelid point, and the pose information and intrinsic-parameter information of each image acquisition device.
  4. The method of claim 3, wherein the step of constructing the reprojection error constraint based on the third three-dimensional position information, the third position information, and the pose information and intrinsic-parameter information comprises:
    determining the conversion relationship between the device coordinate systems of every two image acquisition devices based on the pose information and intrinsic-parameter information of each image acquisition device;
    for each human eye image, determining fourth position information of the projection point, in that image, of the spatial point corresponding to each eyelid point, based on the third three-dimensional position information corresponding to each eyelid point in that image and the conversion relationships between device coordinate systems; and
    determining the reprojection error constraint based on the third position information of each eyelid point in each human eye image and the fourth position information of the projection point of the spatial point corresponding to that eyelid point.
  5. The method of claim 1, wherein before the identifying step, the method further comprises:
    obtaining face images collected by multiple image acquisition devices at the same moment; and
    detecting, based on a pre-established face feature point detection model, the region where the human eye is located in each face image, so as to obtain the human eye images, wherein the pre-established face feature point detection model is a model trained on sample images with calibrated face feature points.
  6. The method of any one of claims 1-5, wherein after the step of constructing the eyelid space curve equation, the method further comprises:
    determining the current opening-closing length of the human eye based on the eyelid space curve equation characterizing the upper eyelid and the eyelid space curve equation characterizing the lower eyelid;
    obtaining historical opening-closing lengths of the human eye determined within a preset time period; and
    determining the fatigue degree of the person corresponding to the human eye based on the current opening-closing length and the historical opening-closing lengths.
  7. The method of claim 6, wherein the step of determining the current opening-closing length comprises:
    calculating the maximum distance between the eyelid space curve equation characterizing the upper eyelid and the eyelid space curve equation characterizing the lower eyelid; and
    using the maximum distance as the current opening-closing length of the human eye.
  8. The method of claim 6, wherein the step of determining the fatigue degree comprises:
    comparing each opening-closing length with a preset length threshold to obtain comparison results, wherein the opening-closing lengths comprise the current opening-closing length and the historical opening-closing lengths;
    counting a first result number of comparison results indicating that an opening-closing length is less than the preset length threshold; and
    determining the fatigue degree of the person corresponding to the human eye based on the total number of the current and historical opening-closing lengths and the first result number.
  9. An apparatus for constructing a human eyelid curve, comprising:
    a recognition module configured to identify, from each human eye image collected by each image acquisition device at the same moment, first position information of a first eye-corner point, second position information of a second eye-corner point, and third position information of eyelid points of the human eye in that image, wherein the eyelid points comprise a first number of upper-eyelid points and/or a second number of lower-eyelid points;
    a first determining module configured to determine first three-dimensional position information corresponding to the first eye-corner point and second three-dimensional position information corresponding to the second eye-corner point, based on the pose information and intrinsic-parameter information of each image acquisition device, the first position information, and the second position information;
    a first construction module configured to construct a first eye-corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation;
    a second construction module configured to construct a second eye-corner constraint based on a first value, a second value, and the first eye-corner constraint, wherein the first value and the second value constrain the value range of the independent variable in the first eye-corner constraint;
    a third construction module configured to construct a reprojection error constraint corresponding to the eyelid points based on the curve equation, the third position information of each eyelid point, and the pose information and intrinsic-parameter information of each image acquisition device; and
    a fourth construction module configured to construct, based on the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, an eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.
  10. The apparatus of claim 9, further comprising:
    a fifth construction module configured to construct, before the eyelid space curve equation is constructed based on the first eye-corner constraint, the second eye-corner constraint, and the reprojection error constraint, an ordering constraint for the eyelid points in each human eye image based on their ordering;
    wherein the fourth construction module is specifically configured to construct, based on the first eye-corner constraint, the second eye-corner constraint, the reprojection error constraint, and the ordering constraint, the eyelid space curve equation characterizing the upper eyelid and/or the lower eyelid of the human eye.
PCT/CN2019/108072 2019-05-26 2019-09-26 Method and apparatus for constructing a human eyelid curve WO2020237939A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910443046.3A CN110956067B (zh) 2019-05-26 2019-05-26 Method and apparatus for constructing a human eyelid curve
CN201910443046.3 2019-05-26

Publications (1)

Publication Number Publication Date
WO2020237939A1 true WO2020237939A1 (zh) 2020-12-03

Family

ID=69975435

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/108072 WO2020237939A1 (zh) 2019-05-26 2019-09-26 一种人眼眼睑曲线的构建方法及装置

Country Status (2)

Country Link
CN (1) CN110956067B (zh)
WO (1) WO2020237939A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221599B (zh) * 2020-01-21 2022-06-10 魔门塔(苏州)科技有限公司 Method and apparatus for constructing an eyelid curve
CN113516705B (zh) * 2020-04-10 2024-04-02 魔门塔(苏州)科技有限公司 Method and apparatus for calibrating hand key points
CN112971877B (zh) * 2021-02-05 2022-05-27 中国科学院深圳先进技术研究院 Soft-bodied apparatus and method for opening an eyelid
US20230100638A1 (en) * 2021-02-05 2023-03-30 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Soft-bodied apparatus and method for opening eyelid

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281646A (zh) * 2008-05-09 2008-10-08 山东大学 Vision-based real-time driver fatigue detection method
CN101375796A (zh) * 2008-09-18 2009-03-04 浙江工业大学 Real-time fatigue driving detection system
CN104036299A (zh) * 2014-06-10 2014-09-10 电子科技大学 Human eye contour tracking method based on local-texture AAM
WO2016116201A1 (de) * 2015-01-19 2016-07-28 Robert Bosch Gmbh Method and device for detecting microsleep of a vehicle driver
CN109271875A (zh) * 2018-08-24 2019-01-25 中国人民解放军火箭军工程大学 Fatigue detection method based on eyebrow and eye key-point information

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5221436B2 (ja) * 2009-04-02 2013-06-26 Toyota Motor Corporation Face feature point detection device and program
CN104091150B (zh) * 2014-06-26 2019-02-26 Zhejiang Jieshang Vision Technology Co., Ltd. Regression-based human eye state determination method

Also Published As

Publication number Publication date
CN110956067A (zh) 2020-04-03
CN110956067B (zh) 2022-05-17

Similar Documents

Publication Publication Date Title
WO2020237939A1 (zh) Method and apparatus for constructing a human eye eyelid curve
WO2020215961A1 (zh) Personnel information detection method and system for indoor environment control
García et al. Driver monitoring based on low-cost 3-D sensors
CN106682603B (zh) Real-time driver fatigue early-warning system based on multi-source information fusion
CN111126399A (zh) Image detection method, apparatus, device and readable storage medium
WO2019129255A1 (zh) Target tracking method and apparatus
WO2022037387A1 (zh) Method and apparatus for evaluating a visual perception algorithm
CN108596087B (zh) Driving fatigue degree detection regression model based on dual-network results
CN114359181B (zh) Intelligent traffic target fusion detection method and system based on images and point clouds
WO2021253245A1 (zh) Method and apparatus for identifying a vehicle lane-change tendency
CN113139437B (zh) Safety helmet wearing inspection method based on the YOLOv3 algorithm
WO2020181426A1 (zh) Lane line detection method, device, mobile platform and storage medium
CN106920247A (zh) Target tracking method and apparatus based on a comparison network
JP2021531601A (ja) Neural network training, gaze detection method and apparatus, and electronic device
CN104103077A (zh) Human head detection method and apparatus
CN115841651B (zh) Intelligent construction worker monitoring system based on computer vision and deep learning
CN109784296A (zh) Bus passenger counting method, apparatus and computer-readable storage medium
CN108805184B (zh) Image recognition method and system for fixed spaces and vehicles
CN106570440A (zh) People counting method and apparatus based on image analysis
US8971592B2 (en) Method for determining eye location on a frontal face digital image to validate the frontal face and determine points of reference
WO2022042203A1 (zh) Method and apparatus for detecting human body key points
CN105718896A (zh) Intelligent robot with target recognition function
CN113284120B (zh) Height limit measurement method and apparatus
CN113008380B (zh) Smart AI body temperature early-warning method, system and storage medium
CN112699748B (zh) Person-vehicle distance estimation method based on YOLO and RGB images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19930266

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19930266

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/06/2022)
