WO2023231663A1 - Eye type detection method and apparatus, computer device, storage medium and computer program product - Google Patents


Info

Publication number
WO2023231663A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
strabismus
target object
image
angle
Prior art date
Application number
PCT/CN2023/091183
Other languages
French (fr)
Chinese (zh)
Inventor
马文辉
吴阳平
许亮
Original Assignee
上海商汤智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司 (Shanghai SenseTime Intelligent Technology Co., Ltd.)
Publication of WO2023231663A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/08 Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing binocular or stereoscopic vision, e.g. strabismus
    • A61B 3/085 Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing binocular or stereoscopic vision, e.g. strabismus for testing strabismus
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/14 Arrangements specially adapted for eye photography

Definitions

  • The present disclosure relates to, but is not limited to, the field of artificial intelligence, and in particular to an eye type detection method and apparatus, a computer device, a storage medium and a computer program product.
  • Gaze usually indicates what a person is attending to or intends to interact with.
  • Artificial intelligence technology can support gaze-based interaction, for example in smart-home, smart mobile device and smart driving scenarios, where it enables detection of the areas or objects a person is looking at, or contactless control by gaze.
  • A Driver Monitoring System (DMS) determines parameters such as the driver's gaze direction or gaze region from detection of the driver's eyes in order to judge whether the driver is distracted.
  • the present disclosure provides an eye type detection method, device, computer equipment, storage medium and computer program product.
  • an embodiment of the present disclosure provides an eye type detection method, which method includes:
  • the eye strabismus type of the target object is calibrated according to the strabismus eye in the front face image.
  • an eye type detection device which includes:
  • the first acquisition module is configured to acquire the front face image of the target object collected by the image acquisition device when it is determined that the target object is facing the image acquisition device;
  • a first determination module configured to respectively determine the pupil position of the left eye and the pupil position of the right eye of the target object in the front face image
  • a second determination module configured to determine the gaze angle of the left eye based on the pupil position of the left eye, and determine the gaze angle of the right eye of the target object based on the pupil position of the right eye;
  • a third determination module configured to determine the angle difference between the gaze angle of the left eye and the gaze angle of the right eye
  • the fourth determination module is configured to, when the angle difference exceeds a first predetermined angle range, determine the eye whose gaze angle has the larger absolute value, of the left eye and the right eye, as the strabismus eye in the front face image;
  • a calibration module configured to calibrate the eye strabismus type of the target object according to the strabismus eye in the front face image.
  • an embodiment of the present disclosure provides a computer device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above eye type detection method.
  • embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored.
  • the computer program is executed by a processor, the above eye type detection method is implemented.
  • Embodiments of the present disclosure provide a computer program product comprising a computer program which, when run on a computer device, causes the computer device to perform the steps of the eye type detection method described above.
  • the angle difference between the gaze angle of the left eye and the gaze angle of the right eye is determined based on the pupil position of the target object's left eye and the pupil position of the right eye in the front face image.
  • When the angle difference exceeds the first predetermined angle range, the eye whose gaze angle has the larger absolute value, of the left eye and the right eye, is determined as the strabismus eye in the front face image, and the eye strabismus type of the target object is then calibrated accordingly.
  • Compared with the cover test, the strabismus angle measurement method, or detection using a dedicated strabismus instrument, the embodiments of the present disclosure can conveniently and accurately detect the strabismus type of the target object's eyes through image processing.
  • Figure 1 is a flowchart 1 of the implementation of an eye type detection method provided by an embodiment of the present disclosure
  • Figure 2 is an example diagram of the gaze angle of the left eye and the gaze angle of the right eye in a camera coordinate system provided by an embodiment of the present disclosure
  • Figure 3 is a flowchart 2 of the implementation of an eye type detection method provided by an embodiment of the present disclosure
  • Figure 4 is a flowchart 3 of the implementation of an eye type detection method provided by an embodiment of the present disclosure
  • Figure 5 is a schematic structural diagram of an eye type detection device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a hardware entity of a computer device provided by an embodiment of the present disclosure.
  • Embodiments of the present disclosure provide an eye type detection method, the execution subject of which may be an eye type detection device.
  • The eye type detection method may be executed by a terminal device, a server or another electronic device, where the terminal device may be a user device (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the eye type detection method may be implemented by a processor calling computer readable instructions stored in a memory.
  • the eye type detection device may include an image acquisition device, thereby obtaining an image using the image acquisition device.
  • the eye type detection device can also receive images sent from other image acquisition devices.
  • FIG 1 is an implementation flowchart 1 of an eye type detection method provided by an embodiment of the present disclosure. As can be seen from Figure 1, the method includes the following steps S11 to S16:
  • Step S11 When it is determined that the target object is facing the image acquisition device, obtain the front face image of the target object collected by the image acquisition device;
  • Step S12 Determine the pupil position of the left eye and the pupil position of the right eye of the target object in the front face image respectively;
  • Step S13 Determine the gaze angle of the left eye according to the pupil position of the left eye, and determine the gaze angle of the right eye of the target object according to the pupil position of the right eye;
  • Step S14 Determine the angle difference between the gaze angle of the left eye and the gaze angle of the right eye;
  • Step S15: When the angle difference exceeds the first predetermined angle range, determine the eye whose gaze angle has the larger absolute value, of the left eye and the right eye, as the strabismus eye in the front face image;
  • Step S16 Calibrate the eye strabismus type of the target object based on the strabismus eye in the front face image.
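Steps S11 to S16 can be sketched as a minimal per-frame pipeline. This is only an illustrative sketch: `pupil_position` and `gaze_angle` are hypothetical helpers standing in for the pupil-detection and gaze-angle methods described later, and the 10° threshold is an assumed example value for the first predetermined angle range.

```python
def eye_type_for_frame(image, pupil_position, gaze_angle, threshold_deg=10.0):
    """Sketch of steps S12-S15 for one front face image.

    `pupil_position(image, eye)` and `gaze_angle(pupil_pos)` are assumed,
    caller-supplied callables; `threshold_deg` is an illustrative bound for
    the first predetermined angle range.
    """
    # S12: pupil positions of both eyes in the front face image
    left_pupil = pupil_position(image, "left")
    right_pupil = pupil_position(image, "right")
    # S13: gaze angle of each eye
    left_deg = gaze_angle(left_pupil)
    right_deg = gaze_angle(right_pupil)
    # S14: angle difference between the two gaze angles
    diff = left_deg - right_deg
    # S15: if the difference is outside the predetermined range, the eye
    # whose gaze angle has the larger absolute value is the strabismus eye
    if abs(diff) <= threshold_deg:
        return "both eyes non-strabismus"
    return ("left eye strabismus" if abs(left_deg) > abs(right_deg)
            else "right eye strabismus")
```

Step S16 (calibration over one or more frames) then consumes these per-frame results.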
  • In step S11, it is determined that the target object is facing the image acquisition device (such as a camera). This may be done by outputting a prompt asking the target object to face the camera and then determining that the target object is facing the image acquisition device; or by inferring it from a specific scene, for example an in-cabin image acquisition device facing the driver's seat, where the driver is determined to be facing the device while the vehicle drives in a straight line at high speed; or, during an online consultation, by determining that the doctor is facing the computer camera. When it is determined that the target object is facing the image acquisition device, a front face image of the target object collected by the image acquisition device is obtained.
  • The front face image of the target object obtained in the present disclosure may be an image, among those collected by the image acquisition device, filtered as a front face image based on extracted facial features. For example, the face rotation angle can be determined from the facial features, and an image whose face rotation angle is smaller than a threshold is used as the front face image; alternatively, it can be determined from the facial features whether the nose lies on the central axis of the facial contour, and an image in which the nose lies on that central axis is used as the front face image.
  • step S12 is executed to respectively determine the pupil position of the left eye and the pupil position of the right eye of the target object in the front face image.
  • The pupil position (including the pupil position of the left eye and the pupil position of the right eye) may be the position of the pupil contour in the image coordinate system of the front face image, or the position of the pupil center in that image coordinate system. The image coordinate system may be a two-dimensional rectangular coordinate system in units of pixels, with the upper-left corner of the image as its origin, in which the abscissa and ordinate of a pixel are, respectively, its column number and row number in the image array.
  • The image can be converted to grayscale and binarized; the pupil contour is then determined from the binarized image to obtain the pupil position.
  • the centroid method can be used to determine the position of the pupil center in the pupil outline.
  • Alternatively, for each candidate point within the pupil region, the average over all pixels of the agreement between the displacement vector from that point to each pixel and that pixel's gradient direction can be computed, and the coordinates of the candidate point that maximizes this average are taken as the position of the pupil center.
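The centroid method mentioned above can be sketched with NumPy alone. This is a hedged illustration, not the patent's implementation: the fixed threshold of 50 and the assumption that pupil pixels are darker than the rest of the eye crop are both illustrative.

```python
import numpy as np

def pupil_center_centroid(gray_eye, threshold=50):
    """Binarize a grayscale eye region (pupil pixels assumed darker than
    `threshold`) and return the centroid (row, col) of the pupil mask,
    or None if no pixel falls below the threshold."""
    mask = gray_eye < threshold          # binarization: True = pupil pixel
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    # centroid of the binarized pupil region
    return float(ys.mean()), float(xs.mean())
```

In practice the eye crop would come from a face/landmark detector rather than being handed in directly.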
  • step S13 is performed to determine the gaze angle of the left eye based on the pupil position of the left eye, and determine the gaze angle of the right eye based on the pupil position of the right eye.
  • the gaze angle represents the angle between the gaze direction of the left eye or the right eye and the normal direction of the imaging surface of the image acquisition device.
  • The gaze angle can be positive or negative; for example, when the gaze direction lies to the left of the normal of the imaging surface of the image acquisition device, the gaze angle is negative, and when it lies to the right of the normal, the gaze angle is positive.
  • the gaze angle of the present disclosure may be an angle in the image coordinate system, the angle in the image acquisition device coordinate system, or the angle in the world coordinate system, and the disclosure does not limit this.
  • the coordinate system of the image acquisition device is a three-dimensional rectangular coordinate system established with the focus center of the image acquisition device as the origin and the optical axis as the Z-axis.
  • The present disclosure can use a feature point detection method to extract the eye contour features in the front face image, and determine the gaze angle of the eye in the image coordinate system from the relative position of the pupil contour with respect to the eye contour.
  • Feature point detection methods include, but are not limited to, any of the following: local-based methods, global-based methods, hybrid methods, the Active Shape Model (ASM) and the Active Appearance Model (AAM).
  • Based on the relative position of the pupil contour and the eye contour, the present disclosure determines a first distance between the leftmost point of the pupil contour and the leftmost point of the eye contour, and/or a second distance between the rightmost point of the pupil contour and the rightmost point of the eye contour; the gaze angle of the eye is then determined from a predetermined correspondence between the first distance and/or the second distance and the gaze angle.
  • For example, suppose the width of the eye is about 30 mm and the horizontal gaze angle of one eye lies in [-75, 75] degrees (with the gaze angle when looking straight ahead defined as 0°). Then a first distance of 15 mm corresponds to a gaze angle of 0 degrees, a first distance of 5 mm corresponds to a gaze angle of -50 degrees (i.e. 50 degrees to the left), and a first distance of 20 mm corresponds to a gaze angle of 50 degrees (i.e. 50 degrees to the right).
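Using the example calibration points above (5 mm → -50°, 15 mm → 0°, 20 mm → 50°), the predetermined distance-to-angle correspondence can be sketched as a piecewise-linear lookup. The interpolation scheme is an assumption for illustration; the patent only states that a correspondence exists, and real calibration points would depend on the subject and camera setup.

```python
import numpy as np

# Example correspondence from the text: first distance (mm) -> gaze angle (deg)
CAL_DIST_MM = [5.0, 15.0, 20.0]
CAL_ANGLE_DEG = [-50.0, 0.0, 50.0]

def gaze_angle_from_first_distance(dist_mm):
    """Piecewise-linear interpolation of the predetermined
    correspondence between the first distance and the gaze angle."""
    return float(np.interp(dist_mm, CAL_DIST_MM, CAL_ANGLE_DEG))
```

`np.interp` clamps outside the calibration range, which is one reasonable choice; a deployed system might instead reject out-of-range distances.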
  • The present disclosure can also use the aforementioned feature point detection method to extract the eye contour features in the front face image, and determine the gaze angle of the eye in the image coordinate system from the relative position of the pupil center within the eye contour region.
  • the gaze angle of the eye in the coordinate system of the image acquisition device can be determined based on the internal parameters of the image acquisition device. In some embodiments, after determining the gaze angle in the coordinate system of the image acquisition device, the gaze angle of the eye in the world coordinate system may be determined based on external parameters of the image acquisition device.
  • When determining the gaze angle of the corresponding eye from the pupil position, the front face image can also be input into a gaze angle detection model to obtain the gaze angle of the left eye and the gaze angle of the right eye. The gaze angle detection model can be trained based on deep learning networks; for example, networks such as Convolutional Neural Networks (CNN) or Deep Neural Networks (DNN) are trained and tuned on training sample data and label values, where the label values are predetermined gaze angles, finally yielding the gaze angle detection model.
  • the present disclosure is based on a trained gaze angle detection model. By inputting the front face image of the target object into the model, the gaze angle of the left eye and the gaze angle of the right eye can be obtained.
  • the gaze angle of the left eye can be determined according to the pupil position of the left eye
  • the gaze angle of the right eye can be determined according to the pupil position of the right eye
  • step S14 is performed to determine the angle difference between the gaze angle of the left eye and the gaze angle of the right eye.
  • the angle difference may be the difference between the gaze angle of the left eye and the gaze angle of the right eye, or may be the result of a weighted calculation of the gaze angle of the left eye and the gaze angle of the right eye.
  • In step S15 of the present disclosure, when the angle difference exceeds the first predetermined angle range, the eye whose gaze angle has the larger absolute value, of the left eye and the right eye, is determined as the strabismus eye in the front face image.
  • For example, the angle difference may be the difference between the gaze angle of the left eye and the gaze angle of the right eye. If the angle difference is negative, exceeding the first predetermined angle range may mean being less than a predetermined negative value; if the angle difference is positive, it may mean being greater than a predetermined positive value. The present disclosure does not limit the specific content of the first predetermined angle range.
  • Step S16 is performed to calibrate the eye strabismus type of the target object based on the determined strabismus eye, where the eye strabismus type includes at least one of the following: non-strabismus in both eyes, strabismus in the left eye, and strabismus in the right eye.
  • The eye strabismus type of the target object can be calibrated directly based on the strabismus eye detected in a single front face image, or based on the strabismus detection results over multiple front face images.
  • Figure 2 is an example diagram of the gaze angle of the left eye and the gaze angle of the right eye in a camera coordinate system provided by an embodiment of the present disclosure.
  • In Figure 2, 21 is a camera, corresponding to the aforementioned image acquisition device, and 22 is the left eye of the target object. The angle β between the gaze direction of the left eye's pupil and the normal of the camera's imaging surface corresponds to the aforementioned gaze angle of the left eye; the corresponding angle α for the right eye corresponds to the aforementioned gaze angle of the right eye; and α is greater than β.
  • The angle difference is determined as α - β from the gaze angles of the left eye and the right eye; when the angle difference exceeds the first predetermined angle range, the right eye corresponding to α is determined to be the strabismus eye, and the target object's eye strabismus type is thereby determined to be right eye strabismus.
  • For example, the normal of the camera imaging plane in Figure 2 can be set to 0°, the gaze angle α of the right eye is 30°, the gaze angle β of the left eye is -10°, and the angle threshold θ of the first predetermined angle range is 10°. The angle difference, taken as the difference between the absolute value of the gaze angle of the right eye and the absolute value of the gaze angle of the left eye, is 20°, which exceeds the angle threshold θ, so the right eye corresponding to α can be determined to be the strabismus eye.
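The worked example above (α = 30°, β = -10°, θ = 10°) can be reproduced directly. As in the example, the angle difference here is taken as the difference of the absolute values of the two gaze angles; this is one of the variants the text allows (it also mentions a plain difference and a weighted combination), so treat the exact formula as illustrative.

```python
def strabismus_eye(alpha_right_deg, beta_left_deg, theta_deg):
    """Angle difference taken as |alpha| - |beta| per the worked example;
    if its magnitude exceeds the threshold, the eye with the larger
    absolute gaze angle is the strabismus eye. Returns 'left', 'right',
    or None when both eyes are non-strabismic."""
    diff = abs(alpha_right_deg) - abs(beta_left_deg)
    if abs(diff) <= theta_deg:
        return None
    return "right" if abs(alpha_right_deg) > abs(beta_left_deg) else "left"
```

With the example values, `strabismus_eye(30.0, -10.0, 10.0)` flags the right eye, matching the text.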
  • the angle difference between the gaze angle of the left eye and the gaze angle of the right eye is determined based on the pupil position of the left eye and the pupil position of the right eye in the front face image.
  • When the angle difference exceeds the first predetermined angle range, the eye whose gaze angle has the larger absolute value, of the left eye and the right eye, is determined to be the strabismus eye. Compared with the cover test, the strabismus angle measurement method, or detection using a dedicated strabismus instrument, the embodiments of the present disclosure can conveniently and accurately detect the strabismus type of the target object's eyes through image processing.
  • For example, in a medical scenario, it is possible to determine whether a patient has strabismus according to the eye type detection method of the present disclosure; in addition, the number of people with strabismus can be counted based on the eye type detection results.
  • whether the driver has strabismus can be determined based on the eye type detection method of the present disclosure.
  • The eye type detection method of the present disclosure can be used to determine whether the driver has strabismus. Based on monitoring of the gaze of the non-strabismus eye (i.e., the normal eye), such as its gaze angle, the degree of change of the gaze angle, gaze duration, etc., it can then be determined whether the driver is distracted, thereby improving the accuracy of the driver monitoring system.
  • Existing detection of whether the driver is driving safely typically assumes that the driver's eyes are normal, for example judging safe driving based on the difference between the gaze angle of the eyes and the deflection angle of the face. If the driver is a person with strabismus, it is easy to mistakenly produce a detection result of unsafe driving.
  • In the embodiments of the present disclosure, the eye whose gaze angle has the larger absolute value, of the left eye and the right eye, is determined to be the strabismus eye, and the target object's eye strabismus type is calibrated accordingly, so that in vehicle driving scenarios safe driving can be detected based on the non-strabismus eye, which can improve the accuracy of driver status assessment in a DMS, for example.
  • the frontal face image includes multiple frames of frontal face images in the video stream of the target object collected by the image acquisition device;
  • Calibrating the eye strabismus type of the target object based on the strabismus eye in the front face image includes:
  • the eye type of the target object is calibrated as right eye strabismus.
  • The acquired front face images include multiple frames of front face images in the video stream of the target object collected by the image acquisition device, where the multiple frames may be consecutive frames in the video stream, or multiple frames obtained by sampling the video stream at certain time intervals.
  • When calibrating the eye strabismus type of the target object based on the strabismus eye in the front face images, the present disclosure can count results over the multiple frames: when the number of frames of images in which the left eye is determined to be the strabismus eye reaches a predetermined threshold, the eye type of the target object is calibrated as left eye strabismus; or, when the number of frames of images in which the right eye is determined to be the strabismus eye reaches the predetermined threshold, the eye type of the target object is calibrated as right eye strabismus.
  • The number of frames of images in which the left eye is detected as the strabismus eye is positively correlated with the probability that the left eye is strabismic, and likewise the number of frames of images in which the right eye is detected as the strabismus eye is positively correlated with the probability that the right eye is strabismic.
  • Each front face image yields a corresponding strabismus detection result. The present disclosure detects multiple front face images, and the resulting multiple detection results may include results in which the left eye is the strabismus eye as well as results in which the right eye is the strabismus eye. The number of frames of images in which the left eye is the strabismus eye and the number in which the right eye is the strabismus eye can be obtained by counting these detection results; the larger a frame count, the more likely the corresponding eye is the strabismus eye. Therefore, determining the eye whose frame count reaches the predetermined threshold as the strabismus eye, and calibrating the eye type of the target object accordingly, can further improve the accuracy of determining the eye type.
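The threshold-based counting described above can be sketched as follows. `frame_results` is assumed to hold per-frame outcomes (`'left'`, `'right'`, or `None` for non-strabismic frames), and the threshold of 5 frames is illustrative; the patent leaves the predetermined threshold unspecified.

```python
from collections import Counter

def calibrate_by_count(frame_results, threshold=5):
    """Count, over multiple front face frames, how often each eye was
    detected as the strabismus eye; calibrate the eye type once either
    count reaches the predetermined threshold."""
    counts = Counter(r for r in frame_results if r in ("left", "right"))
    if counts["left"] >= threshold:
        return "left eye strabismus"
    if counts["right"] >= threshold:
        return "right eye strabismus"
    return None  # not enough evidence yet
```

Checking the left count first is an arbitrary tie-breaking choice in this sketch.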
  • the frontal face image includes N frames of frontal face images in the video stream of the target object collected by the image acquisition device, where N is a predetermined positive integer;
  • the method also includes:
  • Calibrating the eye strabismus type of the target object based on the strabismus eye in the front face image includes:
  • The first frame number of images in which both eyes are determined to be non-strabismic, the second frame number of images in which the left eye is determined to be the strabismus eye, and the third frame number of images in which the right eye is determined to be the strabismus eye are compared;
  • the strabismus eye in the front face image is the strabismus eye of the target object.
  • the frontal face image includes N frames of frontal face images obtained based on the foregoing method in the video stream of the target object collected by the image acquisition device, where N is a predetermined positive integer.
  • When the angle difference does not exceed the first predetermined angle range, the difference between the gaze angles of the two eyes is small and does not match the gaze angle characteristics of a strabismic eye; therefore, both eyes in that front face image are determined to be non-strabismic.
  • In the N frames, the first frame number, i.e. the number of images in which both eyes are non-strabismic, is determined; it can be understood that the first frame number is positively correlated with the probability that both eyes are non-strabismic. After determining the first frame number, it is compared with the second frame number (the number of frames of images in which the left eye is the strabismus eye) and the third frame number (the number of frames of images in which the right eye is the strabismus eye), obtained as described above. When the first frame number is greater than both the second frame number and the third frame number, i.e. when the probability that both eyes are non-strabismic is highest, the target object's eye type is calibrated as non-strabismus in both eyes; or, when the first frame number is less than either the second frame number or the third frame number, i.e. when there is a high probability that strabismus exists, the strabismus eye corresponding to the larger of the second frame number and the third frame number is calibrated as the strabismus eye of the target object.
  • In order to reduce misdetection of a normal eye as strabismic due to occasional squinting or low image quality of the front face image (such as overexposure or low resolution), the present disclosure compares, over the N frames of front face images, the first frame number of images in which both eyes are determined to be non-strabismic, the second frame number of images in which the left eye is determined to be the strabismus eye, and the third frame number of images in which the right eye is determined to be the strabismus eye, and calibrates the strabismus eye based on the comparison result, improving the accuracy of calibrating strabismus.
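The N-frame comparison can be sketched as below. Per-frame results are assumed to be `'none'` (both eyes non-strabismic), `'left'`, or `'right'`; how ties are resolved is not specified in the text, so this sketch simply leaves exact ties undecided.

```python
def calibrate_over_n_frames(frame_results):
    """Compare the first frame number (both eyes non-strabismic), the
    second (left eye strabismic) and the third (right eye strabismic)
    over N frames, and calibrate the eye type accordingly."""
    n1 = frame_results.count("none")   # first frame number
    n2 = frame_results.count("left")   # second frame number
    n3 = frame_results.count("right")  # third frame number
    if n1 > n2 and n1 > n3:
        return "both eyes non-strabismus"
    if n1 < n2 or n1 < n3:
        return "left eye strabismus" if n2 >= n3 else "right eye strabismus"
    return None  # exact ties: left undecided in this sketch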
  • the N frames of frontal face images are N frames of frontal face images whose timing difference between any two frames in the video stream does not exceed the preset duration, so that eye type detection is performed on the N frames of frontal face images within the preset duration.
  • In other words, the total time span of the N frames of front face images of the present disclosure does not exceed the preset duration.
  • N is 10 and the preset duration is 15 minutes.
  • The preset duration may be timed from the moment the vehicle starts and the first front face frame is collected, or from a moment under other conditions (such as the vehicle speed exceeding a predetermined speed threshold) when the first front face frame is collected; the present disclosure does not limit this.
  • the larger N is in this disclosure, the higher the accuracy of the eye type calibration result.
  • the setting of the preset duration needs to take into account the time required to complete the acquisition of N frames of frontal face images. On this basis, the shorter the preset duration, the higher the immediacy of the final determination of the eye type.
  • If N frames of front face images are not acquired within the preset duration, the detection of the eye type is stopped, thereby reducing the power consumption caused by the eye type detection device remaining stuck in front face image detection for a long time.
  • The present disclosure detects N frames of front face images and calibrates the strabismus eye of the target object; compared with calibrating strabismus based on only one front face image, this improves the accuracy of the calibration. In addition, since the timing difference between any two of the N frames does not exceed the preset duration, the immediacy of eye type calibration is improved while still accounting for accuracy.
  • determining the gaze angle of the left eye based on the pupil position of the left eye, and determining the gaze angle of the right eye based on the pupil position of the right eye includes:
  • determining that the position of the pupil center of the left eye in the front face image corresponds to a first position in the coordinate system of the image acquisition device, and that the position of the pupil center of the right eye in the front face image corresponds to a second position in the coordinate system of the image acquisition device;
  • determining the gaze angle of the left eye based on the angle between the line connecting the first position to the coordinate origin of the coordinate system of the image acquisition device and the normal of the imaging surface of the image acquisition device; and determining the gaze angle of the right eye based on the angle between the line connecting the second position to that coordinate origin and the normal of the imaging surface.
  • the position of the pupil center (including the position of the left pupil center and the position of the right pupil center) may be the position of the center of the outline of the pupil area in the front face image in the image coordinate system.
  • The image capture device of the present disclosure can be installed directly in front of the target subject's head to capture the front face image.
  • The present disclosure maps the coordinates of the pupil center in the two-dimensional image coordinate system to three-dimensional coordinates in the coordinate system of the image acquisition device, so that the position of the left pupil center is mapped to the first position and the position of the right pupil center is mapped to the second position in that coordinate system. The angle between the line connecting the first position to the coordinate origin of the image acquisition device's coordinate system and the normal of the imaging surface is determined as the gaze angle of the left eye; the angle between the line connecting the second position to that coordinate origin and the normal of the imaging surface is determined as the gaze angle of the right eye. The eye type is then calibrated based on the gaze angle of the left eye and the gaze angle of the right eye.
  • alternatively, the present disclosure may determine the gaze angle of the left eye as the angle between the line connecting the first position to the coordinate origin of the image acquisition device's coordinate system and the vehicle's driving direction or head direction, and determine the gaze angle of the right eye as the angle between the line connecting the second position to that origin and the same direction.
  • the driving direction or head direction of the vehicle can be determined from the rotation angle of the steering wheel, or obtained by interfacing with the vehicle's DMS (Driver Monitoring System).
  • the present disclosure determines the gaze angles of the left and right eyes relative to the normal of the image capture device's imaging plane; when the angle difference between the two gaze angles exceeds the first predetermined angle range, the eye with the larger absolute gaze angle is determined to be the strabismus eye, which improves the accuracy of strabismus calibration.
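  • The gaze-angle computation and strabismus decision described above can be sketched as follows. This is a minimal illustration assuming a pinhole camera model; the function names, the intrinsic parameters, and the 5-degree threshold standing in for the "first predetermined angle range" are all illustrative, not taken from the disclosure.

```python
import math

def gaze_angle_deg(pupil_px, fx, fy, cx, cy):
    """Back-project a pupil center (pixel coordinates) through the camera
    intrinsics and return the angle, in degrees, between the resulting ray
    and the imaging plane's normal (the camera's z-axis)."""
    u, v = pupil_px
    # Normalized camera coordinates of the ray through the pupil center.
    x, y, z = (u - cx) / fx, (v - cy) / fy, 1.0
    # Angle between the ray (x, y, z) and the normal (0, 0, 1).
    return math.degrees(math.acos(z / math.sqrt(x * x + y * y + z * z)))

def find_strabismus_eye(left_px, right_px, fx, fy, cx, cy, threshold_deg=5.0):
    """Return 'left', 'right', or None: when the gaze-angle difference
    exceeds the threshold, the eye with the larger absolute angle is
    taken to be the strabismus eye."""
    left = gaze_angle_deg(left_px, fx, fy, cx, cy)
    right = gaze_angle_deg(right_px, fx, fy, cx, cy)
    if abs(left - right) <= threshold_deg:
        return None  # both eyes non-strabismic for this frame
    return 'left' if left > right else 'right'
```

  • For example, with fx = fy = 1000 and the principal point at (500, 500), a left pupil at the principal point gives a 0-degree gaze angle, while a right pupil at (700, 500) gives roughly an 11-degree angle, so the right eye would be flagged.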
  • the image acquisition device is an image acquisition device provided inside a vehicle, and the target object is the driver of the vehicle;
  • the method also includes:
  • obtaining the front face image of the target object collected by the image acquisition device includes:
  • the front face image of the driver captured by the image acquisition device is acquired.
  • the image capture device can be installed inside the vehicle, such as directly in front of the driver's seat, to facilitate capturing the driver's front face image.
  • this disclosure can obtain the speed of the vehicle and the torsion angle of its steering wheel, either from the speed sensor and angle sensor installed in the vehicle or by interfacing with the vehicle's DMS system.
  • based on the acquired vehicle speed and steering-wheel twist angle, the present disclosure determines that the target object is facing the image acquisition device when the speed is greater than a predetermined speed threshold and the twist angle is less than a predetermined angle threshold; then, in response to the driver's strabismus calibration function being turned on, the facial image collected by the image acquisition device is used as a front face image, and the driver's strabismus eye is calibrated from it by the aforementioned method.
  • the driver's strabismus calibration function can be turned on by default when the vehicle is started, turned on after a preset period of time following startup, or turned on in response to an operation command issued by the driver; this disclosure does not limit how the function is enabled.
  • the embodiments of the present disclosure can reduce the disturbance to the user and improve the user experience.
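  • The scene gating described above can be sketched as a simple predicate. The function name and the default thresholds are illustrative; the disclosure only specifies "greater than a predetermined speed threshold" and "less than a predetermined angle threshold" (80 km/h appears later in the text as an example).

```python
def is_front_facing_scene(speed_kmh, steering_angle_deg,
                          speed_threshold=80.0, angle_threshold=5.0):
    """Heuristic from the passage above: when the vehicle is moving fast
    and the steering wheel is nearly centered, the driver can be assumed
    to be facing the forward-mounted camera, so captured frames may be
    treated as front face images."""
    return speed_kmh > speed_threshold and abs(steering_angle_deg) < angle_threshold
```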
  • the method further includes:
  • the facial image that satisfies the predetermined image quality condition is used as the front face image of the target object acquired when the target object faces the image acquisition device.
  • the first prompt information indicating that the target subject is required to face the image collection device can be output through display screen text output or voice broadcast, etc., prompting the user to look towards the device that collects the face image.
  • the image collection device collects the user's facial image after outputting the first prompt information.
  • facial recognition technology, such as FaceID, can be used to register facial images.
  • the present disclosure determines whether the facial image meets the predetermined image quality conditions, uses the facial image that meets the predetermined image quality conditions as the front face image, and then performs the aforementioned eye type calibration based on the acquired front face image;
  • the predetermined image quality conditions at least include that the face image is a frontal image, and may also include that the resolution is greater than a predetermined resolution threshold, the exposure is within a predetermined exposure range, and the signal-to-noise ratio is greater than a predetermined signal-to-noise ratio threshold, etc.
  • the present disclosure calibrates eye types based on images that meet predetermined image quality conditions, which can improve the accuracy of determining eye types.
  • the features can be used as the unique identification of the user, so that the calibrated strabismus eye can be associated and stored with the unique identification of the user.
  • the features extracted from an acquired facial image can be compared directly with the stored unique identifiers; when the feature similarity is greater than a predetermined similarity threshold, the eye type detection result associated with that identifier can be determined directly as the eye type detection result of the target object in the facial image, without repeating the aforementioned eye type detection method, thereby improving the immediacy of eye type calibration.
  • the present disclosure obtains the facial image during the facial image registration process, and uses the facial image that meets the predetermined image quality conditions as the front face image, which can improve the accuracy and immediacy of the subsequent eye type calibration based on the front face image.
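  • The identifier-based reuse described above can be sketched as a similarity lookup. This is a hedged illustration: the registry structure, cosine similarity as the comparison metric, and the 0.9 threshold are assumptions for the sketch, not details given by the disclosure.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def lookup_eye_type(face_embedding, registry, threshold=0.9):
    """registry maps a user's unique identifier to a stored
    (feature vector, eye-type result) pair. If some stored identity is
    similar enough to the new embedding, return its stored eye-type
    result directly, skipping re-detection."""
    best_id, best_sim = None, threshold
    for user_id, (stored_embedding, eye_type) in registry.items():
        sim = cosine_similarity(face_embedding, stored_embedding)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return registry[best_id][1] if best_id is not None else None
```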
  • the eye type detection device of the present disclosure can also directly obtain, from another device, a facial image that has already passed facial image registration, and use that facial image as the front face image.
  • directly reusing a successfully registered facial image obtained from another device improves the degree of facial-image reuse and eliminates the need to output the first prompt information requesting the target object to face the image collection device, which reduces the disturbance to the user and improves the user experience.
  • the method further includes:
  • when the eye type of the target object is calibrated as left-eye strabismus or right-eye strabismus, at least one of the following interactions is performed based on the gaze information of the target object's non-strabismus eye:
  • the second prompt information is output in response to the gaze angle of the non-strabismic eye exceeding the second predetermined angle range within a predetermined period of time.
  • interactive information can be output based on the gaze information of the non-strabismic eye; wherein the gaze information includes gaze angle and gaze duration.
  • the interactive object corresponding to the gaze angle of the strabismus eye in the space is not the object that the strabismus eye really wants to gaze at.
  • the present disclosure shields the strabismus eye and determines the corresponding interactive object in space based on the gaze angle of the non-strabismus eye, thereby determining the object the person actually intends to gaze at, and outputs that object's information.
  • Determining the interactive object based on the gaze angle of the non-strabismic eye may include determining the gaze point of the gaze angle in the space where the target object is located based on the gaze angle of the non-strabismic eye, thereby determining the object corresponding to the gaze point as the interactive object;
  • the gaze range can also be determined based on the gaze angle of the non-strabismic eye, for example, the gaze angle range of plus or minus 20 degrees is used as the gaze range, and then the object whose gaze range covers the space where the target object is located is determined as the interactive object.
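  • The gaze-range selection above can be sketched as follows, using the ±20-degree range from the example. Representing the interactive objects as a mapping from names to angular directions is an assumption of the sketch; in practice they would come from the spatial layout of the cabin.

```python
def pick_interactive_object(gaze_angle_deg, objects, half_range_deg=20.0):
    """Select the object whose angular direction falls inside the gaze
    range (gaze angle plus or minus 20 degrees, per the example above).
    `objects` maps object names to directions in the same frame as the
    gaze angle."""
    candidates = {name: abs(direction - gaze_angle_deg)
                  for name, direction in objects.items()
                  if abs(direction - gaze_angle_deg) <= half_range_deg}
    # When several objects fall in the range, prefer the closest one.
    return min(candidates, key=candidates.get) if candidates else None
```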
  • the dashboard information such as vehicle speed, rotational speed, etc., can be displayed on the display screen of the vehicle center console.
  • when the target object is the driver of the vehicle and the vehicle speed is not 0, the present disclosure shields the strabismus eye; if the gaze angle of the non-strabismus eye exceeds the second predetermined angle range for a predetermined period of time, indicating that the driver may be distracted (for example, not looking in the direction of travel for a long time), the second prompt information is output.
  • when the gaze angle is a positive value, exceeding the second predetermined angle range may mean being greater than a predetermined positive value; when the gaze angle is a negative value, it may mean being less than a predetermined negative value.
  • the present disclosure blocks the strabismic eye after determining the strabismic eye, and outputs interactive information based on the gaze information of the non-strabismic eye, which can reduce the erroneous output of interactive information based on the gaze information of the strabismic eye and improve the accuracy of interactive information output.
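  • The distraction check above can be sketched over a timestamped series of non-strabismic-eye gaze angles. The ±30-degree range and the 3-second duration stand in for the "second predetermined angle range" and the "predetermined period of time", which the disclosure leaves unspecified.

```python
def should_prompt(gaze_samples, angle_range=(-30.0, 30.0), min_duration_s=3.0):
    """gaze_samples: list of (timestamp_s, gaze_angle_deg) pairs for the
    non-strabismic eye, in time order. Returns True if the gaze stays
    outside `angle_range` continuously for at least `min_duration_s`,
    the condition under which the second prompt would be output."""
    lo, hi = angle_range
    start = None  # timestamp at which the current out-of-range run began
    for t, angle in gaze_samples:
        if angle < lo or angle > hi:
            if start is None:
                start = t
            if t - start >= min_duration_s:
                return True
        else:
            start = None  # gaze returned to range; reset the run
    return False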
  • FIG. 3 is a flowchart 2 of the implementation of an eye type detection method provided by an embodiment of the present disclosure. As can be seen from Figure 3, the method includes the following steps S31 to step S33:
  • Step S31: determine whether facial image registration is successful; if yes, proceed to step S32; if not, continue to execute step S31.
  • the facial image that satisfies the predetermined image quality condition will be used as the front face image of the target object acquired when the target object faces the image acquisition device.
  • Step S32: determine whether the difference between the left and right pupil directions is greater than the threshold; if so, execute step S33; if not, end the detection process and determine that there is no strabismus.
  • the left pupil direction is the gaze angle of the aforementioned left eye
  • the right pupil direction is the gaze angle of the aforementioned right eye.
  • the difference between the left and right pupil directions is the aforementioned angle difference
  • the threshold is the aforementioned first predetermined angle range. Determining whether the difference between the left and right pupil directions is greater than the threshold corresponds to the aforementioned determination of whether the angle difference between the gaze angle of the left eye and the gaze angle of the right eye exceeds the first predetermined angle range.
  • Step S33: the eye whose direction deviates more from the camera's normal is regarded as the strabismus eye.
  • the normal line of the camera is the normal line of the imaging surface of the image acquisition device
  • the deviations of the two eyes' directions from the camera's normal are the gaze angle of the left eye and the gaze angle of the right eye, respectively.
  • the eye whose direction deviates more from the camera's normal is regarded as the strabismus eye; that is, when the angle difference exceeds the first predetermined angle range, the eye with the larger absolute gaze angle of the left eye and the right eye is determined to be the strabismus eye in the front face image.
  • the present disclosure uses the image after the face image registration is successful as a front face image for eye type detection, which can improve the degree of image reuse.
  • this disclosure determines the difference between the left and right pupil directions based on the acquired front face image.
  • the difference between the left and right pupil directions is greater than the threshold, the eye with a larger deviation from the normal direction of the camera is determined to be a strabismus.
  • embodiments of the present disclosure can quickly and accurately detect the strabismus type of the target object's eyes through image processing.
  • FIG. 4 is a flowchart 3 of the implementation of an eye type detection method provided by an embodiment of the present disclosure. As can be seen from Figure 4, the method includes the following steps S41 to step S45:
  • Step S41: determine whether the vehicle is in a high-speed scene; if so, execute step S42; if not, end the detection process.
  • the high-speed scene is a scene in which the vehicle speed is greater than a predetermined speed threshold and the steering wheel twist angle is less than a predetermined angle threshold.
  • N frames of facial images in the video stream collected by the image acquisition device are used as front-face images.
  • Step S42: determine whether the difference between the left and right pupil directions in this frame is greater than the threshold; if so, execute step S43; if not, execute step S44.
  • the present disclosure determines whether the difference between the left and right pupil directions in the current frame of the face image is greater than the threshold, i.e., the aforementioned determination of whether the angle difference between the gaze angle of the left eye and the gaze angle of the right eye exceeds the first predetermined angle range.
  • Step S43: the eye with the larger deviation angle from the forward driving direction is regarded as the strabismus eye.
  • the forward driving direction is the aforementioned vehicle driving direction
  • the deviation angles from the forward driving direction are the gaze angles of the left and right eyes measured relative to the vehicle's traveling direction.
  • the eye with the larger deviation angle from the forward driving direction is regarded as the strabismus eye; that is, when the angle difference exceeds the first predetermined angle range, the eye with the larger absolute gaze angle of the left eye and the right eye is determined to be the strabismus eye in the front face image.
  • the number of image frames in which the left eye is strabismus is determined, corresponding to the aforementioned second frame number; and the number of frames in the image in which the right eye is strabismus is determined, corresponding to the aforementioned third frame number.
  • Step S44: the driver is considered to be a normal person.
  • when the difference between the left and right pupil directions is less than or equal to the threshold, i.e., when the aforementioned angle difference does not exceed the first predetermined angle range, the present disclosure determines that the driver is a normal (non-strabismic) person and that the target object's binocular type is non-strabismus in both eyes. Applying this method to the N frames of front face images yields the number of frames in which both eyes are non-strabismic, corresponding to the aforementioned first frame number.
  • Steps S42 to S44 are executed in a loop N times, that is, the detection of N frames of images is completed. After executing steps S42 to S44 N times in a loop, the first frame number, the second frame number, and the third frame number are obtained, and step S45 is executed.
  • Step S45: vote on the N results and output the result with the highest number of votes.
  • the N results are voted on, where the number of votes for left-eye strabismus corresponds to the second frame number, the number of votes for right-eye strabismus corresponds to the third frame number, and the number of votes for non-strabismus in both eyes corresponds to the first frame number.
  • the result with the highest number of votes is output; the result with the highest number of votes is the detection result of the eye type.
  • when the first frame number is greater than both the second and third frame numbers, the binocular type of the target object is calibrated as non-strabismus in both eyes; or, when the first frame number is less than either the second or the third frame number, the strabismus eye in the front face images corresponding to the larger of the second and third frame numbers is calibrated as the strabismus eye of the target object.
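  • The frame-number comparison above amounts to majority voting over per-frame labels. A minimal sketch follows; the label strings are illustrative, and ties between the left and right counts are resolved arbitrarily here since the disclosure does not specify that case.

```python
from collections import Counter

def calibrate_by_voting(per_frame_results):
    """per_frame_results: one label per front-face frame, each being
    'none' (both eyes non-strabismic), 'left', or 'right'. The counts of
    'none', 'left', and 'right' correspond to the first, second, and
    third frame numbers; the majority label gives the calibrated type."""
    votes = Counter(per_frame_results)
    if votes['none'] > votes['left'] and votes['none'] > votes['right']:
        return 'no_strabismus'
    return 'left_strabismus' if votes['left'] >= votes['right'] else 'right_strabismus'
```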
  • in this scenario, the facial image collected by the image acquisition device is more likely to be a front face image, which reduces the work of filtering front face images out of the collected images and improves the immediacy of determining the eye type from them.
  • since the front face images are acquired in this scenario without the user perceiving it, the embodiment of the present disclosure reduces the disturbance to the user and improves the user experience.
  • the present disclosure determines the first, second, and third frame numbers from N frames of front face images obtained while the vehicle is in a high-speed scene, and determines the final eye type from the comparison of these three frame numbers. Compared with determining the eye type from a single front face image, this improves the accuracy of the determination.
  • during the FaceID registration process (i.e., the above-mentioned facial image registration), the camera collects the driver's front face image; then the pupil direction angles of the driver's eyes in the front face image (i.e., the gaze angles of the left and right eyes) are detected, and the difference between the two pupil gaze directions (i.e., the angle difference between the gaze angles of the left and right eyes) is obtained; finally, whether the driver has strabismus is determined from the angle difference and the first predetermined angle range.
  • a period of dynamic strabismus calibration will also be performed when the vehicle is running at high speed.
  • the present disclosure detects the pupil direction angles of the driver's eyes on a picture that passes the FaceID registration requirement (i.e., the above-mentioned front face image), that is, a picture that meets the requirement that the driver face the camera. If the difference between the pupil direction angles of the two eyes (i.e., the angle difference between the gaze angles of the left and right eyes) exceeds a certain threshold (i.e., the first predetermined angle range), the driver's two pupils are considered non-parallel. Since in this scenario the driver should be looking directly at the camera, the eye with the larger deviation angle from the normal of the camera's imaging plane is output as the strabismus eye.
  • the DMS system is used to perform dynamic calibration of strabismus in certain scenarios during driving.
  • a scene is created in which the driver looks towards a certain area with high probability (i.e., the above scene of the vehicle driving straight at high speed); in this scene, the eye whose pupil direction deviates more from that area is output as the strabismus eye.
  • this disclosure sets the DMS dynamic calibration scene as a high-speed driving scene: when the vehicle runs at high speed (>80 km/h) and the steering wheel deflection is small, the driver is considered to be basically looking ahead (i.e., facing the image acquisition device). Since the DMS dynamic calibration result comes from multiple rounds of voting, it is more robust; therefore, if the DMS dynamic calibration process completes, its result can replace the detection result from the FaceID registration process. In neither the FaceID registration process nor the DMS dynamic calibration process does the driver need to be explicitly asked to cooperate; both are imperceptible strabismus detection processes.
  • the number of detection rounds N can be increased appropriately; at the same time, to prevent the calibration process from running too long, a calibration time limit can be added: if the N detections are not completed within a certain period of time, the current round of calibration is abandoned.
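  • The time-limited collection loop above can be sketched as follows. The function names, the default round count, and the 60-second limit are illustrative assumptions; `detect_frame` stands in for whatever per-frame detection the system performs.

```python
import time

def run_calibration(detect_frame, n_required=20, time_limit_s=60.0):
    """Collect up to `n_required` per-frame detection results, abandoning
    the round if the time limit elapses first. `detect_frame()` returns
    'none', 'left', 'right', or None when no usable front-face frame is
    currently available."""
    results, deadline = [], time.monotonic() + time_limit_s
    while len(results) < n_required:
        if time.monotonic() > deadline:
            return None  # abandon this round of calibration
        result = detect_frame()
        if result is not None:
            results.append(result)
    return results
```

  • The returned list can then be fed to the voting step to produce the final eye type.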
  • in summary: (1) the present disclosure provides a strabismus detection method that is imperceptible to the driver; (2) the DMS dynamic calibration process provided by the present disclosure compensates for potential random errors introduced by the FaceID detection process.
  • FIG. 5 is a schematic structural diagram of an eye type detection device provided by an embodiment of the present disclosure. As can be seen from Figure 5, the eye type detection device 500 includes:
  • the first acquisition module 501 is configured to acquire the front face image of the target object collected by the image acquisition device when it is determined that the target object is facing the image acquisition device;
  • the first determination module 502 is configured to respectively determine the pupil position of the left eye and the pupil position of the right eye of the target object in the front face image;
  • the second determination module 503 is configured to determine the gaze angle of the left eye according to the pupil position of the left eye, and determine the gaze angle of the right eye of the target object according to the pupil position of the right eye;
  • the third determination module 504 is configured to determine the angle difference between the gaze angle of the left eye and the gaze angle of the right eye;
  • the fourth determination module 505 is configured to determine, when the angle difference exceeds the first predetermined angle range, the eye with the larger absolute gaze angle of the left eye and the right eye as the strabismus eye in the front face image;
  • the calibration module 506 is configured to calibrate the eye strabismus type of the target object according to the strabismus eye in the front face image.
  • the frontal face image includes multiple frames of frontal face images in the video stream of the target object collected by the image acquisition device;
  • the calibration module 506 is configured to calibrate the eye type of the target object as left eye strabismus when the number of frames in the multi-frame frontal face image in which it is determined that the left eye is strabismus reaches a predetermined threshold; or when In the multi-frame frontal face images, if the number of frames in which it is determined that the right eye is strabismus reaches a predetermined threshold, the eye type of the target object is calibrated as right eye strabismus.
  • the frontal face image includes N frames of frontal face images in the video stream of the target object collected by the image acquisition device, where N is a predetermined positive integer;
  • the eye type detection device 500 also includes:
  • the fifth determination module 507 is configured to determine that both eyes in the front face image are non-strabismus eyes when the angle difference does not exceed the first predetermined angle range;
  • Calibration module 506 includes:
  • the determination sub-module is configured to determine, in the N frames of front face images, the first frame number of images in which both eyes are non-strabismic, the second frame number of images in which the left eye is determined to be strabismic, and the third frame number of images in which the right eye is determined to be strabismic;
  • the calibration sub-module is configured to calibrate the binocular type of the target object as non-strabismus in both eyes when the first frame number is greater than both the second and third frame numbers; or, when the first frame number is less than either the second or the third frame number, to calibrate the strabismus eye in the front face images corresponding to the larger of the second and third frame numbers as the strabismus eye of the target object.
  • the second determination module 503 includes:
  • the first determination sub-module is configured to determine the first position of the pupil center of the left eye in the front face image in the coordinate system of the image acquisition device according to the internal parameters of the image acquisition device, and the The position of the center of the right pupil in the frontal image is at the second position in the coordinate system of the image acquisition device;
  • the second determination sub-module is configured to determine the angle between the connection line between the first position and the coordinate origin of the coordinate system of the image acquisition device and the normal line of the imaging surface of the image acquisition device, Determine the gaze angle of the left eye;
  • the third determination sub-module is configured to determine the angle between the connection line between the second position and the coordinate origin of the coordinate system of the image acquisition device and the normal line of the imaging surface of the image acquisition device, Determine the gaze angle of the right eye.
  • the image acquisition device is an image acquisition device provided inside a vehicle, and the target object is the driver of the vehicle;
  • the eye type detection device 500 also includes:
  • the second acquisition module 508 is configured to acquire the speed of the vehicle and the twist angle of the steering wheel of the vehicle;
  • the sixth determination module 509 is configured to determine that the target object is facing the image acquisition device when the speed is greater than a predetermined speed threshold and the twist angle of the steering wheel is less than a predetermined angle threshold;
  • the first acquisition module 501 is configured to acquire the driver's front face image collected by the image acquisition device in response to the driver's squint calibration function being turned on when it is determined that the target object is looking directly at the image acquisition device.
  • the eye type detection device 500 further includes:
  • the output module 510 is configured to, during the facial image registration process, output the first prompt information requesting the target subject to face the image collection device;
  • the third acquisition module 511 is configured to acquire a facial image based on the first prompt information;
  • the seventh determination module 512 is configured to use a facial image that meets the predetermined image quality condition as the front face image of the target object acquired when the target object faces the image acquisition device.
  • the eye type detection device 500 further includes:
  • the interaction module 513 is configured to perform, when the eye type of the target object is calibrated as left-eye strabismus or right-eye strabismus, at least one of the following interactions based on the gaze information of the target object's non-strabismus eye: outputting interactive information based on the gaze information of the non-strabismus eye; outputting the second prompt information in response to the gaze angle of the non-strabismus eye exceeding the second predetermined angle range within a predetermined period of time.
  • Figure 6 is a schematic diagram of the hardware entity of a computer device provided by an embodiment of the present disclosure.
  • the hardware entity of the computer device 800 includes: a processor 801, a communication interface 802 and a memory 803, where the processor 801 usually controls the overall operation of the computer device 800.
  • the communication interface 802 can enable the computer device to communicate with other terminals or servers through a network.
  • the memory 803 is configured to store instructions and applications executable by the processor 801, and can also cache data to be processed or already processed by the processor 801 and each module in the computer device 800 (for example, image data, audio data, voice communication data and video communication data); it can be implemented through flash memory (FLASH) or random access memory (RAM). Data can be transmitted between the processor 801, the communication interface 802 and the memory 803 through the bus 804.
  • the processor 801 is used to execute some or all steps in the above method.
  • Embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, some or all of the steps in the above method are implemented.
  • Embodiments of the present disclosure provide a computer program product, which includes a computer program that, when the computer program is run on a computer device, causes the computer device to execute some or all of the steps in the above method.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated; the components shown as units may or may not be physical units; they may be located in one place or distributed to multiple network units; Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure can be fully integrated into one processing unit, or each unit can serve separately as a unit, or two or more units can be integrated into one unit; the above-mentioned integrated unit can be implemented in the form of hardware or in the form of hardware plus a software functional unit.
  • the aforementioned program can be stored in a computer-readable storage medium.
  • when executed, the program performs the steps of the above method embodiments; the aforementioned storage media include: removable storage devices, read-only memory (Read Only Memory, ROM), magnetic disks, optical disks and other media that can store program code.
  • the above-mentioned integrated units of the present disclosure are implemented in the form of software function modules and sold or used as independent products, they can also be stored in a computer-readable storage medium.
  • the technical solution of the present disclosure, in essence or the part that contributes to the related art, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes a number of instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in various embodiments of the present disclosure.
  • the aforementioned storage media include: removable storage devices, ROMs, magnetic disks, optical disks and other media that can store program code.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Disclosed are an eye type detection method and apparatus, a computer device, a storage medium and a computer program product. The method comprises: in a case where it is determined that a target object faces an image acquisition device, acquiring a front face image of the target object collected by the image acquisition device; respectively determining a pupil position of the left eye and a pupil position of the right eye of the target object in the front face image; determining a gaze angle of the left eye according to the pupil position of the left eye, and determining a gaze angle of the right eye of the target object according to the pupil position of the right eye; determining an angle difference between the gaze angle of the left eye and the gaze angle of the right eye; in a case where the angle difference exceeds a first predetermined angle range, determining, as a strabismus eye in the front face image, the one of the left eye and the right eye whose gaze angle has the larger absolute value; and calibrating an eye strabismus type of the target object according to the strabismus eye in the front face image. According to the embodiments of the present disclosure, the eye strabismus type of the target object can be detected conveniently and accurately by means of image processing.

Description

Eye type detection method and apparatus, computer device, storage medium and computer program product
Cross-reference to related applications
The embodiments of the present disclosure are based on the Chinese patent application with application number 202210615649.9, filed on May 31, 2022 and entitled "Eye type detection method and apparatus, computer device, and storage medium", and claim priority to that Chinese patent application, the entire content of which is hereby incorporated into the present disclosure by reference.
Technical field
The present disclosure relates to, but is not limited to, the field of artificial intelligence, and in particular to an eye type detection method and apparatus, a computer device, a storage medium and a computer program product.
Background
Gaze usually indicates the content a person is paying attention to or an intention to interact. At present, artificial intelligence technology can support gaze-based interaction scenarios; for example, in smart home, smart mobile electronic device and intelligent driving scenarios, it can detect through gaze the area or object a person is paying attention to, or support touch-free control through gaze.
For example, in a driving scenario, a Driver Monitor System (DMS) determines parameters such as the eye gaze direction or gaze area on the basis of detecting the driver's eyes, so as to judge whether the driver is distracted.
However, for people with strabismus, since the gaze directions of their two eyes are inconsistent, it is difficult for current technology to judge the user's true gaze. Therefore, calibrating the eye type of people with strabismus is of great significance.
Summary
The present disclosure provides an eye type detection method and apparatus, a computer device, a storage medium and a computer program product.
In a first aspect, an embodiment of the present disclosure provides an eye type detection method, the method including:
when it is determined that a target object is facing an image acquisition device, acquiring a front face image of the target object collected by the image acquisition device;
respectively determining a pupil position of a left eye and a pupil position of a right eye of the target object in the front face image;
determining a gaze angle of the left eye according to the pupil position of the left eye, and determining a gaze angle of the right eye of the target object according to the pupil position of the right eye;
determining an angle difference between the gaze angle of the left eye and the gaze angle of the right eye;
when the angle difference exceeds a first predetermined angle range, determining, as a strabismus eye in the front face image, the one of the left eye and the right eye whose gaze angle has the larger absolute value;
calibrating an eye strabismus type of the target object according to the strabismus eye in the front face image.
In a second aspect, an embodiment of the present disclosure provides an eye type detection apparatus, the apparatus including:
a first acquisition module, configured to acquire, when it is determined that a target object is facing an image acquisition device, a front face image of the target object collected by the image acquisition device;
a first determination module, configured to respectively determine a pupil position of a left eye and a pupil position of a right eye of the target object in the front face image;
a second determination module, configured to determine a gaze angle of the left eye according to the pupil position of the left eye, and determine a gaze angle of the right eye of the target object according to the pupil position of the right eye;
a third determination module, configured to determine an angle difference between the gaze angle of the left eye and the gaze angle of the right eye;
a fourth determination module, configured to determine, when the angle difference exceeds a first predetermined angle range, the one of the left eye and the right eye whose gaze angle has the larger absolute value as a strabismus eye in the front face image;
a calibration module, configured to calibrate an eye strabismus type of the target object according to the strabismus eye in the front face image.
In a third aspect, an embodiment of the present disclosure provides a computer device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above eye type detection method.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the above eye type detection method.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, the computer program product including a computer program which, when run on a computer device, causes the computer device to perform the steps in the eye type detection method described above.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the embodiments of the present disclosure, the angle difference between the gaze angle of the left eye and the gaze angle of the right eye is determined according to the pupil position of the left eye and the pupil position of the right eye of the target object in the front face image; when the angle difference exceeds a first predetermined angle range, the one of the left eye and the right eye whose gaze angle has the larger absolute value is determined as the strabismus eye in the front face image, and the eye strabismus type of the target object is then calibrated. Compared with detecting strabismus by the cover test, by strabismus angle measurement, or with a dedicated strabismus detection instrument, the embodiments of the present disclosure can conveniently and accurately detect the eye strabismus type of the target object by means of image processing.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
Figure 1 is a first implementation flowchart of an eye type detection method provided by an embodiment of the present disclosure;
Figure 2 is an example diagram of the gaze angle of the left eye and the gaze angle of the right eye in a camera coordinate system provided by an embodiment of the present disclosure;
Figure 3 is a second implementation flowchart of an eye type detection method provided by an embodiment of the present disclosure;
Figure 4 is a third implementation flowchart of an eye type detection method provided by an embodiment of the present disclosure;
Figure 5 is a schematic structural diagram of an eye type detection apparatus provided by an embodiment of the present disclosure;
Figure 6 is a schematic diagram of a hardware entity of a computer device provided by an embodiment of the present disclosure.
Detailed description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
An embodiment of the present disclosure provides an eye type detection method whose execution subject may be an eye type detection apparatus. For example, the eye type detection method may be executed by a terminal device, a server or another electronic device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc. In some embodiments, the eye type detection method may be implemented by a processor calling computer-readable instructions stored in a memory.
In the embodiments of the present disclosure, the eye type detection apparatus may include an image acquisition device, so that images are obtained by the image acquisition device. In addition, the eye type detection apparatus may also receive images sent by other image acquisition devices.
Figure 1 is a first implementation flowchart of an eye type detection method provided by an embodiment of the present disclosure. As shown in Figure 1, the method includes the following steps S11 to S16:
Step S11: when it is determined that a target object is facing an image acquisition device, acquiring a front face image of the target object collected by the image acquisition device;
Step S12: respectively determining a pupil position of a left eye and a pupil position of a right eye of the target object in the front face image;
Step S13: determining a gaze angle of the left eye according to the pupil position of the left eye, and determining a gaze angle of the right eye of the target object according to the pupil position of the right eye;
Step S14: determining an angle difference between the gaze angle of the left eye and the gaze angle of the right eye;
Step S15: when the angle difference exceeds a first predetermined angle range, determining, as a strabismus eye in the front face image, the one of the left eye and the right eye whose gaze angle has the larger absolute value;
Step S16: calibrating an eye strabismus type of the target object according to the strabismus eye in the front face image.
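Taken together, steps S12 to S16 on a single front face image can be sketched as the following pipeline. This is a minimal illustration rather than the disclosed implementation: the callables `locate_pupils` and `pupil_to_gaze_angle` are hypothetical placeholders for the pupil-detection and gaze-angle sub-methods discussed below, and the 10-degree threshold is only an example value for the first predetermined angle range.

```python
def detect_strabismus(front_face_image, pupil_to_gaze_angle, locate_pupils,
                      angle_threshold_deg: float = 10.0) -> str:
    """Run steps S12-S16 on one front face image (step S11 supplies the image).

    `locate_pupils` and `pupil_to_gaze_angle` stand in for the sub-methods
    described in the text; the threshold is an illustrative assumption.
    """
    left_pupil, right_pupil = locate_pupils(front_face_image)   # S12: pupil positions
    left_angle = pupil_to_gaze_angle(left_pupil)                # S13: left gaze angle
    right_angle = pupil_to_gaze_angle(right_pupil)              # S13: right gaze angle
    diff = abs(left_angle - right_angle)                        # S14: angle difference
    if diff <= angle_threshold_deg:                             # S15: within range
        return "no strabismus detected"
    # S15/S16: the eye whose gaze angle has the larger absolute value is flagged
    squinting = "left" if abs(left_angle) > abs(right_angle) else "right"
    return f"{squinting} eye strabismus"
```

With stub callables returning the Figure 2 angles (left -10°, right 30°), the function flags the right eye.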
In the embodiments of the present disclosure, determining in step S11 that the target object is facing the image acquisition device (such as a camera) may be done by outputting to the target object a prompt requesting it to face the camera and then determining that the target object is facing the image acquisition device; or by determining in a specific scenario that the target object is facing the image acquisition device, for example, with an image acquisition device installed in the vehicle cabin facing the driver's seat, determining that the driver is facing the image acquisition device while the vehicle is driving straight at high speed; or by determining during an online medical consultation that the patient is facing the computer camera. Thus, when it is determined that the target object is facing the image acquisition device, the front face image of the target object collected by the image acquisition device is acquired.
In the present disclosure, acquiring the front face image of the target object may be selecting, from the images collected by the image acquisition device, a front face image based on facial features extracted from the images. For example, a face rotation angle may be determined based on the facial features, and an image whose face rotation angle is smaller than a threshold may be taken as the front face image; alternatively, it may be determined based on the facial features whether the nose lies on the central axis of the face contour, and an image in which the nose lies on the central axis of the face contour may be taken as the front face image.
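The face-rotation-threshold screening described above can be sketched as a one-line check, assuming an upstream head-pose estimator supplies the rotation (yaw) angle; the function name and the 10-degree threshold are illustrative assumptions, not values from the disclosure.

```python
def is_front_face(face_yaw_deg: float, yaw_threshold_deg: float = 10.0) -> bool:
    """Treat a face as frontal when its estimated rotation angle is below a threshold.

    `face_yaw_deg` is assumed to come from a separate head-pose estimator;
    the 10-degree default is an illustrative choice.
    """
    return abs(face_yaw_deg) < yaw_threshold_deg
```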
After acquiring the front face image of the target object, the present disclosure performs step S12 to respectively determine the pupil position of the left eye and the pupil position of the right eye of the target object in the front face image. Here, the pupil position (including the pupil position of the left eye and the pupil position of the right eye) may be the position of the pupil contour in the image coordinate system of the front face image, or the position of the pupil center in the image coordinate system of the front face image. The image coordinate system may be a coordinate system in units of pixels established with the upper left corner of the image as the origin, where the abscissa and ordinate of a pixel are respectively the column number and the row number of that pixel in the image array; the image coordinate system is a two-dimensional rectangular coordinate system.
If the pupil position is the position of the pupil contour in the image coordinate system of the front face image, in some embodiments the image may be converted into a grayscale image and binarized, and the pupil contour may be determined from the binarized image to obtain the pupil position.
If the pupil position is the position of the pupil center in the image coordinate system of the front face image, in some embodiments, after the pupil contour is determined by the above method, the centroid method may be used to determine the position of the pupil center within the pupil contour.
In some embodiments, after the pupil contour is obtained by the above method, the average of the sums of the displacement vectors and gradient directions in all directions may also be calculated for each pixel within the pupil region, and the coordinates of the point whose average sum is the largest among all pixels in the pupil region may be taken as the position of the pupil center.
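The binarization-plus-centroid variant described above can be sketched as follows. The fixed intensity threshold is an illustrative assumption (a practical system might threshold adaptively), and the function name is hypothetical.

```python
import numpy as np

def pupil_center_by_centroid(gray: np.ndarray, threshold: int = 50):
    """Binarize a grayscale eye patch and take the centroid of the dark pixels.

    Pupil pixels are assumed to be darker than their surroundings; the fixed
    threshold of 50 is illustrative only. Returns (column, row) coordinates,
    matching the image coordinate system described in the text.
    """
    mask = gray < threshold           # binarization: dark pixels form the pupil region
    ys, xs = np.nonzero(mask)         # row and column indices of pupil pixels
    if xs.size == 0:
        raise ValueError("no pupil pixels found below threshold")
    return float(xs.mean()), float(ys.mean())  # centroid of the pupil region
```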
After determining the pupil position of the left eye and the pupil position of the right eye, the present disclosure performs step S13 to determine the gaze angle of the left eye according to the pupil position of the left eye, and to determine the gaze angle of the right eye according to the pupil position of the right eye. Here, the gaze angle characterizes the included angle between the gaze direction of the left or right eye and the normal direction of the imaging plane of the image acquisition device. The gaze angle may be positive or negative; for example, when the gaze direction lies to the left of the normal direction of the imaging plane of the image acquisition device, the gaze angle is negative, and when the gaze direction lies to the right of that normal direction, the gaze angle is positive.
In the present disclosure, the gaze angle may be an angle in the image coordinate system, an angle in the coordinate system of the image acquisition device, or an angle in the world coordinate system, which is not limited by the present disclosure. The coordinate system of the image acquisition device is a three-dimensional rectangular coordinate system established with the focal center of the image acquisition device as the origin and the optical axis as the Z-axis.
Taking the case where the pupil position is the position of the pupil contour in the image coordinate system of the front face image as an example, the present disclosure may use a feature point detection method to extract eye contour features from the front face image, and determine the gaze angle of the eye in the image coordinate system according to the relative position of the pupil contour and the eye contour. The feature point detection method includes, but is not limited to, any of the following: local-based methods, global-based methods, hybrid methods, the Active Shape Model (ASM) and the Active Appearance Model (AAM).
Illustratively, according to the relative position of the pupil contour and the eye contour, the present disclosure determines a first distance between the leftmost end of the pupil contour and the leftmost end of the eye contour and/or a second distance between the rightmost end of the pupil contour and the rightmost end of the eye contour, and then determines the gaze angle of the eye according to a predetermined correspondence between the first distance and/or the second distance and the gaze angle. For example, the width of an eye is usually about 30 millimeters, and the horizontal gaze angle of a single eye lies in the range [-75, 75] degrees (the gaze angle when looking straight ahead is set to 0 degrees). Accordingly, a first distance of 15 millimeters may be set to correspond to a gaze angle of 0 degrees, a first distance of 5 millimeters to a gaze angle of -50 degrees (i.e., 50 degrees to the left), and a first distance of 20 millimeters to a gaze angle of 50 degrees (i.e., 50 degrees to the right).
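Assuming the three example correspondences above (5 mm → -50°, 15 mm → 0°, 20 mm → 50°) and piecewise-linear interpolation between them — the interpolation is an assumption beyond what the text fixes — the first-distance-to-gaze-angle mapping might look like:

```python
def gaze_angle_from_first_distance(d_mm: float) -> float:
    """Map the first distance (pupil contour left end to eye contour left end, mm)
    to a horizontal gaze angle in degrees.

    Anchor points are the example values from the text; interpolating linearly
    between them is an illustrative assumption.
    """
    anchors = [(5.0, -50.0), (15.0, 0.0), (20.0, 50.0)]  # (distance mm, angle deg)
    if d_mm <= anchors[0][0]:
        return anchors[0][1]                    # clamp below the first anchor
    for (d0, a0), (d1, a1) in zip(anchors, anchors[1:]):
        if d_mm <= d1:                          # interpolate within this segment
            return a0 + (a1 - a0) * (d_mm - d0) / (d1 - d0)
    return anchors[-1][1]                       # clamp above the last anchor
```

Note that the slopes of the two segments differ (5°/mm on the left, 10°/mm on the right), which is why a single linear fit would not reproduce all three example points.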
Taking the case where the pupil position is the position of the pupil center in the image coordinate system of the front face image as an example, the present disclosure may use the aforementioned feature point detection method to extract eye contour features from the front face image, and determine, according to the relative position of the pupil center within the eye contour region, the gaze angle of the eye in the image coordinate system corresponding to that relative position.
For example, according to the relative position of the pupil center within the eye contour region, a third distance from the pupil center to the leftmost end of the eye contour and/or a fourth distance from the pupil center to the rightmost end of the eye contour is determined; the gaze angle of the eye is then determined according to a predetermined correspondence between the third distance and/or the fourth distance and the gaze angle.
After determining the gaze angle of the eye in the image coordinate system according to the pupil position, in some embodiments, the present disclosure may determine the gaze angle of the eye in the coordinate system of the image acquisition device based on the intrinsic parameters of the image acquisition device. In some embodiments, after the gaze angle in the coordinate system of the image acquisition device is determined, the gaze angle of the eye in the world coordinate system may be determined based on the extrinsic parameters of the image acquisition device.
In some embodiments, when determining the gaze angle of the corresponding eye according to the pupil position, the front face image may also be input into a gaze angle detection model to obtain the gaze angle of the left eye and the gaze angle of the right eye. The gaze angle detection model may be trained based on a deep learning network; for example, the gaze angle detection model is obtained by training and tuning a network such as a convolutional neural network (CNN) or a deep neural network (DNN) based on training sample data and label values, where the label values are the predetermined gaze angles. Based on the trained gaze angle detection model, the present disclosure can obtain the gaze angle of the left eye and the gaze angle of the right eye by inputting the front face image of the target object into the model.
According to the above embodiments, the gaze angle of the left eye can be determined according to the pupil position of the left eye, and the gaze angle of the right eye can be determined according to the pupil position of the right eye; however, the present disclosure is not limited to the above methods.
After determining the gaze angle of the left eye and the gaze angle of the right eye, the present disclosure performs step S14 to determine the angle difference between the gaze angle of the left eye and the gaze angle of the right eye. The angle difference may be the difference between the gaze angle of the left eye and the gaze angle of the right eye, or the result of a weighted operation on the gaze angle of the left eye and the gaze angle of the right eye.
The gaze directions of the two eyes of people with strabismus differ; for example, under normal circumstances the gaze angles of the two eyes are parallel, while the gaze angles of the two eyes of people with strabismus differ considerably. Based on this physiological characteristic of people with strabismus, in step S15 of the present disclosure, when the angle difference exceeds the first predetermined angle range, the one of the left eye and the right eye whose gaze angle has the larger absolute value is determined as the strabismus eye in the front face image. Taking the angle difference as the difference between the gaze angle of the left eye and the gaze angle of the right eye: if the angle difference is negative, exceeding the first predetermined angle range may correspondingly mean being smaller than a predetermined negative value; if the angle difference is positive, the first predetermined angle range may correspondingly be exceeded by being larger than a predetermined positive value. The present disclosure does not limit the specific content of the first predetermined angle range.
After the strabismus eye in the front face image is determined, step S16 is performed to calibrate the eye strabismus type of the target object according to the determined strabismus eye, where the eye strabismus type includes at least one of the following: no strabismus in either eye, left eye strabismus, and right eye strabismus. Specifically, the eye strabismus type of the target object may be calibrated directly according to the strabismus eye in a single front face image. Alternatively, the eye strabismus type of the target object may be calibrated after the same eye of the target object has been determined to be a strabismus eye in multiple consecutively collected front face images.
Figure 2 is an example diagram of the gaze angle of the left eye and the gaze angle of the right eye in a camera coordinate system provided by an embodiment of the present disclosure. In Figure 2, 21 is the camera, corresponding to the aforementioned image acquisition device; 22 is the pupil of the left eye of the target object; 23 is the pupil of the right eye of the target object; the dashed line is the normal of the imaging plane of the camera; α is the gaze angle of the left eye relative to the normal of the imaging plane of the camera, corresponding to the aforementioned gaze angle of the left eye; β is the gaze angle of the right eye relative to the normal of the imaging plane of the camera, corresponding to the aforementioned gaze angle of the right eye; and β is greater than α. According to the gaze angle of the left eye and the gaze angle of the right eye, the present disclosure determines the angle difference as β-α; when the angle difference exceeds the first predetermined angle range, the right eye corresponding to β is determined to be the strabismus eye, and the eye strabismus type of the target object is thus determined to be right eye strabismus.
Illustratively, in Figure 2 the normal of the imaging plane of the camera may be set to 0°, the gaze angle β of the right eye is 30°, the gaze angle α of the left eye is -10°, and the angle threshold θ in the first predetermined angle range is 10°. The angle difference, taken as the difference between the absolute values of the gaze angle of the right eye and the gaze angle of the left eye, is 20°, which exceeds the angle threshold θ, so the right eye corresponding to β can be determined to be the strabismus eye.
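The worked example above can be written out as a small check. Taking the angle difference as the difference of the absolute values of the two gaze angles, as in this example, is one of the variants the text allows:

```python
def angle_difference_exceeds(beta_deg: float, alpha_deg: float, theta_deg: float) -> bool:
    """Return True when the angle difference exceeds the threshold θ.

    The angle difference is taken, as in the worked example, as the difference
    between the absolute values of the two gaze angles.
    """
    return abs(beta_deg) - abs(alpha_deg) > theta_deg

# Figure 2 values: β = 30°, α = -10°, θ = 10°.
# |30| - |-10| = 20 > 10, so the eye with the larger |angle| (the right eye) is flagged.
```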
It can be understood that, in the embodiments of the present disclosure, the angle difference between the gaze angle of the left eye and the gaze angle of the right eye is determined according to the pupil position of the left eye and the pupil position of the right eye in the front face image, and when the angle difference exceeds the first predetermined angle range, the one of the left eye and the right eye whose gaze angle has the larger absolute value is determined as the strabismus eye. Compared with detecting strabismus by the cover test, by strabismus angle measurement, or with a dedicated strabismus detection instrument, the embodiments of the present disclosure can conveniently and accurately detect the eye strabismus type of the target object through image processing.
在一些实施例中,在医疗场景中,可以根据本公开眼睛类型检测方法,确定求医者是否有斜视眼;此外,还可根据眼睛类型的检测结果,统计斜视眼人群数量。在考驾照体检场景中,可以根据本公开眼睛类型检测方法,确定驾驶员是否存在斜视眼的情况。在车辆驾驶场景中,可以根据本公开眼睛类型检测方法,确定驾驶员是否存在斜视眼的情况,由此可基于对非斜视眼(即正常眼)的注视情况的监测,如注视角度、注视角度变化程度、注视时长等,确定驾驶员是否分心,从而提升驾驶员监测系统的准确性。In some embodiments, in a medical scenario, it is possible to determine whether a medical seeker has strabismus according to the eye type detection method of the present disclosure; in addition, the number of people with strabismus can also be counted based on the eye type detection results. In a driver's license physical examination scenario, whether the driver has strabismus can be determined based on the eye type detection method of the present disclosure. In a vehicle driving scene, the eye type detection method of the present disclosure can be used to determine whether the driver has strabismus. This can be based on the monitoring of the gaze of the non-strabismus eye (i.e., the normal eye), such as the gaze angle and the gaze angle. The degree of change, gaze duration, etc., determine whether the driver is distracted, thereby improving the accuracy of the driver monitoring system.
In the related art, for example in a vehicle driving scenario, whether the driver is driving safely is detected on the premise that both of the driver's eyes are normal, for example by judging safe driving according to the difference between the gaze angle of the eyes and the deflection angle of the face. This does not take into account drivers with strabismus, so when a person with strabismus is the driver, an unsafe-driving result is easily detected by mistake.

Therefore, it can be understood that, in the present disclosure, when the angle difference between the gaze angle of the left eye and the gaze angle of the right eye exceeds the first predetermined angle range, the one of the left eye and the right eye whose gaze angle has the larger absolute value is determined to be the strabismic eye, and the eye strabismus type of the target object is calibrated accordingly, so that in a vehicle driving scenario whether the driver is driving safely can be detected on the basis of the non-strabismic eye, which can improve the accuracy of driver state assessment in, for example, a DMS (Driver Monitoring System).
In some embodiments, the frontal face image includes multiple frames of frontal face images in a video stream of the target object captured by the image capture device;

calibrating the eye strabismus type of the target object according to the strabismic eye in the frontal face image includes:

in the multiple frames of frontal face images, when the number of frames of images in which the left eye is determined to be the strabismic eye reaches a predetermined threshold, calibrating the eye type of the target object as left-eye strabismus; or

in the multiple frames of frontal face images, when the number of frames of images in which the right eye is determined to be the strabismic eye reaches a predetermined threshold, calibrating the eye type of the target object as right-eye strabismus.
In the embodiments of the present disclosure, the acquired frontal face image includes multiple frames of frontal face images in a video stream of the target object captured by the image capture device, where the multiple frames may be consecutive frames, or may be frames sampled from the video stream at a certain time interval.

When calibrating the eye strabismus type of the target object according to the strabismic eye in the frontal face images, the present disclosure may count, among the multiple frames of frontal face images, the number of frames in which the left eye is the strabismic eye and the number of frames in which the right eye is the strabismic eye. When the number of frames in which the left eye is determined to be the strabismic eye reaches a predetermined threshold, the eye type of the target object is calibrated as left-eye strabismus; alternatively, when the number of frames in which the right eye is determined to be the strabismic eye reaches the predetermined threshold, the eye type of the target object is calibrated as right-eye strabismus. It can be understood that the number of frames in which the left eye is the strabismic eye is positively correlated with the probability that the left eye is strabismic, and likewise for the right eye.

In some embodiments, a corresponding strabismus detection result can be obtained for each frontal face image. The present disclosure detects multiple frontal face images, and the resulting detection results may include both results in which the left eye is the strabismic eye and results in which the right eye is the strabismic eye. The number of frames yielding a left-eye result and the number yielding a right-eye result can then be counted; the larger a count, the more likely the corresponding eye is strabismic. Therefore, determining the eye whose frame count reaches the predetermined threshold as the strabismic eye, and calibrating the eye type of the target object accordingly, can further improve the accuracy of determining the eye type.
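The per-frame vote and threshold described above can be sketched as follows. This is an illustrative sketch only: the function name, the per-frame labels ("left", "right", "none"), and the threshold value are assumptions made for the example, not prescribed by the disclosure.

```python
def calibrate_by_threshold(frame_results, threshold):
    """Per-frame strabismus votes over multiple frontal face images.

    frame_results: one label per frame, e.g. ["left", "none", "left", ...],
    where "left"/"right" means that eye was detected as strabismic in the
    frame and "none" means neither was. Returns the calibrated eye type
    once either eye's frame count reaches the predetermined threshold,
    otherwise None (not enough evidence yet).
    """
    left_frames = frame_results.count("left")    # frames voting left eye
    right_frames = frame_results.count("right")  # frames voting right eye
    if left_frames >= threshold:
        return "left eye strabismus"
    if right_frames >= threshold:
        return "right eye strabismus"
    return None
```

Because a larger frame count corresponds to a higher probability that the eye really is strabismic, thresholding the count rather than trusting a single frame reduces misdetections.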
In some embodiments, the frontal face image includes N frames of frontal face images in the video stream of the target object captured by the image capture device, N being a predetermined positive integer;

the method further includes:

when the angle difference does not exceed the first predetermined angle range, determining that both eyes in the frontal face image are non-strabismic;

calibrating the eye strabismus type of the target object according to the strabismic eye in the frontal face image includes:

in the N frames of frontal face images, comparing a first frame number of images in which both eyes are determined to be non-strabismic, a second frame number of images in which the left eye is determined to be the strabismic eye, and a third frame number of images in which the right eye is determined to be the strabismic eye;

when the first frame number is greater than both the second frame number and the third frame number, calibrating the binocular type of the target object as binocular non-strabismus; or

when the first frame number is less than either of the second frame number and the third frame number, calibrating the strabismic eye in the frontal face images corresponding to the larger of the second frame number and the third frame number as the strabismic eye of the target object.
In the embodiments of the present disclosure, the frontal face image includes N frames of frontal face images obtained, on the basis of the foregoing method, from the video stream of the target object captured by the image capture device, N being a predetermined positive integer.

When the angle difference does not exceed the first predetermined angle range, the difference between the gaze angles of the two eyes is small and does not match the binocular gaze-angle characteristics of a strabismic human eye; therefore, both eyes in the frontal face image are determined to be non-strabismic.

In the N frames of frontal face images, the present disclosure determines the first frame number of images in which both eyes are non-strabismic. It can be understood that the first frame number is positively correlated with the probability that both eyes are non-strabismic.

After the first frame number is determined, it is compared with the second frame number (the number of frames in which the left eye is the strabismic eye) and the third frame number (the number of frames in which the right eye is the strabismic eye) obtained on the basis of the foregoing method. When the first frame number is greater than both the second frame number and the third frame number, i.e., when the probability that both eyes are non-strabismic is the greatest, the binocular type of the target object is calibrated as binocular non-strabismus; alternatively, when the first frame number is less than either of the second frame number and the third frame number, i.e., when the probability that a strabismic eye exists is relatively high, the strabismic eye in the frontal face images corresponding to the larger of the second frame number and the third frame number is calibrated as the strabismic eye of the target object.

It can be understood that, in order to reduce the misdetection of a strabismic eye caused by normal eyes occasionally squinting or by low image quality of the frontal face image (e.g., overexposure or low resolution), the present disclosure compares, over the N frames of frontal face images, the first frame number (both eyes non-strabismic), the second frame number (left eye strabismic), and the third frame number (right eye strabismic), and calibrates the strabismic eye on the basis of the comparison result, thereby improving the accuracy of calibrating the strabismic eye.
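The three-way comparison of the first, second, and third frame numbers can be sketched as below. The labels and the handling of ties are illustrative assumptions; the disclosure only specifies the two comparison branches.

```python
def calibrate_by_majority(frame_results):
    """Compare the three frame counts over N frames of frontal face images.

    frame_results: one label per frame — "none" (both eyes non-strabismic),
    "left" (left eye strabismic), or "right" (right eye strabismic).
    """
    first = frame_results.count("none")    # first frame number
    second = frame_results.count("left")   # second frame number
    third = frame_results.count("right")   # third frame number
    if first > second and first > third:
        # Non-strabismus is the most probable outcome.
        return "both eyes non-strabismic"
    if first < second or first < third:
        # A strabismic eye probably exists; the larger count wins.
        return "left eye strabismus" if second >= third else "right eye strabismus"
    return None  # exact ties are left open by the disclosure
```

Voting over N frames in this way filters out occasional squints and low-quality frames that would otherwise cause a single-frame misdetection.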
In some embodiments, the N frames of frontal face images are N frames for which the timing difference between any two frames in the video stream does not exceed a preset duration, so that eye type detection is performed on N frames of frontal face images within the preset duration.

In the embodiments of the present disclosure, taking the any two frames as the first and last frames, the N frames of frontal face images of the present disclosure span a total duration that does not exceed the preset duration.

Exemplarily, N is 10 and the preset duration is 15 minutes. In a vehicle driving scenario, detection is completed on 10 frames of frontal face images whose total duration does not exceed 15 minutes, yielding the first frame number, the second frame number, and the third frame number, so that the eye type of the target object is calibrated on the basis of the comparison of the three. Timing of the preset duration may start from vehicle start-up, with the first frontal face image captured at that point, or timing may start and the first frame be captured under other conditions (e.g., when the vehicle speed is greater than a predetermined speed threshold); the present disclosure does not limit this.

It can be understood that the larger N is, the higher the accuracy of the eye type calibration result. The setting of the preset duration needs to take into account the time required to acquire the N frames of frontal face images; on that basis, the shorter the preset duration, the more immediate the final determination of the eye type.

It should be noted that, if N frames of frontal face images are not acquired within the preset duration, the detection of the eye type is stopped, reducing the power consumption that would be caused by the eye type detection apparatus remaining stuck in frontal-face-image detection for a long time.

It can be understood that, first, the present disclosure performs detection on N frames of frontal face images to calibrate the strabismic eye of the target object; compared with calibrating the strabismic eye on the basis of only one frame, this improves the accuracy of calibration. Second, the timing difference between any two of the N frames does not exceed the preset duration, which improves the immediacy of calibrating the eye type while also improving its accuracy.
In some embodiments, determining the gaze angle of the left eye according to the pupil position of the left eye, and determining the gaze angle of the right eye according to the pupil position of the right eye, includes:

determining, according to intrinsic parameters of the image capture device, a first position, in the coordinate system of the image capture device, of the pupil center of the left eye in the frontal face image, and a second position, in the coordinate system of the image capture device, of the pupil center of the right eye in the frontal face image;

determining the gaze angle of the left eye according to the angle between the line connecting the first position and the coordinate origin of the coordinate system of the image capture device, and the normal of the imaging plane of the image capture device; and

determining the gaze angle of the right eye according to the angle between the line connecting the second position and the coordinate origin of the coordinate system of the image capture device, and the normal of the imaging plane of the image capture device.
In the embodiments of the present disclosure, the position of a pupil center (including the positions of the left pupil center and the right pupil center) may be the position, in the image coordinate system, of the center of the outline of the pupil region in the frontal face image.

It should be noted that the image capture device of the present disclosure may be mounted directly in front of the head of the target object so as to capture the frontal face image.

According to intrinsic parameters of the image capture device (e.g., the focal length), the present disclosure maps the two-dimensional coordinates of each pupil center in the image coordinate system to three-dimensional coordinates in the coordinate system of the image capture device, so that the position of the left pupil center is mapped to the first position in that coordinate system and the position of the right pupil center is mapped to the second position. The angle between the line connecting the first position and the coordinate origin of the coordinate system of the image capture device, and the normal of the imaging plane of the image capture device, is determined as the gaze angle of the left eye; the angle between the line connecting the second position and the coordinate origin, and the normal of the imaging plane, is determined as the gaze angle of the right eye. The eye type is subsequently calibrated on the basis of the gaze angles of the left eye and the right eye.
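The mapping just described (pupil-center pixel, back-projected through the intrinsics into the camera coordinate system, then measured against the normal of the imaging plane) can be sketched under a standard pinhole camera model. The parameter names (fx, fy, cx, cy) and the unit-depth back-projection are illustrative assumptions; the disclosure does not prescribe a specific formula.

```python
import math

def gaze_angle_from_pixel(u, v, fx, fy, cx, cy):
    """Return the angle (degrees) between the ray from the camera origin
    through pupil-center pixel (u, v) and the imaging plane's normal
    (the optical axis, +Z), under a pinhole model with focal lengths
    fx, fy and principal point (cx, cy), all in pixels.
    """
    # Back-project the pixel to a 3-D direction at depth Z = 1
    # in the camera coordinate system.
    x = (u - cx) / fx
    y = (v - cy) / fy
    z = 1.0
    # Angle between the ray (x, y, z) and the optical axis (0, 0, 1).
    return math.degrees(math.acos(z / math.sqrt(x * x + y * y + z * z)))
```

A pupil center at the principal point yields a gaze angle of 0 degrees (looking straight into the camera); the angle grows as the pupil center moves away from the image center, which is exactly the deviation compared between the two eyes.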
In some embodiments, since during driving the traveling direction or heading of the vehicle is highly likely to be consistent with the direction of the driver's face, the present disclosure may alternatively determine the gaze angle of the left eye according to the angle between the line connecting the first position and the coordinate origin of the coordinate system of the image capture device and the traveling direction or heading of the vehicle, and determine the gaze angle of the right eye according to the angle between the line connecting the second position and the coordinate origin and the traveling direction or heading of the vehicle. The traveling direction or heading of the vehicle may be determined according to the rotation angle of the steering wheel, or obtained by interfacing with the vehicle's DMS.

It can be understood that, since the image capture device that captures the frontal face image is mounted directly in front of the head of the target object, the present disclosure determines the gaze angles of the left eye and the right eye on the basis of the normal of the imaging plane of the image capture device, and, when the angle difference between the gaze angle of the left eye and the gaze angle of the right eye exceeds the first predetermined angle range, determines the one of the left eye and the right eye whose gaze angle has the larger absolute value as the strabismic eye, improving the accuracy of calibrating the strabismic eye.
In some embodiments, the image capture device is an image capture device provided inside a vehicle, and the target object is the driver of the vehicle;

the method further includes:

acquiring the speed of the vehicle and the twist angle of the steering wheel of the vehicle; and

when the speed is greater than a predetermined speed threshold and the twist angle of the steering wheel is less than a predetermined angle threshold, determining that the target object is facing the image capture device;

acquiring the frontal face image of the target object captured by the image capture device when it is determined that the target object is facing the image capture device includes:

when it is determined that the target object is facing the image capture device, acquiring, in response to a driver squint calibration function being enabled, the frontal face image of the driver captured by the image capture device.
In the embodiments of the present disclosure, the image capture device may be mounted inside the vehicle, for example directly in front of the driver's seat, to facilitate capturing the frontal face image of the driver.

The speed of the vehicle and the twist angle of the steering wheel may be acquired from values collected by a speed sensor and an angle sensor provided in the vehicle, or obtained by interfacing with the vehicle's DMS.

According to the acquired speed and steering-wheel twist angle, when the speed of the vehicle is greater than the predetermined speed threshold and the twist angle of the steering wheel is less than the predetermined angle threshold, it is determined that the target object is facing the image capture device. Then, in response to the driver squint calibration function being enabled, the facial image captured by the image capture device is taken as the frontal face image, and the strabismic eye of the driver is calibrated from that image on the basis of the foregoing method. The driver squint calibration function may be enabled by default when the vehicle starts, enabled a preset duration after the vehicle starts, or enabled on the basis of an operation instruction issued by the driver; the present disclosure does not limit this.

It can be understood that, when the speed of the vehicle is greater than the predetermined speed threshold and the steering-wheel twist angle is less than the predetermined angle threshold, the vehicle is traveling straight at high speed, and the probability that the driver is looking straight ahead is high. Therefore, in this case the facial image captured by the image capture device is, with high probability, a frontal face image of the target object looking straight ahead, which eliminates the step of screening frontal face images out of the images captured by the image capture device and thereby makes the determination of the eye type from the frontal face image more immediate. In addition, because the frontal face image is acquired imperceptibly in this scenario, compared with outputting prompt information to make the user cooperate in capturing a frontal face image, the embodiments of the present disclosure can reduce the disturbance to the user and improve the user experience.
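The gating condition above (high speed plus nearly centered steering wheel implies the driver is facing forward, i.e., facing the in-cabin camera) reduces to a simple predicate. The threshold values and units below are illustrative assumptions; the disclosure leaves both thresholds unspecified.

```python
def is_facing_camera(speed_kmh, steering_twist_deg,
                     speed_threshold_kmh=60.0, angle_threshold_deg=5.0):
    """Heuristic from the passage above: when the vehicle speed exceeds
    the predetermined speed threshold and the steering wheel's twist
    angle is below the predetermined angle threshold, the vehicle is
    traveling straight at high speed and the driver very likely faces
    the camera, so the captured facial image can be treated as a
    frontal face image.
    """
    return (speed_kmh > speed_threshold_kmh
            and abs(steering_twist_deg) < angle_threshold_deg)
```

Only frames captured while this predicate holds would be fed into the strabismus calibration, so no prompt to the driver is needed.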
In some embodiments, the method further includes:

during a facial image registration process, outputting first prompt information requesting the target object to face the image capture device;

acquiring a facial image on the basis of the first prompt information; and

taking the facial image that satisfies a predetermined image quality condition as the frontal face image of the target object acquired when the target object faces the image capture device.
In the embodiments of the present disclosure, during the facial image registration process, the first prompt information requesting the target object to face the image capture device may be output by means of on-screen text, voice broadcast, or the like, prompting the user to look toward the device that captures the facial image, after which the user's facial image is captured. Facial recognition technology, such as FaceID technology, may be used for the facial image registration.

After the facial image is acquired, the present disclosure judges whether it satisfies the predetermined image quality condition, takes the facial image satisfying the condition as the frontal face image, and then performs the foregoing eye type calibration according to the acquired frontal face image. The predetermined image quality condition at least includes the face image being a frontal face image, and may further include the resolution being greater than a predetermined resolution threshold, the exposure being within a predetermined exposure range, the signal-to-noise ratio being greater than a predetermined signal-to-noise-ratio threshold, and so on. Calibrating the eye type from images that satisfy the predetermined image quality condition can improve the accuracy of determining the eye type.
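The predetermined image quality condition listed above can be sketched as a simple check over precomputed metrics. All dictionary keys, threshold values, and units here are illustrative assumptions; the disclosure names the condition types but not their values.

```python
def meets_quality_conditions(image_meta,
                             min_resolution=(640, 480),
                             exposure_range=(0.2, 0.8),
                             min_snr_db=20.0):
    """image_meta: dict of precomputed metrics for one facial image:
    "is_frontal" (bool), "resolution" (width, height) in pixels,
    "exposure" (normalized mean brightness), "snr_db" (signal-to-noise
    ratio in dB). Returns True only if the image is a frontal face image
    AND passes every additional threshold.
    """
    w, h = image_meta["resolution"]
    lo, hi = exposure_range
    return (image_meta["is_frontal"]
            and w >= min_resolution[0] and h >= min_resolution[1]
            and lo <= image_meta["exposure"] <= hi
            and image_meta["snr_db"] > min_snr_db)
```

Only images passing this gate are forwarded to the gaze-angle comparison, which keeps overexposed or low-resolution frames from corrupting the calibration.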
In addition, the features in the captured facial image can serve as a unique identifier of the user, so that the calibrated strabismic eye can be stored in association with that unique identifier. The next time eye type detection is performed for the user, the features in the acquired facial image can be directly compared with the unique identifier; when the facial image and the unique identifier belong to the same individual, for example when the feature similarity is greater than a predetermined similarity threshold, the eye type detection result associated with the unique identifier can be directly determined as the eye type detection result of the target object in the facial image, without performing the foregoing eye type detection method again, thereby making the calibration of the eye type more immediate.

Therefore, the present disclosure acquires facial images during the facial image registration process and takes those satisfying the predetermined image quality condition as frontal face images, which can improve both the accuracy and the immediacy of subsequently calibrating the eye type from the frontal face image.

In some embodiments, since facial image registration usually acquires a frontal face image of the user, the eye type detection apparatus of the present disclosure may also directly acquire, from another apparatus, the facial image with which facial image registration succeeded, and take that facial image as the frontal face image.

It can be understood that directly taking the successfully registered facial image acquired from another apparatus as the frontal face image improves the degree of reuse of facial images and removes the need to output the first prompt information requesting the target object to face the image capture device, reducing the disturbance to the user and improving the user experience.
In some embodiments, the method further includes:

when the eye type of the target object is calibrated as left-eye strabismus or right-eye strabismus, performing at least one of the following interactions according to gaze information of the non-strabismic eye of the target object:

outputting information about the interactive object, in the space where the target object is located, that corresponds to the gaze angle of the non-strabismic eye; and

when the target object is the driver of a vehicle and the vehicle speed is not 0, outputting second prompt information in response to the gaze angle of the non-strabismic eye exceeding a second predetermined angle range for a predetermined duration.
In the embodiments of the present disclosure, interaction information may be output according to the gaze information of the non-strabismic eye, where the gaze information includes the gaze angle and the gaze duration.

When a person with strabismus gazes in a certain direction, the gaze angle of the strabismic eye deviates greatly from that of the normal eye; therefore, the interactive object corresponding to the gaze angle of the strabismic eye in the space is not the object the person actually wants to gaze at.

In this regard, in some embodiments, the present disclosure masks the strabismic eye and determines the corresponding interactive object in the space according to the gaze angle of the non-strabismic eye, thereby determining the object the person with strabismus actually wants to gaze at, and outputs the information of that object. Determining the interactive object according to the gaze angle of the non-strabismic eye may mean determining the gaze point of that gaze angle in the space where the target object is located and taking the object at the gaze point as the interactive object; it may also mean determining a gaze range from the gaze angle (for example, the range of plus or minus 20 degrees around the gaze angle) and taking the objects covered by the gaze range in the space where the target object is located as the interactive objects.

Taking a vehicle driving scenario as an example, if the interactive object corresponding to the gaze angle of the non-strabismic eye in the cabin is the vehicle instrument panel, information from the instrument panel, such as vehicle speed and engine speed, can be displayed on the display screen of the vehicle's center console.

In some embodiments, when the target object is the driver of a vehicle and the vehicle speed is not 0, the present disclosure masks the strabismic eye; when the gaze angle of the non-strabismic eye exceeds the second predetermined angle range for a predetermined duration, indicating that the driver may be distracted (for example, not looking in the direction of travel for a long time), the second prompt information is output. If the gaze angle is a positive value, exceeding the second predetermined angle range may mean being greater than a predetermined positive value; if the gaze angle is a negative value, it may mean being less than a predetermined negative value.

It can be understood that, after determining the strabismic eye, the present disclosure masks it and outputs interaction information according to the gaze information of the non-strabismic eye, which can reduce erroneous output of interaction information based on the gaze information of the strabismic eye and improve the accuracy of the output interaction information.
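The distraction check above (the non-strabismic eye's gaze angle staying outside the second predetermined angle range for the whole predetermined duration triggers the second prompt) can be sketched as below. The angle range and the sampled-angles representation of the predetermined duration are illustrative assumptions.

```python
def should_prompt_distraction(gaze_angles_deg, angle_range=(-30.0, 30.0)):
    """gaze_angles_deg: gaze-angle samples of the non-strabismic eye
    collected over the predetermined duration (the strabismic eye is
    masked and contributes nothing). Returns True when every sample is
    outside the second predetermined angle range — above the positive
    bound or below the negative bound — i.e., the driver has not looked
    toward the direction of travel for the whole duration.
    """
    lo, hi = angle_range
    return all(angle > hi or angle < lo for angle in gaze_angles_deg)
```

Because the strabismic eye's angle is always deviated, running this check on it directly would raise false distraction alarms; masking it and checking only the non-strabismic eye is what restores the accuracy of the prompt.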
Figure 3 is a second implementation flowchart of an eye type detection method provided by an embodiment of the present disclosure. As shown in Figure 3, the method includes the following steps S31 to S33:
Step S31: Determine whether facial image registration has succeeded; if so, execute step S32; if not, continue executing step S31.
In some embodiments, when facial image registration succeeds, a facial image that satisfies a predetermined image quality condition is used as the front-face image of the target object acquired while the target object is facing the image acquisition device.
Step S32: Determine whether the difference between the left and right pupil directions is greater than a threshold; if so, execute step S33; if not, the detection process ends and it is determined that no strabismic eye exists.
In some embodiments, the left pupil direction is the aforementioned gaze angle of the left eye, the right pupil direction is the aforementioned gaze angle of the right eye, the difference between the left and right pupil directions is the aforementioned angle difference, and the threshold is the aforementioned first predetermined angle range. Judging whether the difference between the left and right pupil directions is greater than the threshold thus corresponds to determining the angle difference between the gaze angle of the left eye and that of the right eye and judging whether it exceeds the first predetermined angle range.
Step S33: The eye deviating more from the normal direction of the camera is taken as the strabismic eye.
In some embodiments, the camera normal is the normal of the imaging plane of the image acquisition device, and the deviations from the camera normal direction are the aforementioned gaze angles of the left and right eyes. Taking the eye deviating more from the camera normal as the strabismic eye corresponds to the aforementioned operation of, when the angle difference exceeds the first predetermined angle range, determining the one of the left and right eyes with the larger absolute gaze angle as the strabismic eye in the front-face image.
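The single-frame decision rule of steps S32 and S33 can be sketched as follows. The 10-degree threshold is an assumed placeholder for the first predetermined angle range, which the disclosure does not pin to a specific value.

```python
def detect_strabismus(left_deg, right_deg, threshold_deg=10.0):
    """Single-frame decision from steps S32-S33: if the gaze-angle difference
    exceeds the threshold, the eye deviating more from the camera normal
    (larger absolute gaze angle) is labeled strabismic. threshold_deg is an
    illustrative value, not one fixed by the disclosure."""
    if abs(left_deg - right_deg) <= threshold_deg:
        return "none"                       # pupils roughly parallel
    return "left" if abs(left_deg) > abs(right_deg) else "right"
```

For example, gaze angles of -25 and 3 degrees differ by 28 degrees, so the left eye, with the larger absolute deviation, would be labeled strabismic.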
In some embodiments, on the one hand, when facial image registration succeeds, the present disclosure reuses the successfully registered image as the front-face image for eye type detection, improving the degree of image reuse. On the other hand, the disclosure determines the difference between the left and right pupil directions from the acquired front-face image, and when that difference is greater than the threshold, identifies the eye deviating more from the camera normal direction as the strabismic eye. Compared with the cover test, strabismus angle measurement, or dedicated strabismus detection instruments, the embodiments of the present disclosure can quickly and accurately detect the eye strabismus type of the target object through image processing.
Figure 4 is a third implementation flowchart of an eye type detection method provided by an embodiment of the present disclosure. As shown in Figure 4, the method includes the following steps S41 to S45:
Step S41: Determine whether the vehicle is in a high-speed scenario; if so, execute step S42; if not, end the detection process.
In some embodiments, the high-speed scenario is the aforementioned scenario in which the vehicle speed is greater than a predetermined speed threshold and the steering wheel twist angle is less than a predetermined angle threshold.
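The high-speed scenario gate can be written as a simple predicate. The 80 km/h figure appears later in the disclosure; the 5-degree steering threshold is an assumed placeholder for the predetermined angle threshold.

```python
def is_high_speed_scene(speed_kmh, steering_angle_deg,
                        speed_threshold_kmh=80.0, angle_threshold_deg=5.0):
    """High-speed scenario gate: vehicle speed above the predetermined speed
    threshold and steering wheel twist angle below the predetermined angle
    threshold. angle_threshold_deg is an assumed value."""
    return speed_kmh > speed_threshold_kmh and abs(steering_angle_deg) < angle_threshold_deg
```

Only when this predicate holds would the detection continue to step S42; otherwise the process ends as in step S41.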
If the vehicle is in a high-speed scenario, N frames of facial images from the video stream collected by the image acquisition device are used as front-face images.
Step S42: Determine whether the difference between the left and right pupil directions in the current frame is greater than the threshold; if so, execute step S43; if not, execute step S44.
In some embodiments, judging whether the difference between the left and right pupil directions in the current front-face image frame is greater than the threshold corresponds to the aforementioned operation of determining the angle difference between the gaze angles of the left and right eyes and judging whether it exceeds the first predetermined angle range.
Step S43: The eye with the larger deviation angle from the forward driving direction is taken as the strabismic eye.
In some embodiments, the forward driving direction is the aforementioned vehicle traveling direction, and the deviation angles from the forward driving direction are the gaze angles of the left and right eyes measured relative to the vehicle traveling direction.
When the difference between the left and right pupil directions is greater than the threshold, the present disclosure takes the eye with the larger deviation angle from the forward driving direction as the strabismic eye; that is, as described above, when the angle difference exceeds the first predetermined angle range, the one of the left and right eyes with the larger absolute gaze angle is determined to be the strabismic eye in the front-face image.
Based on this method, for the N frames of front-face images, the number of frames in which the left eye is determined to be strabismic (corresponding to the aforementioned second frame count) and the number of frames in which the right eye is determined to be strabismic (corresponding to the aforementioned third frame count) are obtained.
Step S44: The driver is considered to have normal vision.
In some embodiments, when the difference between the left and right pupil directions is less than or equal to the threshold, the driver is determined to have normal vision, corresponding to the aforementioned case in which the angle difference does not exceed the first predetermined angle range and the binocular type of the target object is non-strabismic in both eyes. Based on this method, for the N frames of front-face images, the number of frames in which both eyes are determined to be non-strabismic (corresponding to the aforementioned first frame count) is obtained.
Steps S42 to S44 are executed in a loop N times to complete the detection of the N image frames. After the N iterations, the first, second, and third frame counts are obtained, and step S45 is executed.
Step S45: Vote over the N results and output the result with the most votes.
In some embodiments, the N results are voted on to obtain vote counts, where the number of votes for left-eye strabismus corresponds to the second frame count, the number of votes for right-eye strabismus corresponds to the third frame count, and the number of votes for non-strabismus in both eyes corresponds to the first frame count. The result with the most votes is output and serves as the eye type detection result.
Specifically, when the first frame count is greater than both the second and third frame counts, the present disclosure calibrates the binocular type of the target object as non-strabismic in both eyes; or, when the first frame count is less than either the second or the third frame count, it calibrates the strabismic eye in the front-face images corresponding to the larger of the second and third frame counts as the strabismic eye of the target object.
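The voting of step S45 amounts to a majority vote over the per-frame labels; a minimal sketch (the label strings are illustrative, not names used by the disclosure):

```python
from collections import Counter

def vote_eye_type(per_frame_results):
    """Majority vote over per-frame labels ('none', 'left', 'right'), as in
    step S45: the label occurring in the most frames is the detection result.
    The counts of 'none', 'left' and 'right' play the roles of the first,
    second and third frame counts respectively."""
    counts = Counter(per_frame_results)
    label, _ = counts.most_common(1)[0]
    return label
```

For instance, over four frames labeled left, left, right, none, the left eye would be output as the strabismic eye.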
In some embodiments, on the one hand, when the vehicle is in a high-speed scenario, the driver is very likely to be looking straight ahead; therefore, the facial images collected by the image acquisition device in this scenario are very likely to be front-face images, which reduces the need to filter front-face images out of the collected images and improves the immediacy of determining the eye type from front-face images. In addition, because the front-face images are acquired imperceptibly in this scenario, the user is unaware of the process; compared with outputting prompt information asking the user to face the image acquisition device before collecting a front-face image, the embodiments of the present disclosure reduce disturbance to the user and improve the user experience.
On the other hand, the present disclosure determines the first, second, and third frame counts based on the N front-face image frames acquired while the vehicle is in a high-speed scenario, and determines the final eye type based on a comparison of these counts. Compared with determining the eye type from a single front-face image frame, the embodiments of the present disclosure improve the accuracy of the determination.
During the FaceID registration process (the facial image registration described above), the present disclosure proceeds as follows: first, while the driver (the target object) is facing the camera (the image acquisition device), the camera collects the driver's front-face image; then, the pupil direction angles of the driver's two eyes in the front-face image (the gaze angles of the left and right eyes) are detected, yielding the difference between the gaze directions of the two pupils (the angle difference between the gaze angles of the left and right eyes); finally, based on the angle difference and the first predetermined angle range, it is determined whether the driver belongs to the strabismic population. In addition, to avoid random errors during FaceID registration, a period of dynamic strabismus calibration is also performed while the DMS is running and the vehicle is in a high-speed scenario.
The present disclosure detects the pupil direction angles of the driver's two eyes on the image that passes the FaceID registration requirements (the front-face image described above), i.e., an image that satisfies the requirement that the driver face the camera. If the difference between the pupil direction angles of the two eyes (the angle difference between the gaze angles of the left and right eyes) exceeds a certain threshold (the first predetermined angle range), the driver's two pupils are considered non-parallel. Since in this scenario the driver should be facing the camera, the eye with the larger deviation angle from the normal direction of the camera's imaging plane is output as the strabismic eye. Although the driver's head pose and eye orientation are relatively stable and standardized during FaceID registration, it cannot be fully guaranteed that the driver is actually looking at the camera at that moment. To further reduce random errors in the FaceID registration process, dynamic strabismus calibration is performed through the DMS in certain scenarios during driving. While the DMS is running, by configuring the dynamic calibration scenario for strabismus, a scenario is created in which the driver looks toward a certain region with very high probability (the high-speed straight-driving scenario described above); in this scenario, the eye whose pupil direction deviates more from that region is output as the strabismic eye.
The present disclosure sets the DMS dynamic calibration scenario to a high-speed driving scenario: when the vehicle is traveling at a relatively high speed (>80 km/h) and the steering wheel deflection is small, the driver is considered to be essentially looking ahead (i.e., facing the image acquisition device). Because the DMS dynamic calibration process is based on multiple rounds of voting, it is more robust; therefore, once the DMS dynamic calibration process has been fully executed, its result can replace the detection result obtained during FaceID registration. Neither the FaceID registration process nor the DMS dynamic calibration process requires explicitly informing the driver or asking for cooperation; both are imperceptible strabismus detection processes. To make the calibration result more accurate, the number of detection rounds N can be increased appropriately; at the same time, to avoid being stuck in the calibration process for too long, a calibration time limit can be added: if N detections are not completed within a certain period, the current calibration round is abandoned.
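The calibration loop with the time limit described above can be sketched as follows. This is a hedged sketch: `sample_frame` is a hypothetical callable (returning a per-frame label, or None when no valid front-face frame is available), and the default round count and timeout are assumed values.

```python
import time

def run_dynamic_calibration(sample_frame, n_rounds=30, timeout_s=120.0):
    """Collect n_rounds per-frame strabismus decisions; if they cannot be
    gathered within timeout_s seconds, abandon the calibration round by
    returning None, matching the time limit described in the text.
    sample_frame is a hypothetical callable returning 'none'/'left'/'right',
    or None when no usable frame is available."""
    results = []
    deadline = time.monotonic() + timeout_s
    while len(results) < n_rounds:
        if time.monotonic() > deadline:
            return None                    # abandon this calibration round
        label = sample_frame()
        if label is not None:
            results.append(label)
    return results
```

The returned list would then be fed to the voting step; a None return means the round was abandoned and calibration may be retried later.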
Compared with the related art, the present disclosure has the following advantages: (1) it provides a strabismus detection method that is imperceptible to the driver; (2) the DMS dynamic calibration process it provides compensates for the random errors that may arise during the FaceID detection process.
Figure 5 is a schematic structural diagram of an eye type detection apparatus provided by an embodiment of the present disclosure. As shown in Figure 5, the eye type detection apparatus 500 includes:
a first acquisition module 501, configured to acquire, when it is determined that the target object is facing the image acquisition device, a front-face image of the target object collected by the image acquisition device;
a first determination module 502, configured to respectively determine the pupil position of the left eye and the pupil position of the right eye of the target object in the front-face image;
a second determination module 503, configured to determine the gaze angle of the left eye according to the pupil position of the left eye, and to determine the gaze angle of the right eye of the target object according to the pupil position of the right eye;
a third determination module 504, configured to determine the angle difference between the gaze angle of the left eye and the gaze angle of the right eye;
a fourth determination module 505, configured to determine, when the angle difference exceeds a first predetermined angle range, the one of the left eye and the right eye with the larger absolute gaze angle as the strabismic eye in the front-face image;
a calibration module 506, configured to calibrate the eye strabismus type of the target object according to the strabismic eye in the front-face image.
In some embodiments, the front-face image includes multiple frames of front-face images in a video stream of the target object collected by the image acquisition device;
the calibration module 506 is configured to calibrate the eye type of the target object as left-eye strabismus when, among the multiple front-face image frames, the number of frames in which the left eye is determined to be strabismic reaches a predetermined threshold; or to calibrate the eye type of the target object as right-eye strabismus when, among the multiple front-face image frames, the number of frames in which the right eye is determined to be strabismic reaches a predetermined threshold.
In some embodiments, the front-face image includes N frames of front-face images in a video stream of the target object collected by the image acquisition device, where N is a predetermined positive integer;
the eye type detection apparatus 500 further includes:
a fifth determination module 507, configured to determine that both eyes in the front-face image are non-strabismic when the angle difference does not exceed the first predetermined angle range;
the calibration module 506 includes:
a determination submodule, configured to compare, among the N front-face image frames, a first frame count of images in which both eyes are determined to be non-strabismic, a second frame count of images in which the left eye is determined to be strabismic, and a third frame count of images in which the right eye is determined to be strabismic;
a calibration submodule, configured to calibrate the binocular type of the target object as non-strabismic in both eyes when the first frame count is greater than the second and third frame counts; or, when the first frame count is less than either the second or the third frame count, to calibrate the strabismic eye in the front-face images corresponding to the larger of the second and third frame counts as the strabismic eye of the target object.
In some embodiments, the second determination module 503 includes:
a first determination submodule, configured to determine, according to the intrinsic parameters of the image acquisition device, a first position, in the coordinate system of the image acquisition device, of the pupil center of the left eye in the front-face image, and a second position, in the coordinate system of the image acquisition device, of the pupil center of the right eye in the front-face image;
a second determination submodule, configured to determine the gaze angle of the left eye according to the angle between the line connecting the first position to the coordinate origin of the coordinate system of the image acquisition device and the normal of the imaging plane of the image acquisition device;
a third determination submodule, configured to determine the gaze angle of the right eye according to the angle between the line connecting the second position to the coordinate origin of the coordinate system of the image acquisition device and the normal of the imaging plane of the image acquisition device.
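The geometry in these submodules can be sketched with a simplified pinhole camera model. This is an assumption-laden illustration, not the disclosure's implementation: lens distortion is ignored, the pupil is placed at unit depth along the back-projected ray, and the parameter names (fx, fy, cx, cy) follow the common pinhole intrinsics convention.

```python
import math

def gaze_angle_from_pupil(u, v, fx, fy, cx, cy):
    """Angle between (a) the ray from the camera origin through the pupil
    center's back-projected position and (b) the imaging-plane normal (the
    optical axis). (u, v) is the pupil center in pixels; fx, fy, cx, cy are
    pinhole intrinsics. Simplified sketch: no distortion, unit depth."""
    x = (u - cx) / fx                       # normalized camera coordinates
    y = (v - cy) / fy
    norm = math.sqrt(x * x + y * y + 1.0)   # length of the ray (x, y, 1)
    return math.degrees(math.acos(1.0 / norm))  # angle to the z-axis
```

A pupil imaged at the principal point (cx, cy) lies on the optical axis and yields a gaze angle of 0 degrees; the farther the pupil center is from the principal point, the larger the angle.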
In some embodiments, the image acquisition device is an image acquisition device provided inside a vehicle, and the target object is the driver of the vehicle;
the eye type detection apparatus 500 further includes:
a second acquisition module 508, configured to acquire the speed of the vehicle and the twist angle of the steering wheel of the vehicle;
a sixth determination module 509, configured to determine that the target object is facing the image acquisition device when the speed is greater than a predetermined speed threshold and the twist angle of the steering wheel is less than a predetermined angle threshold;
the first acquisition module 501 is configured to acquire, when it is determined that the target object is facing the image acquisition device and in response to the driver strabismus calibration function being enabled, a front-face image of the driver collected by the image acquisition device.
In some embodiments, the eye type detection apparatus 500 further includes:
an output module 510, configured to output, during the facial image registration process, first prompt information requesting the target object to face the image acquisition device;
a third acquisition module 511, configured to acquire a facial image based on the first prompt information;
a seventh determination module 512, configured to use the facial image that satisfies a predetermined image quality condition as the front-face image of the target object acquired while the target object is facing the image acquisition device.
In some embodiments, the eye type detection apparatus 500 further includes:
an interaction module 513, configured to perform, when the eye type of the target object is calibrated as left-eye strabismus or right-eye strabismus, at least one of the following interactions based on the gaze information of the non-strabismic eye of the target object: outputting information about the interactive object corresponding to the gaze angle of the non-strabismic eye within the space where the target object is located; and, when the target object is the driver of a vehicle and the vehicle speed is not 0, outputting second prompt information in response to the gaze angle of the non-strabismic eye exceeding a second predetermined angle range for a predetermined duration.
Figure 6 is a schematic diagram of the hardware entity of a computer device provided by an embodiment of the present disclosure. As shown in Figure 6, the hardware entity of the computer device 800 includes a processor 801, a communication interface 802, and a memory 803, where the processor 801 generally controls the overall operation of the computer device 800, and the communication interface 802 enables the computer device to communicate with other terminals or servers over a network.
The memory 803 is configured to store instructions and applications executable by the processor 801, and can also cache data to be processed or already processed by the processor 801 and the modules in the computer device 800 (for example, image data, audio data, voice communication data, and video communication data); it can be implemented as flash memory (FLASH) or random access memory (RAM). Data can be transferred among the processor 801, the communication interface 802, and the memory 803 via a bus 804. The processor 801 is configured to execute some or all of the steps of the above method.
An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, some or all of the steps of the above method are implemented.
An embodiment of the present disclosure provides a computer program product including a computer program that, when run on a computer device, causes the computer device to execute some or all of the steps of the above method.
It should be noted here that the above descriptions of the storage medium and device embodiments are similar to the description of the method embodiments and have similar beneficial effects. For technical details not disclosed in the storage medium and device embodiments of the present disclosure, please refer to the description of the method embodiments of the present disclosure.
It should be understood that references throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present disclosure, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and shall not constitute any limitation on the implementation of the embodiments of the present disclosure. The above sequence numbers of the embodiments of the present disclosure are for description only and do not represent the superiority or inferiority of the embodiments.
It should be noted that, herein, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes that element.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connections between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connections between devices or units may be electrical, mechanical, or of other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or some of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as removable storage devices, read-only memory (ROM), magnetic disks, or optical discs.
Alternatively, if the above integrated units of the present disclosure are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the related art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as removable storage devices, ROMs, magnetic disks, or optical discs.
The above are only embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed herein shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

  1. An eye type detection method, the method comprising:
    when it is determined that a target object is facing an image acquisition device, acquiring a front face image of the target object captured by the image acquisition device;
    respectively determining a pupil position of the left eye and a pupil position of the right eye of the target object in the front face image;
    determining a gaze angle of the left eye according to the pupil position of the left eye, and determining a gaze angle of the right eye of the target object according to the pupil position of the right eye;
    determining an angle difference between the gaze angle of the left eye and the gaze angle of the right eye;
    when the angle difference exceeds a first predetermined angle range, determining, as the strabismic eye in the front face image, whichever of the left eye and the right eye has the larger absolute gaze angle;
    calibrating an eye strabismus type of the target object according to the strabismic eye in the front face image.
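As a rough illustration only (not part of the claim language), the per-frame decision in claim 1 can be sketched in Python; the threshold value and the signed-angle convention are assumptions chosen for the example:

```python
def classify_strabismus(left_angle_deg: float, right_angle_deg: float,
                        max_diff_deg: float = 5.0) -> str:
    """Label the strabismic eye in one frame from per-eye gaze angles.

    left_angle_deg / right_angle_deg: signed gaze angles in degrees.
    max_diff_deg: the "first predetermined angle range" (assumed value).
    """
    angle_diff = abs(left_angle_deg - right_angle_deg)
    if angle_diff <= max_diff_deg:
        return "non-strabismus"
    # The eye whose gaze angle has the larger absolute value is taken
    # as the strabismic eye in this frame.
    if abs(left_angle_deg) > abs(right_angle_deg):
        return "left-eye strabismus"
    return "right-eye strabismus"
```

With a small angle difference both eyes are treated as non-strabismic; only when the difference exceeds the range is the eye with the larger deviation flagged.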
  2. The method according to claim 1, wherein the front face image comprises multiple frames of front face images in a video stream of the target object captured by the image acquisition device;
    the calibrating the eye strabismus type of the target object according to the strabismic eye in the front face image comprises:
    when, among the multiple frames of front face images, the number of frames in which the left eye is determined to be the strabismic eye reaches a predetermined threshold, calibrating the eye type of the target object as left-eye strabismus; or
    when, among the multiple frames of front face images, the number of frames in which the right eye is determined to be the strabismic eye reaches a predetermined threshold, calibrating the eye type of the target object as right-eye strabismus.
  3. The method according to claim 1, wherein the front face image comprises N frames of front face images in a video stream of the target object captured by the image acquisition device, N being a predetermined positive integer;
    the method further comprises:
    when the angle difference does not exceed the first predetermined angle range, determining that both eyes in the front face image are non-strabismic;
    the calibrating the eye strabismus type of the target object according to the strabismic eye in the front face image comprises:
    comparing, among the N frames of front face images, a first frame count of images in which both eyes are determined to be non-strabismic, a second frame count of images in which the left eye is determined to be the strabismic eye, and a third frame count of images in which the right eye is determined to be the strabismic eye;
    when the first frame count is greater than both the second frame count and the third frame count, calibrating the binocular type of the target object as binocular non-strabismus; or
    when the first frame count is less than either the second frame count or the third frame count, calibrating the strabismic eye in the front face images corresponding to the larger of the second frame count and the third frame count as the strabismic eye of the target object.
  4. The method according to claim 3, wherein the N frames of front face images are N frames of front face images in the video stream for which the timing difference between any two frames does not exceed a preset duration.
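The multi-frame vote of claims 3 and 4 can likewise be sketched; the label strings and the handling of ties (which the claims do not specify) are assumptions of this example:

```python
from collections import Counter

def calibrate_over_frames(per_frame_labels: list) -> str:
    """Aggregate per-frame results over N frames per claim 3.

    per_frame_labels: one of "non-strabismus", "left-eye strabismus",
    or "right-eye strabismus" per frame.
    """
    counts = Counter(per_frame_labels)
    n_none = counts["non-strabismus"]         # first frame count
    n_left = counts["left-eye strabismus"]    # second frame count
    n_right = counts["right-eye strabismus"]  # third frame count
    if n_none > n_left and n_none > n_right:
        return "binocular non-strabismus"
    if n_none < n_left or n_none < n_right:
        # Take the strabismic eye with the larger frame count.
        return ("left-eye strabismus" if n_left >= n_right
                else "right-eye strabismus")
    return "undetermined"  # ties are not specified by the claim
```

Restricting the N frames to a short time window, as claim 4 requires, keeps the vote from mixing observations taken under different head poses.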
  5. The method according to any one of claims 1 to 4, wherein the determining the gaze angle of the left eye according to the pupil position of the left eye, and determining the gaze angle of the right eye according to the pupil position of the right eye, comprises:
    determining, according to intrinsic parameters of the image acquisition device, a first position, in the coordinate system of the image acquisition device, of the pupil center of the left eye in the front face image, and a second position, in the coordinate system of the image acquisition device, of the pupil center of the right eye in the front face image;
    determining the gaze angle of the left eye according to the angle between the line connecting the first position to the coordinate origin of the coordinate system of the image acquisition device and the normal of the imaging plane of the image acquisition device;
    determining the gaze angle of the right eye according to the angle between the line connecting the second position to the coordinate origin of the coordinate system of the image acquisition device and the normal of the imaging plane of the image acquisition device.
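Under a standard pinhole-camera assumption (the claim itself only names intrinsic parameters, a ray to the origin, and the imaging-plane normal), the angle in claim 5 can be computed from the pupil-center pixel as follows; the parameter names follow the usual fx, fy, cx, cy intrinsics convention:

```python
import math

def pixel_to_gaze_angle(u: float, v: float,
                        fx: float, fy: float,
                        cx: float, cy: float) -> float:
    """Angle (degrees) between the back-projected pupil ray and the
    camera's optical axis (the normal of the imaging plane).

    (u, v): pupil-center pixel coordinates; fx, fy, cx, cy: assumed
    pinhole intrinsics of the image acquisition device.
    """
    # Back-project the pixel to a ray direction in camera coordinates.
    x = (u - cx) / fx
    y = (v - cy) / fy
    # The ray is (x, y, 1); the optical axis is (0, 0, 1).
    cos_theta = 1.0 / math.sqrt(x * x + y * y + 1.0)
    return math.degrees(math.acos(cos_theta))
```

A pupil imaged at the principal point yields an angle of 0, i.e. the eye is looking straight down the optical axis; the further the pupil center is from the principal point, the larger the angle.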
  6. The method according to any one of claims 1 to 5, wherein the image acquisition device is an image acquisition device arranged inside a vehicle, and the target object is a driver of the vehicle;
    the method further comprises:
    acquiring a speed of the vehicle and a steering-wheel angle of the vehicle;
    when the speed is greater than a predetermined speed threshold and the steering-wheel angle is less than a predetermined angle threshold, determining that the target object is facing the image acquisition device;
    the acquiring, when it is determined that the target object is facing the image acquisition device, the front face image of the target object captured by the image acquisition device comprises:
    when it is determined that the target object is facing the image acquisition device, in response to a driver strabismus calibration function being enabled, acquiring the front face image of the driver captured by the image acquisition device.
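The vehicle-state gate in claim 6 reduces to two threshold checks; the threshold values and the use of the absolute steering-wheel angle are assumptions of this sketch:

```python
def driver_facing_camera(speed_kmh: float, steering_angle_deg: float,
                         speed_threshold_kmh: float = 30.0,
                         angle_threshold_deg: float = 5.0) -> bool:
    """Claim 6 heuristic: at sufficient speed with a near-zero
    steering-wheel angle, the driver can be presumed to be looking
    straight ahead, i.e. facing the in-cabin camera.
    """
    return (speed_kmh > speed_threshold_kmh
            and abs(steering_angle_deg) < angle_threshold_deg)
```

The point of the gate is to collect calibration frames only when the driving context itself guarantees a frontal gaze, so no explicit cooperation from the driver is needed.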
  7. The method according to any one of claims 1 to 5, wherein the method further comprises:
    during a face image registration process, outputting first prompt information requesting the target object to face the image acquisition device;
    acquiring a face image based on the first prompt information;
    taking the face image that satisfies a predetermined image quality condition as the front face image of the target object acquired when the target object is facing the image acquisition device.
  8. The method according to any one of claims 1 to 7, wherein the method further comprises:
    when the eye type of the target object is calibrated as left-eye strabismus or right-eye strabismus, performing at least one of the following interactions according to gaze information of the non-strabismic eye of the target object:
    outputting information about the interactive object, in the space where the target object is located, corresponding to the gaze angle of the non-strabismic eye;
    when the target object is a driver of a vehicle and the vehicle speed is not 0, outputting second prompt information in response to the gaze angle of the non-strabismic eye exceeding a second predetermined angle range within a predetermined duration.
  9. An eye type detection apparatus, the apparatus comprising:
    a first acquisition module, configured to acquire, when it is determined that a target object is facing an image acquisition device, a front face image of the target object captured by the image acquisition device;
    a first determination module, configured to respectively determine a pupil position of the left eye and a pupil position of the right eye of the target object in the front face image;
    a second determination module, configured to determine a gaze angle of the left eye according to the pupil position of the left eye, and determine a gaze angle of the right eye of the target object according to the pupil position of the right eye;
    a third determination module, configured to determine an angle difference between the gaze angle of the left eye and the gaze angle of the right eye;
    a fourth determination module, configured to determine, when the angle difference exceeds a first predetermined angle range, whichever of the left eye and the right eye has the larger absolute gaze angle as the strabismic eye in the front face image;
    a calibration module, configured to calibrate an eye strabismus type of the target object according to the strabismic eye in the front face image.
  10. A computer device, comprising: a processor; and a memory for storing processor-executable instructions;
    wherein the processor is configured to perform the method according to any one of claims 1 to 8.
  11. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 8.
  12. A computer program product comprising a computer program, wherein, when the computer program runs on a computer device, the computer device is caused to perform the eye type detection method according to any one of claims 1 to 8.
PCT/CN2023/091183 2022-05-31 2023-04-27 Eye type detection method and apparatus, computer device, storage medium and computer program product WO2023231663A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210615649.9 2022-05-31
CN202210615649.9A CN114903424A (en) 2022-05-31 2022-05-31 Eye type detection method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023231663A1 true WO2023231663A1 (en) 2023-12-07

Family

ID=82771157

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/091183 WO2023231663A1 (en) 2022-05-31 2023-04-27 Eye type detection method and apparatus, computer device, storage medium and computer program product

Country Status (2)

Country Link
CN (1) CN114903424A (en)
WO (1) WO2023231663A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114903424A (en) * 2022-05-31 2022-08-16 上海商汤临港智能科技有限公司 Eye type detection method and device, computer equipment and storage medium
CN116098794B (en) * 2022-12-30 2024-05-31 广州视景医疗软件有限公司 De-inhibition visual training method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109288493A (en) * 2017-07-25 2019-02-01 珠海学院有限公司 A kind of digitlization strabismus diagnostic method, device and system
CN112989939A (en) * 2021-02-08 2021-06-18 佛山青藤信息科技有限公司 Strabismus detection system based on vision
US20210357024A1 (en) * 2018-04-28 2021-11-18 Boe Technology Group Co., Ltd. Geometric parameter measurement method and device thereof, augmented reality device, and storage medium
CN114027782A (en) * 2021-11-30 2022-02-11 首都医科大学附属北京同仁医院 Non-commonality strabismus diagnosis method based on virtual reality and eye movement tracking technology
KR20220053209A (en) * 2020-10-22 2022-04-29 순천향대학교 산학협력단 Apparatus and Method for Measuring The Angel Of Strabismus for Supporting Diagnosis Of Strabismus
CN114903424A (en) * 2022-05-31 2022-08-16 上海商汤临港智能科技有限公司 Eye type detection method and device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN114903424A (en) 2022-08-16


Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 23814849
Country of ref document: EP
Kind code of ref document: A1